1. Introduction
Irregular cavities are among the main parts produced by equipment manufacturing enterprises, and their volume is a key indicator of product quality. Accurate and efficient measurement of irregular cavity volume ensures that industrial performance and quality requirements are met [1]. However, traditional methods are difficult to apply in many settings, such as automobile engine combustion chambers, liquid storage tanks, supercharging devices, and vacuum devices [2,3]. The traditional approach measures irregular volume by water injection. These hand-operated procedures are labor intensive, have low measurement efficiency, and introduce significant errors.
1.1. Related Work
Besides the traditional water-injection-based methods [4], the laser measurement method, the orthogonal double-grating method [5], the air pressure method [6], the audio measurement method [7], and the ultrasonic measurement method [8] have been reported in recent studies. These methods share the following shortcomings: (1) complex hardware systems, (2) high technical difficulty and low measurement efficiency, (3) significant error, and (4) complicated operation. These limitations hinder the application of such volume measurement methods in actual production. This paper provides new technologies for intelligent, non-destructive measurement of the volume of irregular cavity components. The proposed technologies can significantly improve the volume measurement accuracy of irregular cavity components and alleviate the influence of human factors.
Recently, inspired by the success of neural networks, many machine-learning-powered methods have been proposed for irregular volume measurement, such as those in [9,10]. A neural network involves two processes: forward propagation and back propagation. Forward propagation outputs the predicted value and the loss value [11]. Back propagation, in contrast, is based on the gradient descent algorithm: it distributes the loss to each neuron and updates each neuron's weights and thresholds [12]. However, in real-world irregular volume measurement, the back-propagation procedures in most existing methods are very time-consuming and require large amounts of data. With poor-quality ("garbage in") data, these methods may also face vanishing and exploding gradient problems. Furthermore, they rely on prior human experience to choose learning rates and other hyperparameters [13,14].
More recently, balancing training efficiency and measurement accuracy has remained a challenge for most existing neural networks, because the limited samples collected in the real world cannot meet the training requirements of deep neural networks. The Hilbert–Schmidt independence criterion (HSIC) has been proposed for training deep neural networks, and current studies have reported several extensions [15,16]. These studies show that the HSIC is comparable to cross-entropy-based back-propagation methods on popular classification datasets. In these systems, the hidden outputs are not required to take the form of the classification labels; a single layer trained with SGD (without back propagation) reformats the information, further improving the training efficiency of the model. HSIC-based methods avoid the vanishing and exploding gradients of back propagation and achieve fast convergence, strong generalization ability, and simple calculations [17].
1.2. Contributions
To solve the above problems, especially the challenges in real-world applications, we analyze the structural characteristics of irregular cavities and propose neural networks based on micro-compressed air for volume measurement. We design a corresponding measurement system and achieve fast and accurate measurement of irregular cavity volume with the proposed neural networks. The network is built on fully connected neural networks (FCNNs), which offer high training efficiency, low requirements on the input data, and few structural restrictions. Powered by the FCNN and the HSIC, the proposed method achieves leading performance in real-world volume measurement applications.
The main contributions of the paper are presented as follows:
(1). We design a micro-compressed air method to collect parameters related to the irregular cavity volume. To ensure that the parts to be measured are not damaged, the air sealed at atmospheric pressure in the irregular cavity part is only slightly compressed while the measurement parameters are collected.
(2). We propose a method to analyze the main controlling factors affecting the volume detection of irregular cavity parts. We screen seven main characteristic parameters, including pressure, temperature, humidity, and gas equilibration time. We carry out linear and nonlinear correlation analysis, feature selection, and normalization of the characteristic parameters. On this basis, we establish an irregular cavity volume measurement model based on FCNNs and the HSIC.
(3). During the training process, we propose a new training scheme based on the HSIC. This scheme addresses the challenges of traditional BP-based methods, reducing the error as much as possible and bringing the predicted values closer to the ground truth.
(4). We conduct extensive experiments to evaluate the proposed neural network. We build a dataset for irregular volume measurement from samples collected in real-world applications. The results show the effectiveness and superior performance of the proposed method.
The paper is organized as follows. Section 2 presents a volume measurement method based on micro-compressed air. Section 3 details the proposed neural network. Section 4 presents the experimental results and discussion. Section 5 concludes the paper and gives future study directions.
2. Preliminary
According to the ideal gas equation of state [18], when a fixed quantity of gas is in equilibrium, its mass is conserved, and its pressure, volume, and temperature have the following relationship:
$PV = ZmRT$ (1)
where P is the pressure of the gas (Pa), V is the volume of the gas (mL), Z is the compression coefficient of the gas (dimensionless), m is the amount of gas (mol), T is the temperature of the gas (K), and R is the gas constant (R = 8.31 J/(mol·K)). The detection principle is shown in Figure 1.
Experimental process:
(1). Under normal temperature and pressure, the irregular cavity part to be tested is filled with air at normal pressure;
(2). Seal the air in the irregular cavity component to be tested. Record the ambient atmospheric pressure P_0, the stable differential pressure ΔP_1 of the gas in the cavity of the component to be tested, and the temperature T_1;
(3). The precision piston is controlled to extend completely into the cavity of the part to be tested, slightly compressing the gas. The volume of the piston that completely enters the irregular cavity part is recorded as V_0;
(4). After thermodynamic equilibrium is reached, the experimental data are recorded, including the ambient atmospheric pressure P_0, the stable differential pressure ΔP_2 of the gas in the part, and the temperature T_2.
As shown in Figure 1, under normal pressure, the volume of air inside the irregular cavity is denoted V_x, its pressure is P_0 + ΔP_1, and its temperature is T_1. After the sealed gas is micro-compressed by the piston of volume V_0, the air volume becomes V_x − V_0, the pressure becomes P_0 + ΔP_2, and the temperature becomes T_2. The state equation after micro-compression is as follows:
$\frac{(P_0 + \Delta P_1) V_x}{T_1} = \frac{(P_0 + \Delta P_2)(V_x - V_0)}{T_2}$ (2)
where V_x is the volume of the irregular cavity part to be measured. From Equation (2), it can be deduced that:
$V_x = \frac{T_1 (P_0 + \Delta P_2) V_0}{T_1 (P_0 + \Delta P_2) - T_2 (P_0 + \Delta P_1)}$ (3)
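To make the computation explicit, the following minimal Python sketch evaluates Equation (3). The symbol names follow the reconstruction above, and the numerical readings in the example are illustrative only, not measurements from the paper.

```python
# Minimal sketch of Equation (3): recovering the cavity volume from the
# measured quantities. Variable names are illustrative, not from the paper.

def cavity_volume(p0, dp1, t1, dp2, t2, v_piston):
    """Estimate the cavity volume V_x (same unit as v_piston).

    p0       : ambient atmospheric pressure (Pa)
    dp1, dp2 : stable differential pressures before/after micro-compression (Pa)
    t1, t2   : absolute gas temperatures before/after micro-compression (K)
    v_piston : displaced volume V_0 of the fully inserted piston (mL)
    """
    p1 = p0 + dp1  # absolute pressure before micro-compression
    p2 = p0 + dp2  # absolute pressure after micro-compression
    # From (P1 * Vx) / T1 = (P2 * (Vx - V0)) / T2, i.e., Equation (2):
    return (t1 * p2 * v_piston) / (t1 * p2 - t2 * p1)


# Example with made-up but plausible readings:
print(cavity_volume(p0=101_325, dp1=0.0, t1=293.15,
                    dp2=2_500.0, t2=293.40, v_piston=50.0))
```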
According to Equations (2) and (3), the volume of the component to be tested is directly related to the atmospheric pressure, and the pressure and temperature before and after the micro-compression of the air inside the component. In addition, Table 1 and Figure 2 illustrate that the volume is also related to the time it takes for the air in the component to reach equilibrium after being compressed.
Table 1 and Figure 2 show that the detection volume is closer to the actual value as the equilibration time becomes longer. However, the testing time should not be too long. If the test time is prolonged, the pressure of the gas in the irregular cavity components is easily affected by temperature changes. In addition, the gas temperature and pressure inside the container tend to increase with the temperature of the environment, which makes the data change constantly. Therefore, the equilibration time was taken as 30 s.
Experiments show that environmental factors such as humidity and atmospheric pressure also bring some deviations to the gas state equation, just like temperature. This has a specific influence on the accuracy and stability of cavity volume detection. There is a complex relationship between these parameters and the cavity volume of the component under test, which is a nonlinear problem. As shown in Table 2, this paper uses atmospheric pressure, humidity, and other characteristic indicators as the input characteristic parameters of the volume prediction model of irregular cavity components, according to the experimental results.
3. Method
A neural network is a mathematical or computational model that mimics the structure and function of biological neural systems [19] and is used to estimate or approximate functions. A fully connected network is one of the connection schemes, consisting of input, hidden, and output layers [20,21,22,23,24]. An FCNN has strong nonlinear fitting ability and can approximate real-world relationships with high accuracy [25]. However, training it with gradient descent is time-consuming and memory-intensive because of the constant search for suitable hyperparameters; the training time is long, and the generalization is poor. This paper uses the HSIC algorithm to replace the gradient back-propagation algorithm of the neural network. Compared with traditional gradient back propagation, the convergence speed and accuracy are significantly improved, the generalization ability is strong, and the amount of computation and the memory footprint are greatly reduced. Therefore, this paper proposes a volume prediction model for irregular cavity parts based on the FCNN and the HSIC algorithm.
3.1. Preprocessing of Feature Data for Volume Prediction of Irregular Cavity Parts
To improve the training efficiency and prediction accuracy of the network, it is necessary to normalize the original data; the predicted values are then denormalized when comparing experimental results. There are two main data normalization methods: maximum value normalization and mean-variance normalization [26]. The calculation formula of maximum value normalization is as follows:
$x' = \frac{x - x_{\min}}{x_{\max} - x_{\min}}$ (4)
The principle of maximum value normalization is that all data are mapped to the interval [0, 1], which is suitable for cases where the data distribution has apparent boundaries. However, it is susceptible to outliers, which can skew the normalized data. The calculation formula of mean-variance normalization is as follows:
$x' = \frac{x - \bar{x}}{\sigma}$ (5)
where x̄ and σ are the mean and standard deviation of all sample data, respectively. Mean-variance normalization adjusts the data to a distribution with a mean of 0 and a variance of 1. It is not easily affected by outliers and is suitable for data whose distribution has no apparent boundaries or contains outliers. In this study, the maximum–minimum processing method is chosen. Since a value of "0" in the traditional formulation tends to have a disproportionate impact on the results, this paper improves the maximum–minimum normalization method. The original data of the irregular cavity component volume are preprocessed, and the model's predicted values are reversely preprocessed. The calculation formula of the data preprocessing method is as follows:
(6)
The calculation formula of the reverse preprocessing method is shown as follows:
(7)
where x is the original data of the irregular cavity component volume, x′ is the corresponding normalized data, and x_max and x_min are the maximum and minimum values of each feature of the original data, respectively.

3.2. Establishment of the Volume Prediction Model of Irregular Cavity Components with Fully Connected Neural Network
Similar to the BP neural network structure, the FCNN performs a weighted summation of the input components and applies a corresponding activation function. Before constructing the volume prediction model, the basic parameters of the network are determined according to its characteristics, such as the activation function, the number of neurons, the number of network layers, the learning rate, the training step size, the number of training iterations, and the optimizer. The number of neurons has a significant impact on the learning and fitting capabilities of the model. Too few neurons or hidden layers leave the model unable to mine the hidden features of the volume information of the parts, whereas too many make the model redundant, difficult to train, and prone to overfitting. Therefore, the basic parameters of the neural network need to be adapted to the research object.
Compared with the traditional neural network, the FCNN emphasizes the depth of the model and usually has multiple hidden layers. This paper constructed a neural network with one input layer, several hidden layers, and one output layer; its internal structure is shown in Figure 3. The seven selected feature indicators determine the number of input neurons to be 7, matching the preprocessed data. According to the Kolmogorov theorem and Hecht-Nielsen theory, the number of hidden layers and their nodes were determined by trial and error. The final number of hidden layers was 5, with 32, 64, 128, 64, and 32 nodes. The model outputs the predicted volume of the irregular cavity component, so the output layer has a single neuron, and the resulting network topology is 7-32-64-128-64-32-1.
The neural network training function adopts a gradient optimization method with an adaptive learning rate and a momentum factor. The activation function between the input and the first hidden layer is the sigmoid function, the activation function between the last hidden layer and the output layer is the tanh function, and the activation function between hidden layers is the leaky rectified linear unit (LeakyReLU), which allows a small gradient for negative inputs. The number of training steps was initially set to 10,000, and the amount of data fed into the network per step (batch size) was set to 32.
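As a concrete reading of this configuration, the sketch below builds the 7-32-64-128-64-32-1 topology in Keras with the stated activations (sigmoid after the input layer, LeakyReLU between hidden layers, tanh before the output layer) and the MAE loss of Equation (8). The optimizer choice (Adam), the default LeakyReLU slope, and the exact placement of the activations are our assumptions; the paper does not publish its implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, models


def build_volume_model(lr=6e-3):
    """FCNN with the 7-32-64-128-64-32-1 topology described in Section 3.2."""
    model = models.Sequential([
        tf.keras.Input(shape=(7,)),              # 7 feature parameters (Table 2)
        layers.Dense(32, activation="sigmoid"),  # input -> first hidden layer
        layers.Dense(64), layers.LeakyReLU(),    # hidden -> hidden: LeakyReLU
        layers.Dense(128), layers.LeakyReLU(),
        layers.Dense(64), layers.LeakyReLU(),
        layers.Dense(32, activation="tanh"),     # last hidden layer before output
        layers.Dense(1),                         # predicted (normalized) volume
    ])
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
                  loss="mae")                    # Equation (8)
    return model


model = build_volume_model()
model.summary()
```

Training this model with model.fit(...) and a batch size of 32 corresponds to the BP baseline; the proposed method instead trains the hidden layers with the HSIC objective of Section 3.3.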
The mean absolute error (MAE) is used as the loss function to measure the gap between the model's predictions and the actual values [27,28]. It is calculated as shown in Equation (8):
$\mathrm{MAE} = \frac{1}{n} \sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$ (8)
where y_i and ŷ_i are the actual volume value and the value predicted by the model, respectively, in milliliters, and n is the batch size.

3.3. HSIC Bottleneck Method
In this paper, the HSIC bottleneck method is adopted to replace the gradient back-propagation algorithm of the FCNN in order to avoid the inefficiency and poor generalization of standard FCNN training [24]. The loss function is built to maximize the dependence between the hidden-layer outputs and the labels while minimizing the dependence between the hidden-layer outputs and the input. In this way, the output can be predicted from the fewest input features, making the hidden-layer features more efficient. This improved training method helps prevent overfitting and improves generalization.
The traditional formulation of this idea uses mutual information theory, in the form of the information bottleneck (IB). The IB principle encapsulates the concept of minimal sufficient statistics: it expresses the trade-off between the information retained about the input and the information required for an optimal prediction of the output. The optimal solution can be obtained by:
$\min_{p(z|x)} \; I(X; Z) - \beta I(Z; Y)$ (9)
where X and Y represent the input and the label, respectively, Z represents the output of the hidden layer, β represents the Lagrangian multiplier, I(X; Z) represents the mutual information between X and Z, and I(Z; Y) represents the mutual information between Z and Y. The formula shows that the IB mainly retains the label-relevant information in the hidden layer while compressing the input features. In practice, the IB is difficult to compute for several reasons. If the input signal is continuous, the mutual information is infinite unless a noise signal is added to the network. Many algorithms therefore bin the input data, which does not scale to high-dimensional data, and different binning rules lead to different results. Additional complications arise from the differences between discrete and continuous data and between discrete and differential entropy. This study therefore uses the HSIC instead of the mutual information in the IB principle. Unlike mutual information estimation, the HSIC has a robust empirical estimator with time complexity O(l²), where l represents the number of input data points.
The HSIC bottleneck is formed as follows. We introduce the cross-covariance operator in a reproducing kernel Hilbert space (RKHS) and define the HSIC as the Hilbert–Schmidt norm of this operator. Let X and Y be two random variables, with samples drawn from the probability density functions of X and Y. Define two nonlinear feature maps φ(x) ∈ F and ψ(y) ∈ G, where F and G represent the RKHSs associated with X and Y, respectively. The corresponding kernel functions of F and G are:
$k_X(x_i, x_j) = \langle \varphi(x_i), \varphi(x_j) \rangle$ (10)
$k_Y(y_i, y_j) = \langle \psi(y_i), \psi(y_j) \rangle$ (11)
The cross-covariance operator for X and Y is defined as:
$C_{XY} = E_{xy}\left[ (\varphi(x) - E_x[\varphi(x)]) \otimes (\psi(y) - E_y[\psi(y)]) \right]$ (12)
where ⊗ represents the tensor product, and E_{xy}, E_x, and E_y represent mathematical expectations. The HSIC is defined as the squared Hilbert–Schmidt norm of the cross-covariance operator:
$\mathrm{HSIC}(X, Y) = \left\| C_{XY} \right\|_{\mathrm{HS}}^2$ (13)
where E_{xy} represents the joint mathematical expectation over X and Y. For l pairs of data $\{(x_i, y_i)\}_{i=1}^{l}$, the empirical estimate of the HSIC is:
$\mathrm{HSIC}(X, Y) = (l - 1)^{-2} \, \mathrm{tr}(K_X H K_Y H)$ (14)
where tr(·) represents the trace of a matrix, K_X and K_Y represent the kernel matrices of X and Y, respectively, $H = I - \frac{1}{l} \mathbf{1}\mathbf{1}^{T}$ is the centering matrix, and $\mathbf{1}$ is the all-one vector. In an FCNN composed of several hidden layers, Z_i denotes the output of the i-th hidden layer, whose width is the number of units in that layer. For each batch, the size of the hidden-layer output matrix is b × d_i, where b is the batch size and d_i is the number of units in the i-th hidden layer. When applying the IB principle to the objective function, the HSIC is used instead of the mutual information:
$\min_{Z_i} \; \mathrm{HSIC}(Z_i, M) - \beta \, \mathrm{HSIC}(Z_i, N)$ (15)
where M is the input data, N is the label data, and β is the Lagrangian multiplier. According to Equation (14), the HSIC terms in Equation (15) can be written as:
$\mathrm{HSIC}(Z_i, M) = (l - 1)^{-2} \, \mathrm{tr}(K_{Z_i} H K_M H)$ (16)
$\mathrm{HSIC}(Z_i, N) = (l - 1)^{-2} \, \mathrm{tr}(K_{Z_i} H K_N H)$ (17)
Equations (15)–(17) show that the optimal hidden output Z_i balances discarding information that is redundant with respect to the input against retaining maximum dependence on the labels. Ideally, when the objective in Equation (15) converges, the information needed to predict the labels is preserved, while the redundant information that leads to overfitting is eliminated.
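To make Equations (14)–(17) concrete, the following NumPy sketch computes the empirical HSIC estimator and the layer-wise bottleneck objective. The Gaussian kernel and its bandwidth are assumptions made for illustration; the paper does not state its kernel choice here.

```python
import numpy as np


def gaussian_kernel(x, sigma=1.0):
    """Kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 * sigma^2)) for x of shape (l, d)."""
    sq = np.sum(x ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))


def hsic(x, y, sigma=1.0):
    """Empirical HSIC of Equation (14): (l - 1)^{-2} tr(K_X H K_Y H)."""
    l = x.shape[0]
    h = np.eye(l) - np.ones((l, l)) / l          # centering matrix H
    return np.trace(gaussian_kernel(x, sigma) @ h
                    @ gaussian_kernel(y, sigma) @ h) / (l - 1) ** 2


def hsic_bottleneck_loss(z, x, y, beta=80.0, sigma=1.0):
    """Layer-wise objective of Equation (15) for one hidden output Z_i."""
    return hsic(z, x, sigma) - beta * hsic(z, y, sigma)
```

Here z, x, and y are 2-D arrays holding one batch of hidden outputs, inputs, and labels, respectively; β = 80 matches the setting used in the experiments of Section 4.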
4. Experiments
4.1. Experimental Settings
The project was built under TensorFlow and Keras frameworks. For fair comparison and comprehensive evaluation, we followed the popular experimental settings. First, we set the same parameters for the methods based on traditional BP training and non-BP training. Next, we tested the convergence speed and final prediction results under different learning rates. Finally, we compared the proposed model with state-of-the-art models using the same hyperparameter settings to analyze the performance.
This paper aims to improve the performance of the proposed model in real-world applications. We designed a highly adaptable irregular cavity volume database by analyzing the data types. The characteristic parameters that affect the volume of the irregular cavity to be measured serve as the model inputs, and the output variable is the volume. During real-world production, we collected 2718 sets of samples, shuffled their order, and took 2000 sets as the training set, 360 sets as the validation set, and the remaining 358 sets as the test set.
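For illustration, a minimal sketch of this split is given below; the file name, column order, and random seed are placeholders rather than details from the paper.

```python
import numpy as np

# Hypothetical layout: 7 feature columns (Table 2) followed by the measured volume.
data = np.loadtxt("cavity_measurements.csv", delimiter=",")  # placeholder file name

rng = np.random.default_rng(seed=0)
rng.shuffle(data)                                  # shuffle the 2718 collected samples

features, volume = data[:, :7], data[:, 7]
x_train, y_train = features[:2000], volume[:2000]              # 2000 training samples
x_val, y_val = features[2000:2360], volume[2000:2360]          # 360 validation samples
x_test, y_test = features[2360:], volume[2360:]                # remaining 358 test samples
```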
4.2. Ablation Studies
To evaluate the contribution of each proposed component to the system, we compared the results of different methods, including the FCNN with the HSIC, the FCNN without the HSIC, and an SVM. Following the popular training setting, the Lagrangian multiplier β in the HSIC objective was set to 80.
Many ablation experiments were conducted under different parameter settings to find the best scheme for integrating the non-BP algorithm into the proposed framework. The experiments used a batch size of 32 and a learning rate of 0.006. The proposed HSIC objective trains each layer individually, enabling per-layer optimization and parallel computation without propagating gradients between layers. To evaluate the proposed non-BP algorithm, we compared the accuracy and loss values of the proposed method and the traditional BP training method, as shown in Figure 4 and Figure 5.
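The sketch below illustrates such a layer-wise (non-BP) update in TensorFlow: each hidden layer minimizes its own HSIC-bottleneck objective, and no gradient crosses layer boundaries. It is a simplified reading of HSIC-bottleneck training [17]; the Gaussian kernel, its bandwidth, the plain SGD optimizer, and the ReLU activations are placeholders rather than the exact configuration used in the experiments.

```python
import tensorflow as tf


def tf_hsic(a, b, sigma=1.0):
    """Empirical HSIC of Equation (14) with Gaussian kernels.

    a and b are float32 tensors of shape (batch, features); labels should be
    reshaped to (batch, 1) before calling.
    """
    def kernel(m):
        sq = tf.reduce_sum(m * m, axis=1, keepdims=True)
        d2 = sq + tf.transpose(sq) - 2.0 * tf.matmul(m, m, transpose_b=True)
        return tf.exp(-d2 / (2.0 * sigma ** 2))

    ka, kb = kernel(a), kernel(b)
    l = tf.shape(ka)[0]
    lf = tf.cast(l, tf.float32)
    h = tf.eye(l) - tf.ones_like(ka) / lf          # centering matrix H
    return tf.linalg.trace(ka @ h @ kb @ h) / (lf - 1.0) ** 2


# One dense block per hidden layer (activations simplified to ReLU here).
hidden_sizes = [32, 64, 128, 64, 32]
blocks = [tf.keras.layers.Dense(n, activation="relu") for n in hidden_sizes]
optimizers = [tf.keras.optimizers.SGD(learning_rate=6e-3) for _ in blocks]


def layerwise_train_step(x_batch, y_batch, beta=80.0):
    """One non-BP step: each layer optimizes HSIC(Z_i, X) - beta * HSIC(Z_i, Y)."""
    z = x_batch
    for block, opt in zip(blocks, optimizers):
        z_in = tf.stop_gradient(z)                 # detach from earlier layers
        with tf.GradientTape() as tape:
            z = block(z_in)
            loss = tf_hsic(z, x_batch) - beta * tf_hsic(z, y_batch)
        grads = tape.gradient(loss, block.trainable_variables)
        opt.apply_gradients(zip(grads, block.trainable_variables))
    return z   # fed to a final output layer trained separately, as noted in Section 1.1
```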
Figure 4 shows that the traditional BP-based method did not reach convergence at 10,000 iterations, whereas the proposed method achieved convergence at 3000 iterations. The accuracy of the proposed method reached 0.9912, which is higher than that of the BP-based network. Figure 5 shows the variation of the loss function of the proposed method (MAE, red) and the traditional BP-based method (green) with the results of each iteration. It can be seen that the proposed algorithm achieved convergence at 3000 iterations and had higher accuracy. The proposed neural network performed better than traditional methods for irregular volume measurements.
During the training process, as the number of training steps increases, the MAE value output by the model drops sharply, and the change curves of the loss functions of the training set and the validation set are shown in Figure 6.
Figure 6 shows the training set loss (MAE, blue) and the validation set loss (red) of the fully connected neural network with the HSIC model at each iteration. During training, the loss kept decreasing, falling rapidly in the first 2500 iterations before the decrease slowed significantly. At 3000 iterations, the model reached a convergence state, and the loss value was 0.135. Ultimately, the training and validation losses were almost equal, remaining stable at 5.0 × 10−3. The validation loss showed a slow upward trend around the 4300th iteration and finally stabilized after 5000 iterations. Training stops after 10,000 iterations, by which point the model shows good comprehensive performance.
As shown in Table 3 and Table 4, the MAE was smallest when the learning rate was 0.006, and the 6-layer network achieved the smallest MAE with far less computation than the 8-layer network.
Many nonlinear functions can serve as activation functions in deep neural networks. We tested the performance using the ELU, tanh, and ReLU functions as activation functions, as shown in Figure 7. The results with the tanh and ELU functions were almost the same: although the accuracy curve was stable, the accuracy did not improve significantly as the number of iterations increased, and the model became stuck in a local optimum, performing worse than with the ReLU function. When the ReLU function was used as the activation function, the accuracy curve was flat once stable, and the accuracy was significantly higher than in the other two groups.
The learning rate directly affects how fast the model converges to a local minimum (i.e., achieves the best accuracy). A larger learning rate makes the neural network learn faster, but if it is too large, the loss stops falling near the extreme value and oscillates repeatedly around it; if it is too small, convergence is slow and the network may become stuck in a local optimum. We tested the training performance of the proposed algorithm at different learning rates, and the results are shown in Figure 8. The models converged at almost the same rate when the learning rates were 2.0 × 10−3 and 6.0 × 10−3, with the best performance at 6.0 × 10−3, whereas convergence was significantly slower at 5.0 × 10−4. Therefore, for better accuracy and latency, we set the learning rate to 6.0 × 10−3.
4.3. Comparison and Application
We compared the proposed method with the FCNN [29] and SVM [30] on the collected dataset, with the learning rate set to 6.0 × 10−3 and a batch size of 32; the results are shown in Figure 9. The proposed method clearly converged faster than the other methods. Compared with BP-based methods, the proposed HSIC-powered method separates hidden signals into more independent neuron representations, suggesting that the HSIC helps the extracted features become more independent and more easily associated with their labels.
The per-step running time of the compared methods and their final results on the test set are shown in Table 5. The results show that the proposed model balances accuracy and efficiency.
5. Conclusions
In this paper, to address the challenges of manual irregular cavity volume measurement, i.e., that it is labor-intensive, inefficient, and prone to corroding the measured parts and devices, we proposed neural networks for measuring irregular cavity volume based on micro-compressed air. A new dataset was built from irregular cavity data collected in the production environment and preprocessed to improve detection accuracy and reduce the influence of external environmental factors. The proposed method builds on the FCNN and the HSIC. After training, validation, and testing, we analyzed extensive experimental results. They show that the proposed method predicts irregular cavity volume well, with stable accuracy and loss curves. The proposed model converged stably and achieved an accuracy of 0.9912 on the validation set. The model is practical and provides a new reference for studying irregular cavity volume measurement, and it can help achieve batch-to-batch consistency and product stability in industrial production. The results show that the proposed technologies have good application prospects.
In a future study, we will introduce a feature fusion scheme to extract more discriminative features. This will enable the proposed system to reduce measurement errors. Next, we will design new loss functions to achieve a balance of fast convergence and improved accuracy.
Conceptualization, H.G. and Y.J.; methodology, X.Z.; software, X.Z.; validation, W.Y., B.L. and Z.L.; formal analysis, X.Z.; investigation, X.Z.; resources, Y.J.; data curation, X.Z.; writing—original draft preparation, X.Z.; writing—review and editing, H.G.; visualization, X.Z.; supervision, Y.J.; project administration, Y.J.; funding acquisition, Y.J. All authors have read and agreed to the published version of the manuscript.
This statement is excluded because the study did not require ethical approval.
Not applicable.
The data used to support the findings of this study are included within the article.
The authors declare no conflict of interest.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Figure 1. The volume detection principle schematic of irregular cavity components.
Figure 3. The volume prediction model structure of fully connected neural network.
Figure 4. Accuracy curve: comparison of proposed non-BP and BP algorithm in the training process.
Figure 5. Loss curve: comparison of proposed non-BP and BP algorithm in the training process.
Effect of gas equilibration time on volume detection, where V0 is the actual value of the volume, V-15 is the volume value detected when the gas equilibration time is 15 s, and V-20 and V-30 are the values detected at equilibration times of 20 s and 30 s, respectively.
Group | V0 | V-15 | V-20 | V-30
---|---|---|---|---
1 | 2326 | 2326.16 | 2323.62 | 2327.63 |
2 | 2326 | 2318.77 | 2322.04 | 2326.60 |
3 | 2326 | 2320.37 | 2321.49 | 2325.88 |
4 | 2326 | 2321.07 | 2322.45 | 2325.59 |
5 | 2326 | 2321.78 | 2321.84 | 2326.32 |
6 | 2326 | 2319.85 | 2322.10 | 2325.80 |
7 | 2326 | 2322.14 | 2321.98 | 2325.99 |
8 | 2326 | 2321.99 | 2322.51 | 2325.66 |
9 | 2326 | 2324.02 | 2323.65 | 2326.01 |
10 | 2326 | 2319.89 | 2321.21 | 2325.45 |
Input characteristic parameters of irregular cavity components.
Number | Feature Parameters |
---|---|
1 | Atmospheric pressure |
2 | Atmospheric humidity |
3 | Temperature before micro-compression |
4 | Stable differential pressure before micro-compression |
5 | Temperature after micro-compression |
6 | Stable differential pressure after micro-compression |
7 | Gas equilibration time |
MAE values under different learning rates (lr).
lr | 0.0005 | 0.001 | 0.002 | 0.003 | 0.004 | 0.005 | 0.006 | 0.007 | 0.008
---|---|---|---|---|---|---|---|---|---
MAE | 0.011 | 0.010 | 0.009 | 0.007 | 0.006 | 0.006 | 0.005 | 0.006 | 0.006
MAE under different numbers of network layers.
Layers | Parameters | MAE |
---|---|---|
4 | 4096 | 1.112 |
6 | 20,480 | 0.005 |
8 | 86,016 | 0.005 |
Comparison of different methods on the collected dataset.
Method | Per Step CPU Time | Accuracy in Test Set |
---|---|---|
SVM | 0.326125 s | 0.76 |
FCNN | 0.576233 s | 0.85 |
Proposed | 0.176329 s | 0.99 |
References
1. Hao, G.U.; Jin-liang, F.E.; Yao-yu, Z.H.; Cun-liang, C.A.; Si-qi, L.I. High Precision Measurement of Cartridge Volume. Acta Armamentarii; 2015; 36, 758.
2. Strelnikova, E.; Gnitko, V.; Krutchenko, D.; Naumemko, Y. Free and forced vibrations of liquid storage tanks with baffles. Mod. Technol. Eng.; 2018; 3, pp. 15-52.
3. Rogovyi, A. Energy performances of the vortex chamber supercharger. Energy; 2018; 163, pp. 52-60. [DOI: https://dx.doi.org/10.1016/j.energy.2018.08.075]
4. Sun, Y.; Xue, Z.; Hashimoto, T.; Zhang, Y. Optically quantifying spatiotemporal responses of water injection-induced strain via downhole distributed fiber optics sensing. Fuel; 2021; 283, 118948. [DOI: https://dx.doi.org/10.1016/j.fuel.2020.118948]
5. Song, H.; Wang, Q.; Liu, M.; Cai, Q. A novel fiber Bragg grating vibration sensor based on orthogonal flexure hinge structure. IEEE Sens. J.; 2020; 20, pp. 5277-5285. [DOI: https://dx.doi.org/10.1109/JSEN.2020.2969559]
6. Bai, Y.; Zeng, J.; Huang, J.; Yan, Z.; Wu, Y.; Li, K.; Wu, Q.; Liang, D. Air pressure measurement of circular thin plate using optical fiber multimode interferometer. Measurement; 2021; 182, 109784. [DOI: https://dx.doi.org/10.1016/j.measurement.2021.109784]
7. Cripe, J.; Aggarwal, N.; Lanza, R.; Libson, A.; Singh, R.; Heu, P.; Follman, D.; Cole, G.D.; Mavalvala, N.; Corbitt, T. Measurement of quantum back action in the audio band at room temperature. Nature; 2019; 568, pp. 364-367. [DOI: https://dx.doi.org/10.1038/s41586-019-1051-4]
8. Cao, R.; Zhang, S.; Banthia, N.; Zhang, Y.; Zhang, Z. Interpreting the early-age reaction process of alkali-activated slag by using combined embedded ultrasonic measurement, thermal analysis, XRD, FTIR and SEM. Compos. Part B Eng.; 2020; 186, 107840. [DOI: https://dx.doi.org/10.1016/j.compositesb.2020.107840]
9. Sun, Y.; Yang, T.; Cheng, X.; Qin, Y. Volume measurement of moving irregular objects using linear laser and camera. Proceedings of the 2018 IEEE 8th Annual International Conference on CYBER Technology in Automation, Control, and Intelligent Systems (CYBER); Tianjin, China, 19–23 July 2018; pp. 1288-1293.
10. Kalantari, D.; Jafari, H.; Kaveh, M.; Szymanek, M.; Asghari, A.; Marczuk, A.; Khalife, E. Development of a machine vision system for the determination of some of the physical properties of very irregular small biomaterials. Int. Agrophys.; 2022; 36, pp. 27-35. [DOI: https://dx.doi.org/10.31545/intagr/145920]
11. Arulmurugan, R.; Anandakumar, H. Early detection of lung cancer using wavelet feature descriptor and feed forward back propagation neural networks classifier. Computational Vision and Bio Inspired Computing; Springer: Cham, Switzerland, 2018; pp. 103-110.
12. Lyu, Z.; Yu, Y.; Samali, B.; Rashidi, M.; Mohammadi, M.; Nguyen, T.N.; Nguyen, A. Back-propagation neural network optimized by K-fold cross-validation for prediction of torsional strength of reinforced Concrete beam. Materials; 2022; 15, 1477. [DOI: https://dx.doi.org/10.3390/ma15041477]
13. Wang, X.; An, S.; Xu, Y.; Hou, H.; Chen, F.; Yang, Y.; Zhang, S.; Liu, R. A back propagation neural network model optimized by mind evolutionary algorithm for estimating Cd, Cr, and Pb concentrations in soils using Vis-NIR diffuse reflectance spectroscopy. Appl. Sci.; 2019; 10, 51. [DOI: https://dx.doi.org/10.3390/app10010051]
14. Shaik, N.B.; Pedapati, S.R.; Taqvi, S.A.; Othman, A.R.; Dzubir, F.A. A feed-forward back propagation neural network approach to predict the life condition of crude oil pipeline. Processes; 2020; 8, 661. [DOI: https://dx.doi.org/10.3390/pr8060661]
15. Jiang, H.; Tian, H.; Hua, Y.; Tang, B. Research on control of intelligent vehicle human-simulated steering system based on HSIC. Appl. Sci.; 2019; 9, 905. [DOI: https://dx.doi.org/10.3390/app9050905]
16. Ahmad, M.; Mazzara, M.; Distefano, S. Regularized cnn feature hierarchy for hyperspectral image classification. Remote Sens.; 2021; 13, 2275. [DOI: https://dx.doi.org/10.3390/rs13122275]
17. Ma, W.D.; Lewis, J.P.; Kleijn, W.B. The HSIC bottleneck: Deep learning without back-propagation. Proceedings of the AAAI Conference on Artificial Intelligence 2020; New York, NY, USA, 7–12 February 2020; Volume 34, pp. 5085-5092.
18. Kuzevičová, Ž.; Gergeľová, M.; Kuzevič, Š.; Palková, J. Spatial interpolation and calculation of the volume an irregular solid. Int. J. Eng.; 2014; 4, 8269.
19. Ghimire, D.; Kil, D.; Kim, S.H. A Survey on Efficient Convolutional Neural Networks and Hardware Acceleration. Electronics; 2022; 11, 945. [DOI: https://dx.doi.org/10.3390/electronics11060945]
20. Wang, H.; Shi, H.; Lin, K.; Qin, C.; Zhao, L.; Huang, Y.; Liu, C. A high-precision arrhythmia classification method based on dual fully connected neural network. Biomed. Signal Process. Control; 2020; 58, 101874. [DOI: https://dx.doi.org/10.1016/j.bspc.2020.101874]
21. Ganju, K.; Wang, Q.; Yang, W.; Gunter, C.A.; Borisov, N. Property inference attacks on fully connected neural networks using permutation invariant representations. Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security; Toronto, ON, Canada, 15–19 October 2018; pp. 619-633.
22. Aspri, M.; Tsagkatakis, G.; Tsakalides, P. Distributed Training and Inference of Deep Learning Models for Multi-Modal Land Cover Classification. Remote Sens.; 2020; 12, 2670. [DOI: https://dx.doi.org/10.3390/rs12172670]
23. Aspri, M.; Tsagkatakis, G.; Panousopoulou, A.; Tsakalides, P. On Realizing Distributed Deep Neural Networks: An Astrophysics Case Study. Proceedings of the 2019 27th European Signal Processing Conference (EUSIPCO); A Coruna, Spain, 2–6 September 2019; pp. 1-5.
24. Tsagkatakis, G.; Aidini, A.; Fotiadou, K.; Giannopoulos, M.; Pentari, A.; Tsakalides, P. Survey of deep-learning approaches for remote sensing observation enhancement. Sensors; 2019; 19, 3929. [DOI: https://dx.doi.org/10.3390/s19183929]
25. Kobayashi, K.; Bolatkan, A.; Shiina, S.; Hamamoto, R. Fully-connected neural networks with reduced parameterization for predicting histological types of lung cancer from somatic mutations. Biomolecules; 2020; 10, 1249. [DOI: https://dx.doi.org/10.3390/biom10091249]
26. Singh, D.; Singh, B. Investigating the impact of data normalization on classification performance. Appl. Soft Comput.; 2020; 97, 105524. [DOI: https://dx.doi.org/10.1016/j.asoc.2019.105524]
27. Tomczyk, K.; Piekarczyk, M.; Sokal, G. Radial Basis Functions Intended to Determine the Upper Bound of Absolute Dynamic Error at the Output of Voltage-Mode Accelerometers. Sensors; 2019; 19, 4154. [DOI: https://dx.doi.org/10.3390/s19194154] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/31557918]
28. Tomczyk, K. Polynomial approximation of the maximum dynamic error generated by measurement systems. Prz. Elektrotech; 2019; 95, pp. 124-127. [DOI: https://dx.doi.org/10.15199/48.2019.06.22]
29. Yuan, C.; Chen, J.; Chen, M.; Gu, W. A Lightweight CNN Using HSIC Fine-Tuning for Fingerprint Liveness Detection. Proceedings of the Chinese Conference on Biometric Recognition 2021; Shanghai, China, 10–12 September 2021; Springer: Cham, Switzerland, 2021; pp. 240-247.
30. Yue, J.; Xu, K.J.; Liu, W.; Zhang, J.G.; Fang, Z.Y.; Zhang, L.; Xu, H.R. SVM based measurement method and implementation of gas-liquid two-phase flow for CMF. Measurement; 2019; 145, pp. 160-171. [DOI: https://dx.doi.org/10.1016/j.measurement.2019.05.051]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
Irregular cavity volume measurement is a critical step in industrial production. This technology is used in a wide variety of applications. Traditional studies, such as waterflooding-based methods, have suffered from the following shortcomings, i.e., significant measurement error, low efficiency, complicated operation, and corrosion of devices. Recently, neural networks based on the air compression principle have been proposed to achieve irregular cavity volume measurement. However, the balance between data quality, network computation speed, convergence, and measurement accuracy is still underexplored. In this paper, we propose novel neural networks to achieve accurate measurement of irregular cavity volume. First, we propose a measurement method based on the air compression principle to analyze seven key parameters comprehensively. Moreover, we integrate the Hilbert–Schmidt independence criterion (HSIC) into fully connected neural networks (FCNNs) to build a trainable framework. This enables the proposed method to achieve power-efficient training. We evaluate the proposed neural network in the real world and compare it with typical procedures. The results show that the proposed method achieves the top performance for measurement accuracy and efficiency.
1 School of Automobile and Traffic, Shenyang Ligong University, Shenyang 110159, China;
2 School of Information Science and Engineering, Shenyang Ligong University, Shenyang 110159, China
3 School of Automation and Electrical Engineering, Shenyang Ligong University, Shenyang 110159, China;
4 School of Equipment Engineering, Shenyang Ligong University, Shenyang 110159, China;