Abstract
In this study, a novel approach was proposed for predicting the interfacial gap in copper overlap joints using deep learning and multi-sensor fusion. In this method, an image sensor, a spectrometer, and an optical coherence tomography (OCT) sensor were used to develop and validate deep learning models under various gap conditions. The results revealed that the variation in melt pool dimensions, changes in keyhole behavior, intensity variations at specific wavelengths, and keyhole depth derived from the OCT data could be used to accurately predict the interfacial gap. Among the proposed models, a binary gap classification model achieved the highest accuracy of 98.8%. The spectrometer was the most effective sensor in this study, whereas the image and OCT sensors provided complementary data. The best performance was achieved by fusing all three sensors, which emphasizes the importance of sensor fusion for precise gap prediction. This study provides valuable insights into improving weld quality assessment and optimizing manufacturing processes.
1. Introduction
Optical coherence tomography (OCT) is increasingly recognized as a versatile technique for inline quality monitoring throughout different welding stages—pre-, in-, and post-process. In the pre-process stage, it enables efficient tracking of joint seam alignment [1]. During welding, OCT provides real-time monitoring of critical parameters such as melt pool dynamics [2] and keyhole depth [3,4,5]. In the post-process stage, it supports inspection of the solidified weld geometry [6] to ensure overall weld quality. Moreover, by scanning the OCT laser beam relative to the processing laser beam, OCT enables simultaneous measurement of keyhole depth and melt pool geometry, delivering highly precise in situ monitoring coaxial with the process laser beam [7].
OCT has been applied to enable in situ monitoring and closed-loop control of the laser welding process [5,8,9,10]. It has been successfully employed to measure weld penetration depth and keyhole geometry in real time with high spatial resolution. Blecher et al. [3] monitored keyhole depth using OCT and validated the accuracy of the measured data by comparing it with longitudinal metallographic cross-sections of various metal alloys. Fetzer et al. [11] correlated keyhole depth data from high-speed X-ray imaging with inline OCT measurements, improving accuracy through noise-reduction filters calibrated against X-ray ground truth. Kogel-Hollacher et al. [12] introduced a short-coherence interferometry-based technology for in situ keyhole depth measurements during laser processing, achieving deviations within ±10 µm under typical conditions. He et al. [13] developed a real-time laser welding depth inspection system based on spectral-domain OCT combined with HDBSCAN clustering, achieving an average depth measurement error below 5%. Schmoeller et al. [4,5] enhanced inline weld depth prediction and control by applying an intelligent algorithm to OCT signals within a keyhole, reducing the mean deviation between estimated and actual weld depths from 30% [4] to 1.7% [5]. Will et al. [2] employed an OCT sensor in laser welding at laser powers between 3000 W and 6000 W and welding speeds ranging from 6 m/min to 80 m/min, classifying weld stability based on measured surface topography with a resolution of approximately 0.08 mm. Collectively, these studies demonstrate that OCT provides significant advantages in terms of depth resolution, real-time feedback, and adaptability to different materials and process dynamics. These findings strongly support OCT as a robust and scalable sensing platform for advanced laser welding applications.
Despite its advantages, OCT faces limitations under the complex and dynamic conditions of laser welding. Its reliability declines particularly with highly reflective materials such as aluminum and copper, where beam scattering and weak signal return hinder accurate measurement. To address this, machine learning (ML) models have been increasingly adopted for real-time classification and regression tasks, as they can learn the discrepancy between predicted and actual values. For improved accuracy and robustness, ML models are often integrated with multi-sensor configurations, combining OCT with acoustic signals [14,15], thermo-cameras [16,17], image sensors [18], spectrometers [19,20], and photodiodes [21,22,23].
To address the limitations of OCT, recent studies have increasingly explored sensor fusion approaches supported by machine learning. Brezan et al. [22] demonstrated that integrating photodiodes with OCT achieved 87% accuracy in weld quality classification. Cao et al. [15] proposed a monitoring method for aluminum laser welding that combined acoustic and photodiode signals with convolutional neural network (CNN) algorithms, achieving 94.34% accuracy. Cai et al. [24] employed a high-speed camera with an image-fusion method based on two-dimensional CNN models to recognize the penetration state with an accuracy of up to 99.86%. Kim et al. [25] demonstrated the fusion of an image sensor and a spectrometer to estimate penetration depth during laser welding of DP780 steel, achieving a mean absolute error of 0.049 mm.
Copper welding, particularly of thin foils, has become increasingly important in electric vehicle battery modules, power electronics, and thermal management systems, where reliable joining is essential for both electrical and mechanical performance [26]. These joints are typically formed as non-visible overlap configurations (e.g., busbar-to-tab, tab-to-tab, or tab-to-collector plate). However, deformation of materials and tolerances in the jigging system can lead to the formation of interfacial gaps. Even a small gap can cause insufficient fusion, pore formation, or reduced joint strength, highlighting the necessity of in situ gap monitoring.
In this study, we propose a deep learning-based approach to predict interfacial gaps during overlap laser welding of copper sheets using a multi-sensor fusion framework. To overcome the inherent limitations of individual sensors, our sensing architecture integrates image, spectral, and geometric data obtained from an image sensor, a spectrometer, and an OCT sensor. The analysis focuses on the relative contribution of each modality to identify the most informative features for interfacial gap detection. Based on these insights, we establish a data-driven framework to enable intelligent, scalable weld quality monitoring in copper laser welding.
2. Experimental Setup and Data Preparation
2.1. Experimental Setup
The base materials used in the experiments were C1100 copper (thickness: 0.2 mm, Cu 99.98%) and C1020 copper (thickness: 1.0 mm, Cu 99.959%). All specimens were machined to dimensions of 50 mm × 150 mm (Figure 1). Laser welding was performed in an overlap joint configuration, with the C1100 sheet placed on top of the C1020 sheet.
To introduce interfacial gap conditions, feeler gauges with thicknesses ranging from 0.02 to 0.1 mm were inserted between the upper and lower sheets. These gauges were positioned symmetrically on either side of the weld line, 2.5 mm from the centerline, and extended 8 mm along the welding direction (Figure 1).
The laser beam was generated by a fiber laser (YLS-6000, IPG Photonics, Oxford, MA, USA) and delivered through a 200 μm optical fiber to a focusing optic (D30, IPG Photonics, Oxford, MA, USA) with a focal length of 200 mm. At the focal position, the measured beam diameter was 270 μm. Welding was conducted at laser powers of 2.0 kW, 2.25 kW, and 2.5 kW, while maintaining a constant travel speed of 9 m/min (Table 1). Two weldments were produced for each experimental condition. The laser beam was applied with a 10° push angle, and no shielding gas was used during welding (Figure 1).
To extract features related to the interfacial gap, three sensors—a CMOS camera (UI-6140CP-M-GL.Rev.2, IDS, Obersulm, Germany), a spectrometer (HR-4000, Ocean Optics, Dunedin, FL, USA), and an OCT sensor (LDD-700, IPG Photonics, Oxford, MA, USA)—were coaxially mounted on the focusing optic (Figure 2). The sensors were synchronized via a trigger system, allowing simultaneous acquisition of all data streams.
CMOS camera images were captured at a resolution of 202 × 472 pixels and a frame rate of 500 Hz, with an integration time of 0.829 ms. The images were filtered using a band-pass filter with a full width at half maximum (FWHM) of 10 nm and a maximum transmission loss of 15%. A 980 nm diode laser with an output power of 55 W was used as the illumination source to enhance image visibility near the molten pool. The illumination beam was projected onto the welds from the front side of the welding direction, forming an angle of approximately 60° with the process laser. Due to the incidence angle and working distance, the illuminated region on the specimen exhibited an elliptical shape, with minor and major axes of approximately 17 mm and 30 mm, respectively.
The spectrometer measured light intensity over a wavelength range of 194–1127 nm, with a sampling rate of 100 Hz, an optical resolution of 0.47 nm, and an integration time of 10 ms. The OCT system, based on the Michelson interferometer principle and operating in the 800–900 nm wavelength range, enabled measurements of keyhole depth, bead height, and bead width. The system specifications include an axial (vertical) resolution of 20–50 µm over a 6 mm axial field of view, and a lateral resolution of approximately 15 µm with a lateral scanning range of 10 mm. With a maximum sampling frequency of 250 kHz, the OCT system provided high-speed acquisition of geometric feature data.
2.2. OCT Performance Evaluation
To detect deviations in gap thickness, the accuracy of the OCT sensor must be finer than the thickness of the foil used. Since the resolution of the OCT sensor is highly dependent on the optical system, the accuracy and limitations of the OCT sensor were evaluated prior to the laser welding trials. To verify the resolution of the OCT system, gauge blocks were prepared with thickness intervals of 0.02 mm, ranging from 1.00 to 1.10 mm (1.00, 1.02, 1.04, 1.06, 1.08, and 1.10 mm). These gauge blocks, which have dimensions of 30 mm in width and 9 mm in length, were sequentially arranged as illustrated in Figure 3. Surface height measurements were conducted at travel speeds of 1 and 5 m/min using a linear motion stage, and each test was repeated three times to ensure measurement reliability. As depicted in Figure 3, the OCT sensor was operated in two modes depending on the measurement purpose: point measurement (OCTP) for detecting keyhole depth and line measurement (OCTL) for evaluating average bead height. The OCTL measurement was performed 3 mm away from the OCTP location, with a sampling interval of 10 μm. A reference point was located 10 mm away from the OCTP location. Since OCTP detects the keyhole depth located below the reference surface, its values are recorded as negative, whereas OCTL measures the bead height above the reference surface, resulting in positive values. The raw OCT data were processed using a moving-average smoothing method with a window size of 500 µm to reduce signal noise and extract geometric features. The OCT signal, originally sampled at 135 kHz, was downsampled to 500 Hz by averaging every 270 consecutive data points.
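The smoothing and downsampling steps described above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names are ours, and in practice the 500 µm window corresponds to a sample count determined by the scan speed.

```python
import numpy as np

def moving_average(signal, window):
    """Centered moving-average smoothing of a 1-D OCT height signal."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")

def downsample_by_averaging(signal, factor):
    """Downsample by averaging non-overlapping blocks of `factor` samples,
    e.g. 135 kHz -> 500 Hz with factor = 270, as described in the text."""
    n = (len(signal) // factor) * factor  # drop any incomplete trailing block
    return signal[:n].reshape(-1, factor).mean(axis=1)
```

With a 135 kHz source and a 500 Hz target, each output point is the mean of 270 consecutive raw samples.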
The as-measured scanning height of the gauge blocks is presented in Figure 4. Figure 4a displays the raw data obtained using the OCT system, revealing that the OCT sensor did not accurately detect the surfaces of the gauge blocks. Although the actual specimen heights consisted of six distinct levels, the measured heights did not show distinct separation. In Figure 4a, it should be noted that the very low height measurements at the block interfaces originated from the slight gaps between the gauge blocks. Figure 4b,c summarize the measured heights at travel speeds of 1 m/min and 5 m/min, respectively. At 1 m/min, percent errors of less than 1% were observed in all conditions except for the 1.06 mm condition, while at 5 m/min, errors exceeding 1% occurred in half of the conditions (three out of six). As the travel speed increased, the average percent error rose from 0.51% to 1.03%.
Figure 5 shows the modified height data obtained by applying moving-average smoothing with a window size of 500 μm to the raw OCT sensing data. After smoothing, the shape of the gauge blocks was detected more reliably, the deviation was significantly reduced, and fewer conditions showed errors exceeding 1% than in the non-smoothed data of Figure 4. Although the percent error still increased with travel speed, the smoothed profiles tracked the block geometry more consistently. The average percent error, however, did not uniformly improve: it was 0.68% at 1 m/min (compared with 0.51% before smoothing) and 1.02% at 5 m/min (compared with 1.03%).
The OCT sensor was well-suited for measuring geometrical features such as the keyhole; however, its resolution was constrained by the detection methodology and optical system. The optical sensor installed in the OCT system has a pixel size of 45 μm in both horizontal and vertical directions, which limited its ability to detect fine gaps smaller than 0.04 mm. As a result, the OCT system alone was insufficient to resolve the 20 μm height differences in the gauge blocks and the gap variations introduced during the experiment. These limitations highlight the need for improved sensing strategies.
2.3. Data Preparation
2.3.1. Multi-Sensor Data Acquisition
The CMOS image sensor was installed coaxially to observe the keyhole, molten pool, and weld bead. The CMOS images were resampled by averaging every 100 images to minimize the effect of weld pool fluctuation. As shown in Figure 6, a longer molten pool was observed when a gap existed at the interface compared to the condition without a gap. The difference in molten pool length between the gap and no-gap conditions increased with laser power: as the power varied from 2.0 to 2.5 kW, the deviation increased from 0.0 mm to 0.19 mm. However, the molten pool width and the keyhole diameter remained constant at 1.15 mm and 0.37 mm, respectively, regardless of the gap or laser power.
The spectral intensity corresponding to material emission (visible range), process laser beam reflection (around 1070 nm), illumination laser reflection (around 980 nm), and OCT laser reflection (800–900 nm range) was detected, as shown in Figure 7a. Both the wavelength distribution and the intensity changed significantly depending on the interfacial gap size, showing strong correlations at specific wavelengths. In the visible range, the copper spectral peak at 723.39 nm had a correlation coefficient of only 0.079, whereas in the IR range, the process laser wavelength at 1071.68 nm exhibited a much higher correlation coefficient of 0.6412 (Figure 7b). The spectral intensity at 1071.68 nm varied markedly with the gap condition, while that at 723.39 nm remained almost constant.
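The per-wavelength correlation analysis described above can be sketched as follows. This is a hypothetical implementation: the function name is ours, and the spectra/gap arrays stand in for the measured data.

```python
import numpy as np

def channelwise_correlation(spectra, gaps):
    """Pearson correlation of each wavelength channel with the gap size.
    spectra: (n_samples, n_channels) intensity matrix; gaps: (n_samples,).
    Channels with zero variance would need masking in practice."""
    s = spectra - spectra.mean(axis=0)
    g = gaps - gaps.mean()
    # r = cov(channel, gap) / (sigma_channel * sigma_gap), population std
    return (s * g[:, None]).sum(axis=0) / (spectra.std(axis=0) * gaps.std() * len(gaps))
```

Scanning the resulting coefficient vector over all 3648 channels identifies the most gap-sensitive wavelengths, such as the process laser band near 1070 nm.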
The OCT sensing data for keyhole depth and bead height showed only minor differences with respect to the size of the interfacial gap, as shown in Figure 8, suggesting limited suitability as standalone gap indicators. Correlation analysis revealed that the bead height exhibited a correlation coefficient of −0.276, whereas the keyhole depth showed a coefficient of only 0.068, indicating that the relationship between bead height and gap was the statistically more significant of the two.
2.3.2. Input Data Pre-Processing
For the development of the prediction model, data collected from each sensor were normalized in terms of sampling frequency. Coaxial images were acquired at 500 Hz and used in their raw form. Spectrometer signals, consisting of 3648 wavelength channels, were collected at 100 Hz and upsampled to 500 Hz using a Fourier-based interpolation method [27]. OCT signals, originally sampled at 135 kHz, were downsampled to 500 Hz by averaging the time-series data. As shown in Figure 9a,b, the transformed spectrometer and OCT data preserved the key characteristics of the original signals. The final input for the time-series prediction model consisted of a 472 × 202 pixel coaxial image, 3648 spectrometer features, and a single OCT signal. Python v3.8.8 was used for data processing, and all code was developed and executed in Jupyter Notebook v6.0.0.
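The Fourier-based upsampling of the 100 Hz spectrometer channels to 500 Hz can be illustrated with SciPy's `resample`, under the assumption that [27] refers to the standard frequency-domain zero-padding method; the sine wave below is a stand-in for one wavelength channel, not real data.

```python
import numpy as np
from scipy.signal import resample

# One second of a 100 Hz channel, upsampled x5 to 500 Hz.
# Fourier-based interpolation zero-pads the signal's spectrum, so the
# original frequency content is preserved and no new harmonics appear.
x = np.sin(2 * np.pi * np.arange(100) / 100)  # stand-in channel signal
x_500hz = resample(x, 5 * len(x))
```

For a bandlimited periodic signal like this, the upsampled sequence coincides with the original at every fifth sample, which is why this method preserves the key characteristics of the raw signal.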
2.3.3. Output Data Labeling
The interfacial gap during welding was precisely configured using feeler gauges. To verify the actual gap after welding, specimens were randomly sectioned (Figure 10) and the measured gap heights were compared to the nominal feeler gauge values. Across the three measured coupons per gap, the average deviation between the set and measured gaps was 6.67 μm.
Two classification models were developed to predict the interfacial gap: a binary classification model (B2 model) and a multi-class classification model (MC model). In the B2 model, the threshold was defined as 0.02 mm. Gaps of 0 mm were labeled as the “no gap” class, while all other cases were labeled as the “gap” class. In the MC model, six discrete classes were defined according to the preset gap sizes (Figure 10).
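The labeling scheme above can be written compactly as follows (a sketch; the function names are ours):

```python
GAP_LEVELS = [0.0, 0.02, 0.04, 0.06, 0.08, 0.1]  # preset gap sizes in mm

def binary_label(gap_mm):
    """B2 model target: 0 = "no gap" (0 mm), 1 = "gap" (>= 0.02 mm)."""
    return 0 if gap_mm == 0.0 else 1

def multiclass_label(gap_mm):
    """MC model target: index of the preset gap class (0-5)."""
    return GAP_LEVELS.index(round(gap_mm, 2))
```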
3. Deep Learning Model Structure
B2 and MC models were developed to predict the interfacial gap. Both models, shown in Figure 11, used a 2D-CNN for the image data, composed of two blocks of convolution, batch normalization, and max pooling layers. For the spectrometer data, a 1D-CNN architecture was used, consisting of three similar blocks. The OCT data were incorporated into the network as a scalar input. Depending on the OCT data used for training, the models were named the OCTP-MC model (using keyhole depth data) and the OCTL-MC model (using bead height data).
Each data modality provided complementary information to the network: the 2D-CNN extracted spatial features of the molten pool from coaxial images, while the 1D-CNN processed wavelength-dependent emission profiles from the spectrometer. The scalar OCT input supplied geometric information corresponding to either keyhole depth or bead height. After concatenation, the combined features enabled the model to learn correlations between optical emissions and geometric variations. The B2 model classified the presence or absence of a gap, whereas the multi-class (MC) model predicted six discrete gap intervals.
The outputs from the 2D-CNN (image), the 1D-CNN (spectrometer), and the scalar OCT input were flattened and concatenated. This combined feature vector was then fed into a fully connected network (FCN) with four dense layers. To prevent overfitting, kernel regularizers with parameters of L1 = 0.001 and L2 = 0.001 were applied to two of the dense layers. The rectified linear unit (ReLU) function was used as the activation function for all hidden nodes. For the output nodes, the sigmoid function was used in the B2 model, while the softmax function was used in the MC model.
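A Keras sketch of the described architecture is given below. The branch structure, the L1 = L2 = 0.001 regularization on two dense layers, and the output activations follow the text; the filter counts, kernel sizes, and dense-layer widths are our assumptions, as they are not specified in the paper.

```python
from tensorflow.keras import Input, Model, layers, regularizers

def build_b2_model():
    # 2D-CNN branch for coaxial CMOS images (472 x 202, grayscale):
    # two blocks of convolution, batch normalization, and max pooling.
    img_in = Input(shape=(472, 202, 1))
    x = img_in
    for filters in (16, 32):  # assumed filter counts
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.BatchNormalization()(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)

    # 1D-CNN branch for the 3648-channel spectrometer signal: three blocks.
    spec_in = Input(shape=(3648, 1))
    y = spec_in
    for filters in (16, 32, 64):  # assumed filter counts
        y = layers.Conv1D(filters, 5, padding="same", activation="relu")(y)
        y = layers.BatchNormalization()(y)
        y = layers.MaxPooling1D()(y)
    y = layers.Flatten()(y)

    # Scalar OCT input (keyhole depth for OCTP, bead height for OCTL).
    oct_in = Input(shape=(1,))

    # Concatenated features feed a four-layer FCN; L1/L2 kernel
    # regularizers (0.001 each) are applied to two of the dense layers.
    z = layers.Concatenate()([x, y, oct_in])
    reg = regularizers.L1L2(l1=0.001, l2=0.001)
    z = layers.Dense(128, activation="relu", kernel_regularizer=reg)(z)
    z = layers.Dense(64, activation="relu", kernel_regularizer=reg)(z)
    z = layers.Dense(32, activation="relu")(z)
    out = layers.Dense(1, activation="sigmoid")(z)  # B2 output
    return Model([img_in, spec_in, oct_in], out)
```

Replacing the final layer with `Dense(6, activation="softmax")` yields the MC variant with six gap classes.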
4. Results
4.1. Model Training
A total of 13,680 data points were collected through data pre-processing. The dataset was randomly divided into training, validation, and test sets in proportions of 70% (8820 points), 15% (1890 points), and 15% (1890 points), respectively. For the B2 model, a binary cross-entropy loss function was used, whereas a categorical cross-entropy loss function was applied for the MC model [28]. The Adam optimizer was employed with a learning rate of 0.001, β1 = 0.9, β2 = 0.999, and ε = 10⁻⁸ [29]. Training was conducted over 200 epochs with a mini-batch size of 32. The training error decreased sharply during the initial epochs and converged after approximately the 25th epoch (Figure 12). No overfitting was observed in any of the models.
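A random 70/15/15 split of this kind can be sketched as follows (a minimal illustration; the function name and the fixed seed are ours):

```python
import numpy as np

def split_indices(n, train=0.70, val=0.15, seed=0):
    """Shuffle sample indices and return train/validation/test index arrays
    in a 70/15/15 proportion (test receives the remainder)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(n * train)
    n_val = int(n * val)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]
```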
The accuracy of all trained models exceeded 89%, as shown in Figure 13 and Table 2. Although the training accuracy approached 100% across all three models, the validation accuracy remained at lower levels. For the test dataset, the B2 model achieved the highest performance with an accuracy of 98.84% (Figure 13).
Among the MC classification models, the OCTP-MC model exhibited lower accuracy compared to the OCTL-MC model. The test accuracy of the OCTP-MC model was 81.11%, whereas that of the OCTL-MC model reached 89.47%. Evaluation metrics, including precision and recall for the B2 and MC models, are presented in Table 3, Table 4 and Table 5. Precision is defined as the proportion of predicted “Gap” instances that were actually “Gap,” while recall refers to the proportion of true “Gap” instances correctly identified by the model.
This study aimed to determine the criteria for predicting interfacial gap sizes based on the characteristics of each sensor corresponding to various gap sizes. To achieve this, precision and recall, which are performance evaluation metrics derived from the confusion matrix, were compared. The B2 model, which performed binary classification based on the presence or absence of an interfacial gap, achieved the highest precision and recall (Table 3). Among the MC models, the OCTL-MC model outperformed the OCTP-MC model across all gap sizes (Table 4 and Table 5).
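Precision and recall per class follow directly from the confusion matrix, as defined above. The following is a sketch (function name ours), applicable to both the binary and the six-class case:

```python
import numpy as np

def per_class_precision_recall(y_true, y_pred, n_classes):
    """Per-class precision and recall from a confusion matrix.
    cm[i, j] = count of samples with true class i predicted as class j."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    precision = np.diag(cm) / cm.sum(axis=0)  # correct / predicted-as-class
    recall = np.diag(cm) / cm.sum(axis=1)     # correct / actually-in-class
    return precision, recall
```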
4.2. Evaluation of Sensor Performance
To evaluate the influence of each sensor in the OCTL-MC model—which demonstrated the highest accuracy among the MC classification models—training was performed using different sensor combinations (Figure 14 and Table 6). Among the single-sensor cases, the spectrometer alone yielded the highest accuracy (81.64%), whereas the OCT sensor alone performed poorly, with an accuracy of only 24.40%. For two-sensor combinations, CMOS + OCT achieved 61.38% accuracy, while spectrometer + OCT reached 85.40%. These results indicate that adding the OCT sensor to either CMOS or spectrometer improves classification performance compared to using CMOS or spectrometer alone. The CMOS + spectrometer combination produced the best result among all two-sensor configurations, with an accuracy of 87.72%.
5. Discussion
Based on the evaluation of each sensor’s performance, the spectrometer alone achieved the highest accuracy among the single-sensor models, recording 81.64%. The CMOS sensor reached 60.69%, while the OCT sensor, when used individually, exhibited a substantially lower accuracy of 24.40%, indicating limited capability in classifying interfacial gaps.
The relatively low accuracy of the CMOS sensor can be attributed to the limited sensitivity of image-based features to gap-induced changes. Although the presence of an interfacial gap slightly elongated the molten pool, the overall variation was minimal. This was primarily due to the high welding speed and the intrinsic properties of copper, including its low IR absorptivity and high thermal conductivity, which restricted molten pool expansion and maintained a nearly constant pool width regardless of the gap. Consequently, the visual features extracted from CMOS images provided insufficient variation to effectively distinguish gap conditions.
The spectrometer, by contrast, delivered the most informative signal among the three sensors, exhibiting strong correlations with gap size at specific wavelength regions. Correlation coefficients exceeded 0.6 in the process laser band (1070 nm) and 0.5 in the illumination laser band (980 nm), whereas the material emission band (600–700 nm) showed a correlation below 0.1. The latter result is likely attributable to the Cu-Cu laser welds employed in this study; previous studies [30] have reported higher correlations in this band when heterogeneous materials were employed. Notably, high correlation coefficients were observed around 975 nm (illumination laser) and 1070 nm (process laser), suggesting that a photodetector equipped with an appropriate narrowband optical filter could provide a simpler and faster means to achieve comparable performance in interfacial gap detection.
When used as a sole sensor, the OCT sensor failed to classify gaps effectively, as neither the point nor the line data were sufficient. However, in combination with other sensors, OCTL provided a more informative feature than OCTP. This outcome can be explained by the physical phenomenon wherein the molten pool sags downward due to the presence of an interfacial gap, leading to a reduction in bead height. As shown in Figure 10, bead height decreases in the presence of a gap. Under normal laser welding conditions, the bead height remains positive, but in the case of a 0.1 mm gap, the molten pool sagged into the gap, resulting in a negative bead height.
6. Conclusions
This study proposed a deep learning-based approach for predicting interfacial gaps during overlap joint laser welding of Cu–Cu materials by leveraging a multi-sensor fusion framework. To address the inherent limitations of individual sensors, a high-dimensional data modeling strategy was adopted by integrating image signals (CMOS), spectral signals (spectrometer), and scalar signals (OCT). A total of 13,680 datasets were collected, and both binary (B2) and multi-class (MC) classification models were trained to evaluate the contribution of each sensor and the effectiveness of various sensor combinations. The following conclusions were drawn:
(i) The B2 classification model outperformed the other models, achieving the highest test accuracy of 98.84%, indicating strong potential for reliable interfacial gap detection in Cu–Cu welding. Among the MC models, the OCTL-MC model, which used bead height instead of keyhole depth, showed superior performance (89.47%).
(ii) The spectrometer was identified as the most influential sensor, achieving the highest standalone accuracy (81.64%) and exhibiting a strong correlation with gap size in specific wavelength bands (correlation coefficient of 0.6412 at 1070 nm). In contrast, the CMOS sensor exhibited relatively lower sensitivity to gap size, owing to the minimal variation in molten pool size under high-speed copper welding.
(iii) The OCT sensor alone was insufficient for effective classification, but when combined with other sensors—particularly through the OCTL approach—it significantly improved model performance. OCTL exhibited a stronger correlation with gap size than OCTP, reflecting the physical behavior of the molten pool sagging into the interfacial gap.
(iv) Key features for interfacial gap prediction included molten pool length (from CMOS), spectral intensity variations (from the spectrometer), and bead height changes (from OCT).
The deep learning model effectively extracted these features from raw sensor signals, enabling accurate classification based on Cu–Cu welding behavior.
In summary, this study demonstrates that sensor fusion and data-driven modeling are highly effective for detecting interfacial gaps in overlap joint Cu-Cu laser welds. The proposed framework provides a foundation for real-time weld quality monitoring and can be extended to applications such as dissimilar metal welding or adaptive process control.
H.K.: Investigation, Formal analysis, Software, Writing—original draft. C.K.: Validation, Data curation, Writing—review and editing, Supervision. M.K.: Conceptualization, Methodology, Writing—review and editing, Funding acquisition. All authors have read and agreed to the published version of the manuscript.
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
The authors declare no conflicts of interest.
Figure 1 Experimental setup.
Figure 2 (a) Image of the equipment and (b) schematic diagram of coaxial process monitoring devices.
Figure 3 Schematic of the OCT evaluation setup using gauge blocks, illustrating the relative OCT beam positions of OCTP and OCTL: (a) skewed 3D view and (b) top view.
Figure 4 (a) Height profile measured by OCT sensor for gauge blocks with varying heights, and plots of height deviations and errors at different speeds: (b) 1 m/min and (c) 5 m/min.
Figure 5 (a) Moving-averaged height profile for gauge blocks with varying heights, and plots of height deviations and errors at different speeds: (b) 1 m/min and (c) 5 m/min.
Figure 6 Re-sampled CMOS images according to gap size and laser power.
Figure 7 (a) Spectral signal profiles, (b) correlation coefficient related to the gap, and (c) spectral intensity at specific wavelengths (laser power = 2.5 kW and welding speed = 9 m/min).
Figure 8 Geometrical feature measured by (a) OCTP and (b) OCTL depending on the gap (laser power = 2.5 kW and welding speed = 9 m/min).
Figure 9 Time-series signal for (a) spectrometer at 1071.68 nm and (b) OCT sensor (laser welding parameters: laser power = 2.5 kW, welding speed = 9 m/min, gap = 0.1 mm).
Figure 10 Definition of output classes.
Figure 11 Architecture of the developed CNN structure for interfacial gap prediction.
Figure 12 Training and validation losses for CNN models: (a) B2, (b) OCTP-MC, and (c) OCTL-MC.
Figure 13 Training results for CNN models: (a) B2, (b) OCTP-MC, and (c) OCTL-MC.
Figure 14 Training results for sole and dual sensor configurations.
Table 1. Welding variables used in the experiment.
| Variables | Parameter (Level) |
|---|---|
| Laser power (kW) | 2.5, 2.25, 2.0 (3) |
| Welding speed (m/min) | 9 (1) |
| Gap (mm) | 0, 0.02, 0.04, 0.06, 0.08, 0.1 (6) |
Table 2. Accuracy of trained models.
| Accuracy (%) | B2 Model | OCTP-MC | OCTL-MC |
|---|---|---|---|
| Training | 100 | 97.18 | 95.54 |
| Validation | 99.31 | 81.53 | 89.68 |
| Test | 98.84 | 81.11 | 89.47 |
Table 3. Evaluation metrics (precision, recall, and F1 score) of the B2 model.
| | Precision | Recall | F1 Score |
|---|---|---|---|
| 0.02 mm binary classification | 0.9720 | 0.9805 | 0.9760 |
Table 4. Evaluation metrics (precision, recall, and F1 score) of the OCTP-MC model.
| | 0 mm | 0.02 mm | 0.04 mm | 0.06 mm | 0.08 mm | 0.1 mm |
|---|---|---|---|---|---|---|
| Precision | 0.8743 | 0.8333 | 0.8353 | 0.6901 | 0.8472 | 0.7914 |
| Recall | 0.9805 | 0.7565 | 0.6842 | 0.8360 | 0.6536 | 0.8560 |
| F1 score | 0.9243 | 0.7930 | 0.7523 | 0.7561 | 0.7379 | 0.8224 |
Table 5. Evaluation metrics (precision, recall, and F1 score) of the OCTL-MC model.
| | 0 mm | 0.02 mm | 0.04 mm | 0.06 mm | 0.08 mm | 0.1 mm |
|---|---|---|---|---|---|---|
| Precision | 0.9587 | 0.8773 | 0.8893 | 0.8645 | 0.8910 | 0.8481 |
| Recall | 0.9566 | 0.8967 | 0.8980 | 0.8454 | 0.8464 | 0.8911 |
| F1 score | 0.9577 | 0.8869 | 0.8936 | 0.8549 | 0.8681 | 0.8691 |
Table 6. Classification accuracy for sole and dual sensor configurations.
| | CMOS | Spectrometer | OCT | CMOS + Spectrometer | CMOS + OCT | Spectrometer + OCT |
|---|---|---|---|---|---|---|
| Accuracy (%) | 60.69 | 81.64 | 24.40 | 87.72 | 61.38 | 85.40 |
1. Wu, D.; Zhang, P.; Yu, Z.; Gao, Y.; Zhang, H.; Chen, H.; Chen, S.; Tian, Y. Progress and perspectives of in-situ optical monitoring in laser beam welding: Sensing, characterization and modeling. J. Manuf. Process.; 2022; 75, pp. 767-791. [DOI: https://dx.doi.org/10.1016/j.jmapro.2022.01.044]
2. Will, T.; Jeron, T.; Hoelbling, C.; Müller, L.; Schmidt, M. In-process analysis of melt pool fluctuations with scanning optical coherence tomography for laser welding of copper for quality monitoring. Micromachines; 2022; 13, 1937. [DOI: https://dx.doi.org/10.3390/mi13111937]
3. Blecher, J.J.; Galbraith, C.M.; Van Vlack, C.; Palmer, T.A.; Fraser, J.M.; Webster, P.J.L.; DebRoy, T. Real time monitoring of laser beam welding keyhole depth by laser interferometry. Sci. Technol. Weld. Join.; 2014; 19, pp. 560-564. [DOI: https://dx.doi.org/10.1179/1362171814Y.0000000225]
4. Schmoeller, M.; Stadter, C.; Liebl, S.; Zaeh, M.F. Inline weld depth measurement for high brilliance laser beam sources using optical coherence tomography. J. Laser Appl.; 2019; 31, 022409. [DOI: https://dx.doi.org/10.2351/1.5096104]
5. Schmoeller, M.; Weiss, T.; Goetz, K.; Stadter, C.; Bernauer, C.; Zaeh, M.F. Inline weld depth evaluation and control based on OCT keyhole depth measurement and fuzzy control. Processes; 2022; 10, 1422. [DOI: https://dx.doi.org/10.3390/pr10071422]
6. Stadter, C.; Schmoeller, M.; von Rhein, L.; Zaeh, M.F. Real-time prediction of quality characteristics in laser beam welding using optical coherence tomography and machine learning. J. Laser Appl.; 2020; 32, 022046. [DOI: https://dx.doi.org/10.2351/7.0000077]
7. Wu, D.; Zhang, P.; Shi, H.; Lu, Q.; Yu, Z.; Huang, Y. Advancements and prospects of OCT-enabled all-process monitoring and inline quality assurance in laser keyhole welding: A critical review. J. Manuf. Process.; 2025; 152, pp. 1179-1203. [DOI: https://dx.doi.org/10.1016/j.jmapro.2025.08.063]
8. Angeloni, C.; Francioso, M.; Liverani, E.; Ascari, A.; Fortunato, A.; Tomesani, L. Laser welding in e-mobility: Process characterization and monitoring. Lasers Manuf. Mater. Process.; 2024; 11, pp. 3-24. [DOI: https://dx.doi.org/10.1007/s40516-023-00216-7]
9. Sokolov, M.; Franciosa, P.; Al Botros, R.; Ceglarek, D. Keyhole mapping to enable closed-loop weld penetration depth control for remote laser welding of aluminum components using optical coherence tomography. J. Laser Appl.; 2020; 32, 032004. [DOI: https://dx.doi.org/10.2351/7.0000086]
10. Stadter, C.; Schmoeller, M.; Zeitler, M.; Tueretkan, V.; Munzert, U.; Zaeh, M.F. Process control and quality assurance in remote laser beam welding by optical coherence tomography. J. Laser Appl.; 2019; 31, 022408. [DOI: https://dx.doi.org/10.2351/1.5096103]
11. Fetzer, F.; Boley, M.; Weber, R.; Graf, T. Comprehensive analysis of the capillary depth in deep penetration laser welding. Proceedings of the High-Power Laser Materials Processing: Applications, Diagnostics, and Systems VI; San Francisco, CA, USA, 28 January–2 February 2017; 1009709. [DOI: https://dx.doi.org/10.1117/12.2250500]
12. Kogel-Hollacher, M.; Schoenleber, M.; Bautze, T.; Strebel, M.; Moser, R. Measurement and closed-loop control of the penetration depth in laser materials processing. Proceedings of the 9th International Conference on Photonic Technologies—LANE; Fürth, Germany, 19–22 September 2016; pp. 19-22.
13. He, G.; Gao, X.; Li, L.; Gao, P. OCT monitoring data processing method of laser deep penetration welding based on HDBSCAN. Opt. Laser Technol.; 2024; 179, 111303. [DOI: https://dx.doi.org/10.1016/j.optlastec.2024.111303]
14. Hamidi Nasab, M.; Masinelli, G.; de Formanoir, C.; Schlenger, L.; Van Petegem, S.; Esmaeilzadeh, R.; Wasmer, K.; Ganvir, A.; Salminen, A.; Aymanns, F. Harmonizing sound and light: X-ray imaging unveils acoustic signatures of stochastic inter-regime instabilities during laser melting. Nat. Commun.; 2023; 14, 8008. [DOI: https://dx.doi.org/10.1038/s41467-023-43371-3] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/38052793]
15. Cao, L.; Li, J.; Zhang, L.; Luo, S.; Li, M.; Huang, X. Cross-attention-based multi-sensing signals fusion for penetration state monitoring during laser welding of aluminum alloy. Knowl.-Based Syst.; 2023; 261, 110212. [DOI: https://dx.doi.org/10.1016/j.knosys.2022.110212]
16. Dong, F.; Kong, L.; Wang, H.; Chen, Y.; Liang, X. Cross-section geometry prediction for laser metal deposition layer-based on multi-mode convolutional neural network and multi-sensor data fusion. J. Manuf. Process.; 2023; 108, pp. 791-803. [DOI: https://dx.doi.org/10.1016/j.jmapro.2023.11.036]
17. Yang, W.; Qiu, Y.; Liu, W.; Qiu, X.; Bai, Q. Defect prediction in laser powder bed fusion with the combination of simulated melt pool images and thermal images. J. Manuf. Process.; 2023; 106, pp. 214-222. [DOI: https://dx.doi.org/10.1016/j.jmapro.2023.10.006]
18. Dold, P.M.; Bleier, F.; Boley, M.; Mikut, R. Two-stage quality monitoring of a laser welding process using machine learning: An approach for fast yet precise quality monitoring. at-Automatisierungstechnik; 2023; 71, pp. 878-890. [DOI: https://dx.doi.org/10.1515/auto-2023-0044]
19. Kang, S.; Lee, K.; Kang, M.; Jang, Y.H.; Kim, C. Weld-penetration-depth estimation using deep learning models and multisensor signals in Al/Cu laser overlap welding. Opt. Laser Technol.; 2023; 161, 109179. [DOI: https://dx.doi.org/10.1016/j.optlastec.2023.109179]
20. Kang, S.; Kang, M.; Jang, Y.H.; Kim, C. Spectrometer as a quantitative sensor for predicting the weld depth in laser welding. Opt. Laser Technol.; 2024; 175, 110855. [DOI: https://dx.doi.org/10.1016/j.optlastec.2024.110855]
21. Ocaya, R.O.; Akinyelu, A.A.; Al-Sehemi, A.G.; Dere, A.; Al-Ghamdi, A.A.; Yakuphanoğlu, F. Machine learning models for efficient characterization of Schottky barrier photodiode internal parameters. Sci. Rep.; 2023; 13, 13990. [DOI: https://dx.doi.org/10.1038/s41598-023-41111-7]
22. Brežan, T.; Franciosa, P.; Jezeršek, M.; Ceglarek, D. Fusing optical coherence tomography and photodiodes for diagnosis of weld features during remote laser welding of copper-to-aluminum. J. Laser Appl.; 2023; 35, 012018. [DOI: https://dx.doi.org/10.2351/7.0000803]
23. Caprio, L.; Previtali, B.; Demir, A.G. Sensor selection and defect classification via machine learning during the laser welding of busbar connections for high-performance battery pack production. Lasers Manuf. Mater. Process.; 2024; 11, pp. 329-352. [DOI: https://dx.doi.org/10.1007/s40516-023-00238-1]
24. Cai, W.; Jiang, P.; Shu, L.; Geng, S.; Zhou, Q. Real-time laser keyhole welding penetration state monitoring based on adaptive fusion images using convolutional neural networks. J. Intell. Manuf.; 2023; 34, pp. 1259-1273. [DOI: https://dx.doi.org/10.1007/s10845-021-01848-2]
25. Kim, B.-J.; Kim, Y.-M.; Kim, C. Transfer learning-based multi-sensor approach for predicting keyhole depth in laser welding of 780DP steel. Materials; 2025; 18, 3961. [DOI: https://dx.doi.org/10.3390/ma18173961] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/40942387]
26. Stavropoulos, P.; Sabatakakis, K.; Bikas, H. Welding challenges and quality assurance in electric vehicle battery pack manufacturing. Batteries; 2024; 10, 146. [DOI: https://dx.doi.org/10.3390/batteries10050146]
27. SciPy Developers. API Reference—Scipy.signal.resample. Available online: https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.resample.html (accessed on 28 October 2025).
28. Chollet, F. Deep Learning with Python; Manning Publications Co.: Shelter Island, NY, USA, 2018.
29. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv; 2014; arXiv:1412.6980
30. Kim, H.; Kang, S.; Kim, C.; Jang, Y.H.; Kang, M. AI-driven interfacial gap prediction in overlapped Al/Cu laser weld joint for battery applications. IEEE Access; 2025; 13, pp. 181302-181312. [DOI: https://dx.doi.org/10.1109/ACCESS.2025.3621683]
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).