1. Introduction
In the field of aerosol measurement, the determination of particle number concentrations is a major topic. Optical Particle Counters (OPCs) represent the most widespread instruments for obtaining the Particle Number (PN), based on the optical detection of spatially separated particles. Recently, Ref. [1] introduced a Holographic Particle Counter which images a certain sampling volume and counts all present particles at once. Because of the holographic approach, imaged particles are recognized as interference patterns. In typical holographic particle imaging applications, 3D reconstruction algorithms are used to reconstruct the particles in the sampled volume [2,3,4,5,6] for the detection, characterization and visualization of particle fields. OPCs, however, primarily aim to count aerosol particles, so wavefront reconstruction is not strictly necessary. The tracing of particles and the determination of their size is mostly an ancillary indicator of proper device operation or maintenance need. Under these aspects, the recognition of particles as valid interference patterns at the Two-Dimensional hologram plane is sufficient and can be performed with common pattern recognition techniques.
The herein presented work addresses the design, evaluation and comparison of different pattern recognition techniques to detect and count particles. To validate the performance of the proposed recognition techniques on real-world measurement data, the Particle Imaging Unit (PIU) developed in [1] is used, which is the primary application of the presented counting methods. It resolves particles larger than roughly 3–4 μm, which corresponds to the pixel size of the imager. It is operated in the same setup as outlined in that work, where the imaging unit is set on top of a so-called Condensation Nucleus Magnifier (CNM) which grows particles to a homogeneous and predetermined size of around $d_{prt} = 7\,\mu\text{m}$ by means of condensation (particles are condensed by a working fluid—n-decane in this case—to form individual droplets). Subsequently, these droplets are imaged in the PIU and yield the particles' interference patterns. To validate the detection performance of the presented pattern recognition methods, a referencing Condensation Particle Counter (CPC) is taken for comparison (a CPC is based on the same working principle but with a different optical counting approach). CPCs output particle counting rates in terms of particle number concentrations in number of particles/unit volume [#/cm³]. Because the generation and supply of particles at an unambiguous, continuous and reproducible rate is practically impossible, a comparison of number concentrations is most reasonable. In Figure 1, the PIU is sketched on the right side and its working principle shown on the left.
Particles in the sampling cell of the PIU are illuminated by a reference plane wave, generated by a low-coherence diode laser. Each particle in the sampling channel acts as a single point-like object which diffracts the incident plane wave to yield a spherical object wave. Both wave fronts propagate along the z-axis and interfere at a distance $z_{prt}$ at the detection (hologram) plane.
2. Fringe Patterns and Their Features
When talking about point-like objects such as the particle in Figure 1, the interference pattern is also referred to as a fringe pattern: a set of radially symmetric rings or fringes. Constructive and destructive interference lead to bright and dark fringes at the detection plane, whereby the depth information of the object is carried by the phase information of the pattern.
2.1. Information Content of Fringe Patterns
The fringe pattern of one particle is described by the following function [3,7]:
$$\Psi_{prt}(x,y) = C_1 + C_2 \cdot \sin\left[\frac{\pi}{\lambda \cdot z_{prt}}\left(x^2 + y^2\right)\right]$$
where $C_1$ is a constant bias summarizing the intensities of the reference wave and the object wave. The alternation of constructive and destructive interference is given by a sine function with amplitude $C_2$. The phase information yields the density of fringes—more precisely their spatial rate of change—which linearly increases along the $xy$-plane from the center to the outside of the pattern. This spatial rate of change is known as the fringe frequency [7]:
$$\nu = \frac{1}{2\pi}\frac{d}{dx}\left(\frac{\pi}{\lambda \cdot z_{prt}}\,x^2\right) = \frac{x}{\lambda \cdot z_{prt}}$$
where x is the spatial coordinate, $\lambda$ the wavelength of illumination and $z_{prt}$ the position of the particle along the illumination path. That key characteristic of the spatial frequency increase is also used in digital image processing to test filter approaches [8] and will be made use of later in this report. The radii of fringes and, thus, the extent of a fringe pattern at the detection plane depend on $z_{prt}$: particles at a larger distance $z_{prt}$ yield large fringe patterns and close particles lead to smaller ones. The range of possible z-distances is determined by the depth of the sampling volume (here the sampling channel depth $z_{ch}$ of the PIU in Figure 1) and gives rise to a certain size distribution of fringe patterns at the detection plane, depicted in Figure 2a. On the right side, the normalized intensity histogram is illustrated. The question to address now is what are the smallest and largest possible sizes of patterns which need to be detected.
2.2. Features to Extract
The formation of fringe patterns can be described by the Angular Spectrum Method (ASM) as in [9]. A fringe pattern, however, also largely complies with a sinusoidal Fresnel Zone Plate (FZP) [7], which is primarily used to focus light based on diffraction. The spacing of zones is such that light transmitted by the transparent zones constructively interferes at a desired focus. Holograms follow the same principle, i.e., their interference rings are caused by a common focus point—the particle. The idea here is to make use of the analogy to zone plates to easily estimate the extent of fringe patterns. As the focal length f in Figure 3 corresponds to the distance $z_{prt}$ from the particle to the detection plane, each zone n may be interpreted as a fringe of radius [10]:
$$R_n \simeq \sqrt{n \cdot \lambda \cdot z_{prt}}$$
with $\lambda$ the wavelength of the illumination light. At the detection plane, constructive interference appears as bright fringes which correspond to zones of even n in a zone plate.
The smallest circle radius $R_{min}$ to be expected with the utilized sampling cell is therefore estimated by evaluating Equation (3) for the first even zone $n = 2$ at the smallest possible distance $z_0$ (see Figure 1 right). Conversely, the largest circle radius $R_{max}$ is obtained with the largest resolvable zone $n_{max}$ at the furthermost possible particle location $z_{prt} = z_0 + z_{ch}$, where $z_{ch}$ in this case is the sampling channel depth of the PIU and, thus, the furthermost boundary. With that given Depth of Field (DoF) in a range of $\{z_0, \ldots, z_0 + z_{ch}\}$, the smallest circle radius to recognize is found to be $R_{min} = 19$ pxl and the largest $R_{max} = 70$ pxl, where the outermost meaningful fringe is considered at $n = 10$. One practicable goal is now to recognize patterns which consist of a set of concentric circles with radii in the range of $R = 19 \ldots 70$ pxl.
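As a quick plausibility check, Equation (3) can be evaluated at the two boundary cases. The following sketch uses illustrative values for the wavelength, the channel geometry and the pixel pitch, none of which are stated in this section; they were merely chosen such that the reported 19 pxl and 70 pxl are roughly reproduced:

```python
import numpy as np

# Illustrative values only: wavelength, channel geometry and pixel pitch
# are assumptions, chosen so that the reported 19/70 pxl are reproduced.
WAVELENGTH = 650e-9    # m, hypothetical diode laser wavelength
Z0 = 3.3e-3            # m, hypothetical smallest particle distance z0
Z_CH = 5.7e-3          # m, hypothetical sampling channel depth zch
PIXEL_PITCH = 3.45e-6  # m, hypothetical imager pixel size

def fringe_radius_px(n, z_prt):
    """Radius of the n-th zone from Equation (3): R_n = sqrt(n*lambda*z)."""
    return np.sqrt(n * WAVELENGTH * z_prt) / PIXEL_PITCH

r_min = fringe_radius_px(2, Z0)          # first even zone, closest distance
r_max = fringe_radius_px(10, Z0 + Z_CH)  # outermost meaningful fringe, n=10
print(f"R_min = {r_min:.0f} pxl, R_max = {r_max:.0f} pxl")  # ~19 and ~70
```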
Another beneficial circumstance provided by the given DoF is that center zones are predominantly dark blobs, as apparent from Figure 2. Instead of detecting circles as fringes, center zones may therefore be targeted as well for the recognition of fringe patterns.
2.3. Intensity Dependence of Fringe Patterns
The intensity of diffracted waves is orders of magnitude lower than that of the direct beam, which leads to fringe patterns of fairly low contrast. Besides the illumination intensity incident onto particles, the contrast is additionally affected by a variety of influences, such as the particle size, the diffraction properties of particles and multiple scattering as a function of particle concentration [1,11]. As a consequence, this wide span of contrast adds additional difficulty to the feature extraction and the parameterization of its optimal sensitivity.
3. Methods
3.1. Customized Hough Transform
The Hough Transform (HT) is a well-known feature extraction technique in digital image processing for detecting arbitrary geometrical shapes such as straight lines, circles or ellipses [12]. It makes use of a parameter space—the so-called Accumulator or Hough Space—where a voting procedure is carried out over a set of parameterized image objects; here, circles within a certain range of radii. Object edge points, which ultimately form the object's shape, are transformed into that parameter space using the shape's respective mathematical representation. The resulting accumulated feature candidates allow for easier grouping and are therefore robust in the presence of noise, occlusions and varying illumination.
3.1.1. Working Principle
The implemented customized HT is based on the work of Atherton and Kerbyson [13,14], where the edge-filtered image is convolved with a complex filter operator:
$$O_{PCA}(x,y) = \begin{cases} e^{j \cdot \varphi_{xy}} & \text{if } R_{min}^2 < x^2 + y^2 < R_{max}^2 \\ 0 & \text{otherwise} \end{cases}$$
that forms a Phase Coded Annulus. In this manner, the range of scanned circle radii between $R_{min}$ and $R_{max}$ is phase-coded (from 0 to $2\pi$) into a complex accumulator space, with the phase coding across the annulus following a log coding:
$$\varphi_{xy} = 2\pi \cdot \frac{\log\sqrt{x^2+y^2} - \log R_{min}}{\log R_{max} - \log R_{min}}$$
In parameter space, constructive accumulation now occurs only at bins where the transformed candidates intersect with the same phase—the bin which corresponds to the circle's center. Centers are then estimated by detecting such bins as peaks and determining their centroids using geometric moments (see also Section 3.2.3). The sensitivity of that peak detection lies in the range of $S_{HT} \in [0, 1]$ and leads to fewer detected circles at lower sensitivity levels.
The radii are estimated by simply decoding the phase information from the estimated center location.
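A minimal sketch of this principle in Python (NumPy/SciPy) follows: the phase-coded annulus, the accumulation by convolution, and the phase decoding. The choice of edge filter, the peak detection and the way the sensitivity $S_{HT}$ maps to an accumulator threshold are our assumptions, not the authors' MATLAB implementation:

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import sobel, maximum_filter

def phase_coded_annulus(r_min, r_max):
    """Complex kernel O_PCA: phase-codes the radius range [r_min, r_max]
    logarithmically from 0 to 2*pi across the annulus."""
    r = int(np.ceil(r_max))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    rho = np.hypot(x, y)
    mask = (rho > r_min) & (rho < r_max)
    phi = np.zeros_like(rho)
    phi[mask] = 2 * np.pi * (np.log(rho[mask]) - np.log(r_min)) \
                / (np.log(r_max) - np.log(r_min))
    return np.where(mask, np.exp(1j * phi), 0.0)

def detect_circles(image, r_min=22, r_max=33, sensitivity=0.93):
    """Convolve an edge map with the annulus, find accumulator peaks as
    circle centers and decode the radius from the accumulator phase."""
    # Gradient-magnitude edge map (any edge filter would do here).
    edges = np.hypot(sobel(image.astype(float), axis=0),
                     sobel(image.astype(float), axis=1))
    acc = fftconvolve(edges, phase_coded_annulus(r_min, r_max), mode='same')
    mag = np.abs(acc)
    # Assumed sensitivity mapping: higher S_HT lowers the peak threshold.
    peaks = (mag == maximum_filter(mag, size=r_min)) & \
            (mag > (1 - sensitivity) * mag.max())
    ys, xs = np.nonzero(peaks)
    # Invert the log phase coding: r = r_min * (r_max/r_min)^(phi / 2pi).
    phi = np.angle(acc[ys, xs]) % (2 * np.pi)
    radii = r_min * (r_max / r_min) ** (phi / (2 * np.pi))
    return list(zip(ys, xs, radii))
```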
3.1.2. Image Preprocessing
In order to enhance the Signal to Noise Ratio (SNR), and thus improve the fringe visibility of the patterns prior to the edge filtering, a Gaussian smoothing kernel is applied. Since higher-order fringes are not mandatory for the recognition of valid patterns, the filter size of the lowpass kernel, and ultimately its standard deviation $\sigma_{lp}$, is configured to filter out the unwanted outermost fringes. The cutoff frequency of the filter is determined by making use of one of the most common heuristic measures when dealing with Gaussian distributions, known as the 3-sigma rule [15]. The smallest feature in an image unaffected by filtering has to fit within the $3\sigma$ or 99.7% confidence interval. In terms of a fringe pattern, the distance between fringes of the same parity (even to even or odd to odd) may be interpreted as the smallest detail to preserve. This distance $\Delta d_r$ in Figure 3 can be estimated as the total width of two successive zones (opposite parity) by making again use of the FZP from [16]:
$$\Delta d_r \simeq \sum_{m=n}^{n+1} dr_m = \sum_{m=n}^{n+1} \frac{\sqrt{m \cdot \lambda \cdot z_{prt}}}{2m}$$
with
$$dr_n = \frac{R_n}{2n}$$
Considering that the 99.7% confidence interval of a Gaussian filter kernel is its total width of $6\sigma_{lp} + 1$, the filter size of a Gaussian lowpass filter can be calculated by:
$$\sigma_{lp} = \frac{\Delta d_r - 1}{6}$$
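A minimal sketch of this preprocessing step, reusing the hypothetical geometry values from the sketch in Section 2.2 (wavelength, distance and pixel pitch are assumptions, so the resulting $\sigma_{lp}$ only approximates the value used later):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical geometry, as in the earlier sketch; all three are assumptions.
WAVELENGTH, Z_PRT, PIXEL_PITCH = 650e-9, 9.0e-3, 3.45e-6

def zone_width_px(m):
    """Width of the m-th zone, dr_m = R_m / (2m), converted to pixels."""
    return np.sqrt(m * WAVELENGTH * Z_PRT) / (2 * m) / PIXEL_PITCH

def lowpass_sigma(n=2):
    """Same-parity fringe spacing delta_dr is the smallest detail to
    preserve; sigma_lp = (delta_dr - 1) / 6 follows from the 3-sigma rule."""
    delta_dr = zone_width_px(n) + zone_width_px(n + 1)
    return (delta_dr - 1) / 6

# smoothed = gaussian_filter(hologram, sigma=lowpass_sigma())
```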
3.1.3. Parameterization
From Equation (3), it is clear that the radii $R_n$ of fringes only depend on their respective fringe index n within the pattern and the distance $z_{prt}$ from the particle (the wavelength $\lambda$ is constant). The range of possible distances $z_{prt}$ is bounded by the sampling channel. Taking into account that the innermost fringe is in principle sufficient for pattern recognition, the range of radii to search may be truncated, which greatly speeds up the HT. In this work, a single-step approach is chosen where only the innermost fringe at $n = 2$ is searched. Due to the geometry of the given sampling cell, the innermost fringe may span a range of $R_2 = 23 \ldots 32$ pxl; with a margin of $\pm 1$ pxl added, the searched range becomes $R_2 = 22 \ldots 33$ pxl. Analyses showed that, especially at high particle densities, the overlap of fringe patterns is too strong to identify higher-order fringes. Moreover, their intensities tend to be too low to be detected. Thus, the Gaussian filter is set to a cutoff of $\sigma_{lp} = 2.62$ to retain an approximate level of detail of $\Delta d_r \approx 17$ pxl. The sensitivity is set to $S_{HT} = 0.93$ and was heuristically determined.
3.2. Blob Detection
Blob detection is a subcategory of image matching techniques, aiming to detect regions of common properties such as a homogeneous brightness or grayscale that thereby distinguish them from background regions [15,17,18,19]. Blob detectors can be based on image gradients (contrast), eigenvalues or templates [19]. Since the mathematical representation of fringe patterns is known, template matching [20] is a suitable approach.
3.2.1. Blob Extraction Using Template Matching
An artificially generated fringe pattern is of course a viable template to use. However, since patterns start to overlap strongly at higher particle number concentrations, a mask that emphasizes the sole center zone is more meaningful. A multi-step template matching is performed using circular masks of steadily increasing radii $R_m$:
$$g_m^{TM}(x,y) = \begin{cases} 1 & \text{if } (x - \bar{x})^2 + (y - \bar{y})^2 \le R_m^2 \\ 0 & \text{otherwise} \end{cases}$$
with $(\bar{x}, \bar{y})$ the circle center and m the current step. The templates equal non-normalized disk-like box filter kernels that gradually lowpass-filter background noise with increasing mask radius and thereby emphasize regions that match them.
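A minimal sketch of such multi-step matching, assuming the steps are applied successively and with freely chosen radii (the actual mask radii and step scheme are not specified here):

```python
import numpy as np
from scipy.signal import fftconvolve

def disk_template(radius):
    """Non-normalized disk mask g_m^TM: 1 inside the circle, 0 outside."""
    r = int(np.ceil(radius))
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    return (x**2 + y**2 <= radius**2).astype(float)

def multi_step_matching(image, radii=(3, 5, 7, 9)):
    """Convolve with disk masks of steadily increasing radius R_m; dark
    center zones stand out while background noise is gradually
    lowpass-filtered (the non-normalized kernels introduce the intensity
    bias discussed in Section 4.2)."""
    response = image.astype(float)
    for r_m in radii:
        response = fftconvolve(response, disk_template(r_m), mode='same')
    return response
```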
3.2.2. Blob Segmentation
To segment blobs, a global thresholding method is needed that finds the optimal threshold in the histogram. Although Otsu's method is one of the most widespread thresholding techniques due to its good performance and simplicity, it faces clustering problems with unimodal histograms. Small object areas compared to background areas are the cause of unimodality, as reported in [21,22]. Unimodality, however, is the predominant case in the presented work and thus disqualifies clustering-based thresholding methods of that kind. Instead, maximum entropy thresholding [23,24] is utilized, an entropy-based method that interprets the maximum entropy as indicative of maximum information transfer.
It is based on the probability distribution function of gray-level histograms. Assuming two distributions, where one belongs to the class of blobs (dark pixels) and the other to the class of background (bright pixels), the optimum threshold $k_{opt}$ of inter-class entropy is found at:
$$k_{opt} = \underset{L_1 \le k < L_2}{\arg\max}\left[H_{drk}(k) + H_{br}(k)\right]$$
where $H_{drk}$ is the entropy of dark pixels, based on the probability $P_{drk}$ that pixels are assigned to the class of dark pixels, and $H_{br}$ the entropy of bright pixels with its probability $P_{br}$, respectively. A lower limit $L_1$ and an upper limit $L_2$ confine the threshold $k_{opt}$ to a certain gray-level range for reasons discussed later. The standard setting is $L_1 = 0$ and $L_2 = L$, where L is the number of gray-levels. With a set $k = \{0, 1, 2, \ldots, L-1\}$ of L gray-levels, the entropies of both classes are calculated as follows [23]:
$$H_{drk}(k) = -\sum_{l=0}^{k} P_{drk}(l) \cdot \log\left[P_{drk}(l)\right] \quad \text{and} \quad H_{br}(k) = -\sum_{l=k+1}^{L-1} P_{br}(l) \cdot \log\left[P_{br}(l)\right]$$
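A sketch of this thresholding in Python, following Kapur et al. [23] with the class probabilities normalized per class and the search confined to the window $[L_1, L_2]$; the histogram binning and the defaults are our assumptions:

```python
import numpy as np

def max_entropy_threshold(image, l1=0.5, l2=None, bins=256):
    """Maximum entropy threshold after Kapur et al. [23], confined to the
    normalized gray-level window [l1, l2] as in Equation (8); l2 defaults
    to the histogram mainlobe peak as described in Section 4.2."""
    hist, edges = np.histogram(image, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    if l2 is None:
        l2 = edges[np.argmax(hist)]          # mainlobe peak
    lo, hi = np.searchsorted(edges, [l1, l2])
    best_k, best_h = lo, -np.inf
    for k in range(lo, hi):
        p_drk, p_br = p[:k + 1], p[k + 1:]
        w_drk, w_br = p_drk.sum(), p_br.sum()
        if w_drk == 0 or w_br == 0:
            continue
        # Class entropies of dark and bright pixels, normalized per class
        # as in Kapur's original formulation.
        q_drk = p_drk[p_drk > 0] / w_drk
        q_br = p_br[p_br > 0] / w_br
        h_sum = -np.sum(q_drk * np.log(q_drk)) - np.sum(q_br * np.log(q_br))
        if h_sum > best_h:
            best_k, best_h = k, h_sum
    return edges[best_k]

# blobs = image <= max_entropy_threshold(image)   # dark pixels are blobs
```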
3.2.3. Blob Labeling and Counting
After thresholding, blobs remain as regions of connected pixels in the binary image and are typically detected by connected components labeling [25]. All connected or neighboring pixels belonging to a separate region are assigned the same label. The total number of different labels equals the number of detected blobs and, in the ideal case, also equals the total number of fringe patterns. In practice, though, fringe patterns may strongly overlap and yield merged blobs. In order to discover such scenarios, it is meaningful to assess blob features using different descriptors.
Regional descriptors are very often used in combination with connected components labeling and are based on mathematical moments of the form [20]:
$$m_{pq} = \sum_{(x,y) \in R} x^p\, y^q \cdot I(x,y)$$
where $(p,q)$ are the indices of the moment and $(x,y)$ the pixels of the region R in the gray-scale image $I(x,y)$. The sum $p+q$ of the indices corresponds to the order of the moment $m_{pq}$. For binary images, as given after thresholding, the term $I(x,y)$ equals 1.
Moments carry physical interpretations of shapes such as the mass (area), center of mass or gravity (centroid), eccentricity or orientation of the region. Therein, the order of the moment determines the property. The most common are the zeroth-order moment $m_{00}$ as the area A and the first-order moments as the centroid with $\bar{x} = m_{10}/m_{00}$ and $\bar{y} = m_{01}/m_{00}$. The centroid is also a common feature to locate or tag regions at their center point. In the given problem statement, it is of particular significance because the centroid of fringes represents the actual particle location in the $xy$-plane (see Figure 1). In conjunction with the perimeter P (a boundary descriptor), the circularity is another meaningful descriptor [8]:
$$\text{circularity} = \frac{4\pi A}{P^2}$$
It is a measure independent of size, orientation and translation, and equals 1 for a circle. Merged blobs form elongated or asymmetric shapes that deviate strongly from the ideal circularity of 1 and therefore indicate multiple fringe patterns. In this work, it is used as a correction means which adds an additional count to regions whose circularity falls below a threshold of $\text{circularity} \le 0.95$.
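The counting logic of this subsection can be sketched with scikit-image, whose regionprops conveniently exposes area, centroid and perimeter; the double-count rule for non-circular blobs follows Section 3.2.3, while the connectivity and the single-pixel guard are our assumptions:

```python
import numpy as np
from skimage import measure

def count_blobs(binary, circ_min=0.95):
    """Connected components labeling with the circularity-based
    coincidence correction described above."""
    labels = measure.label(binary, connectivity=2)
    count, centroids = 0, []
    for region in measure.regionprops(labels):
        # Centroid from the moments: (m10/m00, m01/m00).
        centroids.append(region.centroid)
        if region.perimeter == 0:       # single-pixel blob, trivially one
            count += 1
            continue
        circularity = 4 * np.pi * region.area / region.perimeter ** 2
        # Non-circular blobs are assumed to hide at least two patterns
        # and therefore receive one additional count.
        count += 2 if circularity <= circ_min else 1
    return count, centroids
```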
3.3. Deep Convolutional Neural Network (DCNN)
Convolutional neural networks have made great advances in visual recognition tasks, e.g., [26]. While convolutional neural networks have been used for a long time [27], their success was limited by the size of available training sets and the size of available networks. A breakthrough was achieved by Krizhevsky et al. [28], who were able to train a large network with eight layers and millions of parameters in a supervised fashion on the ImageNet dataset with 1 million training images. Since then, even larger and deeper networks have been trained [29].
3.3.1. Working Principle
The network architecture is depicted in Figure 4 and is based on the principle of a U-Net structure [30]. In total, the network has 23 convolutional layers. It comprises a contracting information path (left path) and an expansive path (right path). The contracting path follows the architecture of a convolutional network and includes the successive application of two 3 × 3 convolutions, each followed by a Rectified Linear Unit (ReLu) for activation, and a 2 × 2 max pooling operation for downsampling. At each downsampling step, the number of feature channels is doubled. Every step in the expansive path consists of an upsampling of the feature map followed by a 2 × 2 convolution, a concatenation with the correspondingly cropped feature map from the contracting path, and two 3 × 3 convolutions, each followed by a ReLu. Cropping is required due to the loss of border pixels at every convolution.
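A minimal PyTorch sketch of one contracting and one expansive step with unpadded convolutions and the center cropping described above (the original was not implemented in PyTorch, and the channel counts and input size here are illustrative):

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two unpadded 3x3 convolutions, each followed by a ReLU."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.block(x)

def center_crop(skip, target):
    """Crop a contracting-path feature map to the expansive-path size,
    compensating the border pixels lost at every convolution."""
    dy = (skip.shape[2] - target.shape[2]) // 2
    dx = (skip.shape[3] - target.shape[3]) // 2
    return skip[:, :, dy:dy + target.shape[2], dx:dx + target.shape[3]]

enc0 = DoubleConv(1, 64)                 # level 0 of the contracting path
enc1 = DoubleConv(64, 128)               # level 1, feature channels doubled
pool = nn.MaxPool2d(2)                   # 2x2 max pooling for downsampling
up = nn.ConvTranspose2d(128, 64, 2, stride=2)   # 2x2 up-convolution
dec0 = DoubleConv(128, 64)               # after concatenation: 64 + 64 ch

x = torch.randn(1, 1, 100, 100)          # dummy single-channel input
s0 = enc0(x)                             # 64 x 96 x 96 (skip connection)
b = enc1(pool(s0))                       # 128 x 44 x 44
u = up(b)                                # 64 x 88 x 88
y = dec0(torch.cat([center_crop(s0, u), u], dim=1))   # 64 x 84 x 84
```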
3.3.2. Training
Input images and their corresponding segmentation maps are used to train the network with the stochastic gradient descent implementation of [31]. Due to the unpadded convolutions, the output image is smaller than the input by a constant border width. To minimize overhead and make maximum use of the Graphics Processing Unit (GPU) memory, large inputs are favored over a large batch size. Hence, the batch is reduced to a single image. Accordingly, a high momentum (0.99) is used, such that a large number of the previously seen training samples determine the update in the current optimization step.
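This training configuration translates into very few lines; shown here as a PyTorch stand-in for the Caffe-based SGD of [31], with the learning rate as an assumption since it is not stated in the paper:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, Dataset

def training_setup(model: nn.Module, dataset: Dataset):
    """Batch size 1 (one large input image per step) combined with a high
    momentum of 0.99, as described above; lr is a hypothetical value."""
    loader = DataLoader(dataset, batch_size=1, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.99)
    return loader, optimizer
```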
3.3.3. Data Processing and Evaluation
Data augmentation is the main process for teaching the network the desired invariance and robustness characteristics when only a few training samples are available. In the case of fringe patterns, shift and rotation invariance are needed, as well as robustness to deformations and gray value variations. The data set is provided by the EM segmentation challenge [32] that was started at ISBI 2012 and is still open for new contributions. The training data are a set of 30 frames (512 × 512 pixels) from the challenge. Each image within this data set is delivered with a corresponding fully annotated ground truth segmentation map for cells (white) and other structures (black). In a second step, the network was trained on artificial data generated with the Aerosol Particle Model (APM) from Brunnhofer and Bergmann [9]. Finally, the network is trained on real-world measurement samples alongside the simulated data sets.
An evaluation of the U-Net segmentation can be conducted by looking at the model accuracy and model loss for the training and validation set at hand (a data set as e.g., in Figure 2a).
4. Results
In the following section, the results of the customized HT, the blob detection and the DCNN are compared in terms of detection performance and computational speed to determine which method is most suitable for the application in Holographic Particle Counters. For that purpose, an imaged hologram of a real measurement sample is taken as an example image where the density of fringe patterns is moderately high. Subsequently, a selected section of that image is used to first assess the HT and blob detection on an empirical basis. It contains strongly overlapping but also clearly separated fringe patterns and poses certain complexities for both methods. The neural network needs to be assessed with multiple validation images instead. The second part outlines a qualitative comparison of all methods based on a real measured data set.
4.1. Customized HT
The aforementioned image section in Figure 5b shows multiple overlaps of several fringe patterns in the lower right corner and a very strong merge in the lower left part. The dense group of patterns in the right corner is almost entirely detected except for one missing hit (tagged in orange). The preprocessed image (Figure 5a) reveals that all innermost fringes are resolved very sharply and suggests that the sensitivity $S_{HT}$ of the HT could be refined to recognize the missing hit as well. However, a higher sensitivity was found to lead to an increase in false-positive hits and was hence deliberately avoided.
Strong overlaps in the lower left corner are reliably separated. While a distinction of the left two patterns can be validated through visual inspection and experience, the right group is strongly bundled and requires backpropagation means to verify the particles in the reconstructed 3D volume. Although not elaborated in this work, the actual count of particles in this particular bundle is indeed 4.
4.2. Blob Detection
The majority of the center spots in fringe patterns are dark. These dark centers correspond to multi-scale blobs that are extracted by the multi-step template matching approach. Since the filter kernels of the templates are non-normalized, a bias in pixel intensities is introduced during filtering. With respect to the image histogram, the filtering narrows the mainlobe of the Gaussian intensity distribution due to lowpass-filtering and shifts it to brighter intensity values as a consequence of the bias (compare the histograms of Figure 2b and Figure 6a).
The final histogram is right-sided with a mainlobe relating to the background and a certain side-distribution to its left which contains the information of the darker center blobs (and fringes of odd parity).
With maximum entropy thresholding, the optimal intensity threshold $k_{opt}$ is found where the best contrast is obtained in terms of maximum information transfer. Its indicative measure is the maximum sum entropy $H_{max} = \max[H_{br}(k) + H_{drk}(k)]$. Figure 6a shows the determination of the sum entropy with the entropy $H_{br}$ of bright pixels and the entropy $H_{drk}$ of dark pixels plotted separately. In this particular example image, the correct optimum threshold is located close to the left of the mainlobe at $k_{opt} = 0.74$. However, the algorithm would actually fail to find that threshold because the background entropy $H_{br}$ gains a peak at lower pixel intensities (at $k = 0.37$) and the maximum sum entropy would erroneously be reached at that particular threshold value. To avoid such misinterpretations, the thresholding limits $L_1$ and $L_2$ in Equation (8) are introduced. The upper limit $L_2$ equals the histogram bin of the mainlobe peak and is determined for each sample image individually. As evident from the given fringe pattern properties, the intensity of blobs will not exceed background levels and is therefore always located left of the mainlobe. $L_1$ is a rather empirical value and is set to 0.5 as a threshold for right-sided histograms.
The unwanted peak in the background entropy $H_{br}$ is a system artefact introduced by the imaging process of particles and can have different causes. Some were identified as: (i) higher background flicker as a consequence of high particle densities, since a rising number of particles means an increase of speckle noise in the sampling cell due to multiple scattering; (ii) fluctuations in the background that may come from vibrations during the imaging process or instabilities of the light source; or (iii) an inhomogeneous exposure of the camera with a tendency to poorer illumination at the detector's corners—cf. [1].
Figure 6b depicts the same image section of fringe patterns as before, but overlaid with the resulting blobs after thresholding. Blue crosses are the centroids of each fringe pattern and correlate to the $xy$-position of its respective particle, unless the shape of the blob deviates too much from the ideal circularity of 1. In such cases, the assumption is made that at least two fringe patterns overlap, and the count of detected particles is corrected by +1 for each affected blob. A second cross near the actual blob centroid marks the correction made. Of course, the actual particle position no longer relates to the determined centroids. Orange crosses annotate valid fringe patterns which the algorithm does not recognize. These are missing hits.
4.3. DCNN
The detection performance of the DCNN is evaluated in terms of accuracy and precision. A computer with an Intel Xeon W-2145 (Skylake-W) 8-core CPU and a GPU (NVIDIA GeForce RTX 2080 Ti) was used. Regarding the training dataset, around 6000 artificially generated holograms were modelled with the APM, and 64 images were subsequently selected as a validation dataset. In these datasets, the number of particles steadily increases from 0 to greater than 200 particles. The number of epochs for the training was 50 and the training time was approximately 4.5 h.
Table 1 shows the accuracy and precision values of the selected samples. In this selection, particle counts range from 53 to 180. The accuracy is calculated as the number of True Positives plus True Negatives divided by the total number of predictions. The precision is the number of True Positives divided by the sum of True Positives and False Positives. If no False Positives are detected, the precision is 1.0.
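For reference, the two metrics expressed as a small helper (a straightforward restatement of the definitions above):

```python
def precision_accuracy(tp, fp, tn, fn):
    """Precision = TP/(TP+FP); accuracy = (TP+TN)/(all predictions).
    Without any False Positives the precision evaluates to 1.0."""
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, accuracy
```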
4.4. Comparison of Detection Performance
A quantification of detection performance on real-world measurement data is difficult, considering that the generation of particles and their supply to the measurement instrument at an unambiguous, steady and reproducible rate is practically impossible. Under these aspects, the actual particle count in the imaged sampling volume of the PIU is in fact unknown and hard to verify. Therefore, a comparison of particle number concentrations $C_N$ in number of particles/unit volume [#/cm³] is most reasonable, for which the counting results obtained by the three detection methods need to be converted according to [1]:
$$C_N = \underbrace{\frac{N(\tau)}{Q \cdot \tau_s}}_{\text{reference CPC}} = \underbrace{\frac{N(V)}{V_s}}_{\text{PIU}}$$
The conversion for the PIU is the measured particle count $N(V)$ over the known sampling volume $V_s$ of the PIU. In contrast to that, the reference CPC operates at a known flow rate Q and counts the particles $N(\tau)$ over a certain sampling interval $\tau_s$ [33].
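Both sides of the conversion are simple ratios; a small helper makes the correspondence explicit (the actual values of $V_s$, Q and $\tau_s$ are instrument specific and not restated here):

```python
def c_n_piu(n_counted, v_s):
    """PIU side of Equation (12): counted patterns N(V) per known
    sampling volume V_s [cm^3]."""
    return n_counted / v_s

def c_n_cpc(n_counted, q, tau_s):
    """Reference CPC side of Equation (12): counts N(tau) per flow
    rate Q [cm^3/s] and sampling interval tau_s [s]."""
    return n_counted / (q * tau_s)
```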
In Figure 7, the counting results of the customized HT (top left), the blob detection (top right) and the DCNN (bottom) are compared in terms of the aforementioned particle number concentration. The concentration was ramped from 0 to 2030 #/cm³ which, with respect to image processing, means an average count of roughly 0 to 488 fringe patterns per image. The course of the ideal correlation is drawn as a black reference line. Three frames per measurement point were acquired because then the sampling intervals of both the PIU and the reference CPC match best—cf. [1]. The measurement curve is fitted with a polynomial regression function of 3rd order to emphasize the course of the detection points.
All three counting methods provide good linearity as long as fringe patterns are spatially well separated ($C_N < 500$ #/cm³). With increasing particle densities, the likelihood of partially overlapping fringe patterns rises and the detection performance of the DCNN significantly drops. The customized HT and the blob detection can handle particle number concentrations up to approximately $C_N = 1250$ #/cm³ (or 300 particles per frame) before a regression is noticeable. This implies that partial overlaps are separated very well by these methods (see also Figure 5 and Figure 6). At even higher counting rates, however, the linear correlation is distorted because of the rising occurrence of coinciding particles, which is a well-known limitation when optically detecting particles [1]. Fringe patterns then no longer merely overlap partially but start to superimpose into a mutual pattern of fringes, and neither algorithm is capable of resolving such complex patterns (a separation task like this requires backpropagation algorithms).
Higher error bars in all methods are mostly related to fluctuations of particle rates during the measurement.
4.4.1. Details on Customized HT
From Figure 7, the conclusion can be drawn that the counting performance of the customized HT and the blob detection is very similar in terms of counting rates. The customized HT, though, is more robust against noise and intensity fluctuations in images and therefore less error-prone. The detectability of fringes even at strong overlaps is very high, as shown in Figure 5. Images without particles are unproblematic and make this method a good candidate for "zero-particle" monitoring.
4.4.2. Details on Blob Detection
Figure 6 illustrates that strong overlaps of fringe patterns lead to merged blobs. This problem is counteracted with the implemented corrective measure where non-circular blobs are treated as multiple occurrences. Such blobs are double counted, which acts to some extent as a coincidence correction. As a result, the blob detection even gains a slight advantage over the HT at higher particle densities ((3) in Figure 7b).
However, annotations (1) and (2) reveal the weaknesses. They indicate sample points where single measurement frames are distorted by high background fluctuations. As a consequence, the recognition of patterns in these frames fails and yields mainly False Positives, as evident from the affected measurement frames in Figure 8. Since blob detection is based on histogram thresholding, a misinterpreted threshold leads to incorrect detection hits. The fluctuations add low pixel intensity shares to the histogram which are confused with dark areas of fringe patterns. In the case of zero-particle frames, the impact of a misinterpreted threshold is vast. Because the histogram is divided into foreground and background pixels, zero-particle images are more difficult to classify and prone to misclassification.
4.4.3. Details on DCNN
At low particle concentrations (<500 #/cm³), the detection performance of the U-Net is comparable with the other methods. The U-Net is capable of classifying zero-particle images as well as images with higher background fluctuations and is as powerful as the HT. The high accuracy at low particle numbers in Table 1 also confirms these good detection rates. At higher concentrations, though, Figure 7c with annotation (4) and Table 1 illustrate the strong decrease in detectability. The reason is that, as the number of particles increases, fringe patterns start to overlap, for which the network is insufficiently trained. Although the training dataset contained numerous occurrences of fringe pattern overlaps, the network was trained specifically for individual occurrences. Since the accuracy of a DCNN depends on training, an enhanced set of training data would improve the detection performance, at least to a limited extent. Due to the lack of real measurement samples and their ground truth data, primarily only modelled holograms could be used. Thus, the training relies on the degree of realism in the Aerosol Particle Model which provided the training data.
Another reason is the loss of border pixels in every convolution step. Fringe patterns located at the border of images are therefore likely to be missed. Especially at higher particle concentrations, the probability of more particles passing at the edge of the detector increases, degrading the detection performance additionally.
4.5. Comparison of Computational Speed
The comparison of computational speed is also based on the dataset of the ramped particle number concentration used in Section 4.4. The processing time of all three presented counting methods is examined at every sample point of the measurement curve. These sample points of $C_N$ are directly proportional to the counting rate of particles N, as given in Equation (12). Figure 9 compares the computational speed of the methods as a function of particle number concentration. It has to be mentioned that both the blob detection and the customized HT are MATLAB-based algorithms which are executed on a CPU without GPU support. The U-Net, on the other hand, is a Python script running on a GPU which is optimized for Artificial Intelligence (AI) applications.
The blob detection and the U-Net are nearly constant over all measurement samples. With an average processing time of roughly 0.45 ms, the blob detection is the fastest method and was selected as the benchmark to which the other algorithms are normalized for comparison. The U-Net is slightly slower, with an average processing time of 0.68 ms or, in terms of computational speed, takes a factor of 1.56 longer in calculation than the blob detection.
The customized HT takes at least six times longer and exhibits a strong dependence on the particle rate. There, the number of particles may be interpreted as the number of cycles the algorithm has to iterate to obtain its counting result, which stems from the individual transformation of every fringe pattern occurrence into Hough space. On closer inspection, the blob detection behaves similarly due to the segmentation and labeling of the growing number of blobs. The impact is at a very small and narrow scale, though; hence, the dependence on the number of particles is negligible. The U-Net utilizes linear and invariable convolution, pooling and sampling operators and therefore operates at a steady speed.
5. Conclusions
The novel application of holography in Optical Particle Counters does not necessarily require wavefront reconstruction to reconstruct the particles in the sampled volume. Instead of such typical 3D backpropagation algorithms, common pattern recognition techniques are sufficient to detect and count interference patterns as valid particles at the Two-Dimensional hologram plane. With a Hough Transform, a variant of blob detection and a Deep Convolutional Neural Network, three different pattern recognition techniques were customized, validated and compared in terms of detection performance on fringe patterns and computational speed. While model data generated from a holographic Aerosol Particle Model aided the design of the methods, the validation and comparison were based on real measurement samples acquired with the Particle Imaging Unit from [1].
All three methods show basic suitability as counting methods, though with different limitations and drawbacks. At higher particle number concentrations, the rising probability of particle coincidence, a well-known limitation of OPCs, inevitably reduces the detection performance of all methods. Since it incorporates a sort of coincidence correction, the blob detection turns out to be the best method with respect to counting rates. Its superior computational speed with constant processing times (even without GPU support) enables Real-Time (RT) application and makes it the best candidate for particle counters. The customized HT is more robust against noise and intensity fluctuations in images and shows slightly better precision in the detection of fringe patterns. However, the longer processing times, which vary as a function of particle rate, disqualify it for practical use. A solution based on a DCNN only works satisfactorily at low particle rates, where only a few overlaps of fringe patterns occur. Its accuracy and detection performance may be increased by training with larger datasets and real measurement samples. Although this is a topic for further investigation, similarly high counting rates are difficult to achieve due to the high number of possible ways in which fringe patterns can overlap. The requirement of a GPU is an additional disadvantage compared with CPU-executable RT applications.
The herein provided set of measurement samples spans a range of roughly 0–490 particle counts per measurement frame. Because of coincidence, the used PIU is limited to particle number concentrations of roughly $C_N = 2000$ #/cm³. Further investigations could therefore focus on a redesigned sampling cell to provide a larger detection area for better particle distribution, or smaller fringe patterns at the hologram plane. Both measures would reduce the degree of pattern overlap and increase the detection performance of all presented methods, or raise the limit of detectable particles. The latter raises additional research questions with regard to the resolving capabilities of the methods.
Figure 1. (a) In-line holographic principle where a single particle creates a fringe pattern at the camera plane (particles are single nuclei in droplets); (b) schematic of the in-line holographic counting unit, subsequently called Particle Imaging Unit (PIU), cf. [1].
Figure 2. (a) Example of a typical detection plane at low particle number concentration with overlapping fringe patterns and patterns of different extent as a result of the $z_{prt}$-location in the sampling channel; (b) the normalized intensity histogram.
Figure 3. Fresnel Zone Plate (FZP) to estimate the radius $R_n$ of fringes and the size of fringe patterns; $\Delta d_r$ is the distance between two successive zone centers of the same parity (even or odd) and may be interpreted as the smallest detail to preserve when lowpass filtering fringe patterns; $dr_n$ is the width of the nth zone.
Figure 4. U-Net architecture example for 32 × 32 pixels in the lowest resolution. Each blue box corresponds to a multi-channel feature map. The number of channels is denoted on top of the box. The $xy$-size is provided at the lower left edge of the box. White boxes represent copied feature maps. The arrows denote the different operations.
Figure 5. Detection result of the customized HT (selected image section). (a) Gaussian-filtered fringe patterns ($\sigma_{lp} = 2.62$) where all detected fringes are highlighted with circles; (b) the original fringe patterns. The determined centroids equal the actual positions of the particles in the $xy$-plane. There is one missing hit (orange).
Figure 6. Detection result of the blob detection (selected image section). (a) Histogram and the optimal threshold $k_{opt}$ of the whole image, obtained by maximum entropy thresholding. Equation (8) needs to be confined to a lower threshold limit set to $L_1 = 0.5$ and an upper limit of $L_2 = 0.84$, which is the peak of the mainlobe; (b) fringe patterns overlaid with the corresponding blobs that result from a threshold at $k_{opt} = 0.74$. Four hits are missing (orange).
Figure 7. Comparison of the monitored particle number concentration to the counting rates obtained by the PIU. (a) customized HT; (b) blob detection with maximum entropy thresholding; (c) DCNN based on a U-Net.
Figure 8. Zoomed segments of measurement samples from Figure 7b that suffer strong background fluctuations: (1) zero-particle frame; (2) particle number concentration of $C_N = 194$ #/cm³. While the customized HT outputs correct hits (only True Positives), the blob detection fails in both scenarios (also False Positives) because of a misinterpreted intensity threshold in the histogram.
Table 1. Precision and accuracy of the DCNN for selected validation samples.

| Number of Particles | Precision | Accuracy |
|---|---|---|
| 53 | 0.55 | 0.98 |
| 88 | 0.45 | 0.91 |
| 103 | 0.36 | 0.87 |
| 155 | 0.35 | 0.74 |
| 180 | 0.25 | 0.69 |
Author Contributions
Conceptualization, methodology, and validation, G.B., I.H., and A.B.; investigation, software, data curation, formal analysis, visualization, and writing-original draft, G.B. and I.H.; supervision, project administration, funding acquisition, resources and writing-review and editing, A.B. and M.K. All authors have read and agreed to the published version of the manuscript.
Funding
This project was performed within the COMET Centre ASSIC-Austrian Smart Systems Integration Research Center, which is funded by BMK, BMDW, and the Austrian provinces of Carinthia and Styria, within the framework of COMET-Competence Centers for Excellent Technologies. The COMET programme is run by FFG.
Acknowledgments
The authors want to thank the great team of the Institute of Electronic Sensor Systems at TU Graz where all experiments were conducted. We also appreciate the support of Philip Bermann who wrote his Bachelor Thesis on the topic of moving aerosol particles. Special thanks are also dedicated to Michael Gobald, Thomas Krenn, and Martin Hirzer from the TR department of AVL for their fruitful discussions which helped in developing and assessing the presented image processing methods.
Conflicts of Interest
The authors declare no conflict of interest. The founding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results.
Abbreviations
The following abbreviations are used in this manuscript:
CPC | Condensation Particle Counter |
PC | Particle Counter |
HPC | Holographic Particle Counter |
HPCs | Holographic Particle Counters |
CNM | Condensation Nucleus Magnifier |
OPCs | Optical Particle Counters |
PN | Particle Number |
HPIV | Holography Particle Image Velocimetry |
HT | Hough Transform |
CHT | Circular Hough Transform |
PIU | Particle Imaging Unit |
APM | Aerosol Particle Model |
ASM | Angular Spectrum Method |
FZP | Fresnel Zone Plate |
FZPs | Fresnel Zone Plates |
DIH | Digital Inline Holography |
3D | Three-Dimensional |
2D | Two-Dimensional |
SNR | Signal to Noise Ratio |
DNN | Deep Neural Network |
DCNN | Deep Convolutional Neural Network |
LoG | Laplacian of Gaussian |
DoF | Depth of Field |
ReLu | Rectified Linear Unit |
AI | Artificial Intelligence |
RT | Real-Time |
GPU | Graphics Processing Unit |
1. Brunnhofer, G.; Bergmann, A.; Klug, A.; Kraft, M. Design & Validation of a Holographic Particle Counter. Sensors 2019, 19, 4899.
2. Pan, G.; Meng, H. Digital In-line Holographic PIV for 3D Particulate Flow Diagnostics. In Proceedings of the 4th International Symposium on Particle Image Velocimetry, Gottingen, Germany, 17-19 September 2001; pp. 611-617.
3. Gire, J.; Denis, L.; Fournier, C.; Thiébaut, E.; Soulez, F.; Ducottet, C. Digital holography of particles: Benefits of the 'inverse problem' approach. Meas. Sci. Technol. 2008, 19, 074005.
4. Berg, M.J.; Heinson, Y.W.; Kemppinen, O.; Holler, S. Solving the inverse problem for coarse-mode aerosol particle morphology with digital holography. Sci. Rep. 2017, 7, 1-9.
5. Murata, S.; Yasuda, N. Potential of digital holography in particle measurement. Opt. Laser Technol. 2001, 32, 567-574.
6. Malek, M.; Allano, D.; Coëtmellec, S.; Özkul, C.; Lebrun, D. Digital in-line holography for three-dimensional-two-components particle tracking velocimetry. Meas. Sci. Technol. 2004, 15, 699-705.
7. Poon, T.C.; Liu, J.P. Introduction to Modern Digital Holography; Cambridge University Press: Cambridge, UK, 2014; p. 223.
8. Gonzalez, R.; Woods, R. Digital Image Processing; Prentice Hall: Upper Saddle River, NJ, USA, 2002; p. 190.
9. Brunnhofer, G.; Bergmann, A. Modelling a Holographic Particle Counter. Proceedings 2018, 2, 967.
10. Attwood, D.; Sakdinawat, A. X-Rays and Extreme Ultraviolet Radiation: Principles and Applications, 2nd ed.; Cambridge University Press: Cambridge, UK, 2017.
11. Dixon, L.; Cheong, F.C.; Grier, D.G. Holographic particle-streak velocimetry. Opt. Express 2011, 19, 4393-4398.
12. Duda, R.O.; Hart, P.E. Use of the Hough transformation to detect lines and curves in pictures. Commun. ACM 1972, 15, 11-15.
13. Atherton, T.J.; Kerbyson, D.J. Using phase to represent radius in the coherent circle Hough transform. In IEE Colloquium on Hough Transforms; IET: London, UK, 1993; pp. 1-4.
14. Atherton, T.; Kerbyson, D. Size invariant circle detection. Image Vis. Comput. 1999, 17, 795-803.
15. Kong, H.; Akakin, H.C.; Sarma, S.E. A generalized laplacian of gaussian filter for blob detection and its applications. IEEE Trans. Cybern. 2013, 43, 1719-1733.
16. Last, A. Fresnel Zone Plates 2016. Available online: http://www.x-ray-optics.de/index.php/en/types-of-optics/diffracting-optics/fresnel-zone-plates (accessed on 11 December 2019).
17. Lindeberg, T. Detecting salient blob-like image structures and their scales with a scale-space primal sketch: A method for focus-of-attention. Int. J. Comput. Vis. 1993, 11, 283-318.
18. Jayanthi, N.; Indu, S. Comparison of image matching techniques. Int. J. Latest Trends Eng. Technol. 2016, 7, 396-401.
19. Kaspers, A. Blob Detection. Master's Thesis, UMC Utrecht, Utrecht, The Netherlands, 2009. Available online: https://dspace.library.uu.nl/handle/1874/204781 (accessed on 11 December 2019).
20. Sonka, M.; Hlavac, V.; Boyle, R. Image Processing, Analysis, and Machine Vision; Cengage Learning: Boston, MA, USA, 2014.
21. Kittler, J.; Illingworth, J. On threshold selection using clustering criteria. IEEE Trans. Syst. Man Cybern. 1985, SMC-15, 652-655.
22. Lee, H.; Park, R. Comments on An optimal multiple threshold scheme for image segmentation. IEEE Trans. Syst. Man Cybern. 1990, 20, 741-742.
23. Kapur, J.N.; Sahoo, P.K.; Wong, A.K.C. A new method for gray-level picture thresholding using the entropy of the histogram. Comput. Vision Graph. Image Process. 1985, 29, 273-285.
24. Sezgin, M.; Sankur, B. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging 2004, 13, 146-165.
25. Haralick, R.M.; Shapiro, L.G. Image segmentation techniques. Comput. Vision Graph. Image Process. 1985, 29, 100-132.
26. Girshick, R.; Donahue, J.; Darrell, T.; Malik, J. Rich feature hierarchies for accurate object detection and semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 23-28 June 2014; pp. 580-587.
27. LeCun, Y.; Boser, B.; Denker, J.S.; Henderson, D.; Howard, R.E.; Hubbard, W.; Jackel, L.D. Backpropagation applied to handwritten zip code recognition. Neural Comput. 1989, 1, 541-551.
28. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 25, 1097-1105.
29. Simonyan, K.; Zisserman, A. Two-stream convolutional networks for action recognition in videos. In Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada, 8-13 December 2014; pp. 568-576.
30. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5-9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234-241.
31. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, Orlando, FL, USA, 3-7 November 2014; pp. 675-678.
32. EM Segmentation Challenge. 2012. Available online: https://imagej.net/2011-10-25_-_EM_segmentation_challenge_(ISBI_-_2012) (accessed on 31 March 2020).
33. TSI Incorporated. Model 3775 Condensation Particle Counter; Operation and Service Manual; TSI Incorporated: Shoreview, MN, USA, 2007.
Georg Brunnhofer1,2,3,*,†, Isabella Hinterleitner2, Alexander Bergmann3 and Martin Kraft2
1Nanophysics & Sensor Technologies, AVL List GmbH, 8020 Graz, Austria
2Sensor Systems, Silicon Austria Labs GmbH, 9524 Villach/St. Magdalen, Austria
3Institute of Electronic Sensor Systems, Graz University of Technology, 8010 Graz, Austria
*Author to whom correspondence should be addressed.
†Current address: AVL List GmbH, 8020 Graz, Austria.
Abstract
Digital Inline Holography (DIH) is used in many fields of Three-Dimensional (3D) imaging to locate micro or nano-particles in a volume and determine their size, shape or trajectories. A variety of different wavefront reconstruction approaches have been developed for 3D profiling and tracking to study particles' morphology or visualize flow fields. The novel application of Holographic Particle Counters (HPCs) requires observing particle densities in a given sampling volume, which does not strictly necessitate the reconstruction of particles. Such typically spherical objects yield circular interference patterns—also referred to as fringe patterns—at the hologram plane which can be detected by simpler Two-Dimensional (2D) image processing means. The determination of particle number concentrations (number of particles/unit volume [#/cm³]) may therefore be based on the counting of fringe patterns at the hologram plane. In this work, we explain the nature of fringe patterns and extract the most relevant features provided at the hologram plane. The features aid the identification and selection of suitable pattern recognition techniques and their parameterization. We then present three different techniques which are customized for the detection and counting of fringe patterns and compare them in terms of detection performance and computational speed.