1. Introduction
For the past few decades, generating high dynamic range (HDR) images from a group of low dynamic range (LDR) images with different exposures has been a challenge [1]. LDR images taken with mobile phones and cameras cannot accurately reflect the dynamic range of the real world, mainly because their image sensors have a limited response to the dynamic range of natural light [2] and therefore cannot capture all the details of the light and dark areas of a dynamic scene. In the early stage of development, many researchers proposed HDR generation schemes that use the camera response function (CRF) and the exposure time of the input images to linearly map pixel values to scene brightness [1] and generate HDR images of acceptable visual quality. However, with this approach it is difficult to determine the CRF, because the CRF must be recalculated whenever the capture device is changed or its parameters (exposure time, ISO, etc.) are adjusted [3]. In addition, after the HDR image is acquired, its range must be compressed using tone mapping so that it can be shown on a standard display [4]. Since tone mapping is time-consuming and requires many steps, it limits the applicability of this method.
Therefore, the concept of multi-exposure fusion (MEF) was proposed. MEF is a fusion technology that combines images with different exposure times according to fusion rules to generate HDR images of good visual quality without any imaging information, and the results can be displayed directly on standard displays [5].
Generally, MEF methods are divided into two categories: traditional methods and deep learning-based methods. Traditional MEF technology is mature, and many such methods have been proposed. For example, Liu et al. [6] proposed a MEF method based on the dense scale-invariant feature transform (SIFT), which uses dense SIFT descriptors to extract local details from the source images; it is suitable for both static and dynamic scenes and can satisfactorily remove ghosting artifacts in dynamic scenes. Huang et al. [7] proposed a multi-exposure fusion algorithm based on feature evaluation that adaptively evaluates the exposure weight of each image; the fused image has better brightness and retains some details, but details are still lost in areas with large brightness differences.
In recent years, deep learning has attracted the attention of many researchers and has been applied to MEF. Prabhakar et al. [8] were the first to apply deep learning to MEF, and Wang et al. [9] proposed a CNN-based method for fusing images with different exposure levels. However, deep learning-based methods require many multi-exposure images with ground-truth information for training, a requirement that is difficult to satisfy.
Within the traditional MEF framework, this paper proposes an exposure weight calculation method based on a weighted-average adaptive factor, building on Huang's method [7], and then uses a fast local Laplacian filter for local detail enhancement. The main contributions of this paper are as follows:
(1) This paper proposes a new exposure weight evaluation scheme based on a weighted-average adaptive factor determined by the local average brightness and the expected brightness. The local average brightness and the expected brightness are used as adjustment parameters of the exposure evaluation function, and the exposure weight of each pixel is evaluated adaptively.
(2) A local detail enhancement method for Laplacian pyramid fusion is proposed, which uses the K-means algorithm to divide the input image sequence into over-exposed images (OEI) and low-exposure images (LEI) according to the brightness histogram distribution of each image, and then uses the fast local Laplacian filter (FLLF) to enhance the darker regions of the OEI and the brighter regions of the LEI.
The rest of this paper is organized as follows. Section 2 introduces related work; Section 3 gives a detailed introduction to the proposed methods; then, we analyze the experimental results from both the objective and subjective analyses in Section 4; and in Section 5, we summarize this work and discuss future work.
2. Related Work
Exposure fusion was first proposed by Mertens et al. [10]. In this method, each input image is decomposed into a Laplacian pyramid, and a Gaussian pyramid of the weight map is built for each image according to contrast, saturation, and exposure, thus realizing a multi-resolution fusion framework. Multi-resolution fusion achieves good visual results for most scenes but loses valid details in brighter and darker areas. Goshtasby [11] proposed a fusion method based on smoothing filtering, which divides the LDR images into blocks, uses entropy to calculate the weight of each block, and then constructs the result from the block with the highest weight at each location; however, the boundaries between blocks appear unsmooth. In order to smooth these boundaries, a two-dimensional Gaussian filter is deployed; smaller blocks give a better smoothing effect than larger blocks but cost more computation time, so there is a trade-off between smoothness and computational complexity.
Shen et al. [12] proposed a generalized random walk framework that comprehensively considers neighborhood information, local contrast, and color consistency and achieves a globally optimal probabilistic model. Gu et al. [13] inversely transform and linearly stretch the gradient field obtained by maximizing the structure tensor; however, the fused images suffer from color blurring and an unnatural appearance.
Qi et al. [14] used guided filtering to decompose the source image into base and detail layers, performed a weighted fusion of the base and detail layers determined by the average level of local brightness changes, and implemented a low-level feature-based multi-exposure fusion algorithm. This method performs well in image sharpness and color information preservation but is prone to noise and halos.
Although these traditional multi-exposure fusion schemes can achieve good visual effects, there is a trade-off between image detail and brightness: if the brightness weight is large, the fused image will be brighter but some details will be lost; conversely, some brightness weight must be sacrificed to highlight more texture details.
In recent years, detail enhancement mechanisms have also been introduced into multi-exposure fusion methods. Most of them rely on detail extraction based on edge-preserving filters, such as the bilateral filter [15], the weighted least-squares (WLS) filter [16], the guided filter [17], and other improved filters. Raman et al. [18] used the difference between the original pixel value and the bilaterally filtered pixel value to determine the weight of each pixel; although this retains strong edges, it performs poorly in global contrast and color visibility.
Ma et al. [19] proposed a structural patch decomposition-based MEF algorithm (SPD-MEF), but the structural patch decomposition is time-consuming. Hayat et al. [20] used edge-preserving recursive filtering to reduce artifacts near edges. Li et al. [21] made full use of spatial consistency by fusing the base and detail layers through a guided-filtering-based weighted-average technique. Kou et al. [22] utilized a gradient-domain weighted least-squares method to extract details of the brightest and darkest regions. However, most of these detail-enhancement-based multi-exposure fusion algorithms enhance the details of all pixels except the brightest and darkest areas, which increases the complexity of the algorithm and leads to excessive enhancement of normal pixel details.
In order to adapt to the local brightness of the input image sequence, this paper proposes a new weighted-average adaptive-factor exposure evaluation scheme. Unlike previous enhancement schemes, a local detail enhancement mechanism is proposed for the brighter and darker regions: it processes only the brighter and darker regions determined by the average brightness and leaves the other regions unchanged.
3. Proposed Methods
3.1. Proposed Frame
The fusion framework of this paper is shown in Figure 1 and includes two steps. (1) For the input LDR multi-exposure image sequence, the well-exposure evaluation function and chrominance evaluation function proposed in this paper are used to determine the exposure weight map and the chrominance weight map; after normalization and fusion, a combined weight map is obtained, and the initial fused pyramid is then obtained by fusing the Laplacian pyramid of the input images with the Gaussian pyramid of the combined weight map. (2) The K-means algorithm is used to divide the input images into a low-exposure image set (LEI) and an over-exposure image set (OEI) according to their brightness histograms, and the fast local Laplacian filter (FLLF) is then applied to enhance the pixels of the brighter regions of the LEI and the darker regions of the OEI.
3.2. Well-Exposure Evaluation Weight
The exposure ratio measures the exposure quality of an image. The image we want should be well-exposed, neither over-exposed nor under-exposed, and should contain sharp details. When fusing, the weights of under-exposed or over-exposed pixels should be suppressed, whereas well-exposed pixels should keep large fusion weights. Therefore, pixels with brightness close to 0.5 should be given greater weight, whereas those close to 0 or 1 should be given less weight.
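As a point of reference for this principle, the sketch below shows the classic Gaussian well-exposedness measure of Mertens et al. [10], written as a Python/NumPy function for illustration (the original implementations in this field are typically MATLAB); the adaptive evaluation function used in this paper is introduced next and differs from this fixed form.

```python
import numpy as np

def mertens_well_exposedness(gray, sigma=0.2):
    """Classic well-exposedness weight of Mertens et al. [10]:
    values near 0.5 receive weights near 1, values near 0 or 1
    receive weights near 0.  `gray` is scaled to [0, 1]."""
    return np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))

# Example: an under-exposed, a well-exposed, and an over-exposed pixel.
print(mertens_well_exposedness(np.array([0.05, 0.5, 0.95])))
```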
According to this principle, Huang et al. [7] proposed an exposure evaluation function based on an adaptive factor to calculate the exposure weight. Figure 2a shows the exposure evaluation function, and the adaptive factor (γ) is defined as follows:
(1)
where $I_k(i,j)$ represents the pixel value of the $k$-th input image at position $(i,j)$, $N$ represents the number of input images, and $h$ and $w$ represent the size of the input image. The adaptive factor $\gamma$ is determined by the difference between the average brightness of all input images and 0.5. We found that when the average value is greater than 0.5, brighter areas are given greater weights; when the average value is too small, $\gamma$ becomes large and the exposure weights of pixels in the darker range are assigned 0, causing the fused image to lose details that the darker areas should contain. When the average value is less than 0.5, darker areas are given higher weights; however, when the average value is too large, $\gamma$ becomes large and the exposure weights of pixels in the brighter range are assigned 0, causing the fused image to lose details that the brighter areas should contain. Experiments show that the brightness of the fused image is lower than the average brightness of the input images, resulting in low local brightness, a problem that the well-exposure evaluation function proposed in this section resolves. We obtain the fusion weight by normalizing the exposure rate, which is defined as follows:
(2)
where $W^e_k(i,j)$ represents the exposure weight of the $k$-th input image at position $(i,j)$. Although the well-exposure weight can prevent the fused image from being under-exposed or over-exposed, it often suffers from contrast degradation and a local brightness that does not adapt to the input images. We illustrate this phenomenon with two pixels from the Chinese garden sequence of the MEF dataset. As shown in Figure 3, the values of pixel A from dark to light are 0.2, 0.55, and 0.75, and the values of pixel B are 0.35, 0.7, and 0.95. The average brightness of the Chinese garden sequence is 0.35, so the corresponding exposure weights are obtained from the function represented by the dotted line in Figure 2a; the result is shown in Figure 4. According to Figure 4, the value of A in the fused image is greater than that of B, and the value of B is much smaller than the average value of the input pixel B. This is because the expected value of all pixels in the input image is set to 0.5 regardless of their actual values. Therefore, to keep the luminance distribution consistent with the input images, the desired luminance of different pixels needs to follow the values in the input images. We define a new weighted-average adaptive factor as follows:
(3)
where $\bar{L}(i,j)$ represents the average brightness of the $N$ input images at position $(i,j)$ and $E(i,j)$ is the expected brightness value; they are respectively defined as follows:
$\bar{L}(i,j) = \frac{1}{N}\sum_{k=1}^{N} I_k(i,j)$ (4)
(5)
where $\alpha$ is the weight assigned to the value 0.5 in Equation (5). If $\alpha$ is too large or too small, the local contrast of the fused image will be reduced. In order to keep the brightness distribution consistent with the input images and obtain better exposure, we set $\alpha$ to 0.5. $I_{\min}(i,j)$ and $I_{\max}(i,j)$ represent the darkest and brightest pixel values of the $N$ input images at position $(i,j)$, defined as follows:
$I_{\min}(i,j) = \min_{k=1,\dots,N} I_k(i,j)$ (6)
$I_{\max}(i,j) = \max_{k=1,\dots,N} I_k(i,j)$ (7)
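The per-pixel statistics of Equations (4), (6), and (7) can be computed directly from the image stack. A minimal NumPy sketch, assuming the exposures are already converted to grayscale and scaled to [0, 1], is shown below (this is an illustration, not the authors' MATLAB code):

```python
import numpy as np

def per_pixel_statistics(gray_seq):
    """gray_seq: (N, H, W) stack of grayscale exposures in [0, 1].
    Returns the local average brightness (Eq. 4) and the darkest and
    brightest values at each position (Eqs. 6 and 7)."""
    gray_seq = np.asarray(gray_seq, dtype=np.float64)
    mean_l = gray_seq.mean(axis=0)   # Eq. (4): average over the N images
    i_min = gray_seq.min(axis=0)     # Eq. (6): darkest value per pixel
    i_max = gray_seq.max(axis=0)     # Eq. (7): brightest value per pixel
    return mean_l, i_min, i_max
```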
Therefore, the specific implementation of the exposure evaluation function based on the weighted-average adaptive factor proposed in this paper is as follows: brightness values near the expected brightness are given larger weights, and pixels near the two ends are given lower weights. If the local average brightness $\bar{L}(i,j)$ is smaller than the expected brightness, the input pixels are dark and the weight of the darker area should be appropriately increased. In this way, the small number of pixels in the brighter area will not obtain a higher weight, so more details of the brighter area will be preserved in the fused image. In this case, the exposure evaluation function is defined as follows:
(8)
If the local average brightness $\bar{L}(i,j)$ is larger than the expected brightness, the input pixels are brighter. In this case, the weight of the brighter area should be appropriately increased, so the fused image will obtain more details in the darker area. The exposure evaluation function is then defined as follows:
(9)
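To make the case split behind Equations (8) and (9) tangible, the following Python sketch uses an asymmetric Gaussian centered at the expected brightness whose darker or brighter side is widened depending on the local average brightness. This is purely an illustrative assumption standing in for the paper's exact functions, which are not reproduced here; the parameters `sigma` and `boost` are hypothetical.

```python
import numpy as np

def illustrative_exposure_weight(pixel, expected, mean_l, sigma=0.2, boost=1.5):
    """Illustrative stand-in for the case split of Eqs. (8) and (9),
    NOT the paper's exact formulas: the weight peaks at the expected
    brightness, and when the local average brightness is below (above)
    the expected value the darker (brighter) side of the curve is
    widened so that side keeps more weight."""
    if mean_l < expected:          # case of Eq. (8): favour darker pixels
        s = sigma * boost if pixel < expected else sigma
    else:                          # case of Eq. (9): favour brighter pixels
        s = sigma if pixel < expected else sigma * boost
    return float(np.exp(-((pixel - expected) ** 2) / (2.0 * s ** 2)))
```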
Figure 2b shows the exposure evaluation function proposed in this paper. The final well-exposure weights of the two groups of pixels in Figure 3, calculated by Formulas (8) and (9), are shown in Figure 5. With the new well-exposure weights, the brightness values of pixel A and pixel B in the fused image are close to the average brightness, and pixel B remains brighter than pixel A, which is consistent with the brightness ordering in the input images. This indicates that the exposure evaluation scheme based on the weighted-average adaptive factor can effectively avoid the problem of local contrast reduction. In addition, we fuse the image sequences of the MEF dataset using both the new exposure weights and Huang's exposure weights, and then use the no-reference quality metric NIQE [23] to evaluate the fused images objectively; the lower the NIQE value, the better the fused image quality. The results are shown in Figure 6. Although some NIQE values obtained by the proposed scheme are slightly higher than Huang's, as shown in the enlarged area of the figure, most of the fused images achieve better NIQE scores, indicating that the proposed scheme better preserves image structure and features.
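To make the pixel A/B example above concrete, the short sketch below normalizes a set of per-image weights (as in Equation (2)) and takes the weighted average of the pixel values from the Chinese garden example. The weight values themselves are made up for illustration; the actual weights come from the evaluation functions and Figures 4 and 5.

```python
import numpy as np

# Pixel values from the 'Chinese garden' example, from dark to light.
pixel_a = np.array([0.20, 0.55, 0.75])
pixel_b = np.array([0.35, 0.70, 0.95])

# Hypothetical exposure weights, for illustration only; the real
# weights come from the evaluation function and Figures 4 and 5.
weights_a = np.array([0.3, 1.0, 0.6])
weights_b = np.array([0.5, 0.9, 0.3])

def weighted_fusion(values, weights):
    """Normalise the weights over the N exposures and take the
    weighted average of the pixel values."""
    weights = weights / weights.sum()
    return float(np.dot(weights, values))

print(weighted_fusion(pixel_a, weights_a), weighted_fusion(pixel_b, weights_b))
```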
3.3. Color-Intensity Weight
A fused image with vivid colors gives a better visual impression, so pixels with higher saturation in the input images should be given higher weights, calculated as follows:
(10)
where $\bar{C}_k(i,j)$ represents the average value of the R, G, and B channels of the $k$-th input image at position $(i,j)$, which is defined as follows:
$\bar{C}_k(i,j) = \frac{1}{3}\left(R_k(i,j) + G_k(i,j) + B_k(i,j)\right)$ (11)
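A small sketch of this color measure follows. The channel mean implements Equation (11); the saturation-style weight shown alongside it is one common choice (the standard deviation of the channels around their mean) and is an assumption, since the exact form of Equation (10) is not reproduced here.

```python
import numpy as np

def color_intensity_weight(rgb):
    """rgb: (H, W, 3) image in [0, 1].  Computes the per-pixel mean of
    the R, G and B channels (Eq. 11) and, as one common choice for a
    saturation-style weight (the exact form of Eq. 10 is not
    reproduced here), the standard deviation of the three channels
    around that mean."""
    mean_c = rgb.mean(axis=2)                                      # Eq. (11)
    saturation = np.sqrt(((rgb - mean_c[..., None]) ** 2).mean(axis=2))
    return mean_c, saturation
```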
3.4. Local Detail Enhancement
Pyramid-based fusion methods can retain most details in normally exposed areas but cannot preserve details in brighter and darker areas, and the larger the number of pyramid levels, the more obvious this weakness becomes. In this paper, we first use the K-means algorithm to divide the input image sequence into over-exposed images (OEI) and low-exposure images (LEI) according to the luminance histogram features of each image. According to the local average brightness $\bar{L}(i,j)$, the brighter region (BoE) and the darker region (DoE) can be determined; then, the fast local Laplacian filter (FLLF) proposed by Aubry et al. [24] is used to enhance the BoE area of the LEI and the DoE area of the OEI. The framework is shown in Figure 7.
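The K-means split can be realised in a few lines. The sketch below clusters the exposures into two groups using their normalised brightness histograms as features; since the paper does not specify the exact feature vector, the histogram choice and the rule for labelling the brighter cluster as the OEI set are assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def split_oei_lei(gray_seq, bins=32):
    """gray_seq: (N, H, W) grayscale exposures in [0, 1].  Clusters the
    images into two groups using their normalised brightness
    histograms as features; this is one simple realisation of the
    histogram-based split described in the text."""
    feats = np.stack([np.histogram(g, bins=bins, range=(0.0, 1.0), density=True)[0]
                      for g in gray_seq])
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
    # Treat the cluster containing the brightest image as the OEI set.
    brightest = int(np.argmax([g.mean() for g in gray_seq]))
    oei = [i for i, l in enumerate(labels) if l == labels[brightest]]
    lei = [i for i, l in enumerate(labels) if l != labels[brightest]]
    return oei, lei
```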
3.4.1. Brighter and Darker Area Determination Criteria
Since the darker areas of low-exposure images and the brighter areas of over-exposed images do not contain any detail information, there is no need to enhance them, as shown in Figure 8d,e. Figure 8d,e show the horizontal pixel intensity of the red area (the brightest area in Figure 8a) and the yellow area (the darkest area in Figure 8c) in Figure 8a–c, respectively. In Figure 8d, the intensity of the brightest area of the over-exposed image is a straight line, indicating that this area contains no details. The same area in Figure 8b,c contains more information; in other words, the under-exposed image contains the texture information of the brighter area of the over-exposed image, as shown in Figure 8e. It is therefore necessary to exclude the DoE of the low-exposure images and the BoE of the over-exposed images, and to use the DoE as the enhancement area of the over-exposed images and the BoE as the enhancement area of the low-exposure images. Although the DoE region does not contain any detail information in the low-exposure image, that detail is present in the over-exposed image.
The brightest and darkest regions are determined by the mean value $\bar{L}(i,j)$ of the input images at each position, as given in Equation (4). When $\bar{L}(i,j)$ is greater than the upper threshold, the current pixel belongs to the BoE, as shown in Figure 8f; when $\bar{L}(i,j)$ lies between the two thresholds, the current pixel belongs to the normal area; and when $\bar{L}(i,j)$ is smaller than the lower threshold, the current pixel belongs to the DoE, as shown in Figure 8g.
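A minimal sketch of this region labelling is shown below; the two threshold values are illustrative placeholders, since the paper's exact cut-offs are not reproduced in this text.

```python
import numpy as np

def region_masks(mean_l, t_low=0.2, t_high=0.8):
    """mean_l: per-pixel average brightness from Eq. (4).  The two
    thresholds are illustrative placeholders, not the paper's values."""
    boe = mean_l > t_high           # brighter region (BoE), cf. Figure 8f
    doe = mean_l < t_low            # darker region (DoE), cf. Figure 8g
    normal = ~(boe | doe)           # left untouched by the enhancement
    return boe, doe, normal
```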
3.4.2. FLLF-Based Enhancement of Brighter and Darker Regions
For each input image $I_k$, we use the traditional Laplacian pyramid method to construct an initial Laplacian pyramid $L^l\{I_k\}$, $l = 1, \dots, d$, where the number of pyramid levels $d$ is defined as follows:
(12)
where $h$ and $w$ represent the length and width of the input image. For the DoE region of the over-exposed images (OEI) and the BoE region of the low-exposure images (LEI), we use the FLLF to enhance the details of these two regions. Firstly, the Gaussian pyramids of the OEI and LEI are calculated; then a set of sample values $\{g_s\}$, arranged from small to large, is used to regularly sample the value range of the Gaussian coefficients, where $s = 1, \dots, S$ and $S$ is the number of samples. For a sample value $g_s$ in the DoE and BoE, the local Laplacian pyramid coefficients $L^l\{r_s(I_k)\}$ are calculated, where $r_s(\cdot)$ represents the remapping function for $g_s$, defined as follows:
(13)
where $\sigma_r$ is used to distinguish edges from details and $\alpha$ is a parameter that controls the amount of detail enhancement; in this paper, $\sigma_r$ and $\alpha$ are set to 0.5 and 0.25, respectively. For the Gaussian pyramid coefficient at position $(i,j)$ of layer $l$ that falls between two sample values $g_s$ and $g_{s+1}$, the interpolation parameter $t$ is calculated from the coefficient and the two sample values, and linear interpolation is then used to calculate the corresponding pyramid coefficients in the DoE and BoE:
(14)
(15)
where the enhanced coefficient in the DoE area denotes the value of the $l$-th pyramid layer of the $k$-th over-exposed image. To give the fused image better details, the maximum of the coefficient values at the same pyramid level is taken:
(17)
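The two core operations of this step can be sketched compactly. The remapping function below follows the spirit of the local Laplacian filter of Aubry et al. [24] (Equation (13) is not reproduced verbatim, so the exact functional form is an assumption), and the second helper applies an element-wise maximum across same-level coefficients, which is one straightforward reading of Equations (16) and (17).

```python
import numpy as np

def detail_remap(coeff, g, sigma_r=0.5, alpha=0.25):
    """Detail-remapping in the spirit of the fast local Laplacian
    filter of Aubry et al. [24]; Eq. (13) is not reproduced verbatim.
    Differences from the sample value g that are smaller than sigma_r
    are treated as detail and amplified by the power alpha < 1;
    larger differences are treated as edges and left unchanged."""
    d = coeff - g
    detail = g + np.sign(d) * sigma_r * (np.abs(d) / sigma_r) ** alpha
    return np.where(np.abs(d) <= sigma_r, detail, coeff)

def max_over_images(level_coeffs):
    """Element-wise maximum of same-level Laplacian coefficients over a
    set of images, one reading of Eqs. (16) and (17)."""
    return np.maximum.reduce(list(level_coeffs))
```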
3.5. Multiscale Fusion
The combined weight consists of a well-exposure weight and a chrominance weight and is defined as follows:
(18)
where $\varepsilon$ is a small constant that avoids the case where $W^e_k(i,j)$ and $W^c_k(i,j)$ are both zero. A multi-scale fusion strategy is then used to fuse the Gaussian pyramid of the combined weight map with the Laplacian pyramid of the input images:
$L^l\{F\}(i,j) = \sum_{k=1}^{N} G^l\{W_k\}(i,j)\, L^l\{I_k\}(i,j)$ (19)
where $L^l\{F\}$ represents the initial fused Laplacian pyramid. To enhance the details of the DoE and BoE regions, the Laplacian coefficients of the enhanced OEI and LEI pyramids are written into the corresponding DoE and BoE regions of $L^l\{F\}$. Finally, the fused image is generated by Laplacian pyramid reconstruction. Algorithm 1 shows the detailed calculation process (an illustrative code sketch of the multi-scale fusion step follows the algorithm).
Algorithm 1: The proposed algorithm
Parameter: $Y_k$ denotes the normalized grayscale image of $I_k$; $t$ denotes the interpolation parameter; $\bar{L}$ denotes the average brightness
Input: source image sequence $I_k$, $k = 1, \dots, N$
Output: the fused result $F$
1: for each image $I_k$ do
2:   $Y_k$ := rgb2gray($I_k$)
3:   Calculate $W^e_k$ by Equations (8) and (9)
4:   Calculate $W^c_k$ by Equation (10)
5:   Obtain $L\{I_k\}$ by computing the Laplacian pyramid of $I_k$
6: end for
7: for each $W^e_k$ and $W^c_k$ do
8:   Calculate $W_k$ by Equation (18)
9: end for
10: Compute the Gaussian pyramid $G\{W_k\}$ of $W_k$
11: Compute $\bar{L}$ by Equation (4)
12: Use $\bar{L}$ to determine the DoE and BoE
13: Use K-means to divide $\{I_k\}$ into OEI and LEI
14: Calculate the Gaussian pyramids of the OEI and LEI
15: for each sample value $g_s$ in the DoE and BoE do
16:   Calculate the remapped coefficients by Equation (13)
17:   Calculate the interpolation parameter $t$
18:   Calculate the corresponding pyramid coefficients in the DoE and BoE by Equations (14) and (15)
19: end for
20: Calculate the enhanced DoE and BoE coefficients by Equations (16) and (17)
21: Calculate $L\{F\}$ by Equation (19)
22: Update $L\{F\}$ with the enhanced DoE and BoE coefficients
23: Use Laplacian pyramid reconstruction to obtain the fused image $F$
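As an illustration of the multi-scale fusion core (Equations (18) and (19) and steps 10 and 21–23 of Algorithm 1), the Python/NumPy sketch below normalises pre-computed combined weight maps, fuses Gaussian weight pyramids with Laplacian image pyramids, and collapses the result. The pyramid construction (Gaussian blur plus decimation, bilinear upsampling) and the fixed number of levels are simplifications chosen for the sketch; the detail-enhancement step of Section 3.4 is omitted here.

```python
import numpy as np
from scipy import ndimage

def upsample_to(img, shape):
    """Bilinear upsampling of a pyramid level to a target shape."""
    factors = (shape[0] / img.shape[0], shape[1] / img.shape[1])
    return ndimage.zoom(img, factors, order=1)

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        blurred = ndimage.gaussian_filter(pyr[-1], sigma=1.0)
        pyr.append(blurred[::2, ::2])
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[l] - upsample_to(gp[l + 1], gp[l].shape) for l in range(levels - 1)]
    lp.append(gp[-1])                     # coarsest residual kept as-is
    return lp

def fuse_sequence(gray_seq, combined_weights, levels=5, eps=1e-12):
    """gray_seq, combined_weights: (N, H, W) arrays in [0, 1].
    Normalises the combined weights (the small eps plays the role of
    the constant in Eq. 18), fuses the Gaussian weight pyramids with
    the Laplacian image pyramids (Eq. 19), and collapses the result."""
    w = combined_weights + eps
    w = w / w.sum(axis=0, keepdims=True)
    fused = None
    for img, wk in zip(gray_seq, w):
        lp = laplacian_pyramid(img, levels)
        gp = gaussian_pyramid(wk, levels)
        contrib = [g * l for g, l in zip(gp, lp)]
        fused = contrib if fused is None else [f + c for f, c in zip(fused, contrib)]
    out = fused[-1]
    for l in range(levels - 2, -1, -1):   # collapse from coarse to fine
        out = upsample_to(out, fused[l].shape) + fused[l]
    return np.clip(out, 0.0, 1.0)
```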
4. Results and Analysis
In this paper, the proposed multi-exposure image fusion method is validated on the MEF dataset, as shown in Table 1. The fusion results of our method are compared with seven recently proposed fusion algorithms: median and recursive filtering-based fusion (MRF) [25], structural patch decomposition-based fusion (SPD-MEF) [19], linear combination-based local exposure weights (LE) [26], boosted Laplacian pyramid-based fusion (B-LP) [27], a fusion technique using a dense SIFT descriptor and guided filter (DSIFT) [28], a fusion method based on low-level features (LLF) [14], and image fusion based on feature evaluation with an adaptive factor (FEAF) [7]. Four representative sets of the MEF dataset, shown in Figure 9, were selected for presentation, and Figures 10–13 show the results of fusing these sets with the different methods. Subjective and objective analyses show that the proposed method achieves good visual effects for image sequences with different exposures, both indoors and outdoors, and retains more useful details in both the brightest and darkest areas. All experiments were run with MATLAB R2019a on a computer with a Core i5-11400F @1.6 GHz processor and 16.00 GB RAM.
4.1. Subjective Analysis
SPD-MEF uses structural patch decomposition to decompose image patches into signal strength, signal structure, and mean intensity. This method shows good global contrast, but it is prone to over-sharpening, resulting in local color distortion. As shown in Figure 12a, there is a false color in the sunset sky that is inconsistent with the color information of the original image, and there is a loss of detail in Figures 10a and 11a. Although the MRF method achieves a good visual effect, it suffers from unnaturally excessive brightness and unclear boundaries; as shown in Figure 10b, there are shadows around the lights, and the cloud contours in the sky in Figures 11b and 12b are not clear.
DSIFT uses dense SIFT descriptors to compute contrast and smoothing weights for guided filtering; the resulting images have good contrast but easily lose details. As shown in Figure 10c, the details of the top area of the toad are lost, and in Figure 11c the details of the cloud texture in the sky are lost and the clarity is poor. The FEAF method can keep the local brightness of the fused image consistent with the input images, but when the number of input images is large, the overall brightness becomes darker and the contrast decreases, as shown in Figure 10d. Although this method can retain some detail information, it still loses details when the brightness differences are large; Figures 11d and 12d show that the clouds in the sky lose their texture details.
The fusion results of the LE method and the B-LP method are distorted. As shown in Figure 10e,f, there is much false information and the image has a lot of halo and noise. It can be seen in Figure 11f and Figure 12f that the B-LP method is more practical than the LE method but the color information is lost in Figure 12f and the clouds in the sky appear as blank areas. The LLF method performs well in image clarity and color saturation but it is prone to noise and loss of details at the junction of light and dark. Figure 12g shows that the clouds have poor contrast and unclear outlines.
Compared with the other seven methods, the algorithm proposed in this paper shows advantages in all aspects. As shown in Figure 10h, the areas on the top of the toad’s head with large changes in light and shade have rich texture details and bright colors. In Figure 11h, the outline of the cloud is clear and the boundary has no halo. The sunset in Figure 12h has better color saturation and no loss of color information. To sum up, the algorithm in this paper has outstanding performance in detail and color information retention and avoids excessive detail enhancement to show a comfortable visual effect.
In order to verify the robustness of the proposed algorithm, we use a sequence of 30 multi-exposure images (Figure 9d) as a validation set and compare our result with those of the other six algorithms. The fusion results are shown in Figure 13. The B-LP method produces a poor visual effect and its colors are seriously distorted, whereas the other six algorithms achieve reasonable visual results. In Figure 13a, there is a gray shadow around the lamp. In Figure 13b, the color transition in some areas is unnatural, such as the dark arch in the red area. The overall contrast of the image in Figure 13c is low. The overall image in Figure 13d is brighter, resulting in a loss of detail at the arched edge in the red frame. Figure 13f contains much noise. In contrast, Figure 13g has good color saturation (for example, the arched buildings are brightly colored), clear texture details, and no halo around the lights in the yellow areas. In general, our algorithm consistently achieves satisfactory results in static scenes, even for sequences containing many exposures.
To better analyze the effectiveness of the MEF algorithm in this paper, we analyze the pixel intensity in the horizontal direction of the enlarged area in Figure 10, as shown in Figure 14. In Figure 14b, the pixel intensities of DSIFT, SPD-MEF, LLF, and MRF are almost a straight line, indicating that they contain almost no texture information. The LE and B-LP methods contain some detailed information but the whole image is distorted. The FEAF and the algorithm proposed in this paper have achieved consistent results.
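A scanline comparison of this kind is straightforward to reproduce; the short sketch below plots the intensity of one image row over a chosen column range for several fused results, mirroring the analysis of Figure 14. The row and column range are selected by hand for the patch of interest and are placeholders here.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_horizontal_scanline(fused_images, labels, row, cols):
    """Plots the pixel intensity of one image row over a column range,
    mirroring the scanline analysis of Figure 14.  `fused_images` is a
    list of grayscale fused results in [0, 1]; `row` and `cols`
    (start, end) select the zoom-in patch by hand."""
    x = np.arange(cols[0], cols[1])
    for img, name in zip(fused_images, labels):
        plt.plot(x, img[row, cols[0]:cols[1]], label=name)
    plt.xlabel("column index")
    plt.ylabel("pixel intensity")
    plt.legend()
    plt.show()
```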
4.2. Objective Analysis
Objective indicators are important for evaluating the performance of fusion algorithms. To objectively evaluate the fusion results, we adopt the image quality assessment (IQA) metric proposed by Ma et al. [26], which models human visual perception; IQA scores range from 0 to 1, with higher values indicating better image quality. We also use the gradient-based quality index $Q^{AB/F}$ proposed by Xydeas and Petrovic [37], which evaluates fusion performance by measuring how much of the gradient information of the source images is transferred to the fused image; the larger the value of $Q^{AB/F}$, the better the fusion effect. Finally, we use the no-reference quality metric NIQE to evaluate the possible loss of 'naturalness' in a fused image. The state-of-the-art methods and the proposed method are evaluated with these objective metrics.
Table 2 presents the quantitative IQA results of the different fusion methods on the MEF dataset, with the best result in each row shown in bold. As can be seen in Table 2, our fused images have the highest human visual perception scores on 10 of the 16 sets and are only slightly lower than DSIFT and LLF on a few test sets. Although the algorithm in this paper scores slightly lower than other algorithms on several test sets, it retains more details in regions with large exposure differences.
As shown in Table 3, among the 16 groups of comparison results, the average $Q^{AB/F}$ value of the algorithm in this paper is the largest. $Q^{AB/F}$ evaluates the edge information transferred from the input images to the fused image, and the test results show that the proposed algorithm performs well in edge preservation.
According to Table 4, the algorithm in this paper achieves lower NIQE values on 10 of the test sets. The experimental results show that the algorithm maintains a low loss of color and detail information; in other words, in most cases, the loss of naturalness of our fusion results is lower than that of the other algorithms. In general, the algorithm performs well in terms of visual effect, detail preservation, and reduction of image information loss.
In order to verify the efficiency of the method, we measured the running time of the six existing methods and the proposed method on three test sets, "Landscape," "Lamp," and "Madison Capital." The results are shown in Table 5. The running time of our algorithm is greatly improved compared with the B-LP method. When a dataset contains a small number of multi-exposure images, the running time of our algorithm is not much different from that of the other algorithms; when the number of multi-exposure images is large, the processing speed of our algorithm is slower. Although the running time of the proposed algorithm is somewhat longer than that of most of the other methods, it remains at an acceptable level, and the superiority of the algorithm is also reflected in the subjective and objective analyses.
5. Conclusions and Future Directions
This paper proposes an improved weighted-average adaptive-factor exposure assessment technique and a local detail enhancement scheme. Different exposure evaluation strategies are adopted for different local average brightness values of the multi-exposure image sequence, which effectively solves the problem that the fused image cannot adapt to the local brightness of the input sequence and that bright and dark regions appear unnatural. The K-means algorithm is used to divide the images into over-exposed and low-exposure images according to the characteristics of their brightness histograms, and an enhancement area is defined for each image using the average brightness in order to reduce the number of enhanced pixels. In addition, a local Laplacian pyramid is employed to enhance the details in the brighter and darker areas. Subjective and objective evaluations show that the proposed method achieves good results. However, the algorithm is inefficient when dealing with a large number of multi-exposure images, which provides a direction for our future work.
In recent years, the rapid development of deep learning has led to good application prospects in various fields. Therefore, in future work, we will try to apply neural networks to multi-exposure fusion algorithms, which are no longer limited to static multi-exposure images but are also applicable to dynamic multi-exposure images.
Methodology, D.W.; software, C.X. and D.W.; validation, W.T., Z.A. and J.H.; data curation, Y.H., K.Q. and Q.F.; writing—original draft preparation, D.W.; writing—review and editing, C.X. and B.F.; visualization, D.W.; supervision, C.X.; project administration, C.X.; funding acquisition, C.X. All authors have read and agreed to the published version of the manuscript.
The authors declare no conflict of interest.
Figure 2. Exposure evaluation function. (a) Huang’s exposure evaluation function. (b) Our exposure evaluation function.
Figure 8. Analysis of areas of interest. (a) Over-exposed image. (b) Normal-exposure image. (c) Low-exposure image. (d) Pixel intensity analysis of the red region in (a–c). (e) Pixel intensity analysis of the yellow area in (a–c). (f) Brightest area. (g) Darkest area.
Figure 9. Four sets of multi-exposure image sequences. (a) ‘Lamp’ image sequences. (b) ‘Landscape’ image sequences. (c) ‘Lighthouse’ image sequences. (d) ‘Madison Capital’ image sequences.
Figure 10. Comparison of fusion results for Figure 9a. (a) SPD-MEF method fusion result. (b) MRF method fusion result. (c) DSIFT method fusion result. (d) FEAF method fusion result. (e) LE method fusion result. (f) B-LP method fusion result. (g) LLF method fusion result. (h) The proposed method fusion result.
Figure 11. Comparison of fusion results for Figure 9b. (a) SPD-MEF method fusion result. (b) MRF method fusion result. (c) DSIFT method fusion result. (d) FEAF method fusion result. (e) LE method fusion result. (f) B-LP method fusion result. (g) LLF method fusion result. (h) The proposed method fusion result.
Figure 12. Comparison of fusion results for Figure 9c. (a) SPD-MEF method fusion result. (b) MRF method fusion result. (c) DSIFT method fusion result. (d) FEAF method fusion result. (e) LE method fusion result. (f) B-LP method fusion result. (g) LLF method fusion result. (h) The proposed method fusion result.
Figure 13. Comparison of fusion results for Figure 9d. (a) SPD-MEF method fusion result. (b) MRF method fusion result. (c) DSIFT method fusion result. (d) FEAF method fusion result. (e) B-LP method fusion result. (f) LLF method fusion result. (g) The proposed method fusion result. (a1–g1) represent magnified images of the red area in (a–g). (a2–g2) represent magnified images of the yellow area in (a–g).
Figure 14. Pixel intensity analysis of the zoom-in patches in Figure 10 along the horizontal direction. (a) Comparison of eight methods. (b) Enlarged image of the data in the black box in (a).
Table 1. MEF dataset.
Image Sequences | Size | Image Origin |
---|---|---|
Balloons | 512 × 339 × 9 | Erik Reinhard [ |
Belgium house | 512 × 384 × 9 | Dani Lischinski [ |
Chinese garden | 512 × 384 × 4 | Bartlomiej Okonek [ |
House | 512 × 341 × 4 | Tom Mertens [ |
Church | 335 × 512 × 3 | Jianbing Shen [ |
Kluki | 512 × 341 × 3 | Bartlomiej Okonek [ |
Lamp | 512 × 384 × 15 | Martin Cadik [ |
Landscape | 512 × 341 × 3 | HDRsoft [ |
Laurenziana | 356 × 512 × 3 | Bartlomiej Okonek [ |
Lighthouse | 512 × 340 × 3 | HDRsoft [ |
Mask | 512 × 341 × 3 | HDRsoft [ |
Madison Capital | 512 × 384 × 30 | Chaman Singh Verm [ |
Office | 512 × 340 × 6 | MATLAB [ |
River | 512 × 341 × 3 | Martin Cadik [ |
Room | 512 × 341 × 3 | Pangeasoft [ |
Tower | 512 × 341 × 3 | Jacques Joffre [ |
Table 2. Comparison of image quality assessment (IQA) with existing MEF techniques.
Image | SPD-MEF [19] | MRF [25] | DSIFT [28] | FEAF [7] | LE * [26] | B-LP [27] | LLF [14] | Ours |
---|---|---|---|---|---|---|---|---|
Balloons | 0.9640 | 0.9445 | 0.9621 | 0.8934 | 0.7704 | 0.8174 | 0.9512 | 0.9645 |
Belgium house | 0.9702 | 0.9448 | 0.9723 | 0.9127 | 0.7346 | 0.7500 | 0.9747 | 0.9703 |
Chinese garden | 0.9853 | 0.9820 | 0.9729 | 0.9640 | 0.9171 | 0.9062 | 0.9708 | 0.9873 |
Church | 0.9924 | 0.9789 | 0.9922 | 0.9657 | - | 0.8787 | 0.9904 | 0.9849 |
House | 0.9094 | 0.9213 | 0.9609 | 0.9419 | 0.6571 | 0.3661 | 0.9464 | 0.9621 |
Kluki | 0.9704 | 0.9649 | 0.9800 | 0.9301 | 0.8514 | 0.8175 | 0.9722 | 0.9701 |
Lamp | 0.9537 | 0.9306 | 0.9644 | 0.9254 | 0.5770 | 0.5510 | 0.9611 | 0.9675 |
Landscape | 0.9731 | 0.9721 | 0.9720 | 0.9770 | 0.9008 | 0.9207 | 0.9916 | 0.9937 |
Laurenziana | 0.9854 | 0.9760 | 0.9892 | 0.9690 | - | 0.8806 | 0.9867 | 0.9897 |
Lighthouse | 0.9704 | 0.9529 | 0.9742 | 0.9679 | 0.7934 | 0.8845 | 0.9749 | 0.9813 |
Mask | 0.9826 | 0.9814 | 0.9922 | 0.9696 | - | 0.8845 | 0.9883 | 0.9841 |
Office | 0.9876 | 0.9722 | 0.9874 | 0.9614 | 0.8307 | 0.7674 | 0.9889 | 0.9780 |
River | 0.9877 | 0.9733 | 0.9895 | 0.9814 | - | 0.8721 | 0.9885 | 0.9898 |
Room | 0.9777 | 0.9722 | 0.9820 | 0.9439 | - | 0.8447 | 0.9777 | 0.9740 |
Tower | 0.9860 | 0.9835 | 0.9870 | 0.9499 | 0.8979 | 0.8728 | 0.9861 | 0.9873 |
Madison Capital | 0.9768 | 0.9182 | 0.9723 | 0.9268 | - | 0.5490 | 0.9772 | 0.9778 |
Average | 0.9732 | 0.9605 | 0.9781 | 0.9487 | 0.7930 | 0.7852 | 0.9766 | 0.9789 |
* LE fusion images are provided by [26].
Table 3. Comparison of the gradient-based quality index ($Q^{AB/F}$) with existing MEF techniques.
Image | SPD-MEF [19] | MRF [25] | DSIFT [28] | FEAF [7] | LE * [26] | B-LP [27] | LLF [14] | Ours |
---|---|---|---|---|---|---|---|---|
Balloons | 0.6882 | 0.6332 | 0.6379 | 0.6893 | 0.5887 | 0.4231 | 0.6730 | 0.6963 |
Belgium house | 0.7406 | 0.7421 | 0.7452 | 0.6541 | 0.6465 | 0.5087 | 0.7322 | 0.7501 |
Chinese garden | 0.8262 | 0.8366 | 0.8222 | 0.8374 | 0.6906 | 0.5934 | 0.8116 | 0.8221 |
Church | 0.8519 | 0.8519 | 0.8443 | 0.8721 | - | 0.6337 | 0.8286 | 0.8522 |
House | 0.6049 | 0.6997 | 0.7023 | 0.7264 | 0.6043 | 0.4253 | 0.7009 | 0.7347 |
Kluki | 0.6726 | 0.7141 | 0.6407 | 0.6816 | 0.6019 | 0.3999 | 0.6618 | 0.7197 |
Lamp | 0.7707 | 0.7456 | 0.7658 | 0.7980 | 0.5557 | 0.3979 | 0.7755 | 0.8032 |
Landscape | 0.8329 | 0.8329 | 0.8117 | 0.8612 | 0.6518 | 0.4932 | 0.8107 | 0.7947 |
Laurenziana | 0.8141 | 0.8141 | 0.8163 | 0.8373 | - | 0.5715 | 0.7992 | 0.8409 |
Lighthouse | 0.7625 | 0.7217 | 0.7484 | 0.8142 | 0.7055 | 0.5533 | 0.7618 | 0.7650 |
Mask | 0.8405 | 0.8506 | 0.8458 | 0.8772 | - | 0.5546 | 0.8214 | 0.8859 |
Office | 0.8713 | 0.8829 | 0.8359 | 0.8828 | 0.6730 | 0.5748 | 0.8664 | 0.8746 |
River | 0.7119 | 0.6910 | 0.7193 | 0.6780 | - | 0.5218 | 0.7144 | 0.7199 |
Room | 0.8059 | 0.8057 | 0.8083 | 0.8267 | - | 0.5635 | 0.7921 | 0.8023 |
Tower | 0.6653 | 0.7168 | 0.6718 | 0.7263 | 0.6838 | 0.3856 | 0.6435 | 0.7288 |
Madison Capital | 0.8083 | 0.7686 | 0.7999 | 0.8335 | - | 0.4194 | 0.7908 | 0.8408 |
Average | 0.7667 | 0.7692 | 0.7634 | 0.7872 | 0.6401 | 0.5012 | 0.7614 | 0.7894 |
* LE fusion images are provided by [26].
Table 4. Comparison of image quality assessment (NIQE) with existing MEF techniques.
Image | SPD-MEF [19] | MRF [25] | DSIFT [28] | FEAF [7] | LE * [26] | B-LP [27] | LLF [14] | Ours |
---|---|---|---|---|---|---|---|---|
Balloons | 3.1730 | 3.2798 | 3.1199 | 3.5777 | 3.9937 | 4.2004 | 3.1292 | 3.0679 |
Belgium house | 2.8407 | 2.8546 | 2.8917 | 2.6962 | 3.7687 | 4.7149 | 2.9922 | 2.7239 |
Chinese garden | 1.9233 | 1.9612 | 1.9473 | 1.9023 | 3.7687 | 3.4454 | 1.9765 | 1.8895 |
Church | 5.6679 | 5.4785 | 5.8223 | 5.5908 | - | 11.6606 | 6.127 | 5.4624 |
House | 4.2175 | 3.4627 | 3.8084 | 3.9044 | 7.029 | 5.4794 | 3.9829 | 3.7875 |
Kluki | 2.0643 | 2.0241 | 1.9835 | 2.0839 | 2.772 | 3.7827 | 1.9691 | 2.0995 |
Lamp | 2.9679 | 3.1207 | 2.9444 | 3.2446 | 5.0047 | 5.1575 | 2.9170 | 2.8495 |
Landscape | 2.5482 | 2.6530 | 2.6577 | 2.5973 | 3.3069 | 5.0838 | 2.6745 | 2.4388 |
Laurenziana | 2.4811 | 2.4045 | 2.5069 | 2.3916 | - | 3.8703 | 2.4442 | 2.3915 |
Lighthouse | 2.8888 | 2.7888 | 2.792 | 2.7928 | 3.7854 | 3.4865 | 2.9048 | 2.7841 |
Mask | 2.8740 | 2.9096 | 2.8884 | 2.6709 | - | 4.3010 | 3.0076 | 2.9386 |
Office | 3.1470 | 2.7835 | 2.9711 | 2.6890 | 4.6544 | 4.8830 | 2.9289 | 2.4686 |
River | 3.3543 | 3.3329 | 3.2862 | 3.4824 | - | 4.5504 | 3.3380 | 3.2731 |
Room | 2.6044 | 2.5601 | 2.7151 | 2.6818 | - | 3.2275 | 2.6519 | 2.8765 |
Tower | 2.9965 | 2.3662 | 3.2855 | 2.2915 | 2.5976 | 4.1269 | 2.3101 | 2.1678 |
Madison Capital | 2.2954 | 2.7648 | 2.3173 | 2.5553 | - | 5.0202 | 3.1901 | 2.7962 |
Average | 3.0027 | 2.9215 | 2.9961 | 2.9470 | 4.0681 | 4.8119 | 3.0340 | 2.8759 |
* LE fusion images are provided by [26].
Table 5. Time (s) comparison of the proposed method with six existing methods.
Image | Number | SPD-MEF [19] | MRF [25] | DSIFT [28] | FEAF [7] | B-LP [27] | LLF [14] | Ours |
---|---|---|---|---|---|---|---|---|
Landscape | 3 | 1.36 | 0.32 | 0.45 | 0.90 | 23.55 | 1.23 | 2.35 |
Lamp | 15 | 6.25 | 1.83 | 2.45 | 3.37 | 32.25 | 7.11 | 13.35 |
Madison Capital | 30 | 14.65 | 3.57 | 5.20 | 5.48 | 50.71 | 18.11 | 23.54 |
References
1. Saha, R.; Banik, P.P.; Kim, K.-D. Low Dynamic Range Image Set Generation from Single Image. Proceedings of the 18th Annual International Conference on Electronics, Information, and Communication (ICEIC), Inst Elect & Informat Engineers; Auckland, New Zealand, 22–25 January 2019; pp. 347-349.
2. Kou, F.; Wei, Z.; Chen, W.; Wu, X.; Wen, C.; Li, Z. Intelligent Detail Enhancement for Exposure Fusion. IEEE Trans. Multimed.; 2018; 20, pp. 484-495. [DOI: https://dx.doi.org/10.1109/TMM.2017.2743988]
3. Keerativittayanun, S.; Kondo, T.; Kotani, K.; Phatrapornnant, T.; Karnjana, J. Two-layer pyramid-based blending method for exposure fusion. Mach. Vis. Appl.; 2021; 32, 48. [DOI: https://dx.doi.org/10.1007/s00138-021-01175-9]
4. Zhang, W.-L.; Liu, X.-L.; Wang, W.-C.; Zeng, Y.-J. Multi-exposure image fusion based on wavelet transform. Int. J. Adv. Robot. Syst.; 2018; 15, pp. 1-19. [DOI: https://dx.doi.org/10.1177/1729881418768939]
5. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure fusion. Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2007); Maui, HI, USA, 29 October–2 November 2007; pp. 382-390.
6. Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent.; 2015; 31, pp. 208-224. [DOI: https://dx.doi.org/10.1016/j.jvcir.2015.06.021]
7. Huang, L.; Li, Z.; Xu, C.; Feng, B. Multi-exposure image fusion based on feature evaluation with adaptive factor. IET Image Processing; 2021; 15, pp. 3211-3220. [DOI: https://dx.doi.org/10.1049/ipr2.12317]
8. Ram Prabhakar, K.; Sai Srikar, V.; Venkatesh Babu, R. Deepfuse: A deep unsupervised approach for exposure fusion with extreme exposure image pairs. Proceedings of the IEEE International Conference on Computer Vision; Venice, Italy, 22–29 October 2017; pp. 4714-4722.
9. Wang, J.; Wang, W.; Xu, G.; Liu, H. End-to-end exposure fusion using convolutional neural network. IEICE Trans. Inf. Syst.; 2018; 101, pp. 560-563. [DOI: https://dx.doi.org/10.1587/transinf.2017EDL8173]
10. Mertens, T.; Kautz, J.; Van Reeth, F. Exposure Fusion: A Simple and Practical Alternative to High Dynamic Range Photography. Comput. Graph. Forum; 2009; 28, pp. 161-171. [DOI: https://dx.doi.org/10.1111/j.1467-8659.2008.01171.x]
11. Goshtasby, A.A. Fusion of multi-exposure images. Image Vis. Comput.; 2005; 23, pp. 611-618. [DOI: https://dx.doi.org/10.1016/j.imavis.2005.02.004]
12. Shen, R.; Cheng, I.; Shi, J.; Basu, A. Generalized Random Walks for Fusion of Multi-Exposure Images. IEEE Trans. Image Processing; 2011; 20, pp. 3634-3646. [DOI: https://dx.doi.org/10.1109/TIP.2011.2150235]
13. Gu, B.; Li, W.; Wong, J.; Zhu, M.; Wang, M. Gradient field multi-exposure images fusion for high dynamic range image visualization. J. Vis. Commun. Image Represent.; 2012; 23, pp. 604-610. [DOI: https://dx.doi.org/10.1016/j.jvcir.2012.02.009]
14. Qi, G.; Chang, L.; Luo, Y.; Chen, Y.; Zhu, Z.; Wang, S.J.S. A precise multi-exposure image fusion method based on low-level features. Sensors; 2020; 20, 1597. [DOI: https://dx.doi.org/10.3390/s20061597] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/32182986]
15. Durand, F.; Dorsey, J. Fast Bilateral Filtering for the Display of High-Dynamic-Range Images. ACM Trans. Graph.; 2002; 21, pp. 257-266.
16. Farbman, Z.; Fattal, R.; Lischinski, D.; Szeliski, R. Edge-preserving decompositions for multi-scale tone and detail manipulation. ACM Trans. Graph. (TOG); 2008; 27, pp. 1-10. [DOI: https://dx.doi.org/10.1145/1360612.1360666]
17. He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell.; 2013; 35, pp. 1397-1409. [DOI: https://dx.doi.org/10.1109/TPAMI.2012.213] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/23599054]
18. Raman, S.; Chaudhuri, S. Bilateral Filter Based Compositing for Variable Exposure Photography. Eurographics (Short Papers); pp. 1–4. 2009; Available online: https://www.researchgate.net/publication/242935652_Bilateral_Filter_Based_Compositing_for_Variable_Exposure_Photography (accessed on 20 April 2022).
19. Ma, K.; Li, H.; Yong, H.; Wang, Z.; Meng, D.; Zhang, L. Robust Multi-Exposure Image Fusion: A Structural Patch Decomposition Approach. IEEE Trans. Image Processing; 2017; 26, pp. 2519-2532. [DOI: https://dx.doi.org/10.1109/TIP.2017.2671921] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/28237928]
20. Hayat, N.; Imran, M. Detailed and enhanced multi-exposure image fusion using recursive filter. Multimed. Tools Appl.; 2020; 79, pp. 25067-25088. [DOI: https://dx.doi.org/10.1007/s11042-020-09190-0]
21. Li, S.; Kang, X.; Hu, J. Image Fusion with Guided Filtering. IEEE Trans. Image Processing; 2013; 22, pp. 2864-2875. [DOI: https://dx.doi.org/10.1109/tip.2013.2244222]
22. Kou, F.; Li, Z.; Wen, C.; Chen, W. Multi-scale exposure fusion via gradient domain guided image filtering. Proceedings of the IEEE International Conference on Multimedia and Expo (ICME); Hong Kong, China, 10–14 July 2017; pp. 1105-1110.
23. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a "Completely Blind" Image Quality Analyzer. IEEE Signal Process. Lett.; 2013; 20, pp. 209-212. [DOI: https://dx.doi.org/10.1109/LSP.2012.2227726]
24. Aubry, M.; Paris, S.; Hasinoff, S.W.; Kautz, J.; Durand, F. Fast Local Laplacian Filters: Theory and Applications. Acm Trans. Graph.; 2014; 33, pp. 1-14. [DOI: https://dx.doi.org/10.1145/2629645]
25. Li, S.; Kang, X. Fast Multi-exposure Image Fusion with Median Filter and Recursive Filter. IEEE Trans. Consum. Electron.; 2012; 58, pp. 626-632. [DOI: https://dx.doi.org/10.1109/TCE.2012.6227469]
26. Ma, K.; Zeng, K.; Wang, Z. Perceptual Quality Assessment for Multi-Exposure Image Fusion. IEEE Trans. Image Processing; 2015; 24, pp. 3345-3356. [DOI: https://dx.doi.org/10.1109/TIP.2015.2442920] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/26068317]
27. Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure Fusion Using Boosting Laplacian Pyramid. IEEE Trans. Cybern.; 2014; 44, pp. 1579-1590. [DOI: https://dx.doi.org/10.1109/TCYB.2013.2290435] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25137687]
28. Hayat, N.; Imran, M. Ghost-free multi exposure image fusion technique using dense SIFT descriptor and guided filter. J. Vis. Commun. Image Represent.; 2019; 62, pp. 295-308. [DOI: https://dx.doi.org/10.1016/j.jvcir.2019.06.002]
29. Reinhard, E.; Heidrich, W.; Debevec, P.; Pattanaik, S.; Ward, G.; Myszkowski, K. High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting; Morgan Kaufmann: Burlington, MA, USA, 2010.
30. The Hebrew University of Jerusalem. Dani Lischinski HDR Webpage. Available online: http://www.cs.huji.ac.il/hdr/pages/belgium.html (accessed on 20 April 2022).
31. Mallorca, S. HDR Photography Gallery Samples. Available online: http://www.easyhdr.com/examples (accessed on 18 April 2022).
32. Čadík, M. Martin Cadik HDR Webpage. Available online: http://cadik.posvete.cz/tmo (accessed on 19 April 2022).
33. HDRsoft Gallery. Available online: http://www.hdrsoft.com/gallery (accessed on 22 April 2022).
34. Verma, C.S. Chaman Singh Verma HDR Webpage. Available online: http://pages.cs.wisc.edu/CS766_09/HDRI/hdr.html (accessed on 2 June 2022).
35. MathWorks. MATLAB HDR Webpage. Available online: http://www.mathworks.com/help/images/ref/makehdr.html (accessed on 11 April 2022).
36. The Hebrew University of Jerusalem. HDR Pangeasoft. Available online: http://pangeasoft.net/pano/bracketeer/ (accessed on 22 April 2022).
37. Xydeas, C.A.; Petrovic, V. Objective image fusion performance measure. Electron. Lett.; 2000; 36, pp. 308-309. [DOI: https://dx.doi.org/10.1049/el:20000267]
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
In order to adapt to the local brightness and contrast of input image sequences, we propose a new weighted average adaptive factor well-exposure weight evaluation scheme. The exposure weights of brighter and darker pixels are determined according to the local average brightness and expected brightness. We find that in the traditional multi-exposure image fusion scheme, the brighter and darker regions of the scene lose many details. To solve this problem, we first propose a standard to determine the brighter and darker regions and then use a fast local Laplacian filter to enhance the image in the region. This paper selects 16 multi-exposure images of different scenes for subjective and objective analysis and compares them with eight existing multi-exposure fusion schemes. The experimental results show that the proposed method can enhance the details appropriately while preserving the details in static scenes and adapting to the input image brightness.