1. Introduction
With the continuous development of imaging science, imaging technology has been widely applied in many fields such as video conferencing, medical imaging, remote sensing, compressive sensing, and social media [1,2]. In machine vision systems, extreme external illumination can further degrade image quality [3]. Illumination plays a critical role in the image capture procedure, and a change in illumination is the main factor causing image blurring and distortion [4]. The evaluation of image quality consists of analyzing and quantifying the degree of distortion and developing a quantitative evaluation index. A subjective quality evaluation is relatively reliable; however, it is time-consuming, labor-intensive, and not conducive to application in intelligent evaluation systems [5]. Studies relating to objective image quality assessment (IQA), e.g., image sharpness assessment, are becoming increasingly important in assessing the impact of variations in an image’s appearance on the resulting visual quality and in ensuring the reliability of image processing systems [6,7].
According to the availability of a reference image, IQA can usually be divided into three categories: full-reference IQA (FR-IQA) [8,9], reduced-reference IQA (RR-IQA) [10], and no-reference IQA (NR-IQA) [11]. In practical applications, it is usually impossible to obtain undistorted images or their features as references, so NR-IQA has practical significance. In this field, many mature algorithms have achieved good results in image quality evaluation. Gaussian blur is one of the common and dominant types of distortion perceived in images when captured under low-light conditions. Therefore, a suitable and efficient blurriness or sharpness evaluation method should be explored [7].
Usually, statistical data on the structural features of an image are important in NR-IQA research. When a blurred image is captured under nonideal illumination conditions, its structural features change accordingly. This structural change can be characterized by specific structural statistics of the image, which can be used to evaluate the image quality [11,12,13]. Bahrami and Kot [11] proposed a model to measure sharpness based on the maximum local variation (MLV). Li [12] proposed a blind image blur evaluation (BIBLE) algorithm based on discrete orthogonal moments, where gradient images are divided into equally sized blocks and orthogonal moments are calculated to characterize the image sharpness. Gvozden [13] proposed a fast blind image sharpness/blurriness evaluation model (BISHARP), in which the local contrast information of the image is obtained by calculating the root mean square of the image. These methods are all traditional approaches built on the spatial or spectral domain of an image.
Unlike traditional methods, learning-based methods can improve the accuracy of the evaluation results [14]. Li [15] proposed a reference-free and robust image sharpness assessment (RISE) method, which evaluates image quality by learning multi-scale features extracted in the spatial and spectral domains. Lu [16] proposed a no-reference image sharpness metric based on structural information using sparse representation (SR). Yu [17] proposed a blind image sharpness assessment using a shallow convolutional neural network (CNN). Kim [18] applied a deep CNN to NR-IQA by separating the training into two stages: (1) an objective distortion part and (2) a part related to the human visual system. Liu [19] developed an efficient general-purpose NR-IQA model that utilizes local spatial and spectral entropy features of distorted images. Li [20] proposed a method based on semantic feature aggregation (SFA) to alleviate the impact of image content variation. Zhang [21] proposed a deep bilinear convolutional neural network (DB-CNN) model for blind image quality assessment that works for both synthetically and authentically distorted images.
These methods can solve simulated blur evaluation problems well, but the majority of them cannot accurately evaluate the realistic blur introduced during image capturing, especially under different illumination imaging conditions. Moreover, evaluating realistic blur is undoubtedly more significant than the evaluation of simulated blur. Therefore, it is necessary to design a sharpness assessment method that is effective for image sharpness under different illumination imaging conditions.
In this research, an NR-IQA method for blurred images under different illumination imaging conditions is proposed to evaluate image sharpness based on a particle swarm optimization-based general regression neural network (PSO-GRNN). Firstly, some basic image feature maps are extracted, i.e., the visual saliency (VS), color difference, and gradient, and the feature values of all maps are obtained by using statistical calculation. Secondly, the feature values are trained and optimized by a PSO-GRNN. Lastly, after the PSO-GRNN is determined, an evaluation result for the real blurry images will be calculated. The experimental results show that the evaluation performance of the proposed method on real blur databases, i.e., BID [22], CID2013 [23], and CLIVE [24], is better than the state-of-the-art and recently published NR methods.
2. Feature Extraction
2.1. Visual Saliency (VS) Index
The VS of an image can reflect how “salient” a local region is [25,26]. The relationship between the VS and image quality has been integrated into IQA studies [27]. For a blurred image captured under non-ideal illumination conditions, the important areas of the scene will decrease, and, consequently, the VS map of the image will also change. The extraction method for the VS in this study is based on the SDSP method [27]. In Figure 1, the blurred and distorted images with the same content under different lighting conditions and their VS maps (pseudo-color maps) are presented. From Figure 1, it can be seen that the VS maps can accurately extract important regions. Figure 1c,d shows the VS maps of Figure 1a,b, which reflect the blur level of an image under specific lighting conditions, in line with human visual perception characteristics.
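The exact VS computation used here is the SDSP method of [27]. As a rough illustration of how a saliency map highlights visually important regions, the following Python sketch combines a band-pass “frequency prior” (approximated by a difference of Gaussians) with a center-weighted location prior. It is a simplified stand-in, not the SDSP implementation used in this study, and the function name and all parameter values are illustrative assumptions.

```python
# Simplified visual-saliency sketch: NOT the SDSP method of [27], only a rough
# stand-in combining a band-pass "frequency prior" (difference of Gaussians)
# with a centre-weighted location prior.
import numpy as np
from scipy.ndimage import gaussian_filter

def simple_saliency(gray, sigma_small=2.0, sigma_large=16.0, sigma_loc=0.35):
    """gray: 2-D float array in [0, 1]; returns a saliency map scaled to [0, 1]."""
    gray = np.asarray(gray, dtype=np.float64)
    # Frequency prior: a difference of Gaussians keeps mid-frequency structure.
    band_pass = np.abs(gaussian_filter(gray, sigma_small)
                       - gaussian_filter(gray, sigma_large))
    # Location prior: pixels near the image centre are assumed more salient.
    h, w = gray.shape
    yy, xx = np.mgrid[0:h, 0:w]
    d2 = ((yy - (h - 1) / 2.0) / h) ** 2 + ((xx - (w - 1) / 2.0) / w) ** 2
    location = np.exp(-d2 / (2.0 * sigma_loc ** 2))
    vs_map = band_pass * location
    return (vs_map - vs_map.min()) / (vs_map.max() - vs_map.min() + 1e-12)

if __name__ == "__main__":
    vs = simple_saliency(np.random.rand(240, 320))   # stand-in grayscale image
    print(vs.shape, float(vs.min()), float(vs.max()))
```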
2.2. Color Difference (CD) Index
The previous section introduced the overall structural information of images, i.e., the VS index. For a color image, color information is also important for image quality. The CD index [28] can reflect the color distortion caused by different illumination imaging conditions. Therefore, the CD is used to evaluate the image quality with respect to the color information. For an RGB image, mapping the image to a perceptually uniform color space such as CIELAB, where each pixel contains three color components (lightness L, color channel a, and color channel b), allows the CD index between adjacent pixels to be calculated using Equation (1):
$\Delta E = \sqrt{(L_1 - L_2)^2 + (a_1 - a_2)^2 + (b_1 - b_2)^2}$ (1)
where $L_1$ and $L_2$ are the lightness values of two neighboring pixels; $a_1$ and $a_2$ are the values of one color channel of the two neighboring pixels; and $b_1$ and $b_2$ are the values of the other color channel of the two neighboring pixels. The CD operators in the horizontal and vertical directions of the k-th channel are defined as
$\mathrm{CD}_H^{k}(i,j) = I^{k}(i, j+1) - I^{k}(i, j)$ (2)
$\mathrm{CD}_V^{k}(i,j) = I^{k}(i+1, j) - I^{k}(i, j)$ (3)
where $I^{k}(i,j)$ represents the intensity at pixel location $(i,j)$ for color channel k, and the color channel number is represented by k. By combining the local CD operators in the above two directions, the local CD ($\Delta E_L$) for pixel $(i, j)$ is obtained, which is given by Formula (4):
$\Delta E_L(i,j) = \sqrt{\sum_{k=1}^{3}\left[\mathrm{CD}_H^{k}(i,j)^2 + \mathrm{CD}_V^{k}(i,j)^2\right]}$ (4)
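Under the reconstruction of Equations (1)–(4) given above (which is an assumption about the exact operator definitions), a local CD map can be sketched in Python as follows; scikit-image is used here for the RGB-to-CIELAB conversion, and the input is assumed to be an 8-bit RGB image.

```python
# Hedged sketch of the local colour-difference (CD) map of Section 2.2,
# following the reconstruction of Equations (1)-(4) above (forward differences
# in CIELAB combined over both directions); scikit-image does the RGB->Lab step.
import numpy as np
from skimage import color

def local_cd_map(rgb):
    """rgb: H x W x 3 image (8-bit assumed); returns an (H-1) x (W-1) CD map."""
    rgb = np.asarray(rgb, dtype=np.float64)
    if rgb.max() > 1.0:                      # assume 8-bit input, rescale to [0, 1]
        rgb = rgb / 255.0
    lab = color.rgb2lab(rgb)                 # channels: L*, a*, b*
    # Horizontal and vertical per-channel differences, Equations (2) and (3).
    dh = lab[:-1, 1:, :] - lab[:-1, :-1, :]
    dv = lab[1:, :-1, :] - lab[:-1, :-1, :]
    # Combine both directions over the three channels, Equation (4).
    return np.sqrt((dh ** 2).sum(axis=2) + (dv ** 2).sum(axis=2))

if __name__ == "__main__":
    img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
    print(local_cd_map(img).shape)           # (63, 63)
```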
Figure 2 shows the CD pseudo-color maps corresponding to the images with the same content under different lighting conditions. After comparing, it can be seen that there are obvious differences in the CD maps of images under different lighting environments. Therefore, the CD index can be used to evaluate the quality of real blurred images.
2.3. Gradient Index
In IQA studies, the grayscale image gradient is also a commonly used feature [29]. The image gradient reflects the magnitude of local image changes. For the edge parts of an image, the grayscale values change significantly, and the gradient values are correspondingly large. On the contrary, for the smoother parts of an image, the grayscale values change less, and the corresponding gradient values are also smaller. As shown in Figure 3, the red rectangular area is more prominent. The degree of blur in an image is positively correlated with the change at edge locations, and this change can be determined by calculating the image gradient. In this study, the Roberts operator is utilized to calculate the image gradient. The gradient value of the pixel at location $(i,j)$ is defined as $G(i,j)$, and the gradient calculation formula is
$G(i,j) = \sqrt{\left[I(i,j) - I(i+1,j+1)\right]^2 + \left[I(i+1,j) - I(i,j+1)\right]^2}$ (5)
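A minimal sketch of the Roberts gradient map, following the reconstructed Equation (5), is given below; the input is assumed to be a 2-D grayscale array, and the function name is ours.

```python
# Roberts cross-gradient sketch for the gradient map of Section 2.3,
# following the reconstructed Equation (5) (diagonal forward differences).
import numpy as np

def roberts_gradient(gray):
    """gray: 2-D array; returns an (H-1) x (W-1) gradient-magnitude map."""
    g = np.asarray(gray, dtype=np.float64)
    d1 = g[:-1, :-1] - g[1:, 1:]    # main-diagonal difference
    d2 = g[1:, :-1] - g[:-1, 1:]    # anti-diagonal difference
    return np.sqrt(d1 ** 2 + d2 ** 2)

if __name__ == "__main__":
    img = np.random.rand(120, 160)
    print(roberts_gradient(img).shape)   # (119, 159)
```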
2.4. Image Feature Value
Based on the above analysis, the VS, CD, and gradient information are selected to extract image features in this study. As shown in Figure 4, for a blurred image, the VS map, CD map, and gradient map are computed first. Then, these three feature maps are subjected to maximum (Max), relative change (RC), and variance (Var) calculations, yielding nine feature values. These values are used to construct the feature vector of an image, which is then input into the following parts of the proposed method.
The calculation process for the Max, RC, and Var feature values of an obtained feature map M, e.g., the VS, CD, or gradient map, is as follows:
$\mathrm{Max} = \max\{x_1, x_2, \ldots, x_n\}$ (6)
$\mathrm{RC} = \dfrac{1}{n\,\bar{x}}\sum_{i=1}^{n}\left|x_i - \bar{x}\right|$ (7)
$\mathrm{Var} = \dfrac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2$ (8)
where $x_1, x_2, \ldots, x_n$ represent the pixel values of a feature map, $\bar{x}$ represents the average pixel value of the feature map, and n represents the total number of pixels.
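The nine-dimensional feature vector of Figure 4 can then be assembled as in the following sketch. The Max and Var statistics follow Equations (6) and (8); the RC statistic mirrors the reconstruction of Equation (7) above (mean absolute deviation relative to the map mean) and is an assumption rather than necessarily the authors’ exact definition.

```python
# Sketch of the nine feature values of Section 2.4: Max, RC, and Var are
# computed for each of the three feature maps (VS, CD, gradient).
import numpy as np

def map_features(feature_map):
    """Return (Max, RC, Var) of one feature map."""
    x = np.asarray(feature_map, dtype=np.float64).ravel()
    x_mean = x.mean()
    f_max = x.max()                                        # Equation (6)
    f_rc = np.abs(x - x_mean).mean() / (x_mean + 1e-12)    # Equation (7), assumed form
    f_var = x.var()                                        # Equation (8)
    return f_max, f_rc, f_var

def image_feature_vector(vs_map, cd_map, grad_map):
    """Concatenate the three statistics of the three maps: 9 values per image."""
    feats = []
    for m in (vs_map, cd_map, grad_map):
        feats.extend(map_features(m))
    return np.array(feats)
```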
3. Algorithm Framework
3.1. Generalized Regression Neural Network (GRNN)
A GRNN is a powerful regression tool with a dynamic network structure [30]. It is a radial basis function neural network based on non-parametric estimation for nonlinear regression. The network structure of a GRNN is shown in Figure 5, which includes an input layer for the conditional samples, a corresponding pattern layer, a summation layer, and an output layer for the final network training results. The number of neurons in the input layer is equal to the dimension of the input vector of a learning sample; in this paper, it equals the number of feature values extracted from an image. Each input neuron is a simple distribution unit that directly passes the input variables to the pattern layer. The number of neurons in the pattern layer is equal to the number of training samples n, and each neuron corresponds to a different sample. Each neuron in the pattern layer is connected to two neurons in the summation layer, and the output layer calculates the quotient of the two outputs of the summation layer to generate the feature-based prediction. For an input X, the network output is
$\hat{Y}(X) = \dfrac{\sum_{i=1}^{n} Y_i \exp\left[-\dfrac{(X - X_i)^{T}(X - X_i)}{2\sigma^{2}}\right]}{\sum_{i=1}^{n} \exp\left[-\dfrac{(X - X_i)^{T}(X - X_i)}{2\sigma^{2}}\right]}$ (9)
where n is the number of training samples, $X_i$ and $Y_i$ are the input feature vector and target value of the i-th sample, the Gaussian kernel $\exp\left[-(X - X_i)^{T}(X - X_i)/(2\sigma^{2})\right]$ is the transfer function of the pattern layer neuron, and σ is the spread parameter. The larger the value of σ, the smoother the function approximation.
To achieve the optimal performance of a GRNN, it is necessary to determine the ideal value of σ. In IQA studies, the method of controlling variables is commonly used to determine such parameter values. However, an adaptive optimization method is adopted in the proposed method to obtain better performance. In Table 1, three adaptive optimization methods, i.e., the fruit fly optimization algorithm (FOA), the firefly algorithm (FA), and particle swarm optimization (PSO), are tested on the BID database. In addition, a GRNN without an adaptive optimization method is also tested. The best results are highlighted in boldface in Table 1. Based on the corresponding evaluation criteria (SROCC and PLCC), PSO performs better than the other methods. Therefore, the PSO-GRNN is selected as the main framework of the proposed IQA method.
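A minimal NumPy sketch of the GRNN prediction of Equation (9) is given below: the pattern layer computes Gaussian weights from the distances to the stored training samples, and the summation and output layers form the weighted average of the training targets. This is a toy illustration, not the authors’ implementation, and the helper name grnn_predict is ours.

```python
# Minimal NumPy sketch of GRNN prediction (Equation (9)): the output is a
# Gaussian-weighted average of the training targets, controlled by the
# spread parameter sigma.
import numpy as np

def grnn_predict(X_train, y_train, X_test, sigma):
    """X_*: (n, d) feature matrices, y_train: (n,) targets."""
    X_train = np.asarray(X_train, dtype=np.float64)
    y_train = np.asarray(y_train, dtype=np.float64)
    X_test = np.asarray(X_test, dtype=np.float64)
    # Squared Euclidean distances between every test and training sample.
    d2 = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer outputs
    # Summation layer: weighted targets divided by the sum of weights.
    return (w @ y_train) / (w.sum(axis=1) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((50, 9)); y = X.sum(axis=1)
    print(grnn_predict(X, y, X[:3], sigma=0.3))
```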
3.2. Particle Swarm Optimization (PSO) Algorithm
The calculation steps of PSO are shown in Figure 6. The specific implementation steps are as follows:
(1). Initialize the population: randomly initialize the position (Pi) and velocity (vi) of each particle in the population, set the maximum number of iterations of the algorithm, etc.
(2). Calculate the fitness value of each particle based on the fitness function, save the best position of each particle and its individual best fitness value (pbesti), and save the global best position of the population (gbest).
(3). Update the velocity and position of each particle according to the following equations:
$v_i(t+1) = \omega\, v_i(t) + c_1 r_1 \left[pbest_i - P_i(t)\right] + c_2 r_2 \left[gbest - P_i(t)\right]$ (10)
$P_i(t+1) = P_i(t) + v_i(t+1)$ (11)
where c1 and c2 are the learning factors, also known as acceleration constants; r1 and r2 are uniform random numbers within the range of [0, 1]; ω is the inertia weight; and t is the iteration number. After updating, calculate the fitness value of each particle and compare it with the fitness value of its historical best position; if it is better, take the current position as the new best position of that particle. Then, for each particle, compare the fitness value of its best position with the population’s best fitness value; if it is better, update the population’s best position and fitness value.
(4). Determine whether the search results meet the stopping conditions (the maximum number of iterations is reached or the accuracy requirement is met). If the stopping conditions are met, output the optimal value; otherwise, return to step (2) and continue until the stopping conditions are met. A code sketch of this procedure follows.
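The sketch below follows steps (1)–(4) and Equations (10) and (11); the swarm size, iteration count, inertia weight, learning factors, and search bounds are illustrative assumptions rather than the values used in the paper, and the helper name pso_minimize is ours.

```python
# Minimal PSO sketch following steps (1)-(4) and Equations (10)-(11).
import numpy as np

def pso_minimize(fitness, dim, n_particles=20, n_iter=50,
                 bounds=(0.01, 2.0), w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, dim))      # step (1)
    vel = np.zeros_like(pos)
    pbest_pos = pos.copy()                                   # step (2)
    pbest_val = np.array([fitness(p) for p in pos])
    g = pbest_val.argmin()
    gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]
    for _ in range(n_iter):                                  # steps (3)-(4)
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        # Equation (10): velocity update; Equation (11): position update.
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val
        pbest_pos[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest_val.argmin()
        if pbest_val[g] < gbest_val:
            gbest_pos, gbest_val = pbest_pos[g].copy(), pbest_val[g]
    return gbest_pos, gbest_val

if __name__ == "__main__":
    # Toy example: minimise (sigma - 0.6)^2 over a single dimension.
    best, val = pso_minimize(lambda p: (p[0] - 0.6) ** 2, dim=1)
    print(best, val)
```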
3.3. PSO-GRNN Image Quality Evaluation Model
A GRNN has a strong nonlinear mapping ability and a flexible network structure. The PSO-GRNN prediction model was introduced in reference [31]. By optimizing the spread parameter of the GRNN, it can alleviate the problems of easy convergence to local minima and slow convergence during network training, and it improves the generalization ability of the neural network.
This article extends the PSO-GRNN model to blurred image quality evaluation under different illumination imaging conditions. The design of the PSO-GRNN-based IQA method is illustrated in Figure 6. Firstly, VS, CD, and gradient processing is performed on all color blurred images, and the feature values are obtained by Equations (6)–(8). Then, 80% of the images in the database are randomly selected, and the feature values and benchmark values, i.e., MOS or DMOS, of these images are input into the GRNN for training. Later, the spread parameter σ of the GRNN is optimized by PSO. Finally, the trained PSO-GRNN is used to evaluate the quality of the remaining 20% of the images in the database and obtain the predicted image quality values.
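Under the assumptions above, the overall pipeline can be sketched as follows. It reuses the grnn_predict and pso_minimize helpers from the earlier sketches (assumed to be defined in the same script) and uses random placeholders instead of the extracted feature values and MOS labels. The inner fit/validation split used to score a candidate σ is our assumption; the paper does not detail the fitness function used during PSO.

```python
# End-to-end sketch of the PSO-GRNN evaluation pipeline of Section 3.3,
# assuming grnn_predict() and pso_minimize() from the sketches above are
# defined in the same script.
import numpy as np
from scipy.stats import pearsonr

def train_and_test(features, mos, train_ratio=0.8, seed=0):
    features = np.asarray(features, float)
    mos = np.asarray(mos, float)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(features))
    n_train = int(train_ratio * len(features))
    tr, te = idx[:n_train], idx[n_train:]

    # Split the training portion again so that sigma is scored on held-out data.
    n_fit = int(0.8 * len(tr))
    fit_idx, val_idx = tr[:n_fit], tr[n_fit:]

    def fitness(p):
        pred = grnn_predict(features[fit_idx], mos[fit_idx],
                            features[val_idx], sigma=float(p[0]))
        return np.mean((pred - mos[val_idx]) ** 2)

    best_sigma, _ = pso_minimize(fitness, dim=1)            # PSO searches sigma
    pred_te = grnn_predict(features[tr], mos[tr], features[te],
                           sigma=float(best_sigma[0]))
    plcc, _ = pearsonr(pred_te, mos[te])                     # quality of the fit
    return plcc

if __name__ == "__main__":
    feats = np.random.rand(200, 9)                # 9 feature values per image
    scores = feats @ np.linspace(0.1, 1.0, 9)     # synthetic "subjective" scores
    print(train_and_test(feats, scores))
```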
4. Experiments and Discussion
4.1. Database and Evaluation Indicators
In our study, experiments are performed on the BID [22], CID2013 [23], and CLIVE [24] databases. These three databases are all public realistic blur image databases. The information for each database is given in Table 2. Eight different scenes are included in CID2013: Scenes 1, 2, 3, and 6 appear in six datasets (I–VI), Scene 4 in four datasets (I–IV), Scene 5 in five datasets (I–V), Scene 7 in one dataset (VI), and Scene 8 in two datasets (V, VI). Table 3 shows the descriptions and example images for each scene. The BID database contains 586 images, while the CLIVE database contains 1162 images, all of which are real blurry images captured in real-world environments. These three databases are commonly utilized in realistic IQA studies, covering a wide range of authentic distortions encountered in real-world applications.
The most commonly used sharpness performance evaluation indicators are the Pearson linear correlation coefficient (PLCC) and Spearman’s rank-ordered correlation coefficient (SROCC). PLCC and SROCC values closer to 1 indicate better prediction performance of an objective method.
1. Prediction Accuracy
The PLCC is used to measure the prediction accuracy of an IQA method. To compute the PLCC, a five-parameter logistic regression is used to map the objective scores onto the same scale as the subjective ratings [31]:
$p(x) = \beta_1\left(\dfrac{1}{2} - \dfrac{1}{1 + e^{\beta_2 (x - \beta_3)}}\right) + \beta_4 x + \beta_5$ (12)
where x denotes the objective quality scores directly from an IQA method, p(x) denotes the IQA scores after the regression step, and β1, …, β5 are model parameters that are found numerically to maximize the correlation between subjective and objective scores. The PLCC value of an IQA method is then calculated as
$\mathrm{PLCC} = \dfrac{\frac{1}{n}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)\left(y_i - \bar{y}\right)}{\sigma_x \sigma_y}$ (13)
where $\bar{x}$ and $\bar{y}$ are the mean values of $x_i$ and $y_i$, respectively, and $\sigma_x$ and $\sigma_y$ are the corresponding standard deviations.
2. Prediction Monotonicity
The SROCC is used to measure the prediction monotonicity of an IQA method. The SROCC value of an IQA method on a database with n images is calculated as [32]
$\mathrm{SROCC} = 1 - \dfrac{6\sum_{i=1}^{n} d_i^{2}}{n\left(n^{2} - 1\right)}$ (14)
where $d_i = r_{x_i} - r_{y_i}$, and $r_{x_i}$ and $r_{y_i}$ represent the ranks of the prediction score and the subjective score of the i-th image, respectively. A short code sketch of both indicators is given below.
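As mentioned above, the sketch below maps the objective scores through the five-parameter logistic of Equation (12) before computing the PLCC, and computes the SROCC of Equation (14) directly from the ranks; scipy is used for the curve fitting and the correlation coefficients, and the initial parameter guess is an arbitrary assumption.

```python
# Sketch of the two evaluation indicators: PLCC after the five-parameter
# logistic mapping of Equation (12), and SROCC of Equation (14).
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr, spearmanr

def logistic5(x, b1, b2, b3, b4, b5):
    """Five-parameter logistic mapping of Equation (12)."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def plcc_srocc(objective, subjective):
    objective = np.asarray(objective, float)
    subjective = np.asarray(subjective, float)
    # Arbitrary initial guess for the logistic parameters (an assumption).
    p0 = [np.ptp(subjective), 0.1, np.mean(objective), 0.0, np.mean(subjective)]
    params, _ = curve_fit(logistic5, objective, subjective, p0=p0, maxfev=20000)
    mapped = logistic5(objective, *params)
    plcc, _ = pearsonr(mapped, subjective)
    srocc, _ = spearmanr(objective, subjective)   # rank-based, no mapping needed
    return plcc, srocc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    obj = rng.random(100)
    subj = 5 * obj + rng.normal(0, 0.2, 100)      # noisy monotone relation
    print(plcc_srocc(obj, subj))
```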
4.2. Performance Comparison
In this section, real blurry images from the BID, CID2013, and CLIVE databases are tested to obtain a prediction score for each image. The predicted scores are then fitted to the subjective scores of the images to obtain the corresponding PLCC and SROCC values. Each database was tested 20 times by the proposed method, and the average value of the 20 tests was taken as the final fitting result for the entire database. The PLCC and SROCC results of the proposed method and the comparison methods are shown in Table 4. The comparison methods are related to the spatial domain, the frequency domain, machine learning, and deep learning. The best results are highlighted in boldface for the two indices in Table 4. RISE [15] (2017), SR [16] (2016), Yu’s CNN [17] (2017), DIQA [18] (2019), SSEQ [19] (2014), SFA [20] (2019), DB-CNN [21] (2020), DIVINE [33], and NIQE [34] are learning-based algorithms, while MLV [11] (2014), BIBLE [12] (2017), BISHARP [13] (2018), and GCDV [28] (2024) are methods related to the spatial and frequency domains. Moreover, DIVINE [33] and NIQE [34] are general-purpose NR-IQA methods, and the other compared methods are all image sharpness assessment methods.
It can be seen that the results of the proposed method on these three databases are all above 0.85. Overall, learning-based algorithms perform better than the methods related to the spatial and frequency domains in evaluating the quality of real blurry and distorted images. The PSO-GRNN-based proposed method yields better performance than other advanced network structure-based methods, i.e., Yu’s CNN [17] (2017), DIQA [18] (2019), SFA [20] (2019), and DB-CNN [21] (2020). Therefore, a fully connected GRNN is suitable for dealing with IQA problems.
4.3. Performance of Image Feature Selection
In this section, the impact of feature selection on the performance of the proposed method is verified, and the features are the VS, CD, and gradient. Three feature value calculation methods, i.e., Max, RC, and Var, are selected for these features. In Table 5, seven different feature value combinations are tested and the best results are highlighted in boldface for the two indices.
From Table 5, it can be seen that the results of all combinations on these three databases are almost all above 0.8. Thus, the proposed method yields good performance with all feature value combinations. In particular, the combination of all three feature value calculation methods yields the best results. Furthermore, comparing the data reveals that Var plays a slightly greater role in the feature combination than Max and RC.
4.4. Performance on Different Scenarios on CID2013
The CID2013 database consists of six datasets (I–VI) and each dataset contains six different scenes [23]. This section focuses on testing images from different groups in the CID2013 database to verify the evaluation effect of the proposed method on images with the same content under different lighting conditions. The test results are shown in Table 6. In Table 6, the results of the same scenarios with different subjects are set in the same background color.
A total of 36 groups were tested, and after analysis, more than 50% (20 groups) of the fitting results were above 0.90, and more than 75% (28 groups) of the fitting results were above 0.80. The worst SROCC and PLCC results from the above test were 0.7090 and 0.7326, respectively. In this part, the content of the test images was the same in each group, but the lighting conditions were different. From the test data, it can be concluded that the proposed method yields good and stable performance on IQA under different lighting conditions.
Based on the results from Table 6, two box charts of SROCC and the PLCC on different scenarios are shown in Figure 7. From Figure 7, the proposed method shows better performance in Scenes 3 and 4, which are all indoor scenarios with subject illuminance between 10 and 1000 lux. In addition, the proposed method also shows stable performance in Scenes 3 and 4. For the other scenarios, the proposed method yields similar performance.
4.5. Scatter Plot and Fitting Curve
A total of 20 experiments were conducted on the BID, CID2013, and CLIVE databases to obtain 20 PLCC and SROCC values. The average of these 20 values was taken as the final PLCC and SROCC values of the proposed method, and the results are presented in Table 4. Here, a random test was conducted on these databases, and the scatter plots of the proposed method’s prediction results against the subjective scores from the databases are shown in Figure 8. From Figure 8, it can be seen that the proposed method performs well in evaluating the quality of real blurry images, and its regression curve correlates well with the subjective observation values.
5. Conclusions
An NR image sharpness evaluation method for images under different lighting imaging conditions is proposed in this article. The proposed method consists of two parts, namely the feature value extraction part and the machine learning part using a PSO-GRNN. Firstly, the VS, CD, and gradient feature information is extracted from the test image and the related feature maps are obtained. Then, the Max, RC, and Var calculations are conducted on these feature maps to obtain the feature values. Lastly, the PSO algorithm is used to optimize the GRNN, and the image feature values are input into the PSO-GRNN to predict the image sharpness. Tests are conducted on real databases, i.e., the BID, CID2013, and CLIVE databases, and other state-of-the-art or widely cited learning-based IQA methods are selected for comparison with the proposed method. The results indicate that the proposed method produces better prediction accuracy than all of the competing methods. In the future, further studies can be conducted on the evaluation of different specific illumination parameters.
Conceptualization, H.H. and C.S.; methodology, H.H.; software, H.H.; validation, H.H., B.J., and C.S.; formal analysis, C.S.; investigation, Y.L. (Yandan Lin); resources, B.J.; data curation, H.H.; writing—original draft preparation, H.H.; writing—review and editing, C.S.; visualization, C.S.; supervision, Y.L. (Yuelin Lu); project administration, C.S.; funding acquisition, B.J. and C.S. All authors have read and agreed to the published version of the manuscript.
Not applicable.
Not applicable.
Data are contained within the article.
The authors declare no conflicts of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1. Images of the same content under different lighting conditions and the corresponding VS maps. (a,b) are images of the same content under different lighting conditions [23], while (c,d) are the corresponding VS maps.
Figure 2. (a) and (b) are two CD pseudo-color maps of different images in CID2013.
Figure 3. Blurred images of the same content under different lighting conditions and corresponding gradient maps: (a,b) are blurry images of the same content under different lighting conditions [23], while (c,d) are corresponding gradient maps.
Table 1. Performance on different adaptive optimization methods.

Database | Criteria | GRNN | FOA-GRNN | FA-GRNN | PSO-GRNN
---|---|---|---|---|---
BID | SROCC | 0.880 | 0.880 | 0.876 | 0.885
BID | PLCC | 0.885 | 0.887 | 0.883 | 0.890
Table 2. Database information description.
Database | Blur Images | Subjective Scores | Typical Size | Score Range |
---|---|---|---|---|
BID | 586 | MOS | 1280 × 960 | [0, 5] |
CID2013 | 474 | MOS | 1600 × 1200 | [0, 100] |
CLIVE | 1162 | MOS | 500 × 500 | [0, 100] |
Table 3. Introduction to CID2013.

Scene | Subject Illuminance (lux) | Subject Distance (m) | Scene Description | Example Images | Image Sets | Motivation
---|---|---|---|---|---|---
1 | 2 | 0.5 | Close-up in dark lighting conditions | [Image omitted. Please see PDF.] | I–VI | Bar and restaurant setting |
2 | 100 | 1.5 | Close-up in typical indoor lighting conditions | [Image omitted. Please see PDF.] | I–VI | Living room |
3 | 10 | 4.0 | Small group in dim lighting conditions | [Image omitted. Please see PDF.] | I–VI | Living room |
4 | 1000 | 1.5 | Studio image | [Image omitted. Please see PDF.] | I–IV | Studio image |
5 | >3400 | 3.0 | Small group in cloudy bright to sunny lighting conditions | [Image omitted. Please see PDF.] | I–V | Typical tourist image |
6 | >3400 | >50 | Close-up in high dynamic range lighting conditions | [Image omitted. Please see PDF.] | I–VI | Landscape image |
7 | >3400 | 3.0 | Small group in cloudy bright to sunny lighting conditions (~3× optical or digital zoom) | [Image omitted. Please see PDF.] | VI | General zooming |
8 | >3400 | 1.5 | Close-up in high dynamic range lighting conditions | [Image omitted. Please see PDF.] | V, VI | High dynamic range scene |
Table 4. Comparison between the proposed method and others.

Databases | BID [22] | | CID2013 [23] | | CLIVE [24] |
---|---|---|---|---|---|---
Criteria | PLCC | SROCC | PLCC | SROCC | PLCC | SROCC
BISHARP [13] | 0.356 | 0.307 | 0.678 | 0.681 | - | -
BIBLE [12] | 0.392 | 0.361 | 0.698 | 0.687 | 0.515 | 0.427
MLV [11] | 0.375 | 0.317 | 0.689 | 0.621 | 0.400 | 0.339
GCDV [28] | 0.338 | 0.294 | 0.681 | 0.596 | 0.405 | 0.334
RISE [15] | 0.602 | 0.584 | 0.793 | 0.769 | 0.555 | 0.515
SR [16] | 0.415 | 0.467 | 0.621 | 0.634 | - | -
Yu’s CNN [17] | 0.560 | 0.557 | 0.715 | 0.704 | 0.501 | 0.502
DIQA [18] | 0.506 | 0.492 | 0.720 | 0.708 | 0.704 | 0.703
SSEQ [19] | 0.604 | 0.581 | 0.689 | 0.676 | - | -
SFA [20] | 0.840 | 0.826 | - | - | 0.833 | 0.812
DB-CNN [21] | 0.471 | 0.464 | 0.686 | 0.672 | 0.869 | 0.851
DIVINE [33] | 0.506 | 0.489 | 0.499 | 0.477 | 0.558 | 0.509
NIQE [34] | 0.471 | 0.469 | 0.693 | 0.633 | 0.478 | 0.421
Proposed | 0.890 | 0.885 | 0.924 | 0.913 | 0.873 | 0.867
Table 5. The performance of different feature selections.

Databases | Feature Value | PLCC | SROCC
---|---|---|---
BID | Max | 0.828 | 0.825
BID | RC | 0.802 | 0.788
BID | Var | 0.852 | 0.850
BID | Max + RC | 0.846 | 0.844
BID | Max + Var | 0.886 | 0.877
BID | RC + Var | 0.877 | 0.869
BID | Max + RC + Var | 0.890 | 0.885
CID2013 | Max | 0.902 | 0.892
CID2013 | RC | 0.843 | 0.828
CID2013 | Var | 0.917 | 0.909
CID2013 | Max + RC | 0.902 | 0.888
CID2013 | Max + Var | 0.925 | 0.915
CID2013 | RC + Var | 0.922 | 0.916
CID2013 | Max + RC + Var | 0.924 | 0.913
CLIVE | Max | 0.834 | 0.821
CLIVE | RC | 0.785 | 0.769
CLIVE | Var | 0.851 | 0.858
CLIVE | Max + RC | 0.866 | 0.857
CLIVE | Max + Var | 0.871 | 0.863
CLIVE | RC + Var | 0.861 | 0.855
CLIVE | Max + RC + Var | 0.873 | 0.867
Table 6. Test results in different scenarios on CID2013.
Scenarios | SROCC | PLCC | Scenarios | SROCC | PLCC | Scenarios | SROCC | PLCC |
---|---|---|---|---|---|---|---|---|
IS_I_C01 | 0.789 | 0.801 | IS_I_C02 | 0.878 | 0.934 | IS_I_C03 | 0.989 | 0.991 |
IS_II_C01 | 0.709 | 0.860 | IS_II_C02 | 0.781 | 0.745 | IS_II_C03 | 0.841 | 0.934 |
IS_III_C01 | 0.979 | 0.991 | IS_III_C02 | 0.772 | 0.741 | IS_III_C03 | 0.985 | 0.990 |
IS_IV_C01 | 0.908 | 0.890 | IS_IV_C02 | 0.955 | 0.997 | IS_IV_C03 | 0.960 | 0.960 |
IS_V_C01 | 0.902 | 0.890 | IS_V_C02 | 0.968 | 0.990 | IS_V_C03 | 0.993 | 0.999 |
IS_VI_C01 | 0.866 | 0.938 | IS_VI_C02 | 0.928 | 0.937 | IS_VI_C03 | 0.966 | 0.991 |
IS_I_C04 | 0.964 | 0.956 | IS_I_C05 | 0.884 | 0.930 | IS_I_C06 | 0.964 | 0.984 |
IS_II_C04 | 0.968 | 0.948 | IS_II_C05 | 0.877 | 0.836 | IS_II_C06 | 0.775 | 0.733 |
IS_III_C04 | 0.972 | 0.971 | IS_III_C05 | 0.844 | 0.871 | IS_III_C06 | 0.791 | 0.807 |
IS_IV_C04 | 0.908 | 0.968 | IS_IV_C05 | 0.977 | 0.996 | IS_IV_C06 | 0.952 | 0.969 |
IS_VI_C07 | 0.849 | 0.967 | IS_V_C05 | 0.975 | 0.961 | IS_V_C06 | 0.765 | 0.833 |
IS_V_C08 | 0.965 | 0.984 | IS_VI_C08 | 0.804 | 0.968 | IS_VI_C06 | 0.743 | 0.983 |
References
1. Sachin; Kumar, R.; Sakshi; Yadav, R.; Reddy, S.G.; Yadav, A.K.; Singh, P. Advances in Optical Visual Information Security: A Comprehensive Review. Photonics; 2024; 11, 99. [DOI: https://dx.doi.org/10.3390/photonics11010099]
2. Xu, W.; Wei, L.; Yi, X.; Lin, Y. Spectral Image Reconstruction Using Recovered Basis Vector Coefficients. Photonics; 2023; 10, 1018. [DOI: https://dx.doi.org/10.3390/photonics10091018]
3. Sun, X.; Kong, L.; Wang, X.; Peng, X.; Dong, G. Lights off the Image: Highlight Suppression for Single Texture-Rich Images in Optical Inspection Based on Wavelet Transform and Fusion Strategy. Photonics; 2024; 11, 623. [DOI: https://dx.doi.org/10.3390/photonics11070623]
4. Qiu, J.; Xu, H.; Ye, Z.; Diao, C. Image quality degradation of object-color metamer mismatching in digital camera color reproduction. Appl. Opt.; 2018; 57, pp. 2851-2860. [DOI: https://dx.doi.org/10.1364/AO.57.002851]
5. Liu, C.; Zou, Z.; Miao, Y.; Qiu, J. Light field quality assessment based on aggregation learning of multiple visual features. Opt. Express; 2022; 30, pp. 38298-38318. [DOI: https://dx.doi.org/10.1364/OE.467754]
6. Kim, B.; Heo, D.; Moon, W.; Hahh, J. Absolute Depth Estimation Based on a Sharpness-assessment Algorithm for a Camera with an Asymmetric Aperture. Curr. Opt. Photonics; 2021; 5, pp. 514-523.
7. Baig, M.A.; Moinuddin, A.A.; Khan, E. A simple spatial domain method for quality evaluation of blurred images. Multimed. Syst.; 2024; 30, 28. [DOI: https://dx.doi.org/10.1007/s00530-023-01223-6]
8. Wang, Z.; Bovik, A.; Sheikh, H. Image Quality Assessment: From error visibility to structural similarity. IEEE Trans. Image Process.; 2004; 13, pp. 600-612. [DOI: https://dx.doi.org/10.1109/TIP.2003.819861]
9. Shi, C.; Lin, Y. Full reference image quality assessment based on visual salience with color appearance and gradient similarity. IEEE Access; 2020; 8, pp. 97310-97320. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.2995420]
10. Dost, S.; Saud, F.; Shabbir, M.; Khan, M.G.; Shahid, M.; Lovstrom, B. Reduced reference image and video quality assessments: Review of methods. EURASIP J. Image Video Process.; 2022; 2022, pp. 1-31. [DOI: https://dx.doi.org/10.1186/s13640-021-00578-y]
11. Bahrami, K.; Kot, A.C. A fast approach for no-reference image sharpness assessment based on maximum local variation. IEEE Signal Process. Lett.; 2014; 21, pp. 751-755. [DOI: https://dx.doi.org/10.1109/LSP.2014.2314487]
12. Li, L.; Lin, W.; Wang, X.; Yang, G.; Bahrami, K.; Kot, A.C. No-reference image blur assessment based on discrete orthogonal moments. IEEE Trans. Cybern.; 2017; 46, pp. 39-50. [DOI: https://dx.doi.org/10.1109/TCYB.2015.2392129] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25647763]
13. Gvozden, G.; Grgic, S.; Grgic, M. Blind image sharpness assessment based on local contrast map statistics. J. Vis. Commun. Image Represent.; 2018; 50, pp. 145-158. [DOI: https://dx.doi.org/10.1016/j.jvcir.2017.11.017]
14. Zhu, M.; Yu, L.; Wang, Z.; Ke, Z.; Zhi, C. Review: A Survey on Objective Evaluation of Image Sharpness. Appl. Sci.; 2023; 13, 2652. [DOI: https://dx.doi.org/10.3390/app13042652]
15. Li, L.; Xia, W.; Lin, W.; Fang, Y.; Wang, S. No-Reference and Robust Image Sharpness Evaluation Based on Multiscale Spatial and Spectral Features. IEEE Trans. Multimed.; 2017; 19, pp. 1030-1040. [DOI: https://dx.doi.org/10.1109/TMM.2016.2640762]
16. Lu, Q.; Zhou, W.; Li, H. A no-reference image sharpness metric based on structural information using sparse representation. Inf. Sci.; 2016; 369, pp. 334-346. [DOI: https://dx.doi.org/10.1016/j.ins.2016.06.042]
17. Yu, S.; Wu, S.; Wang, L.; Jiang, F.; Xie, Y.; Li, L. A shallow convolutional neural network for blind image sharpness assessment. PLoS ONE; 2017; 12, e0176632. [DOI: https://dx.doi.org/10.1371/journal.pone.0176632]
18. Kim, J.; Nguyen, A.D.; Lee, S. Deep CNN-based blind image quality predictor. IEEE Trans. Neural Netw. Learn. Syst.; 2019; 30, pp. 11-24. [DOI: https://dx.doi.org/10.1109/TNNLS.2018.2829819]
19. Liu, L.; Liu, B.; Huang, H.; Bovik, A.C. No-reference image quality assessment based on spatial and spectral entropies. Signal Process. Image Commun.; 2014; 29, pp. 856-863. [DOI: https://dx.doi.org/10.1016/j.image.2014.06.006]
20. Li, D.; Jiang, T.; Lin, W.; Jiang, M. Which has better visual quality: The clear blue sky or a blurry animal?. IEEE Trans. Multimed.; 2019; 21, pp. 1221-1234. [DOI: https://dx.doi.org/10.1109/TMM.2018.2875354]
21. Zhang, W.X.; Ma, K.D.; Yan, J.; Deng, D.; Wang, Z. Blind image quality assessment using a deep bilinear convolutional neural network. IEEE Trans. Circuits Syst. Video Technol.; 2020; 30, pp. 36-47. [DOI: https://dx.doi.org/10.1109/TCSVT.2018.2886771]
22. Ciancio, A.; da Costa, A.L.N.T.; Silva, E.A.B.D.; Said, A.; Samadani, R.; Obrador, P. No-reference blur assessment of digital pictures based on multifeature classifiers. IEEE Trans. Image Process.; 2011; 20, pp. 64-75. [DOI: https://dx.doi.org/10.1109/TIP.2010.2053549] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/21172744]
23. Virtanen, T.; Nuutinen, M.; Vaahteranoksa, M.; Oittinen, P.; Häkkinen, J. CID2013: A database for evaluating no-reference image quality assessment algorithms. IEEE Trans. Image Process.; 2015; 24, pp. 390-402.
24. Ghadiyaram, D.; Bovik, A.C. Massive online crowdsourced study of subjective and objective picture quality. IEEE Trans. Image Process.; 2016; 25, pp. 372-387.
25. Kim, W.; Kim, C. Saliency detection via textural contrast. Opt. Lett.; 2012; 37, pp. 1550-1552. [DOI: https://dx.doi.org/10.1364/OL.37.001550]
26. Zahra, S.S.; Karim, F. Visual saliency detection via integrating bottom-up and top-down information. Optik; 2019; 178, pp. 1195-1207.
27. Zhang, L.; Shen, Y.; Li, H. VSI: A visual saliency-induced index for perceptual image quality assessment. IEEE Trans. Image Process.; 2014; 23, pp. 4270-4281. [DOI: https://dx.doi.org/10.1109/TIP.2014.2346028]
28. Shi, C.; Lin, Y. No reference image sharpness assessment based on global color difference variation. Chin. J. Electron.; 2024; 33, pp. 293-302. [DOI: https://dx.doi.org/10.23919/cje.2022.00.058]
29. Varga, D. Full-Reference Image Quality Assessment Based on Grünwald–Letnikov Derivative, Image Gradients, and Visual Saliency. Electronics; 2022; 11, 559. [DOI: https://dx.doi.org/10.3390/electronics11040559]
30. Li, C.; Bovik, A.C.; Wu, X. Blind Image Quality Assessment Using a General Regression Neural Network. IEEE Trans. Neural Netw.; 2011; 22, pp. 793-799.
31. Zhao, M.; Ji, S.; Wei, Z. Risk prediction and risk factor analysis of urban logistics to public security based on PSO-GRNN algorithm. PLoS ONE; 2020; 15, e0238443. [DOI: https://dx.doi.org/10.1371/journal.pone.0238443] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/33017446]
32. Sheikh, H.R.; Sabir, M.F.; Bovik, A.C. A statistical evaluation of recent full reference image quality assessment algorithms. IEEE Trans. Image Process.; 2006; 15, pp. 3440-3451.
33. Moorthy, A.K.; Bovik, A.C. Blind image quality assessment: From natural scene statistics to perceptual quality. IEEE Trans. Image Process.; 2011; 20, pp. 3350-3364. [DOI: https://dx.doi.org/10.1109/TIP.2011.2147325]
34. Mittal, A.; Soundararajan, R.; Bovik, A.C. Making a ‘completely blind’ image quality analyzer. IEEE Signal Process. Lett.; 2013; 20, pp. 209-212. [DOI: https://dx.doi.org/10.1109/LSP.2012.2227726]
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
Blurriness is troublesome in digital images captured under different illumination imaging conditions. To obtain an accurate blurred image quality assessment (IQA), a machine learning-based objective evaluation method for image sharpness under different illumination imaging conditions is proposed. In this method, the visual saliency, color difference, and gradient information are selected as the image features, and the relevant feature information of these three aspects is extracted from the image as the feature values for blurred image evaluation under different illumination imaging conditions. Then, a particle swarm optimization-based general regression neural network (PSO-GRNN) is established to train on the above extracted feature values, and the final blurred image evaluation result is determined. The proposed method was validated on three databases, i.e., BID, CID2013, and CLIVE, which contain real blurred images under different illumination imaging conditions. The experimental results showed that the proposed method has good performance in evaluating the quality of images under different imaging conditions.
1 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China;
2 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China;
3 School of Artificial Intelligence, Anhui Polytechnic University, Wuhu 241000, China;
4 Department of Illuminating Engineering & Light Sources, School of Information Science and Technology, Fudan University, Shanghai 200433, China