In the field of data compression, the performance of an image compression technique is judged by the compression ratio it achieves while keeping the visual quality of the decompressed image as close to the original as possible. In conventional vector quantization techniques, the size of the code vector plays an important role in determining the amount of space required to store an image: the compression ratio of the method decreases as the size of the code vector increases. The current study proposes a new image compression technique that generates a common code vector for a number of images of the same or different sizes by adjusting some tuning parameters. This common code vector holds a unique code word for each image, and the index matrices are updated according to the index values of the common code vector. The images are decompressed using the respective index matrix and the common code vector. Thus, for images of the same or different sizes, only one common code vector is generated. The size of the common code vector is much smaller than the total size of the individual code vectors, so a very high compression ratio is achieved. The proposed method is applied to many standard images found in the literature and to images from the UCID v.2 color image database. Experimental results are analyzed in terms of peak signal to noise ratio (PSNR), structure similarity index parameter (SSIM), and compression ratio. The experimental results show that the proposed method achieves an average compression ratio of 95.12%, which is 3.51% higher than the conventional vector quantization algorithm and 7.42% higher than the existing modified vector quantization technique, while keeping the visual quality of the decompressed image almost the same as those two image compression algorithms.
Introduction
In an image compression technique, an image is compressed by removing redundancies present in the image. This reduces the amount of space required to store the image in the storage medium, which decreases the communication cost over the medium [1–18]. There are two basic steps in image compression [10]: (i) compression, which consists of pre-processing and encoding of the input image, and (ii) decompression, the reverse process of compression. Image compression techniques work in two domains: (a) the spatial domain, where the pixels of the image are represented and processed in 2D matrix form; and (b) the frequency domain, where image transformation techniques such as DFT and DCT are first applied on the input image, followed by pre-processing techniques [8–10]. The objective of image compression is to reduce and remove redundant bits to decrease the size of an image. In the literature, three types of redundancies are found: (a) coding redundancy, which arises when a less-than-optimal code word is used; (b) inter-pixel redundancy, which is based on the correlation between neighboring pixels of the image; and (c) psycho-visual redundancy, which exists because human eyes do not respond to all visual information with equal intensity [8, 18]. Image compression techniques are classified into two categories: (a) lossless image compression [8–10], where the decompressed image has the same visual quality as the original but only a very low compression ratio is achieved; arithmetic encoding, LZW, run length encoding (RLE), and Huffman encoding [8] are examples of well-known lossless image compression techniques. (b) Lossy image compression [8, 9], where the visual quality of the decompressed image is not up to the mark, but a very high compression ratio is achieved by removing a large amount of redundant data from the image. Color image quantization (CIQ) [8], JPEG [8], and JPEG2000 [8] are some lossy image compression techniques found in the literature.
All authors conceived and designed the study. Dibyendu Barman conducted the experiments, analyzed the data, and wrote the paper. All authors contributed to manuscript revisions, approved the final version of the manuscript, and agreed to be held accountable for its content.
Literature Study
In the literature, many pre-processing and post-processing techniques for vector quantization are found, which improve the performance of the algorithm. Some of them are discussed below.
In 2022, Barman et al. [6] proposed a method that enhanced the performance of the multi-image compression technique in terms of image quality. In this work, two common code vectors are formed instead of one. The first common code vector holds those code words that play vital roles in the formation of the decompressed image. The second code vector contains less important code words. Then a multi-image compression technique is applied to those code vectors. This method improves the visual quality of the decompressed images, but at the same time, the amount of memory required to store the image is also increased.
Hasnat et al. [7] in 2021 proposed a luminance-approximate vector quantization algorithm that improves the quality of the decompressed images of the multi-image compression technique. This method is applicable only on the luminance channel. The visual quality of the decompressed images is very high, but at the same time the compression ratio of the algorithm decreases.
Hasnat et al. [8] in 2019 proposed a new image compression technique where multiple images of the same size are compressed together, achieving a very high compression ratio while keeping the visual quality of the decompressed images close to that of conventional image compression techniques. Here, the luminance channel of each image is compressed separately using a vector quantization technique. The chrominance channels of all the images are combined into a three-dimensional matrix that forms the training vector. Clustering is then applied to this training vector to obtain the initial color representatives. Thus, for the two chrominance channels of all the images, one index matrix and one centroid matrix are generated, with 256 as the number of clusters. The centroid of every cluster is updated individually using an optimization technique to obtain a better centroid pair. This method achieves a very high compression ratio while keeping the visual quality of the decompressed image almost similar to that of standard image compression techniques. The limitation of this method is that it is not applicable to images of different sizes.
In 2012, Kim et al. [11] developed a lossless image compression method which works in YCbCr color model, concentrates on the efficient coding of chrominance channels with a new color transform, and performs hierarchical coding of chrominance channel pixels. In this method, the luminance channel of an image is compressed using conventional lossless image compression techniques such as JPEG-LS, CALIC, or JPEG2000, whereas hierarchical decomposition and directional prediction techniques are applied on chrominance channels to encode it. The major drawback of this method is that only 40% compression ratio is achieved.
The objective of this work is to develop a method that gives a very high compression ratio while keeping the visual quality of the image as close to the original as possible. For measuring the performance of an image compression algorithm, the size of the code vector plays a vital role. This method generates a common code vector for a set of images of the same or different sizes. The present work achieves a very high compression ratio compared to the conventional vector quantization (VQ) and modified vector quantization (MVQ) techniques, while keeping the visual quality of the decompressed image acceptable.
The article is organized as follows: "Literature Study" reviews the related literature; the next section explains the proposed method. In "Experimental Result", experimental results are shown, and the final section concludes the article.
Proposed Method
This section discusses an image compression technique where multiple images of the same or different sizes are compressed together to achieve a higher compression ratio, keeping the visual quality of the decompressed image as close to the original as possible. The proposed method works on de-correlated color models like YCbCr, YIQ, and Lαβ. In the YCbCr color model, the luminance (Y) channel represents image information, while the two chrominance channels (Cb and Cr) carry only color information. In the field of image compression, the size of the code vector plays a very important role: the performance of an image compression technique depends on it, and a smaller code vector means a higher compression ratio. In this study, a number of images of the same or different sizes are compressed together by forming a common code vector, adjusting some tuning parameters. The size of the common code vector is very small compared to the total size of the individual code vectors, while the visual quality of the decompressed image remains very close to that of the existing algorithms' decompressed images. Initially, the images are converted into the YCbCr color model. The step-by-step process is discussed in the subsections.
Compression Process
Algorithm 1: Input: the luminance and chrominance channels of a number of color images. Output: compressed luminance and chrominance channels.
Step 1: Take a number of input images in a de-correlated color model. Partition all the luminance and chrominance channels into an equal number of blocks. The block size may be 4, 6, or 8.
Step 2: Apply the K-means clustering algorithm on all luminance and chrominance channels of the input images. It produces one index matrix and one code vector for each channel, where each code vector contains one code word per cluster.
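The blocking-plus-clustering of Steps 1 and 2 can be sketched as follows. This is a minimal illustration, not the authors' MATLAB code: the function name, the random codebook initialization, and the fixed iteration count are all assumptions.

```python
import numpy as np

def vq_compress_channel(channel, block=4, n_clusters=8, iters=10, seed=0):
    """Partition a channel into block x block tiles and vector-quantize them
    with a plain k-means loop, returning the index matrix and code vector."""
    h, w = channel.shape
    # Crop so the channel tiles evenly into block x block pieces.
    h, w = h - h % block, w - w % block
    tiles = (channel[:h, :w]
             .reshape(h // block, block, w // block, block)
             .swapaxes(1, 2)
             .reshape(-1, block * block)
             .astype(float))
    rng = np.random.default_rng(seed)
    codebook = tiles[rng.choice(len(tiles), n_clusters, replace=False)]
    for _ in range(iters):
        # Assign each tile to its nearest code word (Euclidean distance).
        d = ((tiles[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        index = d.argmin(1)
        # Move each code word to the mean of the tiles assigned to it.
        for k in range(n_clusters):
            members = tiles[index == k]
            if len(members):
                codebook[k] = members.mean(0)
    return index, codebook
```

With block = 4 each code word has 16 values, matching the 4 × 4 block size used throughout the experiments.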
Step 3: Set two tuning parameters, difference and limit, to particular values to form a common code vector matrix from the individual code vectors of the input images. Here, difference denotes the allowed number of positions at which the values of two code words may differ, whereas limit bounds the distance between two values at the same position. The detailed compression procedure is discussed below.
Figures 1, 2, and 3 show the code word insertion procedure for the common code vector matrix. Here, the two tuning parameters, difference and limit, are set to 2 and 4, respectively. Initially, in Fig. 1, the code word number 236 is stored at position 4 of the index matrix. Each value of the code word is compared with all the code words already present in the common code vector matrix. From Fig. 1 it can be seen that a similar code word, i.e., one whose values at the same positions differ beyond the limit in fewer than difference positions, already exists at the third index of the common code vector. So this code word is not placed in the common code vector matrix, and index position 4 of the index matrix is updated with the value 3, the index of the similar code word already present in the common code vector.
Fig. 1 [Images not available. See PDF.]
Insertion of a code word in the common code vector matrix
Fig. 2 [Images not available. See PDF.]
Insertion of a code word in the common code vector matrix
Fig. 3 [Images not available. See PDF.]
Insertion of a code word in the common code vector matrix
In Fig. 2, it is seen that code word number 176, at position 5 of the index matrix, finds a similar code word at index 2 of the common code vector matrix. The distance between the values at the same position of these two code words is within the limit except at position 2, the only position where the distance is 6, i.e., exceeds the limit. But the other tuning parameter, difference, is set to 2, so a single positional difference does not affect the result. So the code word at position 5 of the index matrix is not placed in the common code vector matrix, and position 5 of the index matrix is updated with 2, the index at which the similar code word was found.
In Fig. 3, the code word with value 112, positioned sixth in the index matrix, has no similar code word in the common code vector matrix: for this code word, distances exceeding the limit are found in more than difference (i.e., two) positions with every existing code word. So the code word is placed in the common code vector matrix at the next free index, i.e., the fifth, and the sixth index of the index matrix is updated with the value 5.
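The insertion rule illustrated in Figs. 1–3 can be sketched as below. The function name is hypothetical, and the acceptance condition (reuse an existing code word when at most `difference` positions exceed `limit`) is one plausible reading of the figures, under which the Fig. 2 case (one exceeding position, difference = 2) is accepted and the Fig. 3 case (more than two exceeding positions) is rejected.

```python
def insert_code_word(common, word, difference=2, limit=4):
    """Insert `word` into the common code vector unless a similar code word
    already exists. A code word is 'similar' if its per-position distance to
    `word` exceeds `limit` in at most `difference` positions.
    Returns the (possibly extended) common code vector and the index to store
    in the index matrix."""
    for idx, existing in enumerate(common):
        mismatches = sum(abs(a - b) > limit for a, b in zip(existing, word))
        if mismatches <= difference:
            return common, idx          # reuse the existing code word's index
    common.append(list(word))
    return common, len(common) - 1      # word stored at the next free index
```

Smaller (difference, limit) values keep more distinct code words (better quality), larger values merge more aggressively (better compression), which is why the paper uses (1, 2) for luminance and larger values for chrominance.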
Step 4: Apply the method discussed in Step 3 on all three channels (Y, Cb, Cr) of all the images to complete the common code vector matrix and the updated index matrices.
Step 5: Apply the run length encoding (RLE) technique on all the index matrices to achieve further compression.
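Run length encoding as used in Step 5 can be sketched as a pair of helpers (function names are assumptions; the paper does not specify its RLE variant):

```python
def rle_encode(seq):
    """Run-length encode a flat index sequence as [value, count] pairs."""
    out = []
    for v in seq:
        if out and out[-1][0] == v:
            out[-1][1] += 1          # extend the current run
        else:
            out.append([v, 1])       # start a new run
    return out

def rle_decode(pairs):
    """Reverse of rle_encode: expand [value, count] pairs back to a sequence."""
    return [v for v, n in pairs for _ in range(n)]
```

RLE pays off here because merging similar code words makes long runs of repeated indices in the index matrices more likely.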
Step 6: Apply the Huffman encoding technique to the common code vector to achieve a better compression ratio.
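The Huffman coding of Step 6 can be sketched with a standard heap-based code construction (a generic textbook implementation, not the authors' code; the function name is hypothetical):

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table {symbol: bitstring} from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries are [weight, tie-break, partial code table].
    heap = [[w, i, {s: ""}] for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tick = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)     # two lightest subtrees
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, [w1 + w2, tick, merged])
        tick += 1
    return heap[0][2]
```

Frequent code-vector values get short bitstrings, so the encoded common code vector shrinks further without any additional loss.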
The size of the Huffman-encoded common code vector and RLE-encoded index matrices are used to calculate the size of the compressed images. The detailed compression procedure is shown in Fig. 4.
Fig. 4 [Images not available. See PDF.]
Flowchart of the proposed compression process
Decompression Process
Algorithm 2: Input: encoded common code vector and index matrices. Output: decompressed luminance and chrominance channels of all images.
The decompression process is a reverse process of compression. The steps of the decompression process are discussed below.
Step 1: First, decode all the RLE-encoded index matrices using RLE decoding technique.
Step 2: Similarly, decode the common code vector matrix using the Huffman decoding technique.
Step 3: Reconstruct each decompressed image using the decoded index matrices and the common code vector. The details of the decompression procedure are shown in Fig. 5.
Fig. 5 [Images not available. See PDF.]
Flowchart of the proposed decompression process
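The reconstruction in Step 3 amounts to a table lookup: each index selects a code word from the common code vector, which is reshaped back into its block. A minimal sketch (function name assumed, mirroring the blocking convention used in the compression sketch):

```python
import numpy as np

def reconstruct_channel(index, codebook, shape, block=4):
    """Rebuild a channel from its index matrix and the common code vector:
    each index picks a code word, reshaped into a block x block tile."""
    h, w = shape
    tiles = np.asarray(codebook, dtype=float)[np.asarray(index)].reshape(
        h // block, w // block, block, block)
    # Reassemble tiles into the full channel in row-major tile order.
    return tiles.swapaxes(1, 2).reshape(h, w)
```

Applying this to the Y, Cb, and Cr index matrices and converting back to RGB yields the decompressed color image.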
Experimental Result
The proposed method is designed to achieve a better compression ratio while keeping the visual quality of the decompressed image as close to the original as possible. This method is coded in MATLAB 2018 and applied to many standard images found in the literature and to images from the UCID v.2 color image database. The proposed method works in a de-correlated color model. The objective of this work is to create a common code vector from the individual code vectors of images generated after applying conventional vector quantization algorithms. The experimental results are compared with the existing vector quantization and modified vector quantization techniques. In the YCbCr color model, the luminance channel represents only image information, while the color of the image depends on the chrominance channels. As the luminance channel plays a vital role in the visual quality of the image, the two tuning parameters for the luminance channel are set to the smaller values (1, 2) to get better quality in the decompressed image, while for the chrominance channels they are set to higher values to achieve a better compression ratio. Figures 6a–g show the original image and the decompressed images using vector quantization [12–19], modified VQ (MVQ) [18], and the proposed method with tuning parameter (difference, limit) values (4, 6) and (3, 5) for the chrominance channels, taking 4 and 6 images together with block size 4 × 4.
Fig. 6 [Images not available. See PDF.]
a Original image; decompressed image using b vector quantization, c modified vector quantization (MVQ), d the proposed method taking 4 images together with difference = 4 and limit = 6, e the proposed method taking 4 images together with difference = 3 and limit = 5, f the proposed method taking 6 images together with difference = 4 and limit = 6, g the proposed method taking 6 images together with difference = 3 and limit = 5
From Fig. 6, it can be seen that the quality of the decompressed "baboon" image using the proposed method is almost similar to the quality of the decompressed images using VQ [12–19] and MVQ [18].
The performance of any image compression method is measured by two factors—the amount of space required to store the decompressed image and the quality of the decompressed image retained by the algorithm with reference to the original image. There are many ways of assessing the quality of a decompressed image, including peak signal to noise ratio (PSNR) [8, 18, 23–25] and structure similarity index parameter (SSIM) [21–25]. The details of the experimental results using SSIM, PSNR, and percentage of space reduction (compression ratio) are discussed below.
In the case of the vector quantization method, 256 clusters are considered. For modified vector quantization (MVQ) [18], all values of the luminance channel are clustered into eight groups (because the article reported that the initial grouping of the luminance channel into eight groups gives the optimized result). In the proposed method, experimental results are given for block sizes of 4 × 4.
Percentage of Space Reduction (Compression Ratio)
Comparative results in space reduction using vector quantization [12–19], modified vector quantization [18], and the proposed method taking 4 and 6 images together with block size 4 × 4 are shown in Tables 1 and 2, respectively. Here, the tuning parameters difference and limit for the luminance channel are set to 1 and 2, respectively; for the chrominance channels, they are set to (4, 6) and (3, 5). The percentage of space reduction using the proposed method taking 4 images together with (difference, limit) values (4, 6) lies between 94.17 and 96.09%, and with values (3, 5) it is 92.94–95.76%. For 6 images together it is 94.00–96.90% and 93.09–96.29%, respectively, which is much higher than the existing VQ (88.31–94.18%) and modified VQ (86.02–90.26%) algorithms.
Table 1. Space reduction using VQ, MVQ, and the proposed method taking 4 images together with block size 4 × 4
Image | Vector quantization | Modified vector quantization (MVQ) | Proposed method, difference = 4, limit = 6 | Proposed method difference = 3, limit = 5 | ||||
|---|---|---|---|---|---|---|---|---|
Total | % of space reduction | Total | % of space reduction | Total | % of space reduction | Total | % of space reduction | |
4.1.01. tiff (Girl) | 22,975 | 88.31 | 25,646 | 86.96 | 11,453 | 94.17 | 13,867 | 92.94 |
4.1.02. tiff (Couple) | 21,929 | 88.85 | 25,056 | 87.26 | 11,424 | 94.18 | 12,901 | 93.43 |
4.1.05. tiff (House) | 21,212 | 89.21 | 24,211 | 87.69 | 10,575 | 94.62 | 12,268 | 93.76 |
4.1.08.tiff (Jelly beans) | 19,772 | 89.94 | 19,146 | 90.26 | 9953 | 94.93 | 11,112 | 94.34 |
4.2.02.tiff (Tiffany) | 54,079 | 93.12 | 98,620 | 87.46 | 31,940 | 95.93 | 36,400 | 95.37 |
4.2.03.tiff (Baboon) | 58,172 | 92.60 | 109,920 | 86.02 | 44,057 | 94.39 | 49,053 | 93.76 |
4.2.07.tiff (Pepper) | 53,232 | 93.23 | 86,711 | 88.97 | 32,585 | 95.85 | 36,348 | 95.37 |
House.tiff | 45,758 | 94.18 | 83,667 | 89.36 | 29,957 | 96.19 | 33,316 | 95.76 |
ucid00006.tif | 42,174 | 92.84 | 75,319 | 87.23 | 26,799 | 95.46 | 30,777 | 94.78 |
ucid00007.tif | 47,456 | 91.95 | 82,998 | 85.93 | 33,854 | 94.26 | 37,135 | 93.70 |
ucid00008.tif | 41,521 | 92.96 | 71,691 | 87.85 | 25,664 | 95.64 | 28,051 | 95.24 |
ucid000028.tif | 40,607 | 93.11 | 62,469 | 89.41 | 23,099 | 96.09 | 24,963 | 95.76 |
Table 2. Space reduction using VQ, MVQ, and the proposed method taking 6 images together with block size 4 × 4
Image | Vector quantization | Modified vector quantization (MVQ) | Proposed method, difference = 4, limit = 6 | Proposed method, difference = 3, limit = 5 | ||||
|---|---|---|---|---|---|---|---|---|
Total | % of space reduction | Total | % of space reduction | Total | % of space reduction | Total | % of space reduction | |
4.1.01. tiff (Girl) | 22,975 | 88.31 | 25,646 | 86.96 | 11,279 | 94.26 | 13,571 | 93.09 |
4.1.02. tiff (Couple) | 21,929 | 88.85 | 25,056 | 87.26 | 11,250 | 94.27 | 12,605 | 93.58 |
4.1.04. tiff (Female) | 22,836 | 88.39 | 24,869 | 87.35 | 11,784 | 94.00 | 13,316 | 93.22 |
4.1.05. tiff (House) | 21,212 | 89.21 | 24,211 | 87.69 | 10,361 | 94.73 | 11,970 | 93.91 |
4.1.06. tiff (Tree) | 21,413 | 89.11 | 26,318 | 86.61 | 11,258 | 94.27 | 12,820 | 93.47 |
4.1.08.tiff (Jelly beans) | 19,772 | 89.94 | 19,146 | 90.26 | 9889 | 94.97 | 10,913 | 94.44 |
4.2.02.tiff (Tiffany) | 54,079 | 93.12 | 98,620 | 87.46 | 31,675 | 95.97 | 36,116 | 95.40 |
4.2.03.tiff (Baboon) | 58,172 | 92.60 | 109,920 | 86.02 | 43,792 | 94.43 | 48,589 | 93.82 |
4.2.06.tiff (Sailboat) | 55,322 | 92.97 | 93,319 | 88.13 | 36,583 | 95.34 | 39,399 | 94.99 |
4.2.07.tiff (Pepper) | 53,232 | 93.23 | 86,711 | 88.97 | 32,423 | 95.87 | 36,064 | 95.41 |
House.tiff | 45,758 | 94.18 | 83,667 | 89.36 | 29,944 | 96.19 | 33,200 | 95.77 |
4.2.05.tiff (Jet) | 49,313 | 93.73 | 85,668 | 89.11 | 24,346 | 96.90 | 29,119 | 96.29 |
ucid00006.tif | 42,174 | 92.84 | 75,319 | 87.23 | 26,716 | 95.47 | 30,031 | 94.90 |
ucid00007.tif | 47,456 | 91.95 | 82,998 | 85.93 | 33,771 | 94.27 | 36,989 | 93.72 |
ucid00008.tif | 41,521 | 92.96 | 71,691 | 87.85 | 25,581 | 95.66 | 27,905 | 95.26 |
ucid000028.tif | 40,607 | 93.11 | 62,469 | 89.41 | 23,016 | 96.09 | 24,817 | 95.79 |
ucid00031.tif | 40,882 | 93.06 | 71,014 | 87.96 | 26,414 | 95.52 | 28,847 | 95.10 |
ucid00036.tif | 47,039 | 92.02 | 84,100 | 85.74 | 32,251 | 94.53 | 36,177 | 93.86 |
Structure Similarity Index Parameter (SSIM)
The SSIM [21–24] is used to measure the similarity between two images. When comparing compression algorithms, a higher SSIM [21–25] indicates a higher quality of the reconstructed image. The SSIM between two images $x$ and $y$ is measured by Eq. 1:

$$\mathrm{SSIM}(x,y)=\frac{(2\mu_x\mu_y+c_1)(2\sigma_{xy}+c_2)}{(\mu_x^2+\mu_y^2+c_1)(\sigma_x^2+\sigma_y^2+c_2)} \qquad (1)$$

where $\mu_x$ is the average of $x$, $\mu_y$ the average of $y$, $\sigma_x^2$ the variance of $x$, $\sigma_y^2$ the variance of $y$, $\sigma_{xy}$ the covariance of $x$ and $y$, and $c_1=(k_1 L)^2$ and $c_2=(k_2 L)^2$ the two variables that stabilize the division with a weak denominator, with $L$ the dynamic range of the pixel values and $k_1=0.01$, $k_2=0.03$ by default.

The SSIM [21–25] between the original images and those decompressed using the proposed method, taking 4 and 6 images together with block size 4 × 4 and tuning parameter (difference, limit) values (4, 6) and (3, 5) for the chrominance channels, is shown in Tables 3 and 4, respectively. For 4 images together with (difference, limit) values (4, 6) and (3, 5), it lies between 0.7654–0.9635 and 0.7650–0.9734, respectively, whereas for 6 images together it is 0.7523–0.9596 and 0.7519–0.9732, respectively. For the existing vector quantization (VQ) [12–19] and modified vector quantization (MVQ) [18], it is 0.7725–0.9793 and 0.8990–0.9828, respectively, which is slightly higher than the proposed method.
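The SSIM of Eq. 1 can be computed as below. Note this is a single-window simplification evaluated over the whole image; the SSIM implementations cited in [21–25] typically average the statistic over sliding windows. Function name and defaults are assumptions.

```python
import numpy as np

def ssim_global(x, y, L=255, k1=0.01, k2=0.03):
    """Eq. 1 evaluated once over the full image (no sliding window)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2        # stabilizing constants
    mx, my = x.mean(), y.mean()                  # means
    vx, vy = x.var(), y.var()                    # variances
    cov = ((x - mx) * (y - my)).mean()           # covariance
    return ((2 * mx * my + c1) * (2 * cov + c2) /
            ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For identical images the statistic is exactly 1; it decreases as structure diverges.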
Table 3. SSIM using VQ, MVQ, and the proposed method taking 4 images together with block size 4 × 4
Image name | Vector quantization | Modified vector quantization | Proposed method, difference = 4, limit = 6 | Proposed method, difference = 3, limit = 5 |
|---|---|---|---|---|
4.1.01. tiff (Girl) | 0.8936 | 0.9048 | 0.8722 | 0.8713 |
4.1.02. tiff (Couple) | 0.8987 | 0.9169 | 0.8638 | 0.8669 |
4.1.05. tiff (House) | 0.9553 | 0.9641 | 0.9508 | 0.9539 |
4.1.08.tiff (Jelly Beans) | 0.9793 | 0.9828 | 0.9592 | 0.9734 |
4.2.02.tiff (Tiffany) | 0.9660 | 0.9688 | 0.9591 | 0.9624 |
4.2.03.tiff (Baboon) | 0.8067 | 0.8996 | 0.8032 | 0.8036 |
4.2.07.tiff (Pepper) | 0.9689 | 0.9727 | 0.9635 | 0.9661 |
House.tiff | 0.9163 | 0.9434 | 0.8974 | 0.9068 |
ucid00006.tif | 0.7725 | 0.8990 | 0.7654 | 0.7650 |
ucid00007.tif | 0.8374 | 0.9271 | 0.8325 | 0.8326 |
ucid00008.tif | 0.8624 | 0.9280 | 0.8512 | 0.8546 |
ucid000028.tif | 0.8825 | 0.9184 | 0.8277 | 0.8655 |
Table 4. SSIM using VQ, MVQ, and the proposed method taking 6 images together with block size 4 × 4
Image name | Vector quantization | Modified vector quantization | Proposed method, difference = 4, limit = 6 | Proposed method, difference = 3, limit = 5 |
|---|---|---|---|---|
4.1.01. tiff (Girl) | 0.8936 | 0.9048 | 0.8722 | 0.8713 |
4.1.02. tiff (Couple) | 0.8987 | 0.9169 | 0.8638 | 0.8669 |
4.1.04. tiff (Female) | 0.9518 | 0.9517 | 0.9312 | 0.9429 |
4.1.05. tiff (House) | 0.9553 | 0.9641 | 0.9508 | 0.9539 |
4.1.06. tiff (Tree) | 0.8984 | 0.9337 | 0.8867 | 0.8915 |
4.1.08.tiff (Jelly Beans) | 0.9793 | 0.9828 | 0.9596 | 0.9732 |
4.2.02.tiff (Tiffany) | 0.9660 | 0.9688 | 0.9591 | 0.9624 |
4.2.03.tiff (Baboon) | 0.8067 | 0.8996 | 0.8032 | 0.8036 |
4.2.06.tiff (Sailboat) | 0.9087 | 0.9295 | 0.9000 | 0.9027 |
4.2.07.tiff (Pepper) | 0.9689 | 0.9727 | 0.9633 | 0.9660 |
House.tiff | 0.9163 | 0.9434 | 0.8969 | 0.9067 |
4.2.05.tiff (Jet image) | 0.9060 | 0.9282 | 0.8599 | 0.8897 |
ucid00006.tif | 0.7725 | 0.8990 | 0.7654 | 0.7650 |
ucid00007.tif | 0.8374 | 0.9271 | 0.8325 | 0.8326 |
ucid00008.tif | 0.8624 | 0.9280 | 0.8512 | 0.8546 |
ucid000028.tif | 0.8825 | 0.9184 | 0.8277 | 0.8655 |
ucid00031.tif | 0.8715 | 0.9323 | 0.8438 | 0.8601 |
ucid00036.tif | 0.7556 | 0.8845 | 0.7523 | 0.7519 |
Peak Signal to Noise Ratio (PSNR)
Peak signal to noise ratio (PSNR) [8, 18, 21–25] is another technique used for measuring the quality of the decompressed image; a higher PSNR indicates a better-quality decompressed image with respect to the original. PSNR is defined in terms of the MSE by Eq. 2:

$$\mathrm{PSNR}=10\log_{10}\!\left(\frac{\mathrm{MAX}_f^2}{\mathrm{MSE}}\right) \qquad (2)$$

where the mean squared error (MSE) [6] is defined as

$$\mathrm{MSE}=\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\bigl[I(i,j)-I_c(i,j)\bigr]^2 \qquad (3)$$

in which $I$ and $I_c$ represent the matrix data of the original image and the decompressed image, respectively; $M$ and $N$ represent the number of rows and columns, respectively; and $\mathrm{MAX}_f$ is the maximum signal value that exists in the original image. The memory space reduction percentage is calculated using Eq. 4:

$$\text{Space reduction}=\frac{S-S_c}{S}\times 100\,\% \qquad (4)$$

where $S$ is the space required to store the uncompressed image and $S_c$ is the space required to store the compressed image.

Tables 5 and 6 show the comparative study of peak signal to noise ratio (PSNR) [8, 18, 21–25] between the original image and the decompressed image using conventional vector quantization (VQ) [12–19], modified vector quantization (MVQ) [18], and the proposed method taking 4 and 6 images together with block size 4 × 4, keeping the two tuning parameter (difference, limit) values at (4, 6) and (3, 5) for the chrominance channels. From the experimental results, it is observed that the quality of the image in terms of PSNR [8, 18, 21–25] using the proposed method is slightly lower than that of the existing VQ [12–19] and MVQ [18] image compression techniques.
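Equations 2–4 translate directly into code. A minimal sketch (function names are assumptions; MAX_f defaults to 255 for 8-bit channels):

```python
import numpy as np

def mse(original, decompressed):
    """Eq. 3: mean squared error between two equally sized images."""
    o = np.asarray(original, float)
    d = np.asarray(decompressed, float)
    return ((o - d) ** 2).mean()

def psnr(original, decompressed, max_f=255.0):
    """Eq. 2: PSNR in dB; infinite for a perfect reconstruction."""
    e = mse(original, decompressed)
    return float("inf") if e == 0 else 10 * np.log10(max_f ** 2 / e)

def space_reduction(uncompressed_size, compressed_size):
    """Eq. 4: percentage of storage saved by compression."""
    return (uncompressed_size - compressed_size) / uncompressed_size * 100
```

For the tables above, these measures are computed per channel (Y, Cb, Cr) and per image.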
Table 5. PSNR using VQ, MVQ, and the proposed method taking 6 images together with block size 4 × 4
Image name | Vector quantization | Modified vector quantization (MVQ) | Proposed method, difference = 4, limit = 6 | Proposed method, difference = 3, limit = 5 | ||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Y | Cb | Cr | Y | Cb | Cr | Y | Cb | Cr | Y | Cb | Cr | |
4.1.01. tiff (Girl) | 35.55 | 38.43 | 36.98 | 36.13 | 38.50 | 36.93 | 33.50 | 37.43 | 35.98 | 33.49 | 37.68 | 36.57 |
4.1.02. tiff (Couple) | 34.26 | 39.99 | 38.87 | 36.88 | 40.00 | 38.83 | 34.13 | 38.58 | 37.24 | 34.17 | 39.07 | 38.02 |
4.1.04. tiff (Female) | 35.25 | 37.65 | 35.83 | 36.32 | 37.59 | 35.74 | 35.19 | 35.49 | 34.98 | 35.18 | 36.78 | 35.26 |
4.1.05. tiff (House) | 34.44 | 37.08 | 35.53 | 37.27 | 37.10 | 35.55 | 34.24 | 36.38 | 34.70 | 34.25 | 36.70 | 35.34 |
4.1.06. tiff (Tree) | 28.69 | 36.84 | 33.54 | 33.06 | 36.80 | 33.49 | 28.68 | 35.80 | 33.09 | 28.69 | 36.16 | 33.32 |
4.1.08.tiff (Jelly Beans) | 37.37 | 38.03 | 37.13 | 39.04 | 37.90 | 37.09 | 35.19 | 36.09 | 35.79 | 35.20 | 37.46 | 36.39 |
4.2.02.tiff (Tiffany) | 34.11 | 34.90 | 36.74 | 36.65 | 34.84 | 36.78 | 33.98 | 33.97 | 36.09 | 34.04 | 34.39 | 36.40 |
4.2.03.tiff (Baboon) | 25.25 | 30.84 | 31.76 | 31.08 | 30.83 | 31.73 | 25.25 | 30.61 | 31.45 | 25.25 | 30.73 | 31.62 |
4.2.06.tiff (Sailboat) | 29.68 | 34.38 | 32.44 | 33.41 | 34.36 | 32.38 | 29.67 | 33.82 | 32.08 | 29.66 | 34.07 | 32.24 |
4.2.07.tiff (Pepper) | 32.70 | 35.20 | 34.72 | 35.10 | 35.19 | 34.61 | 32.62 | 34.15 | 33.88 | 32.63 | 34.66 | 34.28 |
House.tiff | 30.22 | 36.42 | 33.47 | 34.98 | 36.38 | 33.41 | 30.16 | 35.29 | 32.83 | 30.17 | 35.78 | 33.16 |
4.2.05.tiff (Jet) | 32.45 | 37.75 | 37.55 | 36.66 | 37.69 | 37.51 | 32.37 | 36.97 | 36.24 | 32.37 | 37.31 | 37.15 |
ucid00006.tif | 25.82 | 36.11 | 35.53 | 30.76 | 36.89 | 36.34 | 25.01 | 36.05 | 35.75 | 25.02 | 36.23 | 35.93 |
ucid00007.tif | 25.06 | 33.76 | 34.37 | 31.07 | 34.70 | 35.61 | 23.97 | 34.18 | 35.11 | 23.97 | 34.36 | 35.21 |
ucid00008.tif | 28.34 | 37.65 | 37.39 | 32.27 | 38.27 | 38.11 | 27.42 | 37.11 | 36.84 | 27.42 | 37.56 | 37.47 |
ucid000028.tif | 30.37 | 40.52 | 39.25 | 33.53 | 41.40 | 40.12 | 29.44 | 38.97 | 37.88 | 29.44 | 40.15 | 39.12 |
ucid00031.tif | 27.26 | 36.53 | 35.46 | 31.91 | 37.68 | 36.30 | 26.22 | 36.25 | 35.48 | 26.22 | 36.92 | 35.99 |
ucid00036.tif | 27.03 | 35.95 | 34.24 | 31.08 | 36.46 | 34.60 | 26.21 | 35.65 | 34.13 | 26.21 | 36.07 | 34.31 |
Table 6. PSNR using VQ, MVQ, and the proposed method taking 4 images together with block size 4 × 4
Image name | Vector quantization | Modified vector quantization (MVQ) | Proposed method, difference = 4, limit = 6 | Proposed method, difference = 3, limit = 5 | ||||||||
|---|---|---|---|---|---|---|---|---|---|---|---|---|
Y | Cb | Cr | Y | Cb | Cr | Y | Cb | Cr | Y | Cb | Cr | |
4.1.01. tiff (Girl) | 35.55 | 38.43 | 36.98 | 36.13 | 38.50 | 36.93 | 33.50 | 37.43 | 35.98 | 33.49 | 37.68 | 36.57 |
4.1.02. tiff (Couple) | 34.26 | 39.99 | 38.87 | 36.88 | 40.00 | 38.83 | 34.13 | 38.58 | 37.24 | 34.17 | 39.07 | 38.02 |
4.1.05. tiff (House) | 34.44 | 37.08 | 35.53 | 37.27 | 37.10 | 35.55 | 34.24 | 36.38 | 34.70 | 34.20 | 36.70 | 35.34 |
4.1.08.tiff (Jelly Beans) | 37.37 | 38.03 | 37.13 | 39.04 | 37.90 | 37.09 | 35.25 | 36.08 | 35.78 | 35.20 | 37.46 | 36.39 |
4.2.02.tiff (Tiffany) | 34.11 | 34.90 | 36.74 | 36.65 | 34.84 | 36.78 | 33.98 | 33.97 | 36.09 | 34.04 | 34.39 | 36.40 |
4.2.03.tiff (Baboon) | 25.25 | 30.84 | 31.76 | 31.08 | 30.83 | 31.73 | 25.25 | 30.61 | 31.45 | 25.25 | 30.73 | 31.62 |
4.2.07.tiff (Pepper) | 32.70 | 35.20 | 34.72 | 35.10 | 35.19 | 34.61 | 32.62 | 34.18 | 33.88 | 32.64 | 34.67 | 34.28 |
House.tiff | 30.22 | 36.42 | 33.47 | 34.98 | 36.38 | 33.41 | 30.17 | 35.29 | 32.84 | 30.18 | 35.78 | 33.16 |
ucid00006.tif | 25.82 | 36.11 | 35.53 | 30.76 | 36.89 | 36.34 | 25.01 | 36.05 | 35.70 | 25.02 | 36.23 | 35.93 |
ucid00007.tif | 25.06 | 33.76 | 34.37 | 31.07 | 34.70 | 35.61 | 23.97 | 34.18 | 35.11 | 23.97 | 34.36 | 35.21 |
ucid00008.tif | 28.34 | 37.65 | 37.39 | 32.27 | 38.27 | 38.11 | 27.42 | 37.11 | 36.84 | 27.42 | 37.56 | 37.47 |
ucid000028.tif | 30.37 | 40.52 | 39.25 | 33.53 | 41.40 | 40.12 | 29.44 | 38.97 | 37.88 | 29.44 | 40.15 | 39.12 |
Figure 7 shows the average compression ratio achieved using conventional vector quantization [12–19], modified vector quantization (MVQ) [18], and the proposed method taking 4 and 6 images together with block size 4 × 4, keeping the tuning parameters difference and limit for the luminance channel at 1 and 2, respectively, and for the chrominance channels at (4, 6) and (3, 5), respectively. From Fig. 7, it is clearly observed that the compression ratio achieved using the proposed method is much higher than that of the two existing compression techniques.
Fig. 7 [Images not available. See PDF.]
Average space reduction using VQ, MVQ, and the proposed method taking 4 and 6 images together with block size 4 × 4. B denotes the number of images taken together
Therefore, based on the experimental results, the proposed multi-image compression technique achieves an increase in compression ratio of 3.51–7.42% over the existing vector quantization and modified vector quantization techniques, with only a slight reduction in peak signal to noise ratio (PSNR) and structure similarity index parameter (SSIM).
Conclusion
This article proposes an image compression method where multiple images of the same or different sizes are compressed together to achieve a better compression ratio. The method is applicable to de-correlated color models. The objective of this work is to achieve a higher compression ratio by using one common code vector instead of the individual code vectors of the images. The size of the common code vector is far less than the total size of the individual code vectors, so the proposed method achieves a much higher compression ratio compared to the conventional vector quantization and modified vector quantization techniques. The proposed method is applied to many standard images found in the literature and to images from the UCID v.2 database. The experimental results are analyzed in terms of compression ratio (CR), structure similarity index parameter (SSIM), and peak signal to noise ratio (PSNR). They show that the proposed method achieves 3.51–7.42% higher compression than the existing image compression techniques, while keeping the visual quality of the decompressed image almost the same or only slightly lower. Further work may focus on improving the visual quality of the image while maintaining the same compression ratio.
Funding
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors.
Declarations
Conflict of interest
The authors declare that there is no conflict of interest.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
1. Gonzalez, RC; Woods, RE; Eddins, SL. Digital image processing using MATLAB; 2011; McGraw-Hill.
2. Gan, G; Ma, C; Wu, J. Data clustering theory, algorithms and applications; 2007; SIAM: [DOI: https://dx.doi.org/10.1137/1.9780898718348]zbMath ID: 1185.68274
3. Jain, AK; Dubes, RC. Algorithms for clustering data; 2004; Prentice-Hall:zbMath ID: 0665.62061
4. Kil DH, Shin FB. Reduced dimension image compression and its applications. In: Proc. of Int. Conference Image Processing 3, 1995; pp. 500–503.
5. Li, CK; Yuen, H. A high-performance image compression technique for multimedia applications. IEEE Trans Consum Electron; 1996; 42,
6. Barman, D; Hasnat, A; Barman, B. An enhanced technique to improve the performance of multi-image compression technique. Advanced computing and intelligent technologies; 2022; Springer: [DOI: https://dx.doi.org/10.1007/978-981-19-2980-9_25]
7. Hasnat, A; Barman, D; Barman, B. Luminance approximated vector quantization algorithm to retain better image quality of the decompressed image. Multimed Tools Appl; 2021; 80,
8. Hasnat, A; Barman, D. A proposed multi-image compression technique. J Intell Fuzzy Syst; 2019; 36,
9. Avcibas, I; Memon, N; Sayood, K. A progressive lossless/near lossless image compression algorithm. IEEE Signal Process Lett; 2002; 9,
10. Hussain, AJ; Fayadh, AA; Radi, N. Image compression techniques: a survey in lossless and lossy algorithm. Neurocomputing; 2018; 300, pp. 44-69. [DOI: https://dx.doi.org/10.1016/j.neucom.2018.02.094]
11. Kim S, Cho NI. A lossless color image compression method based on a new reversible color transform. In: Proc. of IEEE Int. Conference on Visual Communications and Image Processing 2012. https://doi.org/10.1109/VCIP.2012.6410808.
12. Linde, Y; Buzo, A; Gray, RM. An algorithm for vector quantizer design. IEEE Trans Commun COM; 1980; 28,
13. Gray, RM. Vector quantization. IEEE ASSP Mag; 1984; 1,
14. Wenhua, L; Salari, E. A fast vector quantization encoding method for image compression. IEEE Trans Circuits Syst Video Technol; 1995; 5,
15. Thepade SD, Mhaske V, Kurhade V. New Clustering Algorithm for Vector Quantization using Slant Transform. In: ICETACS, St. Anthony's College, Shillong, India, 2013. https://doi.org/10.1109/ICETACS.2013.6691415.
16. Mahapatra DK, Jena UR. Partitional K-means clustering based hybrid DCT-vector quantization for image compression. In: IEEE Conference on ICT, Noorul Islam University Thuckalay, Tamil Nadu, India, 2013. https://doi.org/10.1109/CICT.2013.6558278.
17. Leitao, HAS; Lopes, WTA; Madeiro, F. PSO algorithm applied to codebook design for channel-optimized vector quantization. IEEE Lat Am Trans; 2015; 13,
18. Hasnat, A; Barman, D; Halder, S; Bhattacharjee, D. Modified vector quantization algorithm to overcome the blocking artefact problem of vector quantization algorithm. IOSP; 2017; 32,
19. Rajini, H. Efficient image compression technique based on vector quantization using social spider optimization algorithm. Int J Innov Technol Explor Eng; 2019; 8,
20. Wu MT. Efficient reduction of artifact effect based on power and entropy measures. In: IEEE Int. Conference on Fuzzy System and Knowledge Discovery (FSKD), 2015. https://doi.org/10.1109/FSKD.2015.7382241.
21. Charrier, C; Knoblauch, K; Maloney, LT; Bovik, AC; Moorthy, AK. Optimizing multiscale SSIM for compression via mlds. IEEE Trans Image Process; 2012; 21,
22. Saad, MA; Bovik, AC; Charrier, C. Blind image quality assessment: a natural scene statistics approach in the dct domain. IEEE Trans Image Process; 2012; 21,
23. Al-Najjar, YAY; Soong, DC. Comparison of image quality assessment: PSNR, HVS, SSIM, UIQI. Int J Sci Eng Res; 2012; 3,
24. Sara, U; Akter, M; Uddin, MS. Image quality assessment through FSIM, SSIM, MSE and PSNR—a comparative study. J Comp Commun; 2019; 7,
25. Mandal, JK. Reversible steganography and authentication via transform encoding; 2020; Springer: [DOI: https://dx.doi.org/10.1007/978-981-15-4397-5] ISBN: 9789811543975
© The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.