Introduction
In today's interconnected world, the secure and efficient sharing of medical images is essential for facilitating accurate diagnosis, treatment, healthcare management, telemedicine, and collaborative research [1]. However, traditional methods of data sharing in the healthcare domain often face challenges related to data integrity, privacy, and scalability [2–6]. The sensitive nature of medical data, along with regulations such as HIPAA in the US and GDPR in Europe, requires a strong and secure framework for sharing medical images [7, 8]. Because of the strict constraints these regulations impose on the protection of patient data, the security issues surrounding medical image sharing must be addressed. The current practice of sharing medical images via centralized servers raises significant privacy concerns, as centralized servers are vulnerable to data breaches and unauthorized access that can lead to identity theft, discrimination, or even extortion [9, 10].
One solution to these challenges is the implementation of a secure and efficient image-sharing platform. Such a platform can utilize blockchain technology to ensure feature map integrity and protect patients' privacy [11]. By securely transmitting and storing medical images, healthcare professionals can access them remotely and collaborate easily, leading to more accurate diagnoses and timely treatment decisions. Additionally, a scalable image-sharing platform can handle a large volume of medical images, allowing for efficient management of healthcare data. This can improve the overall efficiency of healthcare systems and reduce costs [12, 13]. Furthermore, a well-designed image-sharing platform can integrate with existing healthcare systems, enabling seamless interoperability by allowing healthcare professionals to access and view medical images directly within their electronic health records (EHR) or other clinical systems. Overall, by adopting a decentralized image-sharing approach, healthcare organizations can significantly reduce the risk of privacy breaches, protect patient confidentiality, and maintain trust in the healthcare system [14, 15].
Blockchain technology has emerged as a promising solution for enhancing the security and integrity of data across various domains, including healthcare. By providing a decentralized and immutable ledger, blockchain offers a robust mechanism for ensuring data integrity and traceability [16]. In a blockchain, each block contains a hash of the preceding block, creating a chain of interlinked blocks. This interconnected structure makes it exceptionally difficult to alter any information within a block without necessitating changes in all subsequent blocks—an operation that is both computationally expensive and impracticable. Hashing plays a fundamental role in ensuring data integrity within a blockchain [17, 18]. Modifying data within a block alters its content, leading to a completely different hash value. Leveraging blockchain for medical image sharing comes with its own set of challenges [19], particularly in terms of scalability and ensuring the authenticity of images in the face of compression or pre-processing [20].
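To make the chaining mechanism concrete, the following minimal Python sketch (an illustration with simplified block fields of our own choosing, not the implementation described in this paper) shows how altering one block invalidates the link stored in its successor:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministically hash a block's contents with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Build a toy three-block chain: each block stores the previous block's hash.
genesis = {"index": 0, "data": "genesis", "prev_hash": "0" * 64}
block1 = {"index": 1, "data": "tx-A", "prev_hash": block_hash(genesis)}
block2 = {"index": 2, "data": "tx-B", "prev_hash": block_hash(block1)}

# Tampering with block1's data produces a different hash, so the link
# recorded in block2 no longer matches.
tampered = dict(block1, data="tx-A-altered")
assert block2["prev_hash"] == block_hash(block1)
assert block2["prev_hash"] != block_hash(tampered)
```

Because `block2` records `block1`'s hash, rewriting `block1` would force recomputing every later block, which is precisely what makes tampering computationally expensive.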
Furthermore, medical images are high-resolution images that require significant storage and high bandwidth to transfer through a secure channel [21]. Moreover, these raw images can take a very long time to transmit, especially over limited bandwidth connections. Compressing medical images significantly reduces file size, saves valuable storage on servers, and enables faster transmission [22]. Traditionally, images are shared through blockchain by storing image data in blocks, which are then added to a decentralized chain. Each block contains a cryptographic hash of the previous block, ensuring data integrity and security. However, compressing images before blockchain storage for storage optimization and faster data transmission can alter their hash, and this could compromise data integrity [23, 24]. In consideration of this limitation, our research uses the subject sensitive hashing (SSH) technique to generate hash values. The fundamental principle of our proposed framework lies in the integration of this hashing with blockchain technology. SSH is a novel approach to hashing that generates content-based hashes for images, ensuring the authenticity and integrity of images even in the face of compression or pre-processing. This enables the transmission of compressed images through a secure channel while maintaining feature map integrity.
The primary purpose of our proposed method is to enable secure, efficient, and scalable sharing of medical images while addressing key challenges such as data integrity, privacy, and transmission efficiency [25]. By leveraging SSH, the method generates content-based hash values that are robust to compression and preprocessing [26]. This ensures that the integrity of the image features is maintained, which is critical for medical diagnostics and research. The use of SHA-256 hashing combined with blockchain's decentralized nature ensures that sensitive medical data remains secure and tamper-proof during sharing and storage [27]. The integration of the InterPlanetary File System (IPFS) with blockchain provides a scalable solution for decentralized medical image sharing [28], meeting the increasing demands of healthcare data. Our proposed method is practically significant as it tackles real-world issues in healthcare data management, particularly in remote and resource-constrained environments. Furthermore, the decentralized nature of blockchain technology eliminates reliance on central servers, reducing vulnerabilities to single points of failure and ensuring high availability. This is particularly advantageous for rural healthcare networks and cross-border telemedicine initiatives, where infrastructure may be less robust. Ultimately, this approach not only enhances the accessibility and efficiency of medical image sharing but also strengthens patient trust by ensuring data privacy and security.
Key Challenges
The secure and efficient sharing of medical images lies at the heart of modern healthcare. However, the traditional approaches to medical image sharing face several critical challenges:
- Feature Map Integrity: Ensuring the authenticity and trustworthiness of medical images' clinically relevant features is paramount. Any unauthorized modifications or alterations can lead to misdiagnosis and incorrect treatment, potentially having severe consequences for patients.
- Privacy and Confidentiality: Medical images contain highly sensitive personal information. Maintaining patient privacy is both a legal requirement and an ethical imperative. Secure systems are needed to protect this data from breaches or unauthorized disclosures.
- Scalability: Centralized storage solutions for medical images can reach their limits as demand and volume increase. Healthcare systems require scalable and flexible solutions capable of handling vast amounts of imaging data.
- Verification After Manipulation: Compression, a common practice for efficient image storage and transmission, can alter traditional hash values. Maintaining feature map integrity even after routine image manipulation poses a significant challenge.
Key Contributions
To secure medical images, address the traditional blockchain's vulnerability to data updates, and build a more resilient and flexible system that can handle necessary modifications without sacrificing feature map integrity or requiring excessive processing power, we use blockchain with a novel hashing technique. The main contributions of this paper are summarized as follows:
- Novel Hashing for High-Resolution Images: Our proposed model introduces a new way of generating image hashes specifically designed for high-resolution medical images.
- Preserving Feature Map Integrity: Our proposed method demonstrably preserves the integrity of high-resolution images' feature maps, meaning any modifications or manipulations would be detectable through changes in the generated hash. This safeguards against unauthorized tampering and enhances image authenticity.
- Integration with Blockchain: Our novel hashing technique is integrated with blockchain technology for ensuring the immutability and transparency of image data, as any alterations would be reflected in the blockchain, fostering trust and accountability.
- Robustness: The robustness of our method indicates its ability to withstand potential disruptions or attacks. This is vital for ensuring the method's reliability and long-term viability in real-world applications.
Background
Blockchain
In today's technological world, blockchain is a revolutionary concept: a decentralized system that changes how transactions are conducted and verified [29, 30]. In a traditional system, transferring money from one place to another requires an intermediary such as a bank; blockchain eliminates this need by letting users transact directly over a peer-to-peer network. Decentralization is essential in blockchain, as it increases trust and transparency among the participants in the network. When users participate in a transaction, it is not confined to just the sender and receiver; it becomes accessible to all participants, or peers, in the network. No single user can tamper with the transaction data without others being aware; transparency ensures this integrity [31]. Each transaction goes into a block, which enhances trust in the blockchain. Within a single block, multiple ledger entries of transactions can be included. To connect these blocks, a unique code known as a hash is generated using a hashing algorithm. Adding a block to the blockchain is not a straightforward process; it involves a group known as miners, who validate transactions and add them to the blockchain.
Blockchain technology has evolved to include different types of blockchains, such as public, private, and federated [32]. A public blockchain is open to anyone, and anybody can read and write on it. In contrast, a private blockchain is restricted to a single company and limits its accessibility to a specific group of users. A federated blockchain falls in between, being open to a selected group of people or companies. Public blockchains typically exhibit lower speed than private ones due to their decentralized nature. Trust in participants also differs between public and private blockchains, with participants in private blockchains generally being trusted [33].
Cryptography
Cryptography ensures the security and privacy of transactions, which is a fundamental aspect of blockchain [34]. In a blockchain network, data transfer happens peer-to-peer, similar to the decentralized sharing seen on torrent networks. To keep data safe in a blockchain, information must be converted into a format that only the intended sender and receiver can read. Four major concerns in blockchain are confidentiality, integrity, non-repudiation, and authentication [35]. Confidentiality ensures that only authorized users in the network can view a transaction, which maintains the privacy of sensitive information. Integrity ensures that only authorized persons can modify the information. Non-repudiation ensures that there exists proof of the sender and receiver in a transaction. Authentication ensures that only authorized peers can access and conduct transactions, preventing anonymous or unauthorized activity. Two types of cryptography are used in blockchain: symmetric key cryptography and asymmetric key cryptography [36]. Symmetric key cryptography uses the same key for encryption and decryption, which requires a unique key for each pair of peers; handling millions of nodes with different key pairs is not feasible. Asymmetric key cryptography, on the other hand, uses a pair of public and private keys. In a transaction between A and B, the message is encrypted using B's public key and decrypted using B's private key, which ensures confidentiality.
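The defining property of symmetric cryptography, that a single shared key both encrypts and decrypts, can be illustrated with a deliberately insecure toy XOR cipher (for intuition only; production systems use algorithms such as AES, not this construction):

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with a repeating key.
    Illustration only -- NOT secure; real systems use AES or similar."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"
plaintext = b"transaction between A and B"
ciphertext = xor_cipher(plaintext, key)

# The same key both encrypts and decrypts -- the defining property of
# symmetric cryptography, and the reason each pair of peers needs its
# own unique shared key.
assert xor_cipher(ciphertext, key) == plaintext
assert ciphertext != plaintext
```

Asymmetric cryptography removes the shared-key requirement by splitting the key into a public half (for encryption) and a private half (for decryption), at the cost of more expensive operations.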
Subject Sensitive Hashing
SSH in blockchain is a technique that increases privacy by verifying content without revealing it in its entirety. In the context of blockchain, this hashing method allows users to share specific transaction information while keeping the rest of the information private [26]. It achieves this by generating a hash that is unique to the selected information, ensuring privacy and confidentiality within the blockchain network. It is akin to sharing only the essential details of a transaction rather than the entire record, which adds a layer of privacy to the blockchain.
Related Studies
In recent years, several studies in digital image security, blockchain technology, and machine learning have significantly advanced our understanding and capabilities. Various research efforts with different methods have aimed to ensure the integrity, privacy, and traceability of data across various domains. For instance, Koptyra et al.'s [37] introduction of a method to linearly structure digital images using SHA-256 hash values represents a foundational step towards embedding data blocks directly within images. This technique not only removes the need for an external ledger but also facilitates rapid detection of image modifications, which enhances the integrity and traceability of image sequences. This approach underscores the potential of hash functions in maintaining a robust linkage between sequential images without relying on conventional repositories. In contrast, Ding et al. [38] focus on the realm of remote sensing (RS) image security through the development of AAU-Net, a novel architecture that advances subject-sensitive hashing by incorporating attention mechanisms and asymmetry. This innovation addresses the limitations of existing algorithms, particularly in recognizing subtle, subject-related tampering and handling JPEG compression. Despite its limitations, AAU-Net's contribution lies in its ability to improve RS image integrity, highlighting the need for heightened security measures. In their research, Wu et al. [39] propose eliminating the need for a centralized server and leveraging smart contracts through blockchain technology. The incorporation of conditional proxy re-encryption (C-PRE) provides a secure and flexible approach to sharing pathological data. Additionally, ciphertext equality tests enable secure messaging without the need for decryption during the sharing process, further enhancing data security in their model. Further expanding the application of blockchain technology, Sultana et al. 
[25] introduced a framework that combines blockchain with zero-trust models to secure medical data and image transfer/storage. This framework shows the utility of blockchain in ensuring data integrity and restricting access to authenticated users, which enhances the security of sensitive medical information. The use of tools like Ganache and Metamask in their methodology underscores the practical aspects of implementing blockchain solutions in real-world settings. The contributions of Mohsan et al. [40], Tsai et al. [41], and Zhang et al. [42] further demonstrate blockchain's applicability across various domains, from patient-centric medical data management to copyright protection of digital images and smart contract vulnerability detection. Each study reveals the multifaceted potential of blockchain technology in enhancing data security, privacy, and integrity across diverse fields.
Our research aims to resolve these gaps in Table 1 by proposing a novel authentication method that combines blockchain technology with SSH to ensure the security, integrity, and robustness of high-resolution medical image data, even in the face of legitimate changes. By doing so, we contribute to the growing body of knowledge in the field of secure data storage and image verification, with a focus on the unique security demands of medical imaging within healthcare applications.
TABLE 1 Benchmark analysis of hashing methods between related studies.
Title | Hashing method | Image encryption | Access control | Data integrity | Authentication | Universal hash |
Zhang et al. [43] | Perceptual hashing | Yes | Public | No | No | Yes |
Ding et al. [38] | SSH | No | Private | No | No | No |
Kleinberg et al. [44] | SSH | No | Private | No | Yes | No |
Koptyra et al. [37] | Regular hashing | No | Private | No | Yes | No |
Mohsan et al. [40] | Regular hashing | No | Public | No | Yes | No |
Alnuaimi et al. [45] | Regular hashing | No | Private | No | Yes | No |
Yu et al. [46] | Regular hashing | No | Private | No | Yes | No |
Proposed method | SSH | Yes | Private | Yes | Yes | Yes |
Methodology
Traditional methods simply take the compressed, encrypted file and compute a single hash. However, this approach can be problematic in real healthcare scenarios, where images are frequently re-compressed or re-encrypted. Even minor adjustments in compression or encryption parameters alter the file's bits, causing the file-level hash to change even though the clinical content remains the same. To address this limitation, we propose a novel hashing approach that derives cryptographic hashes from the feature maps of compressed images rather than from the files themselves. The whole decentralized medical image sharing (DMIS) process is divided into two phases: phase 1 performs image encryption and upload to the IPFS server, and phase 2 performs image retrieval. Figure 1 shows the workflow of the proposed framework; each phase and component is described in detail below.
[IMAGE OMITTED. SEE PDF]
Image Encryption and Upload to the IPFS Server
The process begins with the uploading of medical images by an organization. Before undergoing any transformations, the original image's feature map is extracted and stored on the blockchain. This step ensures that the feature map remains unaffected by subsequent compression and encryption processes, thereby preserving the image's integrity and authenticity. Next, the image undergoes encryption using the AES-256 algorithm [47], a widely used symmetric encryption method known for its robust security. This algorithm operates on fixed-size 128-bit data blocks, encrypting the medical image in segments. Each block is processed separately, generating a transformed version of the image known as ciphertext. This encrypted image is then stored on an IPFS server [48] for secure and decentralized storage. In this research, we used a hybrid approach with off-chain storage via IPFS and on-chain storage for essential metadata (image hashes and encryption keys). This reduces blockchain storage requirements, enhances scalability, and maintains data security. Simultaneously, our proposed context-based hashing algorithm generates a cryptographic hash from the extracted feature map of the image. This feature-map-based hashing mechanism enhances security by ensuring that even if the binary representation of the image changes due to routine re-compression or re-encryption, the hash remains stable as long as the clinical content remains unchanged. The hash sequence and encryption key are then recorded in the blockchain's block body, facilitating access control and permission management through smart contracts.
In the medical image integrity verification domain, SSH has emerged as a reliable method to ensure the authenticity and integrity of compressed or pre-processed medical images by generating content-based hashes from their feature map binary representations. Therefore, even if the binary representation undergoes alterations, the resulting hashes will remain consistent as long as the image content remains unchanged. This guarantees the authenticity and integrity of medical images, which is essential for accurate diagnosis and treatment. This research introduces a novel subject-sensitive hashing algorithm for blockchain applications, offering a new solution for enhancing data integrity and security in the healthcare domain.
Initially, a deep neural network is employed to extract the most pertinent features from the high-resolution image. These are subsequently normalized using min-max normalization and converted into a binary feature map with a predefined threshold value of 0.6. The second phase of the algorithm transforms the binary feature map into a 256-bit hash using the SHA-256 cryptographic hash function. SHA-256 is a widely recognized cryptographic tool that plays a pivotal role in data security, digital signatures, and preserving data integrity across diverse digital assets. Furthermore, it serves as an integral component of blockchain technology, the foundational technology behind cryptocurrencies such as Bitcoin, contributing to the security and immutability of transaction data and the generation of unique block identifiers in the blockchain.
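The two phases above (min-max normalization with thresholding, then SHA-256) can be sketched in Python as follows; the toy feature-map values and helper names are our own illustrative assumptions, not the actual network outputs:

```python
import hashlib

def minmax_normalize(values):
    """Scale values to [0, 1] via min-max normalization."""
    lo, hi = min(values), max(values)
    span = hi - lo or 1.0  # guard against a constant feature map
    return [(v - lo) / span for v in values]

def subject_sensitive_hash(feature_map, threshold=0.6):
    """Binarize a (flattened) feature map at the given threshold and hash
    the resulting bit string with SHA-256, yielding a 256-bit digest."""
    bits = ["1" if v >= threshold else "0"
            for v in minmax_normalize(feature_map)]
    return hashlib.sha256("".join(bits).encode()).hexdigest()

# Feature maps extracted from the original and a re-compressed image are
# expected to be close; activation noise that stays on the same side of
# the threshold leaves the binary map, and hence the hash, unchanged.
original     = [0.10, 0.95, 0.80, 0.05, 0.70, 0.20]
recompressed = [0.12, 0.93, 0.81, 0.06, 0.69, 0.21]
assert subject_sensitive_hash(original) == subject_sensitive_hash(recompressed)
```

A file-level SHA-256 over the two underlying files would differ, which is exactly the gap the feature-map-based hash is meant to close.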
Image Retrieval Process
At the image retrieval stage, a user initiates a request to access a previously stored medical image from the IPFS server. Since medical images are stored in an encrypted format for security and privacy reasons, the retrieval process involves multiple steps to ensure secure access and verification before the image can be decrypted and viewed.
Requesting the Encrypted Image from IPFS
When a user requests an image, the IPFS server locates the stored encrypted image file based on its unique content identifier (CID). The server system ensures that the data is retrieved from the closest available node in the decentralized network, optimizing efficiency and availability. Once found, the encrypted image (ciphertext) is transmitted to the user. However, without the corresponding encryption key, the image remains inaccessible in its current form.
Requesting the Encryption Key from the Blockchain
To decrypt the image, the user must also obtain the corresponding AES-256 encryption key. This key is securely stored within the blockchain network and is only accessible under predefined access control policies enforced by smart contracts. The user submits a request to retrieve the key, triggering the execution of the smart contract associated with that specific image.
Smart Contract-Based Validation Using SSH
Before granting access to the encryption key, the smart contract performs a security verification process to ensure the integrity and authenticity of the requested image. This is achieved by leveraging the Subject-Sensitive Hashing algorithm, which was originally used to generate a content-based hash sequence of the image at the time of storage. When a user submits a request, the smart contract retrieves the stored hash sequence from the blockchain and simultaneously computes a new hash from the feature map of the retrieved encrypted image. These two hashes are then compared to verify their consistency. If the hash values match, it confirms that the image has remained unchanged, and the encryption key is released to the user. However, if there is any discrepancy, the system detects potential tampering or corruption, automatically denying access to the key. This validation mechanism ensures that only unaltered and authentic medical images can be accessed, safeguarding against unauthorized modifications.
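A simplified Python sketch of this validation gate follows; the function names and key format are hypothetical stand-ins for the smart contract logic, not its actual interface:

```python
import hashlib

def compute_feature_hash(feature_bits: str) -> str:
    """SHA-256 over the binary feature-map string (see SSH above)."""
    return hashlib.sha256(feature_bits.encode()).hexdigest()

def release_key(stored_hash: str, retrieved_feature_bits: str, aes_key: bytes):
    """Mimics the smart contract's gate: recompute the subject-sensitive
    hash from the retrieved image's feature map and compare it with the
    hash recorded on-chain. The key is released only on an exact match."""
    if compute_feature_hash(retrieved_feature_bits) == stored_hash:
        return aes_key   # integrity confirmed: grant decryption key
    return None          # discrepancy detected: deny access

key = b"example-256-bit-key-placeholder!"
on_chain = compute_feature_hash("011010")   # stored at upload time
assert release_key(on_chain, "011010", key) == key   # untampered image
assert release_key(on_chain, "011110", key) is None  # altered feature map
```

The deny path returns nothing rather than a partial key, so a tampered or corrupted image can never be decrypted through the normal retrieval flow.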
Architecture of Proposed Deep Learning Model
To extract feature maps from the original and compressed images, this study proposes a novel deep learning model. Figure 2 shows the architecture of our proposed model. The network input size of the model is 512 × 512 pixels. The input layer passes the image to the next layer without any processing. The architecture contains 14 convolutional layers, multiple residual connections, and 2 concatenation layers. The convolutional layers apply filters to the input image or the previous layer's output to create feature maps, and use a rectified linear unit (ReLU) activation function to introduce non-linearity and improve the model's performance [49]. Concatenation is often used to combine features from different sources or dimensions [50]. It proves particularly valuable in situations where multiple data streams or feature representations must be combined before further processing takes place. Furthermore, our proposed model contains multiple residual connections to enhance its ability to extract the most relevant features without significantly increasing the scale and computational complexity of the model. For a better understanding of how the model extracts feature maps from an image, the architecture can be divided into four sections: (1) the initial feature extracting block, (2) the context preserving block, (3) the reduction state block, and (4) the context feature map block.
[IMAGE OMITTED. SEE PDF]
Initial Feature Extracting Block (IFEB)
The initial feature extracting block contains three convolution layers with the ReLU activation function. At the end of the block, the outputs of the three convolution layers are concatenated with each other, which allows the block to capture complex relationships within the data.
Context Preserving Block (CPB)
At the context preserving block stage, some context from the input image is added to the output of the initial feature extracting block. This is because, after passing through multiple convolution layers and activation functions, the feature maps may lose context from the original image. The CPB contains four convolution layers with the ReLU activation function and one max pool layer, which reduces the height and width dimensions of the image. After the outputs of the first three convolutional layers of the CPB are concatenated, the result passes through another convolutional block, and context from the input image, extracted through a separate convolutional block, is added to the output.
Reduction State Block (RSB)
The goal of the RSB is to reduce the size of the feature maps to 64 × 64 × 64. The RSB contains two convolution layers; the stride of the first is set to two to reduce the height and width dimensions of the feature maps. After batch normalization, the output passes through the max pool layer, which further reduces the height and width dimensions of the feature maps.
Context Feature Map Block (CFMB)
The CFMB produces the final feature map, which contains the original features of the input image. The CFMB contains three convolution layers and one average pool layer. The stride of the first convolution is set to 2, and the output is followed by the average pool layer, leading to a final feature map size of 16 × 16 × 8.
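Assuming each stride-2 convolution and each pooling layer halves the spatial dimensions (a common 'same'-padding convention; the paper does not state the padding explicitly), the progression from the 512 × 512 input to the stated 64 × 64 × 64 and 16 × 16 × 8 feature maps can be checked arithmetically:

```python
def halve(size: int) -> int:
    """Spatial size after a stride-2 ('same'-padded) convolution or a
    2x2 pooling layer, both of which halve height and width."""
    return size // 2

size = 512          # network input: 512 x 512
size = halve(size)  # CPB max pool          -> 256
size = halve(size)  # RSB stride-2 conv     -> 128
size = halve(size)  # RSB max pool          -> 64   (64 x 64 x 64 after RSB)
assert size == 64
size = halve(size)  # CFMB stride-2 conv    -> 32
size = halve(size)  # CFMB average pool     -> 16   (16 x 16 x 8 after CFMB)
assert size == 16
```

The channel counts (64 after the RSB, 8 after the CFMB) are set by the number of filters in the respective convolution layers rather than by the spatial downsampling.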
Steps in Blockchain
Blockchain technology uses a series of steps to ensure secure and transparent transactions:
Initiating a Transaction
In this phase, different medical service providers or organizations store and share their clients' medical imaging data through our system. First, the organization uploads the image; the image is then encrypted using the AES-256 algorithm, a symmetric encryption technique that works with fixed-size data blocks and is used to protect sensitive information. This algorithm encrypts each block separately, and the image's feature map is used to generate a hash. The block body of the chain contains the hash sequence and the encryption key, after which the user can initiate the transaction (Figure 3).
[IMAGE OMITTED. SEE PDF]
Broadcasting the Transaction
After a transaction is initiated, it is not maintained by the sender and receiver alone; if it were, trust issues could arise. Because blockchain is a decentralized network, the transaction is not confined to the sender and receiver but is recorded by everyone in the peer-to-peer network. The transaction is therefore broadcast to the entire network of participants, also known as nodes, on the blockchain [51].
Verifying the Transaction
Nodes play a vital role in the blockchain network. There are two types of nodes: full nodes and partial (lightweight) nodes. Full nodes are critical components of a blockchain network, as they store a complete copy of the blockchain ledger and independently validate all transactions and blocks against the consensus rules. Unlike lightweight nodes, full nodes do not rely on other nodes for validation; instead, they enforce the blockchain's rules, rejecting any invalid data. However, operating a full node requires significant computational resources, including large storage capacity, substantial processing power, and continuous bandwidth to stay synchronized with the network. Examples of full nodes include Bitcoin Core for Bitcoin and Geth for Ethereum. As of 16 January 2025, the Bitcoin blockchain size is approximately 630.54 gigabytes [52]. This consistent growth underscores the expanding volume of transactions and data recorded on the network.
Reaching Consensus
Depending on the specific blockchain protocol, different methods are used to achieve consensus among the nodes. In some systems, miners validate transactions and add them to the blockchain; to do so, they must solve puzzles that require a significant amount of computational power [53]. The miner is rewarded when the puzzle is solved [54], and the verified block is added to the chain. In other systems, a predetermined set of nodes is responsible for validating transactions. Whenever a new block is produced, the network must agree that it should be appended to the blockchain, which is why a consensus algorithm is needed. Many consensus algorithms exist; the one we use is proof of work (POW). In POW, every node expends computational power performing calculations, and whoever solves the puzzle first adds that block to the blockchain [54].
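The POW puzzle can be sketched as a brute-force nonce search; the header string and the hex-zero difficulty criterion below are simplified stand-ins for a real block header and target, chosen for illustration:

```python
import hashlib

def mine(block_header: str, difficulty: int) -> int:
    """Proof-of-work sketch: find a nonce such that SHA-256 of
    header+nonce starts with `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_header}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

header = "prev_hash|merkle_root|timestamp"
nonce = mine(header, difficulty=3)
digest = hashlib.sha256(f"{header}{nonce}".encode()).hexdigest()
assert digest.startswith("000")
```

Raising `difficulty` by one hex digit multiplies the expected search effort by 16, which is how real networks tune how much computational power a valid block demands; verifying a found nonce, by contrast, takes a single hash.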
Adding the Transaction to a Block
The data is stored in the form of a ledger; it is not just a final value but contains all the transactions. Once consensus is reached, the verified transaction is bundled together with other validated transactions into the ledger, to which everyone has access and of which everyone holds a copy [55]. This ensures everyone has the same tamper-proof record of all transactions. Different blockchain implementations use different time limits; in our case, the window is 10 minutes on average. Every 10 min, whatever transactions have occurred go into a block, so each block contains multiple transactions and a new block is produced roughly every 10 min.
Linking the Blocks
A block consists of a data section that stores the transaction information (from, to, data) and a block header for chain creation. The block header contains a timestamp, version, Merkle root (hash), previous hash, nonce, and difficulty target [56]. All the blocks are chained, which is why concepts of cryptography are used. To connect these blocks, a unique code known as a hash is generated using a hashing algorithm. Each block in the blockchain contains multiple transactions and a single hash. The hash values are combined in levels, forming a hierarchical structure [57], and the final hash value is stored in the present block's hash location. The hash value is then stored in the “Previous Hash” section of the next block, creating a chain of blocks.
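The hierarchical combination of transaction hashes into the single root stored in the block header (the Merkle root) can be sketched as follows, duplicating an odd leaf as Bitcoin's scheme does:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(tx_hashes):
    """Combine transaction hashes pairwise, level by level, until a single
    root remains; an odd leaf at any level is duplicated."""
    level = list(tx_hashes)
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last hash if odd
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [sha256(t.encode()) for t in ("tx-A", "tx-B", "tx-C")]
root = merkle_root(txs)
assert len(root) == 32                  # 256-bit root stored in the header
# Changing any single transaction changes the root.
assert merkle_root([sha256(b"tx-X")] + txs[1:]) != root
```

Because only the 32-byte root enters the header, a verifier can confirm that a given transaction belongs to a block from a logarithmic number of sibling hashes rather than the full transaction list.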
Applications/Implications
The integration of blockchain technology within our decentralized medical imaging network provides transformative benefits that enhance security, efficiency, and privacy in healthcare data management. Here, we explore the practical application of our system and how each feature of blockchain technology contributes to the overall enhancement of healthcare image transmission.
1. Enhanced Security through Decentralized Storage: The adoption of blockchain technology ensures robust security in our system by decentralizing the storage of medical images across a network of nodes. This architecture inherently protects data against the vulnerabilities of centralized systems, where a single breach could compromise all stored information. In our blockchain model, each transaction must be verified and agreed upon by multiple nodes, which drastically reduces the risk of unauthorized access and makes data tampering extremely difficult.
2. Immutable Audit Trails for Regulatory Compliance: Our blockchain-based system automatically records every transaction in an immutable ledger. This feature provides an indisputable audit trail of all activities, including access, transfer, and modification of medical images. This transparency is crucial for compliance with global data protection regulations such as HIPAA and GDPR. It facilitates easy verification of data handling practices and ensures accountability, thereby fostering trust among all stakeholders.
3. Ensuring Data Integrity: The implementation of Subject Sensitive Hashing in our system maintains the integrity of medical images. Each image is hashed to produce a unique identifier that is recorded on the blockchain. This process ensures that any alteration of the image data (even post-compression) can be detected, preserving the authenticity necessary for accurate medical diagnosis and treatment.
4. Privacy and Confidentiality through Encryption: To safeguard patient privacy and data confidentiality, our system encrypts all medical images before they are uploaded to the blockchain. Access to decryption keys is strictly controlled, ensuring that only authorized individuals can view or use the medical images. This encryption-decryption mechanism significantly enhances the security of sensitive health information, aligning with stringent privacy standards.
5. Scalability and Cost Efficiency: Our system is designed to scale efficiently as demand grows, without incurring significant additional costs. The decentralized nature of blockchain allows for the addition of new nodes with minimal impact on existing infrastructure. Furthermore, the use of smart contracts automates many processes such as access management and rights validation, reducing administrative overhead and associated costs.
6. Interoperability Across Diverse Healthcare Systems: Blockchain technology enables our system to seamlessly integrate with various healthcare information systems. This interoperability ensures that medical images can be easily shared and accessed across different platforms and institutions, enhancing the continuity and quality of care. The standardized blockchain protocol facilitates this seamless exchange, eliminating the typical barriers associated with disparate healthcare IT systems.
7. Advancing Medical Research: Our blockchain-based system also plays a critical role in medical research by enabling the secure and anonymous sharing of medical images with researchers worldwide. This capability allows for more extensive and diverse clinical studies, potentially accelerating medical advancements and the development of new treatments. The traceability and reliability of data shared via blockchain further enhance the validity of research findings.
Feature Map Extractor
The feature map extractor algorithm processes an input image to generate a binary feature map, which is used to create the hash of a block in the blockchain. The algorithm begins by loading the image and resizing it to a standardized 512 × 512 pixels. The resized image is then converted into an array and expanded to match the input shape required by our proposed model. The model predicts multiple feature maps from this array, from which the first feature map is extracted. This extracted map undergoes normalization, adjusting its values to lie between 0 and 1. Subsequently, a binary mask is created by setting all values greater than 0.6 (the transform threshold) to 1 and the rest to 0, effectively highlighting the most important features. The resulting binary mask is then returned for subsequent tasks (Algorithm 1).
ALGORITHM
Feature Map Extractor
1: function featureMapExtractor(img_path)
2:   img = load_img(img_path, target_size = (512, 512))
3:   img_array = img_to_array(img)
4:   img_array = expand_dims(img_array, axis = 0)
5:   feature_maps = model.predict(img_array)
6:   feature_map = feature_maps[0, :, :, 0]
7:   min_value = min(feature_map)
8:   max_value = max(feature_map)
9:   normalized_feature_map = (feature_map - min_value) / (max_value - min_value)
10:  binary_mask = where(normalized_feature_map > 0.6, 1, 0)
11:  return binary_mask
12: end function
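A minimal sketch of Algorithm 1's normalization and thresholding steps is shown below; the trained CNN and the Keras image-loading helpers are stubbed out with a `predict` callable and a plain NumPy array, so only the post-prediction logic is real:

```python
import numpy as np

def feature_map_extractor(img_array: np.ndarray, predict, threshold: float = 0.6) -> np.ndarray:
    """Steps 4-11 of Algorithm 1: batch the image, predict feature maps,
    normalize the first map to [0, 1], and binarize at `threshold`.
    `predict` stands in for the trained model's predict() method."""
    batch = np.expand_dims(img_array, axis=0)      # shape: (1, H, W, C)
    feature_maps = predict(batch)                  # shape: (1, h, w, channels)
    fmap = feature_maps[0, :, :, 0]                # first feature map
    normalized = (fmap - fmap.min()) / (fmap.max() - fmap.min())
    return np.where(normalized > threshold, 1, 0)  # binary mask

# Stand-in "model": deterministic pseudo-random activations
rng = np.random.default_rng(0)
fake_maps = rng.random((1, 8, 8, 4))
mask = feature_map_extractor(np.zeros((512, 512, 3)), lambda batch: fake_maps)
```

The binary mask, not the raw pixels, is what gets recorded on-chain, which is why mild compression that leaves the mask unchanged does not break integrity checks.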
Integrity Verification
The integrity verification algorithm is crucial for confirming the authenticity and integrity of an uploaded image by comparing its feature map to a reference feature map. Upon receiving an image file, the algorithm saves it locally and uses the feature map extractor to derive its feature map. This extracted feature map is then compared with the reference feature map provided in the request data. If the two maps are identical, the algorithm sets the integrity status to True; otherwise, it sets it to False. The algorithm also records and prints the time taken for the verification process. Finally, it returns a JSON response indicating the integrity status, thereby ensuring the image has not been tampered with (Algorithm 2).
ALGORITHM
Integrity Verification
1: function integrity_verification
2:   if ‘file’ not in request.files then
3:     return response with error ‘No file part’
4:   end if
5:   file = request.files[‘file’]
6:   save file to file.filename
7:   image_path = file.filename
8:   fm = featureMapExtractor(image_path)
9:   data = request.get_json()
10:  feature_map = data[‘feature_map’]
11:  if fm equals feature_map then
12:    integrity = True
13:  else
14:    integrity = False
15:  end if
16:  return JSON response with integrity status
17: end function
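The core comparison of Algorithm 2 reduces to an exact match between the two feature maps; the sketch below isolates that step (the Flask request handling and file I/O are omitted, and the JSON response body is represented by a plain dict):

```python
import numpy as np

def verify_integrity(extracted_map: np.ndarray, stored_map) -> dict:
    """Core of Algorithm 2: integrity holds only when the freshly
    extracted feature map matches the one recorded on the blockchain."""
    integrity = bool(np.array_equal(extracted_map, np.asarray(stored_map)))
    return {"integrity": integrity}   # returned as the JSON response body

stored = [[1, 0], [0, 1]]                                    # reference feature map
print(verify_integrity(np.array([[1, 0], [0, 1]]), stored))  # {'integrity': True}
print(verify_integrity(np.array([[1, 1], [0, 1]]), stored))  # {'integrity': False}
```

Because the stored map is immutable on the blockchain, any single flipped bit in the freshly extracted map is enough to flag tampering.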
Image Encryption and Decryption
The image encryption and decryption algorithms ensure secure storage and transmission of image data through encryption before storage and decryption upon retrieval. For encryption, a 32-byte random key is generated. The image file is read in binary mode, and an AES cipher in GCM mode is used to encrypt the data, producing ciphertext and an authentication tag. These encrypted components, along with a nonce, are written to a new file with an .enc extension. For decryption, the nonce, tag, and ciphertext are read from the encrypted file. Using the same cipher configuration, the data is decrypted and verified using the provided tag. The original image data is then restored and saved by removing the .enc extension from the filename, completing the decryption process (Algorithm 3).
ALGORITHM
Image Encryption And Decryption
1: function encrypt_image(image_path)
2:   encryption_key = get_random_bytes(32)
3:   open file at image_path in ‘rb’ mode as file
4:     original_data = read file
5:   end open
6:   cipher = AES.new(encryption_key, AES.MODE_GCM)
7:   ciphertext, tag = cipher.encrypt_and_digest(original_data)
8:   encrypted_file_path = image_path + ‘.enc’
9:   open file at encrypted_file_path in ‘wb’ mode as file
10:    write cipher.nonce, tag, ciphertext to file
11:  end open
12:  return encrypted_file_path, encryption_key
13: end function
14: function decrypt_image(encryption_key, encrypted_image_path)
15:  open file at encrypted_image_path in ‘rb’ mode as file
16:    nonce, tag, ciphertext = read 16, 16, -1 bytes from file, respectively
17:  end open
18:  cipher = AES.new(encryption_key, AES.MODE_GCM, nonce = nonce)
19:  original_data = cipher.decrypt_and_verify(ciphertext, tag)
20:  decrypted_file_path = encrypted_image_path without last 4 characters (‘.enc’)
21:  open file at decrypted_file_path in ‘wb’ mode as file
22:    write original_data to file
23:  end open
24:  return decrypted_file_path
25: end function
Experimental Results and Analysis
Image Compression and Feature Extraction
In this experiment, we thoroughly evaluated the effectiveness of our proposed deep learning feature extractor for decentralized medical image sharing by conducting image compression and feature extraction.
Image Compression
JPEG compression was chosen for this study due to its widespread adoption, computational efficiency, and ability to achieve significant compression ratios while maintaining acceptable visual quality. While medical images often have higher bit depth and resolution, JPEG compression can still be effective in scenarios where storage and transmission efficiency are critical, provided that the compression parameters are carefully selected to preserve diagnostically relevant information [58]. Moreover, for many applications, such as telemedicine or preliminary screenings, JPEG compression can be applied without significantly compromising diagnostic accuracy. Studies have shown that moderate levels of JPEG compression (e.g., quality factors above 75%) can retain sufficient image quality for diagnostic purposes in modalities like X-rays, ultrasounds, and certain types of MRI. We utilized the Python PIL library to compress the medical images from the provided datasets using the JPEG compression algorithm. Each image was compressed with the quality parameter set to 100 to minimize loss of diagnostic information while reducing file size for efficient storage and transmission. Nevertheless, the JPEG format may not always be ideal for every medical use case, especially when preserving fine-grained details is crucial.
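The compression step can be sketched with Pillow (PIL) as follows; the synthetic noise image and the helper name `jpeg_compress` are illustrative stand-ins for the dataset files:

```python
# Sketch of the compression step with Pillow; a synthetic noise image
# stands in for the dataset files used in the study.
from io import BytesIO
from PIL import Image

def jpeg_compress(img: Image.Image, quality: int) -> bytes:
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)
    return buf.getvalue()

img = Image.effect_noise((512, 512), sigma=64).convert("RGB")
q100 = jpeg_compress(img, quality=100)   # minimal loss, as in our setup
q25 = jpeg_compress(img, quality=25)     # smaller file, lower fidelity
assert len(q25) < len(q100)
```

The `quality` parameter is the knob behind the compression levels analyzed in the tables below: lower values shrink files further but discard more detail.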
Feature Extraction
Feature extraction was performed using our proposed deep learning model specialized specifically for medical image analysis. The model architecture comprised multiple convolutional layers followed by normalization and pooling operations. This architecture enabled the extraction of high-level features crucial for the generation of a universal feature map for both compressed and original images.
Evaluation Metrics
To rigorously evaluate the integrity of the compressed images, we compared the feature maps extracted from both the original and compressed images. A feature map represents the activation of neurons in a particular layer of the neural network and provides insights into the important features present in the image.
Comparison of Feature Maps
True Label
If the feature maps extracted from the compressed images closely matched those of the original images, they were labeled as “True.” This indicated successful preservation of image integrity during compression.
False Label
Conversely, if discrepancies were observed between the feature maps of compressed and original images, they were labeled as “False.” This suggested potential loss or alteration of information during the compression process.
Image Compression Analysis
Mean Squared Error
Mean squared error (MSE) is a measurement used to compare an original image with a compressed image [59]. MSE helps to assess how different the compressed image is from the original image. When an image is compressed, its file size is reduced by removing some details that may not be noticeable to the human eye. However, this process can result in a loss of quality. MSE quantifies this loss by calculating the average of the squared differences between the original image's pixel values and the compressed image's pixel values. The higher the MSE value, the more the compressed image differs from the original. Conversely, a lower MSE indicates a smaller difference between the two images, which means the compression has preserved more of the original image's details.
Table 2 presents the MSE scores resulting from the comparison of original images with their compressed counterparts at varying compression quality levels. As shown in the table, the MSE decreases consistently as the quality level increases, from 8.3563 at the 25% level to 0.0530 at the 100% level. This inverse relationship indicates that higher quality settings lead to lower MSE values, that is, greater similarity between the compressed and original images.
TABLE 2 MSE score of original vs compressed image.
Compression level | MSE score |
25% compression level | 8.3563 |
50% compression level | 5.0512 |
75% compression level | 3.1120 |
100% compression level | 0.0530 |
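MSE as described above is a one-line computation; a small worked example with illustrative pixel values (not drawn from the datasets):

```python
import numpy as np

def mse(original: np.ndarray, compressed: np.ndarray) -> float:
    """Average of squared pixel-wise differences; 0 means identical images."""
    diff = original.astype(np.float64) - compressed.astype(np.float64)
    return float(np.mean(diff ** 2))

a = np.array([[10, 20], [30, 40]])
b = np.array([[10, 22], [30, 40]])   # one pixel off by 2
print(mse(a, a))  # 0.0
print(mse(a, b))  # 1.0  (= 2**2 / 4 pixels)
```

Casting to float before subtracting avoids the unsigned-integer wraparound that would otherwise corrupt the differences for 8-bit images.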
Structural Similarity Index
The structural similarity index measure (SSIM) [60] is used to compare an original image with a compressed image. It helps us understand how similar the two images are in terms of structure and overall quality: when an image is compressed, its file size is reduced, but it should maintain visual similarity to the original. SSIM quantifies this similarity by evaluating aspects of the images such as brightness, contrast, and structural information. SSIM compares the patterns and structures in the original and compressed images to see how closely they match, considering factors like sharpness, textures, and edges. The SSIM value ranges from 0 to 1, with 1 indicating a perfect match. By analyzing the SSIM value, one can determine how well the compression algorithm has preserved the structure and details of the original image. A higher SSIM value means the compressed image retains more of the original image's quality, while a lower SSIM value indicates a greater loss of image information.
SSIM is computed as

SSIM(x, y) = [(2 μ_x μ_y + C₁)(2 σ_xy + C₂)] / [(μ_x² + μ_y² + C₁)(σ_x² + σ_y² + C₂)]

where x, y are the two images being compared; μ_x, μ_y are the average values of all the pixels in images x and y, respectively; σ_x², σ_y² are the variances of the pixel values in images x and y (variance is a measure of how spread out the pixel values are); and σ_xy is the covariance of pixel values between images x and y (covariance is a measure of how much two variables change together). C₁ = (K₁ L)² and C₂ = (K₂ L)² are constants added to avoid division by zero, stabilizing the division when the denominator is weak. L is the dynamic range of the pixel values (typically 255 for 8-bit-per-pixel images), and K₁ = 0.01 and K₂ = 0.03 are the constants that define the stabilizing terms.
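The formula can be implemented directly; the sketch below computes a single-window (global) SSIM over whole images, whereas the standard algorithm averages this score over local sliding windows:

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM per the formula above; the reference algorithm
    averages this score over local sliding windows."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x, mu_y = x.mean(), y.mean()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (x.var() + y.var() + C2)
    )

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
print(ssim_global(img, img))    # identical images give a score of 1
print(ssim_global(img, noisy))  # degraded images score below 1
```

Identical inputs make the numerator and denominator coincide term by term, so the score is exactly 1; any distortion pulls the covariance below the variances and the score below 1.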
Table 3 presents the SSIM scores calculated by comparing original images with their compressed versions at various compression quality levels. SSIM, a metric used to assess the perceptual similarity between two images, ranges from 0 to 1, where 1 indicates perfect similarity. As shown in the table, the SSIM scores increase monotonically with the quality level, from 0.918361 at the 25% level to 0.999385 at the 100% level. This trend shows that higher quality settings yield compressed images that are increasingly similar to the originals in terms of structural information.
TABLE 3 SSIM score of original vs compressed image.
Compression level | SSIM score |
25% compression level | 0.918361 |
50% compression level | 0.944991 |
75% compression level | 0.964544 |
100% compression level | 0.999385 |
Multiscale Structural Similarity
Multiscale structural similarity (MS-SSIM) [61] is an advanced method for comparing an original image with a compressed image. It takes into account the structure and quality of the images at different scales or levels of detail. When we compress an image, we want to keep its overall quality as high as possible, even as we reduce its file size. MS-SSIM helps to assess this by examining the images at multiple levels of detail, from big patterns to finer details. MS-SSIM looks at the image in a way that simulates how human eyes perceive visual information. It evaluates things like contrast, textures, and edges across different scales, or levels of zoom. By doing this, it provides a more comprehensive understanding of how well the compression preserves the quality and structure of the original image. The MS-SSIM value ranges from 0 to 1, with 1 representing a perfect match between the original and compressed images in terms of their multi-scale structure.
MS-SSIM can be expressed as

MS-SSIM(x, y) = ∏_{j=1}^{M} [SSIM_j(x, y)]^{w_j}

where SSIM_j(x, y) represents the structural similarity index measurement calculated at scale j between images x and y, and w_j represents the weight assigned to each scale, allowing different levels of importance to be given to different scales in the final MS-SSIM score.
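A simplified sketch of this weighted product (the weights here are assumptions, a single global SSIM term stands in for each per-scale term, and 2 × 2 average-pooling links the scales; the reference algorithm uses separate luminance, contrast, and structure components):

```python
import numpy as np

def ssim_term(x, y, L=255.0):
    # Global SSIM used as the per-scale term SSIM_j(x, y)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = x.mean(), y.mean()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (x.var() + y.var() + C2)
    )

def downsample(img):
    # Halve the resolution by 2x2 averaging (simple low-pass + subsample)
    h, w = (img.shape[0] // 2) * 2, (img.shape[1] // 2) * 2
    img = img[:h, :w]
    return (img[0::2, 0::2] + img[1::2, 0::2] + img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ms_ssim_sketch(x, y, weights=(0.5, 0.3, 0.2)):
    """MS-SSIM(x, y) = prod_j SSIM_j(x, y) ** w_j, one scale per weight."""
    score = 1.0
    for w in weights:
        score *= ssim_term(x, y) ** w
        x, y = downsample(x), downsample(y)
    return score

rng = np.random.default_rng(2)
img = rng.random((64, 64)) * 255
print(ms_ssim_sketch(img, img))  # identical images score 1 at every scale
```

Coarser scales capture large structures while the finest scale captures edges and texture, which is why the product penalizes distortions that a single-scale comparison might miss.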
As shown in Table 4, the MS-SSIM scores rise with the compression quality level, from 0.998631 at the 25% level to 1.000000 at the 100% level. This indicates that higher quality settings correspond to enhanced multi-scale structural similarity between the compressed and original images.
TABLE 4 MS-SSIM score in different compression levels.
Compression level | MS-SSIM score |
25% | 0.998631 |
50% | 0.999208 |
75% | 0.999558 |
100% | 1.000000 |
Encryption Strength Analysis
Histogram Analysis
A histogram visually shows how many pixels fall within each level of darkness or lightness, revealing the overall contrast and content of the image [62]. Figure 4 shows the histograms of the test images and the encrypted images using the three different modes of the AES algorithm. Histogram analysis between an original image and its encrypted counterpart reveals valuable insights into the transformation undergone during encryption. In the original image, the histogram typically shows distribution peaks corresponding to various pixel intensity levels, representing features like contrast, brightness, and color composition. Encryption scrambles the brightness variations in an image. This results in a more even spread of pixel intensities across all levels, with fewer sharp peaks and dips in the histogram compared to the original image. This indicates that the encryption has effectively randomized the pixel values, making it challenging to observe any meaningful information from the image without the decryption key.
[IMAGE OMITTED. SEE PDF]
Correlation Coefficients Analysis
The image pixel correlation coefficient is a measure of the linear relationship between pixel intensities in an image [63]. It quantifies the similarity or correlation between the intensity values of corresponding pixels in two images. Calculating the horizontal, vertical, and diagonal correlation coefficients involves comparing pixel intensities in the corresponding directions to assess their linear relationships. These coefficients provide insight into the similarity between pixel values in the specified directions (Figure 5). The horizontal correlation coefficient (HCC) [64], vertical correlation coefficient (VCC) [64], and diagonal correlation coefficient (DCC) [65] can each be expressed mathematically as the Pearson correlation of adjacent pixel pairs, r_xy = cov(x, y) / (√D(x) · √D(y)), where x and y are the values of pixel pairs adjacent in the given direction, D(·) denotes variance, and cov(·, ·) denotes covariance.
[IMAGE OMITTED. SEE PDF]
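Each coefficient reduces to a Pearson correlation over adjacent pixel pairs; a sketch with synthetic images (a smooth gradient versus uniform noise, standing in for plain and cipher images):

```python
import numpy as np

def adjacent_correlation(img: np.ndarray, direction: str) -> float:
    """Pearson correlation of adjacent pixel pairs: HCC, VCC, or DCC."""
    if direction == "horizontal":
        x, y = img[:, :-1], img[:, 1:]
    elif direction == "vertical":
        x, y = img[:-1, :], img[1:, :]
    else:  # diagonal
        x, y = img[:-1, :-1], img[1:, 1:]
    return float(np.corrcoef(x.ravel(), y.ravel())[0, 1])

smooth = np.add.outer(np.arange(64.0), np.arange(64.0))   # gradient "plain" image
rng = np.random.default_rng(1)
cipher_like = rng.integers(0, 256, (64, 64)).astype(float)
print(adjacent_correlation(smooth, "horizontal"))       # ~1.0: strongly correlated
print(adjacent_correlation(cipher_like, "horizontal"))  # near 0: decorrelated
```

Natural images score close to 1 in all three directions, so near-zero coefficients in the ciphertext indicate that encryption has destroyed the spatial redundancy an attacker could exploit.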
Entropy Analysis
Entropy analysis of an image involves measuring the amount of information or uncertainty present in its pixel values. Entropy is a statistical measure that quantifies the randomness or disorderliness of a system [66]. In the context of image analysis, entropy is computed from the probability distribution of pixel intensities. The entropy of an image is higher when pixel values are more evenly distributed across the intensity range, indicating higher complexity or randomness; conversely, lower entropy suggests a more predictable or ordered distribution of pixel values. The entropy of an image can be calculated using Shannon's entropy formula, H = −∑_{i=0}^{N−1} p_i log₂ p_i, where N is the number of possible intensity levels (typically 256 for an 8-bit grayscale image) and p_i is the probability of occurrence of intensity level i in the image [67].
TABLE 5 Comparison of entropy in different modes (CBC, ECB, and GCM).
Title | Score |
Entropy of original image | 7.39 |
Entropy of encrypted image CBC mode | 7.64 |
Entropy of encrypted image ECB mode | 7.61 |
Entropy of encrypted image GCM mode | 7.63 |
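Shannon entropy over the intensity histogram can be computed as follows (synthetic data; a perfectly flat 8-bit histogram attains the maximum of 8 bits):

```python
import numpy as np

def shannon_entropy(img: np.ndarray, levels: int = 256) -> float:
    """H = -sum(p_i * log2(p_i)) over the pixel-intensity histogram."""
    hist = np.bincount(img.ravel().astype(np.int64), minlength=levels)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log2(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

uniform = np.arange(256, dtype=np.uint8).repeat(4)  # perfectly flat histogram
print(shannon_entropy(uniform))   # 8.0, the maximum for 8-bit data
```

The encrypted-image entropies near 7.6 in Table 5 sit close to this 8-bit maximum, consistent with well-randomized ciphertext pixels.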
NPCR and UACI Analysis
Normalized pixel change rate (NPCR) and unified average changing intensity (UACI) are two metrics commonly used to evaluate the performance of image encryption algorithms [68]. NPCR measures the percentage of pixel positions that differ between two images after encryption, and UACI measures the average intensity difference between corresponding pixels in the plaintext and ciphertext images. NPCR values close to 100% indicate that the encryption algorithm introduces significant changes in the ciphertext for every small change in the plaintext, which is desirable for security purposes. Very low UACI values, by contrast, indicate minimal changes between corresponding pixels, which could potentially reveal information about the plaintext image. NPCR and UACI are mathematically expressed by
NPCR = (1 / (M × N)) ∑_{i,j} D(i, j) × 100%, with D(i, j) = 0 if C(i, j) = P(i, j) and D(i, j) = 1 otherwise (equivalently, D(i, j) = 1 whenever C(i, j) ⊕ P(i, j) ≠ 0),

UACI = (1 / (M × N)) ∑_{i,j} |C(i, j) − P(i, j)| / 255 × 100%.

Here, M and N are the dimensions of the images, C(i, j) and P(i, j) represent the pixel values at corresponding positions in the ciphertext and plaintext images, respectively, and ⊕ denotes the XOR operation.
The proposed system's NPCR and UACI results are summarized in Table 6. From the table, it can be seen that GCM mode achieves the highest NPCR score of 99.87% with a UACI of 29.11%. The NPCR values for CBC, ECB, and GCM encryption modes are all nearly 1, showing that the encryption produces significant changes in the ciphertext when a small change is made to the plaintext, while the UACI values remain moderate, reflecting the average intensity differences between corresponding pixels in the plaintext and ciphertext images.
TABLE 6 Comparison of NPCR and UACI scores in different modes (CBC, ECB, and GCM).
Mode | NPCR | UACI |
CBC | 0.9986 | 0.2830 |
ECB | 0.9986 | 0.2694 |
GCM | 0.9987 | 0.2911 |
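Both metrics are short NumPy expressions; the toy 2 × 2 example below uses made-up pixel values so the percentages can be verified by hand:

```python
import numpy as np

def npcr(p: np.ndarray, c: np.ndarray) -> float:
    """Percentage of pixel positions where the two images differ."""
    return float(np.mean(p != c) * 100)

def uaci(p: np.ndarray, c: np.ndarray) -> float:
    """Average absolute intensity difference, normalized by 255."""
    return float(np.mean(np.abs(p.astype(np.float64) - c.astype(np.float64))) / 255 * 100)

plain = np.array([[0, 0], [0, 0]])
cipher = np.array([[255, 0], [0, 0]])   # one of four pixels changed, fully flipped
print(npcr(plain, cipher))  # 25.0
print(uaci(plain, cipher))  # 25.0
```

With one of four pixels changed by the full 255 range, both metrics come out to 25%, matching the definitions above.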
Experimental Results
We conducted comprehensive experiments on three distinct radiology datasets to assess the correctness of our framework:
Dataset 1: VinBigData 1024 JPG Dataset [69]
Correctness Rate (96% Observation): Among the compressed images, 96% retained their original feature maps without significant alteration, indicating effective preservation of image integrity.
Dataset 2: COVIDx CXR-4 [70]
Correctness Rate (99% Observation): Our framework achieved a high correctness rate of 99%, demonstrating its capability to preserve image integrity even in the context of COVID-19 detection images.
Dataset 3: Breast Lesions USG Images and Masks [71]
Correctness Rate (100% Observation): Exceptional performance was observed, with a correctness rate of 100%. This highlights the robustness of our feature map extraction method in preserving image integrity.
Observations and Insights
Table 7 presents the performance of the proposed model across three different datasets. For each dataset, the table reports the total number of images, the counts of true and false predictions, and the calculated correctness (accuracy) as a percentage. The results demonstrate high accuracy across all datasets, ranging from 96.0% on dataset 1 (15,000 images) to 100.0% on dataset 3 (500 images). Notably, the model achieves near-perfect performance on datasets 2 and 3, with accuracy of 99% and above, as shown in Figure 6. Table 8 compares the accuracy of the proposed model with several established models, namely VGG16, MobileNetV2, and ResNet50, across the same three datasets. Our model consistently outperforms VGG16, MobileNetV2, and ResNet50 across all three datasets due to its superior design and feature extraction capabilities. In the last convolutional layer, VGG16 has a 143 structure, MobileNetV2 has 41280, and ResNet50 has 3512. VGG16 and MobileNetV2 tend to lose the image's contextual information, while ResNet50 can preserve it to some extent. Our model, with its 83 structure in the last convolutional layer, excels by concatenating the context, ensuring that no contextual information is lost. This ability to maintain context throughout the processing pipeline allows our model to distinguish complex patterns more effectively. Combined with an optimized data flow that prevents bottlenecks and information loss, these factors result in our model achieving accuracy of up to 100%, demonstrating its robustness and versatility across different datasets.
TABLE 7 Performance of the model on different datasets. The ”Correctness” column shows the accuracy of the model on each dataset.
Dataset | No. of Images | True | False | Correctness |
1 | 15,000 | 14,403 | 597 | 96.0% |
2 | 84,818 | 83,969 | 849 | 99.0% |
3 | 500 | 500 | 0 | 100.0% |
TABLE 8 Accuracy comparison of different models with our model.
Dataset | VGG16 | MobileNetV2 | ResNet50 | Our model |
1 | 81% | 84% | 86% | 96% |
2 | 83% | 87% | 89% | 99% |
3 | 85% | 89% | 92% | 100% |
[IMAGE OMITTED. SEE PDF]
Performance Analysis of Total System
This section provides a detailed performance analysis of our decentralized medical image-sharing system, focusing on two primary processes: image insertion and image retrieval. The analysis includes the time taken for various steps involved in these processes and discusses the influence of internet speed and latency on overall performance.
Image Insertion Time
The image insertion process involves several steps: image compression using the JPEG algorithm, feature extraction using a machine learning model, encryption with AES GCM mode, uploading the encrypted file to IPFS (Pinata), and inserting the IPFS hash, feature map, and encryption key into the blockchain. Table 9 below presents the performance metrics for these operations across ten sample images.
TABLE 9 Performance metrics for image insertion.
SL | Image size (KB) | Total time (ms) | Compression time (ms) | Feature extraction time (ms) | Encryption time (ms) | IPFS upload time (ms) | Blockchain insert time (ms) |
1 | 210 | 6364 | 17.91 | 176 | 2.24 | 3921 | 14 |
2 | 296 | 6344 | 20.36 | 182 | 2.82 | 4332 | 14 |
3 | 253 | 6569 | 20.71 | 182 | 2.66 | 4755 | 13 |
4 | 214 | 5869 | 18.14 | 178 | 2.15 | 4090 | 16 |
5 | 232 | 6154 | 17.73 | 179 | 2.33 | 4363 | 12 |
6 | 227 | 6113 | 17.13 | 177 | 2.32 | 4147 | 13 |
7 | 219 | 7858 | 17.91 | 179 | 2.28 | 5863 | 13 |
8 | 206 | 6266 | 16.61 | 175 | 2.14 | 4466 | 14 |
9 | 319 | 6846 | 20.38 | 173 | 2.52 | 4876 | 13 |
10 | 248 | 7123 | 15.71 | 175 | 2.26 | 5138 | 15 |
Avg | 242.4 | 6550 | 18.26 | 178 | 2.37 | 4595 | 13.7 |
Max | 319 | 7858 | 20.71 | 182 | 2.82 | 5863 | 16 |
Min | 206 | 5869 | 15.71 | 173 | 2.14 | 3921 | 12 |
System Configuration:
- Processor: AMD RYZEN 5600G
- Graphics Card: NVIDIA GeForce RTX 3060 Ti
- Memory: 16 GB 3200MHz RAM
- Total Time: The average total time for image insertion is 6550.6 ms. Internet speed and latency significantly contribute to the time taken for uploading images to IPFS, which is the most time-consuming step. The maximum time recorded is 7858 ms, and the minimum is 5869 ms.
- Compression Time: The average time for compressing images is 18.26 ms, with a maximum of 20.71 ms and a minimum of 15.71 ms, indicating efficient compression.
- Feature Extraction Time: The feature extraction process averages 178.11 ms, with a maximum of 182.99 ms and a minimum of 173.79 ms, demonstrating the deep learning model's effectiveness in extracting relevant features.
- Encryption Time: The encryption step is very fast, averaging only 2.37 ms, with a maximum of 2.82 ms and a minimum of 2.14 ms.
- IPFS Upload Time: Uploading images to IPFS is the most time-consuming step, averaging 4595.62 ms. This step is heavily influenced by internet speed and latency, with a maximum time of 5863.56 ms and a minimum of 3921.51 ms.
- Blockchain Insert Time: Inserting the IPFS hash, feature map, and encryption key into the blockchain averages 13.7 ms, with a maximum of 16 ms and a minimum of 12 ms, which is relatively quick and efficient.
Image Retrieval Time
The image retrieval process involves block retrieval from the blockchain, retrieving the encrypted image from IPFS, decrypting the image with AES-256 GCM, and integrity verification by extracting the feature map again and comparing it with the previously stored feature map. Table 10 below outlines the performance metrics for these operations.
TABLE 10 Performance metrics for image retrieval.
SL | Total time (ms) | Block retrieval time (ms) | Integrity check time (ms) | IPFS encrypted image retrieval time (ms) | Decryption time (ms) |
1 | 5084 | 14 | 207 | 2347 | 2.58 |
2 | 5248 | 14 | 211 | 2275 | 2.47 |
3 | 5539 | 15 | 196 | 2614 | 2.25 |
4 | 5372 | 13 | 218 | 2291 | 2.45 |
5 | 5069 | 14 | 204 | 2241 | 2.24 |
6 | 5521 | 14 | 193 | 2110 | 2.48 |
7 | 5682 | 15 | 217 | 2452 | 2.52 |
8 | 5493 | 13 | 202 | 2352 | 2.24 |
9 | 5421 | 14 | 199 | 2323 | 2.39 |
10 | 5627 | 15 | 225 | 2506 | 2.55 |
Avg | 5405.6 | 14.1 | 207.2 | 2351.54 | 2.42 |
Max | 5682 | 15 | 225 | 2614.22 | 2.58 |
Min | 5069 | 13 | 193 | 2110.58 | 2.24 |
- Total Time: The average total time for image retrieval is 5405.6 ms. Internet speed and latency significantly influence the time taken to retrieve images from IPFS.
- Block Retrieval Time: Block retrieval is very fast, averaging 14.1 ms.
- Integrity Check Time: This step takes an average of 207.2 ms, ensuring the retrieved image has not been tampered with.
- IPFS Encrypted Image Retrieval Time: This is the most time-consuming step in the retrieval process, averaging 2351.54 ms, and is significantly affected by internet speed and latency.
- Decryption Time: The decryption process is quick, averaging 2.42 ms.
Discussion of Performance Analysis
The performance analysis demonstrates that the proposed decentralized image sharing system effectively balances security and efficiency. The time-intensive processes, particularly uploading to and retrieving from IPFS, reflect the inherent latency of decentralized storage solutions and the significant impact of internet speed and latency. However, the encryption and decryption processes are highly efficient, contributing to the overall robustness of the system. This analysis confirms that our framework is well-suited for secure and efficient medical imaging communication, addressing critical challenges in data integrity, privacy, and scalability. Acknowledging the influence of internet speed and latency provides a more comprehensive understanding of the system's performance, highlighting areas for potential optimization in real-world deployment.
Scalability
Scalability is the network's ability to handle a growing amount of work, or its potential to accommodate growth. It is impacted by several factors, including block size, consensus efficiency, and how well the network manages an increasing number of transactions. It is generally assessed by how well performance metrics such as throughput and latency hold up as the network grows.
- Horizontal Scalability: Assessed by adding more nodes to the network and observing the changes in throughput and latency, which helps in understanding how the network performs under expansion.
- Vertical Scalability: Explored by enhancing the computational resources of existing nodes (increasing CPU and RAM) and studying the performance impact, which is vital for optimizing existing infrastructure.
Conclusion
This paper presents a comprehensive framework for decentralized medical image sharing by leveraging blockchain technology, including SSH and the IPFS. The proposed framework addresses the critical challenges of feature map integrity, privacy, and scalability in medical image sharing and makes significant contributions to the advancement of secure and efficient healthcare data management. The integration of SSH with blockchain technology ensures the authenticity and integrity of medical images, even in the face of compression or pre-processing, by focusing on preserving the integrity of their clinically relevant feature maps. This is crucial for maintaining the trustworthiness of medical data, essential for accurate diagnosis and treatment. The use of IPFS for decentralized storage and transmission further enhances the security and accessibility of medical images, while the proposed deep learning model provides a robust and efficient method for generating content-based hashes.
Our research offers a promising solution for the secure and efficient sharing of medical images, addressing the unique challenges faced in the healthcare domain. However, the research identifies several limitations, including the high computational resources required by the convolutional neural network (CNN) used for feature extraction and hashing, potential performance degradation with different image formats, and sensitivity to common image modifications. Future work aims to optimize the algorithm across various image formats, reduce computational overhead, and improve processing speed. Conducting sensitivity analysis and formal security assessments, as well as comparative studies with other state-of-the-art hashing algorithms, could further enhance the robustness and applicability of the proposed framework.
Author Contributions
Yeasir Arafat: conceptualization, methodology, software development, writing the original draft, reviewing and editing the manuscript. Abu Sayem Md. Siam: conceptualization, methodology, writing the original draft, reviewing and editing. Md Muzadded Chowdhury: data curation, software development, writing the original draft, reviewing and editing. Md Mehedi Hasan: resources, reviewing and editing the manuscript. Sayed Hossain Jobayer: resources, software development, validation, visualization. Swakkhar Shatabda: conceptualization, investigation, supervision. Salekul Islam: formal analysis, investigation, supervision, reviewing and editing the manuscript. Saddam Mukta: conceptualization, formal analysis, investigation, project administration, supervision, reviewing and editing the manuscript.
Conflicts of Interest
The authors declare no conflicts of interest.
Data Availability Statement
All relevant data supporting the findings of this study are publicly available in the cited repositories and can be obtained from the corresponding author upon reasonable request.
A. Haleem, M. Javaid, R. P. Singh, and R. Suman, “Telemedicine for Healthcare: Capabilities, Features, Barriers, and Applications,” Sensors International 2 (2021): 100117.
C. Thapa and S. Camtepe, “Precision Health Data: Requirements, Challenges and Existing Techniques for Data Security and Privacy,” Computers in Biology and Medicine 129 (2021): 104130.
S. Abdullah, J. Arshad, M. M. Khan, M. Alazab, and K. Salah, “PRISED Tangle: A Privacy‐Aware Framework for Smart Healthcare Data Sharing Using IOTA Tangle,” Complex & Intelligent Systems 9, no. 3 (2023): 3023–3041.
K. Y. Yigzaw, S. D. Olabarriaga, A. Michalas, et al., “Health Data Security and Privacy: Challenges and Solutions for the Future,” in Roadmap to Successful Digital Health Ecosystems (Academic Press, 2022), 335–362.
Z. El Ouazzani, H. El Bakkali, and S. Sadki, “Privacy Preserving in Digital Health: Main Issues, Technologies, And Solutions,” in Research Anthology on Privatizing and Securing Data (IGI Global, 2021), 1503–1526.
B. Kaplan, E. J. Davidson, G. Demiris, R. Schreiber, and A. E. Waldman, “Rethinking Health Data Privacy,” in Proceedings of the American Medical Informatics Association Annual Symposium (American Medical Informatics Association, 2019).
M. A. Azad, J. Arshad, S. Mahmoud, K. Salah, and M. Imran, “A Privacy‐Preserving Framework for Smart Context‐Aware Healthcare Applications,” Transactions on Emerging Telecommunications Technologies 33, no. 8 (2022): e3634.
J. W. de Kok, M. A. A. de la Hoz, Y. d. Jong, et al., “A Guide to Sharing Open Healthcare Data Under the General Data Protection Regulation,” Scientific Data 10, no. 1 (2023): 404.
T. Mahler, E. Shalom, A. Makori, Y. Elovici, and Y. Shahar, “A Cyber‐Security Risk Assessment Methodology for Medical Imaging Devices: the Radiologists' Perspective,” Journal of Digital Imaging 35, no. 3 (2022): 666–677.
M. Marwan, F. AlShahwan, F. Sifou, A. Kartit, and H. Ouahmane, “Improving the Security of Cloud‐based Medical Image Storage,” Engineering Letters 27, no. 1 (2019): 175–193.
M. U. Tariq, “Revolutionizing Health Data Management with Blockchain Technology: Enhancing Security and Efficiency in a Digital Era,” in Emerging Technologies for Health Literacy and Medical Practice (IGI Global, 2024), 153–175.
D. Kumari, A. S. Parmar, H. S. Goyal, K. Mishra, and S. Panda, “HealthRec‐Chain: Patient‐centric Blockchain Enabled IPFS for Privacy Preserving Scalable Health Data,” Computer Networks 241 (2024): 110223.
R. Ur. Rasool, H. F. Ahmad, W. Rafique, A. Qayyum, J. Qadir, and Z. Anwar, “Quantum Computing for Healthcare: A Review,” Future Internet 15, no. 3 (2023): 94.
K. Vayadande, V. Singh, K. Sultanpure, et al., “Empowering Data Sovereignty: Decentralized Image Sharing Through Blockchain and Interplanetary File System,” International Journal of Intelligent Systems and Applications in Engineering 12, no. 14s (2024): 223–235.
Y. S. S. Sashank, A. Agrawal, R. Bhatia, A. Bhatia, and K. Tiwari, “D‐insta: A Decentralized Image Sharing Platform,” in International Conference on Advanced Information Networking and Applications (Springer, 2023), 206–217.
B. Bhushan, A. Khamparia, K. M. Sagayam, S. K. Sharma, M. A. Ahad, and N. C. Debnath, “Blockchain for Smart Cities: A Review of Architectures, Integration Trends and Future Research Directions,” Sustainable Cities and Society 61 (2020): 102360.
R. Sharad Mangrulkar and P. Vijay Chavan, “Essentials of Blockchain Programming,” in Blockchain Essentials: Core Concepts and Implementations (Springer, 2024), 47–81.
A. Summers, Understanding Blockchain and Cryptocurrencies: A Primer for Implementing and Developing Blockchain Projects (CRC Press, 2022).
M. Attaran, “Blockchain Technology in Healthcare: Challenges and Opportunities,” International Journal of Healthcare Management 15, no. 1 (2022): 70–83.
B. Zhang, B. Rahmatullah, S. L. Wang, A. Zaidan, B. Zaidan, and P. Liu, “A Review of Research on Medical Image Confidentiality Related Technology Coherent Taxonomy, Motivations, Open Challenges and Recommendations,” Multimedia Tools and Applications 82, no. 14 (2020): 21867–21906.
G. Varoquaux and V. Cheplygina, “Machine Learning for Medical Imaging: Methodological Failures and Recommendations for the Future,” NPJ Digital Medicine 5, no. 1 (2022): 48.
P. Kong, A. Li, D. Guo, L. Zhou, and C. Qin, “Joint Lossless Compression and Encryption for Medical Images,” IEEE Transactions on Circuits and Systems for Video Technology (IEEE, 2023).
S. Rathod, M. D. Salunke, M. Yashwante, M. Bhende, S. R. Rangari, and V. D. Rewaskar, “Ensuring Optimized Storage With Data Confidentiality and Privacy‐Preserving for Secure Data Sharing Model Over Cloud,” International Journal of Intelligent Systems and Applications in Engineering 11, no. 3 (2023): 35–44.
J. W. Heo, G. S. Ramachandran, A. Dorri, and R. Jurdak, “Blockchain Data Storage Optimisations: A Comprehensive Survey,” ACM Computing Surveys 56, no. 7 (2024): 179.
M. Sultana, A. Hossain, F. Laila, K. A. Taher, and M. N. Islam, “Towards Developing a Secure Medical Image Sharing System Based On Zero Trust Principles and Blockchain Technology,” BMC Medical Informatics and Decision Making 20, no. 1 (2020): 1–10.
K. Ding, S. Chen, J. Yu, Y. Liu, and J. Zhu, “A New Subject‐Sensitive Hashing Algorithm Based on MultiRes‐RCF for Blockchains of HRRS Images,” Algorithms 15, no. 6 (2022): 213.
R. Haque, H. Sarwar, S. R. Kabir, et al., “Blockchain‐based Information Security of Electronic Medical Records (EMR) in a Healthcare Communication System,” in Intelligent Computing and Innovation on Data Science: Proceedings of ICTIDS 2019 (Springer, 2021), 641–650.
J. Sun, X. Yao, S. Wang, and Y. Wu, “Blockchain‐based Secure Storage and Access Scheme for Electronic Medical Records in IPFS,” IEEE Access 8 (2020): 59389–59401.
N. Vu, A. Ghadge, and M. Bourlakis, “Blockchain Adoption in Food Supply Chains: A Review and Implementation Framework,” Production Planning & Control 34, no. 6 (2023): 506–523.
R. Kumar and R. Tripathi, “Secure Healthcare Framework Using Blockchain and Public Key Cryptography,” in Blockchain Cybersecurity, Trust and Privacy (Springer, 2020), 185–202.
B. Seok, J. Park, and J. H. Park, “A Lightweight Hash‐Based Blockchain Architecture for Industrial IoT,” Applied Sciences 9, no. 18 (2019): 3740.
P. Paul, P. Aithal, R. Saavedra, and S. Ghosh, “Blockchain Technology and Its Types: A Short Review,” International Journal of Applied Science and Engineering (IJASE) 9, no. 2 (2021): 189–200.
E. Strehle, “Public Versus Private Blockchains,” BRL Working Paper (Blockchain Research Lab, 2020).
M. Raikwar, D. Gligoroski, and K. Kralevska, “SoK of Used Cryptography in Blockchain,” IEEE Access 7 (2019): 148550–148575.
V. Velde, F. A. Parvez, and J. Chaitanya, “A Blockchain Enabled System for Security, Non‐Repudiation and Integrity of Judiciary Proceedings,” in 2022 First International Conference on Electrical, Electronics, Information and Communication Technologies (ICEEICT) (IEEE, 2022), 1–5.
R. B. Marqas, S. M. Almufti, and R. R. Ihsan, “Comparing Symmetric and Asymmetric cryptography in Message Encryption and Decryption by Using AES and RSA Algorithms,” Xi'an Jianzhu Keji Daxue Xuebao/Journal of Xi'an University of Architecture & Technology 12, no. 3 (2020): 3110–3116.
K. Koptyra and M. R. Ogiela, “Imagechain—Application of Blockchain Technology for Images,” Sensors 21, no. 1 (2020): 82.
K. Ding, S. Chen, Y. Wang, Y. Liu, Y. Zeng, and J. Tian, “AAU‐Net: Attention‐based Asymmetric U‐Net for Subject‐Sensitive Hashing of Remote Sensing Images,” Remote Sensing 13, no. 24 (2021): 5109.
W. Wu, F. Chen, P. Yuan, et al., “Privacy‐Preserving Pathological Data Sharing Among Multiple Remote Parties,” Blockchain: Research and Applications 5, no. 3 (2024): 100204.
S. A. H. Mohsan, A. Razzaq, S. A. K. Ghayyur, H. K. Alkahtani, Al‐N. Kahtani, and S. M. Mostafa, “Decentralized Patient‐Centric Report and Medical Image Management System Based on Blockchain Technology and the Inter‐Planetary File System,” International Journal of Environmental Research and Public Health 19, no. 22 (2022): 14641.
C. W. Tsai, Y. P. Chen, T. C. Tang, and Y. C. Luo, “An Efficient Parallel Machine Learning‐Based Blockchain Framework,” ICT Express 7, no. 3 (2021): 300–307.
Q. Y. Zhang and G. R. Wu, “Digital Image Copyright Protection Method Based on Blockchain and Perceptual Hashing,” International Journal of Network Security 25, no. 1 (2023): 10–24.
L. Zhang, W. Chen, W. Wang, et al., “Cbgru: A Detection Method of Smart Contract Vulnerability Based on a Hybrid Model,” Sensors 22, no. 9 (2022): 3577.
J. Kleinberg and E. Tardos, Algorithm Design (Pearson Education India, 2006).
A. Alnuaimi, D. Hawashin, R. Jayaraman, K. Salah, and M. Omar, “Trustworthy Healthcare Professional Credential Verification Using Blockchain Technology,” IEEE Access 11 (2023): 109669–109688, https://doi.org/10.1109/ACCESS.2023.3322359.
F. Yu, J. Peng, X. Li, C. Li, and B. Qu, “A Copyright‐Preserving and Fair Image Trading Scheme Based on Blockchain,” Tsinghua Science and Technology 28, no. 5 (2023): 849–861, https://doi.org/10.26599/TST.2022.9010066.
P. Oktivasari, M. Agustin, R. E. M. Akbar, A. Kurniawan, A. R. Zain, and F. A. Murad, “Analysis of ECG Image File Encryption Using ECDH and AES‐GCM Algorithm,” in 2022 7th International Workshop on Big Data and Information Security (IWBIS) (IEEE, 2022), 75–80.
R. Kumar, R. Tripathi, N. Marchang, G. Srivastava, T. R. Gadekallu, and N. N. Xiong, “A Secured Distributed Detection System Based on IPFS and Blockchain for Industrial Image and Video Data Security,” Journal of Parallel and Distributed Computing 152 (2021): 128–143.
N. Kulathunga, N. R. Ranasinghe, D. Vrinceanu, Z. Kinsman, L. Huang, and Y. Wang, “Effects of The Nonlinearity in Activation Functions on the Performance of Deep Learning Models,” arXiv preprint arXiv:2010.07359 (2020).
N. Noreen, S. Palaniappan, A. Qayyum, I. Ahmad, M. Imran, and M. Shoaib, “A Deep Learning Model Based on Concatenation Approach for the Diagnosis of Brain Tumor,” IEEE Access 8 (2020): 55135–55144.
W. Hao, J. Zeng, X. Dai, et al., “Towards a Trust‐Enhanced Blockchain P2P Topology for Enabling Fast and Reliable Broadcast,” IEEE Transactions on Network and Service Management 17, no. 2 (2020): 904–917.
Y. Charts, “Bitcoin Blockchain Size,” accessed 17 January 2025, https://ycharts.com/indicators/bitcoin_blockchain_size.
J. Feng, X. Zhao, K. Chen, F. Zhao, and G. Zhang, “Towards Random‐Honest Miners Selection and Multi‐Blocks Creation: Proof‐of‐Negotiation Consensus Mechanism in Blockchain Networks,” Future Generation Computer Systems 105 (2020): 248–258.
W. Ren, J. Hu, T. Zhu, Y. Ren, and K. K. R. Choo, “A Flexible Method to Defend Against Computationally Resourceful Miners in Blockchain Proof of Work,” Information Sciences 507 (2020): 161–171.
L. Vishwakarma and D. Das, “BlockTree: A Nonlinear Structured, Scalable and Distributed Ledger Scheme for Processing Digital Transactions,” Cluster Computing 24, no. 4 (2021): 3751–3765.
L. Marchesi, M. Marchesi, R. Tonelli, and M. I. Lunesu, “A Blockchain Architecture for Industrial Applications,” Blockchain: Research and Applications 3, no. 4 (2022): 100088.
T. Osterland, G. Lemme, and T. Rose, “Discrepancy Detection in Merkle Tree‐Based Hash Aggregation,” in 2021 IEEE International Conference on Blockchain and Cryptocurrency (ICBC) (IEEE, 2021), 1–9.
I. A. Urbaniak, “Using Compressed JPEG and JPEG2000 Medical Images in Deep Learning: A Review,” Applied Sciences 14, no. 22 (2024): 10524.
S. Benaissi, N. Chikouche, and R. Hamza, “A Novel Image Encryption Algorithm Based on Hybrid Chaotic Maps Using A Key Image,” Optik 272 (2023): 170316.
Y. Chen, R. Xia, K. Zou, and K. Yang, “FFTI: Image Inpainting Algorithm Via Features Fusion and Two‐Steps Inpainting,” Journal of Visual Communication and Image Representation 91 (2023): 103776.
M. Hu, B. Sun, X. Kang, and S. Li, “Multiscale Structural Feature Transform for Multi‐Modal Image Matching,” Information Fusion 95 (2023): 341–354.
X. Huang, Y. Dong, G. Ye, W. S. Yap, and B. M. Goi, “Visually Meaningful Image Encryption Algorithm based On Digital Signature,” Digital Communications and Networks 9, no. 1 (2023): 159–165.
M. Baak, R. Koopman, H. Snoek, and S. Klous, “A New Correlation Coefficient Between Categorical, Ordinal and Interval Variables With Pearson Characteristics,” Computational Statistics & Data Analysis 152 (2020): 107043.
S. Wang, C. Wang, and C. Xu, “An Image Encryption Algorithm Based On a Hidden Attractor Chaos System and the Knuth–Durstenfeld Algorithm,” Optics and Lasers in Engineering 128 (2020): 105995.
A. Pourjabbar Kari, A. Habibizad Navin, A. M. Bidgoli, and M. Mirnia, “A New Image Encryption Scheme Based On Hybrid Chaotic Maps,” Multimedia Tools and Applications 80 (2021): 2753–2772.
B. S. McConnell, “Entropy: Measuring Order and Randomness,” in The Alien Communication Handbook (Springer, 2021), 127–132.
B. Parameshachari, H. Panduranga, S. L. Ullo, et al., “Analysis and Computation of Encryption Technique to Enhance Security of Medical Images,” in IOP Conference Series: Materials Science and Engineering (IOP Publishing, 2020), 012028.
Y. Zhang, “Statistical Test Criteria for Sensitivity Indexes of Image Cryptosystems,” Information Sciences 550 (2021): 313–328.
J. E. K. N. P. C. Dung, N. B. Ha, and Q. Nguyen, “VinBigData Chest X‐ray Abnormalities Detection” (2020).
Y. Wu, H. Gunraj, C.‐e. A. Tai, and A. Wong, “COVIDx CXR‐4: An Expanded Multi‐Institutional Open‐Source Benchmark Dataset for Chest X‐ray Image‐Based Computer‐Aided COVID‐19 Diagnostics,” arXiv preprint arXiv:2311.17677 (2023).
A. Pawłowska, A. Ćwierz‐Pieńkowska, A. Domalik, et al., “Curated Benchmark Dataset for Ultrasound Based Breast Lesion Analysis,” Scientific Data 11, no. 1 (2024): 148.
© 2025. This work is published under http://creativecommons.org/licenses/by/4.0/ (the "License"). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Abstract
This research presents a blockchain‐based framework for secure and efficient medical image sharing, prioritizing data integrity and privacy. The framework involves two key phases: image compression with feature extraction and image encryption with storage on the InterPlanetary File System (IPFS). Medical images are compressed using the JPEG algorithm to reduce file size while maintaining diagnostic value. A deep neural network‐based subject sensitive hashing (SSH) algorithm ensures feature map integrity by extracting consistent features from both original and compressed images. Encrypted images, along with SSH‐generated hashes, are securely stored in the IPFS server. The encryption key and hash sequence are used for secure image retrieval, with smart contracts validating access requests based on the hash sequence. This multi‐stage feature extraction approach demonstrates robust image integrity, security, and privacy, as verified by experimental results. Achieving an average correctness rate of 98% across multiple datasets, the framework significantly enhances healthcare data management by addressing the challenges of secure, scalable, and private medical image sharing. This research contributes to the development of more efficient, reliable, and privacy‐conscious solutions for medical image handling in healthcare systems.
Affiliations
1 Department of Computer Science and Engineering, United International University, Dhaka, Bangladesh
2 Department of Computer Science and Engineering, BRAC University, Dhaka, Bangladesh
3 Electrical and Computer Engineering Department, North South University, Dhaka, Bangladesh
4 Department of Software Engineering, LUT University, Lappeenranta, Finland