
Abstract

Underwater marine and freshwater environments are vast and largely unexplored, but our ability to study them is limited by the inflexibility and inconvenience of existing monitoring systems. To overcome this problem, in this work, we present a proof-of-concept deployment of a real-time Internet of Underwater Things (IoUT) using blue light-emitting-diode-based visible light communication (VLC). Pulse-amplitude modulation with four levels (PAM-4) is employed. To relax the focus-point requirement and increase the received power, four avalanche photodiodes (APDs) are adopted. Moreover, to reduce the error rate, a constraint-length-7 convolutional code, which is simple to implement, is used. Encoding and decoding are implemented on a field-programmable gate array (FPGA). The results are verified by experimental demonstration at a baud rate of 9600 over a 2 m water tank. System performance improves as the number of APDs is increased; we investigated the effects of up to four APDs, and, notably, bit-error-free data transmission can be achieved. This method makes underwater monitoring convenient and dependable and enables low-cost real-time monitoring, with data shown on the Grafana dashboard tool.


1. Introduction

Underwater exploration and real-time monitoring are essential for understanding and managing both freshwater and marine environments. Despite technological progress, more than 90% of the ocean remains unexplored [1]. Direct measurements by humans or robotic platforms are often impractical or hazardous, particularly in chemically contaminated areas. Conventional radio frequency (RF) communication, which performs well in terrestrial Internet of Things (IoT) applications [2,3], faces severe attenuation in underwater environments [4,5], limiting transmission range and reliability. Although laser-based underwater optical wireless communication (UWOC) can achieve high data rates and long transmission distances, it requires precise beam alignment and a small receiver aperture, resulting in a narrow communication beam that is difficult to maintain in dynamic underwater conditions.

In contrast, light-emitting diode (LED)-based visible light communication (VLC) systems provide a wider optical beam [6,7], relaxing alignment requirements and simplifying receiver design. However, the relatively low optical power of LEDs limits the achievable communication distance. This trade-off between simplicity and range remains a key challenge for realizing practical Internet of Underwater Things (IoUT) systems [8].

1.1. Motivation

The IoUT concept extends IoT connectivity into underwater environments by linking underwater sensor nodes, autonomous underwater vehicles (AUVs), surface vessels, and terrestrial communication platforms such as 5G and optical networks. Reliable underwater communication can enable autonomous environmental monitoring, pollution detection, and marine ecosystem management using machine learning (ML) and artificial intelligence (AI) while minimizing human exposure to hazardous environments. Existing wired approaches for underwater monitoring are inflexible and prone to signal degradation due to cable noise, especially when multiple sensor nodes are deployed across a wide area. A wireless optical communication approach offers higher flexibility, scalability, and lower maintenance requirements.

For underwater VLC systems, intensity modulation with direct detection (IM/DD) is typically adopted due to its simplicity. Among IM/DD schemes, pulse-amplitude modulation with four levels (PAM-4) offers a higher data rate and better signal-to-noise ratio (SNR) compared to on–off keying (OOK) while maintaining lower complexity than orthogonal frequency division multiplexing (OFDM) [7,8,9,10,11]. These characteristics make PAM-4 a suitable candidate for practical IoUT systems.

1.2. Contributions

In this paper, we present a prototype and experimental demonstration of a real-time underwater monitoring system based on LED-based VLC within the IoUT framework. The key contributions are summarized as follows:

-. A real-time underwater monitoring system is designed and implemented using IM/DD PAM-4 modulation with constraint-7 convolutional coding. The optical transmitter employs a 405 nm, 10 W blue LED, while the receiver utilizes four avalanche photodiodes (APDs) to improve optical sensitivity and relax beam alignment requirements.

-. The convolutional encoder and decoder are implemented on an AMD-Xilinx field-programmable gate array (FPGA), providing a low-complexity and hardware-efficient solution. Advanced codes such as low-density parity-check (LDPC) [12] and polar codes (PC) [13] could be employed, but they require complex computations; Bose–Chaudhuri–Hocquenghem (BCH) codes, recently proposed for underwater optical wireless communication [14], have low complexity but typically provide lower performance. Additionally, frame synchronization is developed to ensure robust data recovery under varying optical conditions.

-. A Grafana dashboard [15] is employed to display sensor data and system performance metrics in real time, demonstrating the practical applicability of the proposed system for underwater monitoring.

-. Experimental results obtained using a 2 m water tank confirm error-free data transmission at an information bit rate of 9.6 kbps (9600 baud PAM-4 with rate-1/2 coding), verifying the feasibility of short-range underwater communication using LED-based VLC with multiple APDs.

In summary, the proposed system provides a simple, reliable, and cost-effective solution for short-range underwater wireless communication, paving the way for scalable IoUT deployments in real-world environments.

2. IoUT System Model

In this section, the proposed IoUT using a VLC system is presented. The details are shown in Figure 1. The transmitter (Tx) is located underwater and is equipped with sensors that measure the underwater environment. VLC with PAM-4 is used as the communication system. To reduce the bit error rate (BER), a constraint-length-7 convolutional code is employed; decoding takes place at the receiver end, which is located above the water's surface. APDs are used to convert the light signal to an electrical signal; to increase the received power and relax the focus-point requirement, four APDs are proposed. The analog electrical measurement signal is sampled and converted to the digital domain by an AD9220 (12-bit, 10 Msps) from Analog Devices. Next, frame synchronization is performed, as detailed in Section 3.1. Then, convolutional decoding is performed on an FPGA, as detailed in Section 3.2. Finally, the decoded data is uploaded to the server using an ESP8266, which simplifies online data management and display on the dashboard. The main parameters used in the experiment are shown in Table 1.

The communication system is detailed below. The received signal with four APDs is denoted by y(t), which is expressed as

(1) $y(t)=\sum_{l=0}^{3}\rho_{l}\,\eta_{l}\,x(t)\ast h_{l}(t)+z_{l}(t),$

where $\ast$ stands for linear convolution, $\rho$ is the electrical-to-optical conversion coefficient, and $\eta$ is the photodetector sensitivity. $x(t)$ is the transmitted coded PAM-4 signal, $h(t)$ is the underwater channel, and $z(t)$ is a noise component, modeled as additive white Gaussian noise (AWGN) with zero mean and variance $\sigma_z^2=N_0/2$, where $N_0=4KTB/R$, $K$ is the Boltzmann constant, $T$ is the temperature in Kelvin, $B$ is the electrical signal bandwidth, and $R$ is the receiver resistance. For simplicity, the subscript $l$ is omitted hereafter; all signals are in the time domain. If there is no loss in the light conversion and only AWGN is considered, the theoretical BER of uncoded PAM-$M$ is given by [16]

(2) $P_e^{\mathrm{PAM}}=\frac{2(M-1)}{M\log_2 M}\,Q\!\left(\sqrt{\frac{6\log_2 M}{M^2-1}\,\gamma_s}\right),$

where $P_e^{\mathrm{PAM}}$ is the PAM-$M$ bit error rate (BER), $Q(x)=\frac{1}{\sqrt{2\pi}}\int_x^{\infty}e^{-z^2/2}\,dz$, and $\gamma_s=E_s/N_0$ is the signal-to-noise ratio (SNR), where $E_s$ is the energy per symbol. From (1), after decoding, the BER performance can be obtained by [17]

(3) $P_e=\frac{1}{n}\sum_{j=t+1}^{n} j\binom{n}{j}\left(P_e^{\mathrm{PAM}}\right)^{j}\left(1-P_e^{\mathrm{PAM}}\right)^{n-j},$

where $P_e$ is the bit error probability after decoding, $n$ is the total number of transmitted bits in each block, $t$ is the bit-error-correcting capability of the convolutional code, and $j$ is the number of bit errors occurring in each block. The theoretical error probability from Equation (3) is plotted in Figure 2, where the block length $n$ is set to 20. As can be seen, as the error-correction capability $t$ increases, the BER drops more steeply with increasing SNR; greater bit correction leads to a lower BER. This confirms that coding can significantly improve the system's performance.
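To make Equations (2) and (3) concrete, the short Python sketch below evaluates both expressions numerically, in the spirit of Figure 2. The SNR grid and the two example values of $t$ are illustrative choices, not parameters taken from the experiment.

```python
# Sketch of the theoretical curves behind Figure 2: uncoded PAM BER
# (Eq. (2)) and post-decoding BER (Eq. (3)) for a block of n = 20 bits.
from math import comb, erfc, log2, sqrt

def q(x):
    # Gaussian tail function Q(x)
    return 0.5 * erfc(x / sqrt(2.0))

def pam_ber(M, snr_lin):
    # Eq. (2): uncoded PAM-M bit error rate
    return 2 * (M - 1) / (M * log2(M)) * q(sqrt(6 * log2(M) / (M**2 - 1) * snr_lin))

def coded_ber(p, n, t):
    # Eq. (3): block-decoding BER given raw error probability p
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

n = 20
for snr_db in range(0, 21, 4):                      # illustrative SNR grid
    p = pam_ber(4, 10 ** (snr_db / 10))
    print(snr_db, p, coded_ber(p, n, 1), coded_ber(p, n, 3))
```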

3. FPGA Implementation

In this section, the FPGA implementation of the IoUT using VLC is discussed and prototyped. The most important processes are frame synchronization and convolutional coding. At the transmitter, a Cmod S6 FPGA board (Spartan-6 XC6SLX4-2CPG196) from AMD-Xilinx (Santa Clara, CA, USA) is used for the convolutional encoder. The receiver performs frame synchronization, Viterbi decoding, and PAM-4 demapping on a Nexys 4 DDR board with an Artix-7 XC7A100T chip, also from AMD-Xilinx. A larger board is used at the receiver because its processing is more complex.

3.1. Frame Synchronization

Synchronization is very important, as it is used to enable and reset the decoding process. Five full-scale samples are appended to the front of the encoded PAM-4 frame, as shown in Figure 3A. $din(n)$ is the received electrical PAM-4 input data, and $th$ is the threshold at which synchronization is detected. The synchronization algorithm sums three consecutive samples of $din(n)$; if the sum exceeds $th$, the frame is detected. The synchronization algorithm is thus expressed as follows.

(4) $EN(n)=\left[\sum_{l=0}^{2} din(n-l)\right]>th,$

where $EN(n)$ stands for the enable signal. A random-access memory (RAM) is used for storing the input data; data are stored only when $EN(n)=1$. Then, the RAM is read and fed to the Viterbi decoder, as detailed in the next section.
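A behavioral sketch of the rule in Equation (4) is given below; the frame contents and the threshold value are illustrative assumptions, while the FPGA implements the same sliding sum in hardware.

```python
# Minimal sketch of the frame-synchronization rule in Eq. (4): the
# enable signal fires when the sum of three consecutive samples of
# din(n) exceeds th.
def frame_sync(din, th):
    """Return indices n where EN(n) = 1, i.e. din[n]+din[n-1]+din[n-2] > th."""
    hits = []
    for n in range(2, len(din)):
        if din[n] + din[n - 1] + din[n - 2] > th:
            hits.append(n)
    return hits

# Five full-scale samples (here level 3, the top PAM-4 level) prepended
# to an example payload; threshold chosen so only the preamble triggers.
frame = [3, 3, 3, 3, 3, 0, 1, 2, 0, 3, 1]
print(frame_sync(frame, th=7))   # -> [2, 3, 4], the preamble region
```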

3.2. Convolution Code

The convolutional code is simple to implement, especially in an FPGA, which is very flexible [17,18], and its resource consumption is light. Research on convolutional codes is ongoing to improve their performance, and new applications are reported in [18,19]; the underlying theory can be found in [20]. In this section, only the implementation algorithm is presented, following the algorithms in [21,22]. The error-correction capability depends on the constraint length, denoted by $k$: the larger the $k$, the greater the bit-error correction, but the higher the complexity. As a compromise between error-correction capability and complexity, a constraint length of 7 ($k=7$) with a coding rate of 0.5 is considered.

The details of the convolutional encoder are shown in Figure 4A. The encoder uses six shift registers (SRs), each denoted by $Z^{-1}$, and the two output bits are encoded by

(5) $c_0(n)=din(n)\oplus din(n-2)\oplus din(n-3)\oplus din(n-5)\oplus din(n-6),$

and

(6) $c_1(n)=din(n)\oplus din(n-1)\oplus din(n-2)\oplus din(n-3)\oplus din(n-6),$

where $din(n)$ is the input data and $\oplus$ denotes the XOR operation.
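The following is a minimal behavioral Python model of the encoder in Equations (5) and (6), with six shift registers and rate 1/2; it is a software sketch for checking the tap positions, not the FPGA implementation itself.

```python
# Behavioral model of the rate-1/2, K = 7 convolutional encoder of
# Eqs. (5)-(6). sr[i] holds din(n-1-i), i.e. six delay elements.
def conv_encode(bits):
    sr = [0] * 6                                   # six shift registers (Z^-1)
    out = []
    for b in bits:
        c0 = b ^ sr[1] ^ sr[2] ^ sr[4] ^ sr[5]     # Eq. (5): taps n, n-2, n-3, n-5, n-6
        c1 = b ^ sr[0] ^ sr[1] ^ sr[2] ^ sr[5]     # Eq. (6): taps n, n-1, n-2, n-3, n-6
        out.append((c0, c1))
        sr = [b] + sr[:-1]                         # shift in the new bit
    return out

print(conv_encode([1, 0, 1, 1, 0]))                # list of (c0, c1) pairs
```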

The decoder processing unit is shown in Figure 4B. The decoding procedure employs two quantities: the Hamming distance (HD) and the path metric (PM), the latter denoted by $\beta$. There are two dimensions: the state, denoted by $s$, and the look-back time, denoted by $t$. In this case, with $k=7$, $s=0,\ldots,2^{k}-1=0,\ldots,127$, and $t=0,\ldots,19$, since only 20 messages are sent in this work. The HD measures the distance between the received symbol, denoted by $y(s,t)$, and the reference, denoted by $rf(s,t)$, and is obtained as $hd(s,t)=\sqrt{\left(y(s,t)-rf(s,t)\right)^{2}}$. For actual implementation and simplicity, however, the square and square root can be omitted, and only the absolute value (ABS) is needed. Therefore, $hd(s,t)$ can be modified and implemented as

(7) $hd_m(s,t)=\mathrm{abs}\!\left(y(s,t)-rf(s,t)\right),$

where $hd_m(s,t)$ is the modified HD and $\mathrm{abs}(\cdot)$ is the absolute-value operator, which is very simple to implement; only a two's-complement operation is needed. Next, $hd_m(s,t)$ is accumulated for all PMs and is calculated by

(8) $hd_{acm}(s,t)=hd_{acm}(s,t-1)+hd_m(s,t),$

where $hd_{acm}(s,t)$ is the accumulated modified HD on each path; there are 128 paths in total. In each state, there are two path candidates from $hd_{acm}(s,t)$, and the selected one is called the survivor path. Therefore, the path metric is selected by

(9) $\beta(\tilde{s},t)=\min\left\{hd_{acm}(s,t),\,hd_{acm}(s+1,t)\right\},$

where, in this case, $\tilde{s}=0,\ldots,2^{k}/2-1=0,\ldots,63$; please see Figure 4B for more details. The dashed lines are decoded as bit '1', while the solid lines are decoded as bit '0'. At each time step $t$, $\beta(\tilde{s},t)$ and the selected path number are saved into RAM for use by the look-back module in the following process; the RAM depth is 20 × 64 states, and only the minimum accumulated Hamming distance of each PM and the selected path are saved. The final unit of the Viterbi decoder is the look-back process. First, the minimum modified Hamming distance at the last time step, $t=19$, is found in the RAM bank; its location indicates the starting point of the look-back state. Each stage holds the lower Hamming distance of its two incoming candidates and has its own specific path, so saving the next look-back step with the lowest Hamming distance into the RAM is very simple, where the address indicates the look-back time $(t-1)$. The output at a given address is mapped to '0' when the address is less than 31; otherwise, it is decoded as '1'.
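For reference, the sketch below is a compact behavioral model of the decoding steps in Equations (7)-(9) with a simple traceback. For clarity it stores the full survivor history in Python lists rather than the 20 × 64 RAM organization described above, and it reuses conv_encode from the encoder sketch in Section 3.2.

```python
# Behavioral Viterbi decoder for the K = 7, rate-1/2 code of Eqs. (5)-(6),
# using the modified Hamming distance |y - rf| of Eq. (7) as branch metric.
import math

K = 7                                        # constraint length
NSTATES = 1 << (K - 1)                       # 64 trellis states

def branch_out(s, b):
    """Encoder outputs (c0, c1) for input bit b in state s, where bit i
    of s holds din(n-1-i), matching Eqs. (5)-(6)."""
    c0 = b ^ (s >> 1 & 1) ^ (s >> 2 & 1) ^ (s >> 4 & 1) ^ (s >> 5 & 1)
    c1 = b ^ (s & 1) ^ (s >> 1 & 1) ^ (s >> 2 & 1) ^ (s >> 5 & 1)
    return c0, c1

def viterbi_decode(received):
    """received: list of (y0, y1) symbols; returns the decoded bit list."""
    pm = [0.0] + [math.inf] * (NSTATES - 1)  # encoder starts in the all-zero state
    history = []                             # survivor (prev_state, bit) per step
    for y0, y1 in received:
        new_pm = [math.inf] * NSTATES
        step = [None] * NSTATES
        for s in range(NSTATES):
            if pm[s] == math.inf:
                continue
            for b in (0, 1):
                c0, c1 = branch_out(s, b)
                m = pm[s] + abs(y0 - c0) + abs(y1 - c1)   # Eq. (7) branch metric
                ns = ((s << 1) & (NSTATES - 1)) | b       # next state
                if m < new_pm[ns]:                        # Eqs. (8)-(9): keep survivor
                    new_pm[ns] = m
                    step[ns] = (s, b)
        pm = new_pm
        history.append(step)
    s = pm.index(min(pm))                    # look back from the best final state
    bits = []
    for step in reversed(history):
        s, b = step[s]
        bits.append(b)
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
assert viterbi_decode(conv_encode(msg)) == msg   # error-free on a clean channel
```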

3.3. PAM-4 Generator

A PAM-4 generator is a circuit that converts digital signals into four-level pulse-amplitude modulation (PAM-4) signals by combining two digital signals, $c_0$ and $c_1$, using a summing circuit. The summing circuit [22] is typically implemented using an operational amplifier (Op-Amp); in this work, the LM741 is used. The power gain of the PAM-4 signal is manually adjusted together with the 10 W blue LED bias voltage to achieve a symmetrical four-level PAM-4: adjusting the current through the LED changes the voltage drop across it, and this voltage drop is used to bias the Op-Amp in the summing circuit. To generate PAM-4, the FPGA output voltage must first be halved using a voltage divider. Four resistors, connected in series between the FPGA output and the Op-Amp in the summing circuit, are used for this purpose; their values, $R_1=R_2=R_3=R_4=100\ \Omega$, are chosen to be equal so that the divider scales the FPGA output voltage by a factor of 2. The inputs to the summing circuit are applied through resistors $R_5=680\ \Omega$, $R_6=1\ \mathrm{k}\Omega$, and $R_f=600\ \Omega$. The details of the received PAM-4 signal are discussed in Section 5.
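As a quick check of the summing stage, the sketch below evaluates the ideal inverting-summer output, $V_{out}=-R_f\left(V_{c_0}/R_5+V_{c_1}/R_6\right)$, for the four input combinations. The 1.65 V logic level (a 3.3 V FPGA output halved by the divider) is an illustrative assumption.

```python
# Ideal output levels of the summing circuit in Figure 6 with the
# resistor values given above. The logic-high voltage is an assumption.
R5, R6, Rf = 680.0, 1000.0, 600.0
V = 1.65                                     # assumed logic-high level after divider
for c0 in (0, 1):
    for c1 in (0, 1):
        vout = -Rf * (c0 * V / R5 + c1 * V / R6)
        print(f"c0={c0} c1={c1} -> {vout:+.3f} V")
```

The four resulting levels are not equally spaced with these ideal values, which is consistent with the need for the manual gain and bias adjustment described above to obtain a symmetrical PAM-4 eye.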

4. Experimental Setup

The experimental setup is shown in Figure 5. An Arduino read the sensor values from a pH sensor and a temperature sensor. The measured values were then fed to the transmitter FPGA (Cmod S6 with XC6SLX4-2CPG196 from AMD-Xilinx) to encode the data according to Equations (5) and (6). The $c_0(n)$ and $c_1(n)$ outputs of the encoder were combined to form a PAM-4 signal with a baud rate of 9600. A nonlinear PAM-4 signal was avoided by tuning the gain of the LM741 op-amp, as shown in Figure 6. The electrical PAM-4 signal drove a 10 W blue LED to convert it to a light signal. A single transistor (NSS1C201LT1G) was sufficient to drive the LED, since the signal speed is low; its maximum collector current is 2 A with a maximum $V_{ce}$ of 100 Vdc. The light was transmitted through a freshwater tank of 50 × 200 × 70 cm, whose attenuation is about 14 dB/m at a room temperature of 30 °C; please refer to the experimental results.

At the receiver, a convex lens was used to focus the light. The received light was converted to an electrical signal by avalanche photodetectors (APDs), model S1223-1 from Hamamatsu. Four APDs were used to relax the focus point and increase the received light power. The electrical signal was amplified by an OPA-2380AIDGKT from TI with a voltage gain of 35 dB, using $R_f=10\ \mathrm{k}\Omega$. The amplified signal was fed to an ADC (AD9220 from Analog Devices) for conversion to a digital signal at 10 Msps with 12-bit resolution. The digital signal was fed to the receiver FPGA, a Nexys 4 DDR board with an Artix-7 XC7A100T chip from AMD-Xilinx, which performed frame synchronization and convolutional decoding. The received signal amplitude was controlled in the digital domain inside the FPGA: the received signal was multiplied by a control gain set manually by observing the received amplitude.

The decoded data was fed to an ESP8266 Wi-Fi module to upload the measured values to a MySQL database; the data was then displayed on the Grafana dashboard tool. The monitoring process ran in real time, and no offline processing was required.
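A hedged sketch of the database side of this path is shown below. In the actual system the upload is performed by the ESP8266; the host, credentials, table, and column names here are hypothetical placeholders used only to illustrate a schema that Grafana can query.

```python
# Hypothetical sketch of writing a decoded reading into a MySQL table
# for Grafana to visualize. All connection details and names are
# placeholder assumptions, not values from the paper.
import pymysql

def upload_reading(ph, temperature):
    conn = pymysql.connect(host="localhost", user="iout",
                           password="secret", database="monitoring")
    try:
        with conn.cursor() as cur:
            cur.execute(
                "INSERT INTO readings (ph, temperature, ts) VALUES (%s, %s, NOW())",
                (ph, temperature))
        conn.commit()
    finally:
        conn.close()

upload_reading(7.2, 30.1)   # example decoded sensor values
```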

5. Experimental Results

In this section, the experimental results are discussed in detail. The bit error rate (BER) is the main performance metric. To measure the BER, 20 sensor messages are transmitted in 3 s, and a bit count of 1,000,000 is used for each test. The received bit data after FPGA decoding is fed to an Arduino, saved to a computer, and the BER is calculated offline. The LED light beam is adjusted to be broad enough to cover the four APDs with the lens, in order to achieve flexible focusing. The tank used in this experiment is 2 m long.
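The offline BER calculation itself is a simple bit comparison, sketched below; the reference pattern and the injected error are purely illustrative.

```python
# Minimal sketch of the offline BER calculation: decoded bits saved via
# the Arduino are compared against the known transmitted pattern.
def ber(tx_bits, rx_bits):
    errors = sum(a != b for a, b in zip(tx_bits, rx_bits))
    return errors / len(tx_bits)

tx = [1, 0, 1, 1, 0, 0, 1, 0] * 125_000     # 1,000,000 reference bits per test
rx = list(tx); rx[7] = 1 - rx[7]            # inject one error for illustration
print(f"BER = {ber(tx, rx):.2e}")           # -> 1.00e-06
```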

First, the received waveforms after amplification are observed, and the number of APDs is investigated. The results are shown in Figure 7. The amplitudes at a distance of 1 m for one, two, and four APDs are shown in Figure 7A–C: 0.74 V (=1.36 dBm) for one APD, 1.72 V (=17.71 dBm) for two APDs, and 2.5 V (=21.02 dBm) for four APDs. At 1.8 m, the amplitudes for one, two, and four APDs are depicted in Figure 7D–F, respectively; here, the voltage is reduced to only 0.13 V (=−13.74 dBm) for one APD, 0.38 V (=−4.42 dBm) for two APDs, and 0.49 V (=−2.21 dBm) for four APDs. As can be seen, the waveform is very clear at 1 m for all numbers of APDs, so BER-free transmission can be achieved, and the amplitude increases with the number of APDs. At 2 m, however, some noise is induced.

Next, the BER versus distance for various numbers of APDs is investigated, as shown in Figure 8. The BER for two APDs is lower than that for one APD because the received light power is higher, as evidenced in the figure. At 1 m, there are no errors for any number of APDs, and with four APDs the transmission is BER-free at all tested distances. All results were recorded continuously for more than 3 h.

A dashboard is used to display the measurement data from the system; it is implemented with the Grafana dashboard tool, as shown in Figure 9. It shows the sensor values in real time, and these can be viewed from anywhere, which makes the system very flexible. The results from the dashboard confirm that the system works well and does not require continuous human monitoring. Additionally, the communication distance of a four-APD receiver could likely be extended beyond the 2 m demonstrated here, e.g., to 4 m or 6 m, which would be a significant improvement over previous systems.

6. Conclusions

In this paper, we presented an LED-based IoUT system using VLC PAM-4 signaling for underwater monitoring. The system employs a constraint-length-7 convolutional code to reduce the BER at the receiver end; both the encoder and decoder are implemented on FPGAs. The system also includes a wireless communication module. Real-time system performance was evaluated through experimental demonstrations. Four APDs were employed to relax the focus point and increase the received power, and the results show that BER-free communication can be achieved. This makes the system convenient for farmers and officers who need to monitor crops or other objects underwater. The collected data can be aggregated into big data and used to manage the system with AI and ML in the future: for example, AI can identify patterns in the data, and ML can predict future events, allowing more efficient and effective monitoring of underwater environments. These advancements mean that real-time monitoring of underwater environments is no longer limited by human availability, so data can be collected 24/7 to provide a more comprehensive view of the underwater environment. Additionally, the use of four APDs to increase the received power and relax the focus point offers the reliability and data rate needed to support real-time mobility applications in the future.

Author Contributions

Conceptualization, K.P.; software, K.P.; formal analysis, K.P. and W.W.; writing—original draft preparation, K.P. and W.W.; writing—review and editing, K.P. and W.W.; verification, K.P. and W.W.; project administration, K.P.; funding acquisition, K.P. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Acknowledgments

The authors would like to thank Tanatip Bubpawan, Bussakorn Bunsri, and Yaowarat Pittayong from Rajamangala University of Technology Isan, Khon Kaen campus, for preparing the experimental set.

Conflicts of Interest

The authors declare no conflict of interest.

Footnotes

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Figures and Table

Figure 1 System model of proposed IoUT using VLC with four APDs.


Figure 2 BER comparison of error correction capability of PAM-4 in the AWGN channel.


Figure 3 Proposed frame synchronization for underwater communication: (A) frame synchronization pattern, (B) synchronization method, and (C) synchronization algorithm.


Figure 4 Convolutional code: (A) encoder algorithm and (B) state diagram for K = 7 and r = 1/2.


Figure 5 Experimental setup for the proposed IoUT system.


Figure 6 Proposed PAM-4 generator using a summing circuit.


Figure 7 The received signal for (A) 1 m and one APD, (B) 1 m and two APDs, (C) 1 m and four APDs, (D) 2 m with one APD, (E) 2 m with two APDs, and (F) 2 m with four APDs.


Figure 8 BER versus distance for IoUT with coded VLC systems where one and two APDs are considered.


Figure 9 An example of real-time Grafana dashboard for pH and temperature sensors for underwater measurement.


Table 1 The main parameters used in the experiment.

Name               | Parameters
LED                | Wavelength: 460 nm; forward voltage: 12 V
Photodiode         | Hamamatsu S1223, BW = 20 MHz
Amplifier          | OPA-2380AIDGKT, BW = 90 MHz
Water tank         | Dimensions: 0.5 × 2 × 1 m (length: 2 m)
ADC                | AD9220, 12-bit, 10 Msps
FPGA (Tx)          | Cmod S6 (Spartan-6)
FPGA (Rx)          | Artix-7 (Nexys 4 DDR)
Visualization tool | Grafana dashboard

References

1. Available online: https://education.nationalgeographic.org/resource/ocean/ (accessed on 3 June 2024).

2. Gubbi, J.; Buyya, R.; Marusic, S.; Palaniswami, M. Internet of Things (IoT): A vision, architectural elements, and future directions. Future Gener. Comput. Syst.; 2013; 29, pp. 1645-1660. [DOI: https://dx.doi.org/10.1016/j.future.2013.01.010]

3. Asghari, P.; Rahmani, A.M.; Javadi, H.H.S. Internet of Things applications: A systematic review. Comput. Netw.; 2019; 148, pp. 241-261. [DOI: https://dx.doi.org/10.1016/j.comnet.2018.12.008]

4. Takizawa, K.; Suga, R.; Matsuda, T.; Kojima, F. Experiment on MIMO Communications in Seawater by RF Signals. Proceedings of the 2021 IEEE 93rd Vehicular Technology Conference (VTC2021-Spring); Helsinki, Finland, 25–28 April 2021; pp. 1-5. [DOI: https://dx.doi.org/10.1109/VTC2021-Spring51267.2021.9449016]

5. Kaushal, H.; Kaddoum, G. Underwater Optical Wireless Communication. IEEE Access; 2016; 4, pp. 1518-1547. [DOI: https://dx.doi.org/10.1109/ACCESS.2016.2552538]

6. Al-Kinani, A.; Wang, C.-X.; Zhou, L.; Zhang, W. Optical Wireless Communication Channel Measurements and Models. IEEE Commun. Surv. Tutor.; 2018; 20, pp. 1939-1962. [DOI: https://dx.doi.org/10.1109/COMST.2018.2838096]

7. Ali, M.F.; Jayakody, D.N.K.; Li, Y. Recent Trends in Underwater Visible Light Communication (UVLC) Systems. IEEE Access; 2022; 10, pp. 22169-22225. [DOI: https://dx.doi.org/10.1109/ACCESS.2022.3150093]

8. Jahanbakht, M.; Xiang, W.; Hanzo, L.; Azghadi, M.R. Internet of Underwater Things and Big Marine Data Analytics—A Comprehensive Survey. IEEE Commun. Surv. Tutor.; 2021; 23, pp. 904-956. [DOI: https://dx.doi.org/10.1109/COMST.2021.3053118]

9. Kong, M.; Chen, Y.; Sarwar, R.; Sun, B.; Xu, Z.; Han, J.; Chen, J.; Qin, H.; Xu, J. Underwater wireless optical communication using an arrayed transmitter/receiver and optical superimposition-based PAM-4 signal. Opt. Express; 2018; 26, pp. 3087-3097. [DOI: https://dx.doi.org/10.1364/OE.26.003087] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/29401841]

10. Nakamura, K.; Mizukoshi, I.; Hanawa, M. Optical wireless transmission of 405 nm, 1.45 Gbit/s optical IM/DD-OFDM signals through a 4.8 m underwater channel. Opt. Express; 2015; 23, pp. 1558-1566. [DOI: https://dx.doi.org/10.1364/OE.23.001558] [PubMed: https://www.ncbi.nlm.nih.gov/pubmed/25835913]

11. Xu, L.; He, J.; Zhou, Z.; Xiao, Y. Underwater optical wireless communication performance enhancement using 4D 8PAM trellis-coded modulation OFDM with DFT precoding. Appl. Opt.; 2022; 61, pp. 2483-2489. [DOI: https://dx.doi.org/10.1364/AO.453153]

12. Li, J.; Yang, B.; Ye, D.; Wang, L.; Fu, K.; Piao, J.; Wang, Y. A Real-Time, Full-Duplex System for Underwater Wireless Optical Communication: Hardware Structure and Optical Link Model. IEEE Access; 2020; 8, pp. 109372-109387. [DOI: https://dx.doi.org/10.1109/ACCESS.2020.3001213]

13. Falk, M.; Bauch, G.; Nissen, I. On Channel Codes for Short Underwater Messages. Information; 2020; 11, 58. [DOI: https://dx.doi.org/10.3390/info11020058]

14. Ramavath, P.; Udupi, A.; Krishnan, P. Experimental demonstration and analysis of underwater wireless optical communication link: Design, BCH coded receiver diversity over the turbid and turbulent seawater channels. Microw. Opt. Technol. Lett.; 2020; 62, pp. 2207-2216. [DOI: https://dx.doi.org/10.1002/mop.32311]

15. Available online: https://grafana.com/ (accessed on 15 May 2024).

16. Forney, G.D.; Ungerboeck, G. Modulation and coding for linear Gaussian channels. IEEE Trans. Inf. Theory; 1998; 44, pp. 2384-2415. [DOI: https://dx.doi.org/10.1109/18.720542]

17. Sklar, B. Digital Communications: Fundamentals and Applications, 2nd ed.; Prentice Hall: Hoboken, NJ, USA, 2001.

18. Reeve, J.S.; Amarasinghe, K. A FPGA implementation of a parallel Viterbi decoder for block cyclic and convolution codes. Proceedings of the IEEE International Conference on Communications (IEEE Cat. No.04CH37577); Paris, France, 20–24 June 2004; Volume 5, pp. 2596-2599. [DOI: https://dx.doi.org/10.1109/ICC.2004.1313001]

19. Jiang, Y. Analysis of Bit Error Rate Between BCH Code and Convolutional Code in Picture Transmission. Proceedings of the 3rd International Conference on Electronic Communication and Artificial Intelligence (IWECAI); Zhuhai, China, 14–16 January 2022; pp. 77-80. [DOI: https://dx.doi.org/10.1109/IWECAI55315.2022.00023]

20. Astharini, D.; Asvial, M.; Gunawan, D. Performance of signal detection with trellis code for downlink non-orthogonal multiple access visible light communication. Photonic Netw. Commun.; 2022; 43, pp. 185-192. [DOI: https://dx.doi.org/10.1007/s11107-021-00957-5]

21. Nassar, C. Telecommunications Demystified (Demystifying Technology Series), 1st ed.; Newnes: Boston, MA, USA, 2001.

22. AN-31 Amplifier Circuit Collection; Application Report, revised version; Texas Instruments: Dallas, TX, USA, 2020.

© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).