1. Introduction
Advances in deep learning architectures have fundamentally reshaped artificial intelligence applications across modern technological ecosystems. As a computational cornerstone of neural networks, activation functions serve as the nonlinear transformations that enable hierarchical feature extraction, a capability indispensable for solving real-world problems with intricate nonlinear patterns. Among contemporary activation functions, the Gaussian Error Linear Unit (GELU) [1] has emerged as a seminal innovation demonstrating exceptional versatility across diverse artificial intelligence domains.
Distinct from conventional rectified linear units, GELU admits a probabilistic interpretation through its integration with Gaussian statistics, modulating each input according to its magnitude relative to the standard normal distribution [2]. This mechanism inherently incorporates stochastic regularization properties during activation, balancing deterministic computation with noise-induced robustness, a dual functionality that enhances model generalization without explicit regularization terms. Empirical studies [1,2,3] have quantitatively demonstrated GELU’s superior performance in mitigating overfitting while maintaining gradient stability across deep architectures.
The practical significance of GELU manifests through its pervasive adoption in state-of-the-art models spanning multiple AI disciplines. In natural language processing, it powers transformer-based giants including BERT [4], T5 [5], and GPT-2 [6], where its non-monotonic nature proves crucial for contextual representation learning. Computer vision systems equally benefit from GELU’s smooth gradient transitions, with benchmark implementations in Vision Transformers [7] and Swin Transformers [8] demonstrating consistent performance gains over traditional activation functions. Cross-domain analyses reveal GELU’s architectural agnosticism, showing competitive results in multimodal architectures (CLIP [9]) and speech recognition systems (Whisper [10]).
Currently, numerous accelerator circuits incorporate GELU computing modules, particularly dedicated Vision Transformer (ViT) accelerator circuits [11,12,13,14,15,16,17,18] and general-purpose Transformer accelerator designs [19,20,21,22]. In these architectures, a direct hardware implementation of the GELU function is challenging; most designs therefore rely on approximation methods to realize GELU in circuitry. This prevalence shows that GELU computation blocks are used extensively in current hardware-accelerator research, and that designing highly accurate GELU-approximation algorithms that can be implemented with minimal resource overhead has exceptional practical value.
However, the mathematical expression of the GELU activation function is relatively complex, involving several hardware-unfriendly operations such as exponentiation, division, and multiplication. When designing domain-specific accelerators for networks that incorporate the GELU activation function, these operations can significantly degrade the circuit’s performance, power consumption, and area efficiency.
Consequently, many prior works have employed approximation techniques to convert hardware-unfriendly operations into hardware-friendly ones, such as linear and bit-shift operations. Within an acceptable error range, these approaches simplify circuit design and enhance performance. The primary approximation methods include Taylor expansion [23,24], lookup tables [25,26], and piecewise linear functions [27,28]. Among these, the piecewise linear function method achieves the desired approximation with minimal storage and computational resources. Despite existing advancements, there remains significant room for improvement in accelerating GELU computation.
We propose a hardware-friendly algorithm capable of automatically identifying segmentation points. Unlike prior studies that approximate the function directly without leveraging its internal properties, our approach exploits the characteristics of sub-functions for both algorithm and circuit design. Furthermore, most piecewise linear function methods [29,30,31,32] rely on manually defined segments without providing a reliable segmentation algorithm. Additionally, many existing approaches adopt fixed-point data formats [29,30,31,32,33], leading to substantial quantization errors and compatibility issues. In contrast, this study employs the BF16 data format for circuit design, which reduces precision loss and enhances circuit versatility.
To accelerate the computation of the GELU function, we propose a novel algorithm and implement a corresponding hardware circuit utilizing the BF16 data format widely adopted in the deep learning field. Our main contributions are as follows:
- We propose the internal symmetry piecewise approximation (ISPA). Instead of using the symmetry of the entire GELU activation function, we use the symmetry of GELU's internal Gauss error function (erf) to achieve a piecewise approximation of the positive and negative parts.
- We propose the Error Peak Search Strategy (EPSS), an automated framework for determining optimal segmentation schemes in piecewise approximation tasks. Extensive experimental results demonstrate that EPSS achieves superior performance compared to conventional optimization methods, including the Nelder-Mead simplex algorithm and the Newton-CG (Newton conjugate gradient) method.
- The proposed method is verified on three ViT models (Res-ViT, ViT-B, and ViT-L) with different configurations and demonstrates lossless precision. The hardware implementation on an FPGA platform achieves lower resource costs (LUT, register, and BRAM) and a higher operating frequency than the existing advanced methods.
The rest of this paper is organized as follows: Section 2 presents prior research efforts on hardware-accelerated computation of the GELU function. Section 3 elaborates on the algorithmic principles underlying the ISPA and EPSS methods. Section 4 details the hardware circuit implementations derived from ISPA and EPSS methodologies. Section 5 evaluates the proposed algorithms and circuits through comprehensive performance analysis. Finally, Section 6 closes with a summary and conclusion.
2. Background and Prior Research
While the GELU function can be computationally approximated, prevailing approximation methodologies necessitate the computation of nonlinear functions such as hyperbolic tangent (tanh), rendering these approaches hardware-unfriendly in implementation. Therefore, to enhance the operational efficiency of neural networks within specialized architectures, there is an urgent need to propose a hardware-friendly approximation method for the GELU activation function.
Some prior work has addressed the approximation and hardware implementation of the GELU activation function. For instance, ref. [29] directly approximates GELU using a piecewise linear function, but this method exhibits low computational accuracy and inefficient circuit design, resulting in high hardware resource consumption. In another approach, ref. [34] designs a circuit that supports both softmax and GELU activation function calculations by leveraging their shared computational properties and optimizing logarithmic and exponential calculations through mathematical transformations. However, this design requires a significant amount of LUT and register resources, limiting its efficiency. Ref. [25] employs a lookup table method for nonlinear function calculations and introduces a new LUT structure (t-LUT), though the design ultimately consumes considerable memory resources and fails to achieve high accuracy.
Considering the balance between hardware resource consumption and computational accuracy, we chose to use the BF16 data format. BF16, introduced by Google, is a floating-point format optimized for deep learning applications. Compared to the traditional FP16 format, BF16 increases the exponent to eight bits while reducing the mantissa to seven bits. This gives BF16 a dynamic range comparable to FP32, effectively avoiding the reduced representable range often seen with lower bit-width formats. By using BF16 instead of a fixed-point format, we can significantly reduce errors associated with lower bit-width data formats and improve the accuracy of GELU activation function approximations using piecewise linear functions.
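To make the format concrete, the following minimal Python sketch (our illustration, not part of the original design) converts an FP32 value to BF16 and back, showing the 1-bit sign / 8-bit exponent / 7-bit mantissa layout; the round-to-nearest-even step mirrors common practice and is an assumption here.

```python
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert FP32 to BF16 (1 sign, 8 exponent, 7 mantissa bits)."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    # BF16 keeps the upper 16 bits of FP32; adding 0x7FFF plus the LSB of the
    # retained part implements round-to-nearest-even instead of truncation.
    rounding_bias = 0x7FFF + ((bits >> 16) & 1)
    return ((bits + rounding_bias) >> 16) & 0xFFFF

def bf16_bits_to_fp32(b: int) -> float:
    """Re-expand a BF16 bit pattern to FP32 by zero-filling the low 16 bits."""
    (x,) = struct.unpack("<f", struct.pack("<I", b << 16))
    return x

# Example: pi keeps only ~2-3 decimal digits of mantissa but FP32's full range.
b = fp32_to_bf16_bits(3.14159)
print(hex(b), bf16_bits_to_fp32(b))  # 0x4049 -> 3.140625
```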
3. Algorithm Design
This section introduces ISPA and EPSS, a novel piecewise approximation method for GELU and an automatic interval search strategy.
3.1. Internal Symmetry Piecewise Approximation Method
The GELU activation function is based on the Gaussian distribution, and its mathematical representation is as follows:

$$\mathrm{GELU}(x) = x \cdot \Phi(x) \tag{1}$$
where $\Phi(x)$ represents the standard normal cumulative distribution function (CDF) of the input x. The CDF is written as

$$\Phi(x) = \frac{1}{2}\left[1 + \mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right)\right] \tag{2}$$
where erf denotes the Gauss error function, which can be calculated according to the following equation:

$$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}}\, dt \tag{3}$$
The erf is an odd function, symmetric about the zero point, and this property leads to the following transformation of the CDF:

$$\Phi(x_{-}) = 1 - \Phi(x_{+}) \tag{4}$$

where $x_{+} > 0$, $x_{-} < 0$, and $x_{-} = -x_{+}$. To leverage the symmetry of the erf function within the GELU activation and thereby reduce computational complexity, we propose a symmetric transformation method called the ISPA. We directly apply a piecewise approximation to its internal erf function. Our method can be described as
$$\mathrm{GELU}(x) = \begin{cases} 0.5\,x\left[1 + \mathrm{erf}\left(x/\sqrt{2}\right)\right], & x \ge 0 \\ 0.5\,x\left[1 - \mathrm{erf}\left(-x/\sqrt{2}\right)\right], & x < 0 \end{cases} \tag{5}$$
After defining the symmetric transformation and approximation method of GELU, we implement the piecewise approximation on the erf in Equation (5). This constitutes a distinctive component of our algorithm compared to other existing works, as we implement piecewise approximation solely on a specific segment of the entire formula rather than approximating the complete GELU computation formula.
The erf is an odd function with zero-point symmetry. We divide the fitting domain into $N$ segmentation intervals, each characterized by distinct coefficients $k_i$ and $b_i$, where $i \in \{1, \dots, N\}$. As Equation (6) indicates, erf is approximated by calculating $k_i x + b_i$ within each interval, where $p_i$ and $p_{i+1}$ denote the segmentation points bounding the $i$-th interval. The approximation result of erf can be seen in Figure 1a.

$$\mathrm{erf}\!\left(\frac{x}{\sqrt{2}}\right) \approx k_i x + b_i, \quad x \in [p_i, p_{i+1}) \tag{6}$$
After approximating the erf function, the GELU formulation can be represented through the derived analytical expression in Equation (7):

$$\mathrm{GELU}(x) \approx \begin{cases} 0.5\,x\left(k_i x + b_i + 1\right), & x \ge 0 \\ 0.5\,x\left(k_i x + 1 - b_i\right), & x < 0 \end{cases} \tag{7}$$

where, for $x < 0$, the coefficients $k_i$ and $b_i$ are those of the interval containing $|x|$, so that $\mathrm{erf}(-x/\sqrt{2}) \approx k_i(-x) + b_i$.
Utilizing the approximation parameters obtained from the erf analysis, we establish an accurate functional representation of GELU. The resulting approximation of the GELU function is presented in Figure 1b, demonstrating high agreement with the original function through visual inspection. Quantitative evaluation of the approximation accuracy will be systematically examined in Section 5.
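As a numerical illustration of Equations (5)-(7), the Python sketch below fits each segment by least squares over hypothetical uniform breakpoints on [0, 3] (the paper's actual breakpoints and coefficients come from EPSS in Section 3.2) and evaluates the resulting approximation:

```python
import numpy as np
from math import erf, sqrt

# Hypothetical breakpoints; the paper derives its points via EPSS instead.
points = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0])

def fit_segments(points):
    """Least-squares line fit of erf(x/sqrt(2)) on each [p_i, p_{i+1})."""
    coeffs = []
    for lo, hi in zip(points[:-1], points[1:]):
        xs = np.linspace(lo, hi, 64)
        ys = np.array([erf(v / sqrt(2)) for v in xs])
        k, b = np.polyfit(xs, ys, 1)
        coeffs.append((lo, hi, k, b))
    return coeffs

def ispa_gelu(x, coeffs):
    """Equation (7): approximate GELU via the erf fit and its odd symmetry."""
    a = abs(x)
    if a >= coeffs[-1][1]:          # saturation region: erf ~ 1 beyond x = 3
        s = 1.0
    else:
        lo, hi, k, b = next(c for c in coeffs if c[0] <= a < c[1])
        s = k * a + b
    return 0.5 * x * (1 + s) if x >= 0 else 0.5 * x * (1 - s)

coeffs = fit_segments(points)
for v in (-2.0, -0.5, 0.5, 2.0):
    exact = 0.5 * v * (1 + erf(v / sqrt(2)))
    print(f"x={v:+.1f}  approx={ispa_gelu(v, coeffs):+.6f}  exact={exact:+.6f}")
```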
3.2. Error Peak Search Strategy
We utilize piecewise linear approximation for the erf function to facilitate simplified circuit implementation. In piecewise linear approximations, breakpoint selection constitutes a critical factor, as optimal positioning achieves enhanced approximation accuracy with reduced segment counts while controlling hardware complexity. To address this challenge, we develop an automated breakpoint search framework—the Error Peak Search Strategy (EPSS)—specifically designed for high-precision approximation, enabling efficient identification of optimal segmentation points.
The EPSS determines new breakpoints by analyzing approximation errors generated during piecewise linear fitting of nonlinear functions. We analyze this approach through a case study in which piecewise linear approximations first fit the erf function and subsequently implement the GELU approximation. As illustrated in Figure 2a, the absolute error between the ISPA-based piecewise linear approximation and the original erf function displays symmetry about zero, a consequence of exploiting the erf function’s intrinsic symmetry during fitting. This symmetry permits EPSS optimization to concentrate exclusively on the [0, 8] interval, with optimized breakpoints automatically mirrored to [−8, 0]. Analysis of the erf curve reveals that its outputs asymptotically approach 1 for inputs exceeding 3. Therefore, we truncate the fitting domain at x = 3 and approximate all values in (3, 8] as the constant 1, establishing [0, 3] as the initial segmentation interval.
The initial interval is divided into six segments whose lengths are powers of two (0.5 each over [0, 3]), a design choice motivated by hardware implementation requirements. This quantization scheme ensures breakpoint coordinates are exactly representable in the BF16 format, minimizing parameter storage errors. EPSS identifies the dominant error peak ($E_{max}$) and compares the adjacent peaks in the positive ($E_{+}$) and negative ($E_{-}$) directions. The interval containing the higher-magnitude peak undergoes refinement through midpoint insertion. Figure 2b demonstrates this process: with six initial breakpoints, one peak dominates, prompting refinement of the interval containing it. Subsequent error analysis (Figure 2c) shows significant error reduction in the modified region when progressing to seven breakpoints. Further optimization to eight breakpoints (Figure 2d) eliminates the peak while achieving comprehensive error suppression, validating EPSS’s interval optimization efficacy. The iterative breakpoint identification process is formalized in Algorithm 1.
Algorithm 1 Error Peak Search Strategy

Input: InitSegments, MaxSegNums
Output: NewSegments
1: Segments ← InitSegments
2: Errors ← FitErrorCurve(Segments)
3: while |Segments| < MaxSegNums do
4:     Emax ← DominantPeak(Errors); E+, E− ← AdjacentPeaks(Emax)
5:     if E+ ≥ E− then
6:         Target ← IntervalContaining(E+)
7:     else
8:         Target ← IntervalContaining(E−)
9:     end if
10:    Segments ← InsertMidpoint(Segments, Target)
11:    Errors ← FitErrorCurve(Segments)
12: end while
13: NewSegments ← Segments
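A simplified software rendering of Algorithm 1 is sketched below; the helper names are ours, the per-segment least-squares fit and error metric are assumptions, and the adjacent-peak comparison (E+, E−) is reduced to a pure argmax over per-segment peak errors:

```python
import numpy as np
from math import erf, sqrt

def seg_peak_error(lo, hi):
    """Peak absolute error of a least-squares line fit to erf(x/sqrt(2))."""
    xs = np.linspace(lo, hi, 256)
    ys = np.array([erf(v / sqrt(2)) for v in xs])
    k, b = np.polyfit(xs, ys, 1)
    return float(np.max(np.abs(k * xs + b - ys)))

def epss(init_points, max_segments):
    """Simplified EPSS sketch: repeatedly locate the segment whose linear fit
    has the dominant error peak and split it at its midpoint, so every new
    breakpoint stays exactly representable in BF16."""
    points = sorted(init_points)
    while len(points) - 1 < max_segments:
        errs = [seg_peak_error(lo, hi)
                for lo, hi in zip(points[:-1], points[1:])]
        i = int(np.argmax(errs))                     # dominant error peak
        points.insert(i + 1, 0.5 * (points[i] + points[i + 1]))
    return points

# Start from six 0.5-wide segments on [0, 3] and grow to eight segments.
pts = epss([0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0], max_segments=8)
print(pts)
```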
4. Hardware Architecture and Implementation Details
This section presents the hardware circuit implementing ISPA, which we proposed in Section 3, utilizing the BF16 data format. The discussion covers the overall circuit framework and the internal structures of the multiplier and adder specifically designed to support the BF16 data format.
4.1. Overall Architecture
The overall block diagram of the accelerator is shown in Figure 3. Each GELU function computation requires two clock cycles.
Firstly, the calculation of Equation (6) is executed. The input x is evaluated to determine the interval to which it belongs, and the corresponding coefficients $k_i$ and $b_i$ are retrieved from the LUT and sent to the multiplier and adder, respectively. The final output from the adder is temporarily stored in a register.
Based on Equation (7), the subsequent steps are executed using the result from stage one. The value 0.5x and the result from stage one are passed to the multiplier. By leveraging the BF16 format, the multiplication by 0.5 does not require a multiplier; instead, the exponent of x is decremented by one to compute 0.5x, thus reducing the computation load. The value of x and the output from the multiplier are then passed to the adder. Finally, the MUX unit selects either the multiplier or adder output to pass to the register based on the sign of the input x.
$$\mathrm{GELU}(x) \approx 0.5\,x\left(k_i x + b_i'\right), \quad b_i' = \begin{cases} 1 + b_i, & x \ge 0 \\ 1 - b_i, & x < 0 \end{cases} \tag{8}$$

Additionally, as Equation (8) shows, the constant 1 can be incorporated into the coefficient $b_i$, which is stored in the circuit as $b_i'$. Figure 4 presents this approach, which reduces the number of addition operations required during computation.
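The exponent-decrement trick for the 0.5 multiplication can be modeled in a few lines of Python; `halve_bf16` is a hypothetical helper operating on raw BF16 bit patterns and assumes normalized, non-zero operands (the hardware additionally handles zeros and underflow):

```python
def halve_bf16(b: int) -> int:
    """Multiply a BF16 value by 0.5 by decrementing its 8-bit exponent.
    Valid for normalized, non-zero operands only (a sketch, not the circuit)."""
    sign = b & 0x8000
    exp = (b >> 7) & 0xFF
    mant = b & 0x7F
    assert exp > 0, "zero/subnormal inputs are not handled in this sketch"
    return sign | ((exp - 1) << 7) | mant

# 2.0 in BF16 is 0x4000; halving yields 0x3F80, i.e. 1.0 -- no multiplier used.
assert halve_bf16(0x4000) == 0x3F80
```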
Furthermore, since the interval lengths received by the EPSS are powers of two, and newly generated partition points are always positioned at the midpoints of existing intervals, the lengths of the resulting sub-intervals remain powers of two. This mathematical property ensures that all numerical values representing partition points in the circuit design can be exactly represented in the BF16 data format without incurring rounding errors during value storage.
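This representability claim is easy to check in software: round-tripping the breakpoints through BF16 truncation must return them unchanged. A small sketch, assuming a 0.5-spaced initial grid on [0, 3] and a few rounds of midpoint insertion:

```python
import struct

def bf16_roundtrip(x: float) -> float:
    """Store x as BF16 (truncate the low 16 FP32 bits), then re-expand."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    (y,) = struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))
    return y

# Midpoint insertion only halves interval lengths, so every breakpoint stays
# a small multiple of a power of two and fits BF16's 7-bit mantissa exactly.
grid = [i * 0.5 for i in range(7)]          # 0.0, 0.5, ..., 3.0
for _ in range(4):                          # four rounds of midpoint splitting
    grid = sorted(set(grid) | {(a + b) / 2 for a, b in zip(grid, grid[1:])})
assert all(bf16_roundtrip(p) == p for p in grid)
print(f"{len(grid)} breakpoints, all exactly representable in BF16")
```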
4.2. Basic Calculate Unit
To perform multiplication and addition operations in the BF16 data format, we designed the corresponding multiplier and adder circuits. We analyzed the computation process of BF16 data, handling the sign bit, exponent, and mantissa separately to design the multiplier and adder. To improve the operating frequency of the circuit, the internal structure of both the multiplier and adder was designed using two-stage pipelining techniques.
The arithmetic units for BF16 floating-point operations employ a unified two-stage processing pipeline with tailored computational steps for multiplication and addition, as illustrated in Figure 5a and Figure 5b, respectively. Both implementations share fundamental normalization and overflow handling mechanisms while differing in their initial computational approaches.
For multiplication, the first stage combines the exponents through addition and computes the mantissa product through binary multiplication, generating a sixteen-bit intermediate result with seven higher bits preserved for rounding precision. Conversely, the adder’s initial phase aligns exponents by shifting the smaller-magnitude operand’s mantissa based on exponent differences, followed by mantissa addition/subtraction.
The second stage demonstrates architectural convergence through three essential operations: normalization, rounding, and overflow management. The multiplier performs normalization through bit-shifting and subtractive exponent adjustment to maintain the leading one convention, while the adder resolves carry propagation and mantissa realignment through similar shift operations. Both units incorporate overflow detection mechanisms—the multiplier limits output within representable ranges, whereas the adder employs a fail-safe zero-output strategy for overflow conditions.
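For reference, a behavioral Python model of such a BF16 multiplier is given below; it follows the stage-one/stage-two split described above but uses simple truncation instead of the design's rounding logic and flushes subnormals to zero, both simplifications of ours:

```python
def bf16_mul(a: int, b: int) -> int:
    """Behavioral model of a two-stage BF16 multiply (truncating, no NaN/Inf)."""
    sign = (a ^ b) & 0x8000
    ea, eb = (a >> 7) & 0xFF, (b >> 7) & 0xFF
    if ea == 0 or eb == 0:
        return sign                        # flush zeros/subnormals to zero
    # Stage 1: add exponents, multiply 8-bit significands (implicit leading 1).
    ma, mb = 0x80 | (a & 0x7F), 0x80 | (b & 0x7F)
    prod = ma * mb                         # 16-bit product in [2^14, 2^16)
    exp = ea + eb - 127                    # remove one exponent bias
    # Stage 2: normalize (product may carry into bit 15), keep 7 mantissa bits.
    if prod & 0x8000:
        mant, exp = (prod >> 8) & 0x7F, exp + 1
    else:
        mant = (prod >> 7) & 0x7F
    if exp >= 0xFF:
        return sign | (0xFE << 7) | 0x7F   # saturate to the max finite value
    if exp <= 0:
        return sign                        # underflow to zero
    return sign | (exp << 7) | mant

# 1.5 (0x3FC0) * 2.0 (0x4000) = 3.0 (0x4040)
assert bf16_mul(0x3FC0, 0x4000) == 0x4040
```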
5. Experiments and Performance Evaluation
This section evaluates ISPA, EPSS, and the designed circuit from several aspects, including algorithmic error, the actual computational performance of deep neural networks (DNNs), and the implementation results of the hardware circuit. To evaluate the fitting accuracy of the piecewise functions obtained by EPSS under different segmentation counts, as well as the area consumption of the ISPA computational circuit, assessments were conducted for eight segments (ISPA-8) and sixteen segments (ISPA-16).
5.1. Quantitative Error Characterization
As shown in Figure 6, we compared the approximation results obtained by directly applying EPSS to the GELU function versus applying ISPA. Due to the order-of-magnitude difference in accuracy between the two fitting methods, a logarithmic axis is used in the figure. The results demonstrate that piecewise linear fitting of the internal erf yields higher accuracy. We attribute this to the relatively simpler curve structure of the erf compared to the GELU. Additionally, retaining the 0.5× multiplication after approximating the erf preserves part of the GELU calculation process, which further contributes to accuracy improvement.
As shown in Table 1, the mean square error (MSE) and max absolute error (MAE) between the approximated results and the exact results are presented for comparison. The segment number column denotes the number of segments in the piecewise approximation method. The results indicate that our fitted GELU function achieved higher accuracy than other methods with fewer segments. Table 1 also presents a comparison of the fitting accuracy between ISPA-8 and ISPA-16. It is evident that after using EPSS to identify new segmentation points, increasing the number of segments in the piecewise function from 8 to 16 leads to a significant improvement in fitting accuracy.
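For reproducibility, both metrics can be computed against a dense grid on [−8, 8] as sketched below; the grid density is our choice, and `ispa_gelu`/`coeffs` refer to the illustrative sketch in Section 3.1:

```python
import numpy as np
from math import erf, sqrt

xs = np.linspace(-8.0, 8.0, 100_001)
exact = 0.5 * xs * (1.0 + np.array([erf(v / sqrt(2)) for v in xs]))
approx = np.array([ispa_gelu(v, coeffs) for v in xs])  # Section 3.1 sketch
mse = float(np.mean((approx - exact) ** 2))            # mean square error
mae = float(np.max(np.abs(approx - exact)))            # max absolute error
print(f"MSE = {mse:.3e}, MAE = {mae:.3e}")
```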
5.2. DNN Accuracy Test
To test the potential impact of our ISPA on the actual application of DNNs, we selected the ViT (Vision Transformer) [35] for evaluation. As illustrated in Figure 7, the architectural framework of ViT primarily consists of Transformer encoder modules. The GELU activation function is implemented within the MLP module of the encoder and is invoked multiple times throughout the entire computational process of ViT. We utilized Google’s pre-trained ViT model based on ImageNet21K and fine-tuned it on the CIFAR-100 dataset. After completing the training, we replaced the GELU function in the ViT network with our proposed fitted function and then performed inference to test whether the inference accuracy was affected by the fitted GELU function.
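The swap itself is mechanical in a framework such as PyTorch; the sketch below (module and helper names are ours, and the elementwise loop is deliberately naive) replaces every nn.GELU in a fine-tuned model with the fitted function before running inference:

```python
import torch
import torch.nn as nn

class FittedGELU(nn.Module):
    """Drop-in replacement applying the ISPA piecewise approximation."""
    def __init__(self, coeffs):
        super().__init__()
        self.coeffs = coeffs

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Naive elementwise evaluation via the Section 3.1 sketch; adequate
        # for an accuracy check, though a vectorized version would be faster.
        y = x.detach().cpu().clone()
        y.apply_(lambda v: ispa_gelu(float(v), self.coeffs))
        return y.to(x.device)

def replace_gelu(module: nn.Module, coeffs) -> nn.Module:
    """Recursively swap every nn.GELU submodule for FittedGELU."""
    for name, child in module.named_children():
        if isinstance(child, nn.GELU):
            setattr(module, name, FittedGELU(coeffs))
        else:
            replace_gelu(child, coeffs)
    return module

# Usage (illustrative): model = replace_gelu(vit_model, coeffs); then run the
# CIFAR-100 validation loop unchanged and compare TOP-1/TOP-5 to the baseline.
```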
Table 2 and Table 3 show the test results. We evaluated three ViT configurations, Base (ViT-B), Large (ViT-L), and ResNet backbone (Res-ViT), with ISPA-8 and ISPA-16. The results indicate that using ISPA-8 for ViT model inference incurs minimal accuracy loss, while using ISPA-16 does not result in any accuracy loss. This demonstrates that EPSS can identify the segmentation points required for high-precision fitting and validates the effectiveness of ISPA.
5.3. Hardware Resource Evaluation
We used Vivado to perform synthesis and implementation of the hardware circuit, targeting the XCZU9EG device. Table 4 shows the parameters of the XCZU9EG. Table 5 presents the resource consumption of the circuit and compares it with that of other GELU accelerator circuits. Compared to other designs, our design consumes fewer logic resources and registers, utilizes no digital signal processor (DSP) slices for computation, and achieves a higher operating frequency. Table 5 also indicates that there is no significant difference in the resource consumption of the ISPA computational circuit across segmentation counts. Therefore, under varying application scenarios, the circuit configuration can be chosen primarily on the basis of the required computational accuracy.
6. Conclusions
In this study, we present a systematic investigation of activation function approximation through a novel methodology named ISPA. The core innovation lies in exploiting the inherent odd function property of the error function to construct a piecewise linear approximation for Gaussian Error Linear Unit activation, effectively combining analytical approximation with an automated piecewise segmentation strategy.
Furthermore, we implement a hardware-efficient architecture on FPGA platforms. The proposed design demonstrates superior resource efficiency, requiring only 337 LUTs and 185 FFs for ISPA-16 implementation. The implementation demonstrates that using the internal symmetry of erf to approximate GELU can achieve higher fitting accuracy and save more resources compared with the existing approximation method. Compared with [25,29,30,33], our work achieves lower hardware resource utilization and a higher operating frequency without employing any DSPs. The proposed techniques establish a new paradigm for activation function implementation that harmonizes mathematical precision with hardware pragmatism.
Author Contributions: Conceptualization, J.H.; methodology, J.H.; software, J.H.; validation, J.H.; writing—original draft, J.H.; writing—review and editing, J.H., Y.W., M.Z. and J.Z. All authors have read and agreed to the published version of the manuscript.
Data Availability Statement: The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest: The authors declare no conflict of interest.
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
Figure 1 (a) Approximation result of the erf function. (b) Approximation result of the GELU function.
Figure 2 EPSS Execution Process. (a) The absolute error between the piecewise linear fitting function with six segmentation points and the original function over the interval [−8, 8]. (b) The absolute error obtained by inputting six segmentation points during EPSS initialization. (c) Comparison of absolute errors obtained after EPSS added a new segmentation point. (d) After two runs of EPSS, the absolute error in the highlighted blue region significantly decreased.
Figure 3 Overall structure of GELU accelerator.
Figure 4 The optimized circuit eliminates the area consumption of one BF16 adder.
Figure 5 The internal structure of the basic computation unit. (a) BF16Mul. (b) BF16Add.
Figure 6 Comparison of the accuracy of fitting GELU directly versus fitting erf.
Figure 7 Structure of Vision Transformer.
Table 1. Comparison of algorithm error.

| Method | Input Interval | Segment Number | MSE | MAE |
|---|---|---|---|---|
| [ ] | [−8, 8] | 16 | 1.19 | 1.95 |
| [ ] | [−4, 4] | 10 | 8.31 | N/A |
| [ ] | [−4, 4] | N/A | 7.10 | 1.13 |
| [ ] | [−8, 8] | 8 | 1.54 | 4.06 |
| ISPA-8 | [−8, 8] | 8 | 3.97 | 2.74 |
| ISPA-16 | [−8, 8] | 16 | 4.29 | 1.07 |
Table 2. Accuracy evaluation of ViT with ISPA-8.

| | Res-ViT TOP-1 | Res-ViT TOP-5 | ViT-B TOP-1 | ViT-B TOP-5 | ViT-L TOP-1 | ViT-L TOP-5 |
|---|---|---|---|---|---|---|
| Baseline | 90.97 | 99.03 | 92.17 | 99.10 | 93.32 | 99.30 |
| Fitted NN | 90.94 | 99.03 | 92.16 | 99.10 | 93.29 | 99.30 |
| Acc. Loss | −0.03 | 0.00 | −0.03 | 0.00 | −0.03 | 0.00 |

The unit of accuracy is a percentage.
Table 3. Accuracy evaluation of ViT with ISPA-16.

| | Res-ViT TOP-1 | Res-ViT TOP-5 | ViT-B TOP-1 | ViT-B TOP-5 | ViT-L TOP-1 | ViT-L TOP-5 |
|---|---|---|---|---|---|---|
| Baseline | 90.97 | 99.03 | 92.17 | 99.10 | 93.32 | 99.30 |
| Fitted NN | 90.97 | 99.03 | 92.17 | 99.10 | 93.32 | 99.30 |
| Acc. Loss | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |

The unit of accuracy is a percentage.
Table 4. XCZU9EG resources.

| Device | LUT | Slice Register | DSP | BRAM (Mb) |
|---|---|---|---|---|
| XCZU9EG | 274,080 | 548,160 | 2520 | 32.1 |
Table 5. Comparison of hardware resources.

| Method | Device | LUT | Register | DSP | BRAM | Frequency (MHz) |
|---|---|---|---|---|---|---|
| [ ] | XC7S50 | 176 * | 0 | 0 | 11,264 | 50 |
| [ ] | XCVU9P | 2940 | 2951 | 16 | 0 | 250 |
| [ ] | XC7Z045 | 324 | 318 | 1 | 0 | 410 |
| [ ] | XC7Z010 | 219 | 247 | 0.5 | 0 | 312.5 |
| ISPA-8 | XCZU9EG | 295 | 194 | 0 | 0 | 450 |
| ISPA-16 | XCZU9EG | 337 | 185 | 0 | 0 | 450 |
* The original design used 11,264 LUT bits, equal to 176 LUTs.
1. Hendrycks, D.; Gimpel, K. Gaussian Error Linear Units (GELUs). arXiv 2016, arXiv:1606.08415.
2. Lee, M. Mathematical analysis and performance evaluation of the gelu activation function in deep learning. J. Math.; 2023; 2023, 4229924. [DOI: https://dx.doi.org/10.1155/2023/4229924]
3. Dubey, S.R.; Singh, S.K.; Chaudhuri, B.B. Activation functions in deep learning: A comprehensive survey and benchmark. Neurocomputing; 2022; 503, pp. 92-108.
4. Devlin, J.; Chang, M.; Lee, K.; Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Proceedings of the Proc. NAACL-HLT; Albuquerque, NM, USA, 29 April–4 May 2019; pp. 4171-4186.
5. Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. J. Mach. Learn. Res. 2020, 21, 1–67.
6. Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language models are unsupervised multitask learners. OpenAI Blog; 2019; 1, 9.
7. Zhang, P.; Dai, X.; Yang, J.; Xiao, B.; Yuan, L.; Zhang, L.; Gao, J. Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding. Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV); Montreal, QC, Canada, 11–17 October 2021; pp. 2978-2988.
8. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021; Montreal, QC, Canada, 10–17 October 2021; pp. 9992-10002.
9. Radford, A.; Kim, J.W.; Hallacy, C.; Ramesh, A.; Goh, G.; Agarwal, S.; Sastry, G.; Askell, A.; Mishkin, P.; Clark, J.; et al. Learning Transferable Visual Models From Natural Language Supervision. Proceedings of the International Conference on Machine Learning, ICML 2021; PMLR, 2021; Volume 139, pp. 8748–8763.
10. Radford, A.; Kim, J.W.; Xu, T.; Brockman, G.; McLeavey, C.; Sutskever, I. Robust Speech Recognition via Large-Scale Weak Supervision. Proceedings of the International Conference on Machine Learning, ICML 2023; Honolulu, HI, USA, 23–29 July 2023; PMLR: Westminster, UK, 2023; Volume 202, pp. 28492-28518.
11. Wang, T.; Gong, L.; Wang, C.; Yang, Y.; Gao, Y.; Zhou, X.; Chen, H. ViA: A Novel Vision-Transformer Accelerator Based on FPGA. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst.; 2022; 41, pp. 4088-4099. [DOI: https://dx.doi.org/10.1109/TCAD.2022.3197489]
12. Nag, S.; Datta, G.; Kundu, S.; Chandrachoodan, N.; Beerel, P.A. ViTA: A Vision Transformer Inference Accelerator for Edge Applications. Proceedings of the 2023 IEEE International Symposium on Circuits and Systems (ISCAS); Monterey, CA, USA, 21–25 May 2023; pp. 1-5.
13. You, H.; Sun, Z.; Shi, H.; Yu, Z.; Zhao, Y.; Zhang, Y.; Li, C.; Li, B.; Lin, Y. ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design. Proceedings of the 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA); Montreal, QC, Canada, 25 February–1 March 2023; pp. 273-286.
14. Dumoulin, J.; Houshmand, P.; Jain, V.; Verhelst, M. Enabling Efficient Hardware Acceleration of Hybrid Vision Transformer (ViT) Networks at the Edge. Proceedings of the 2024 IEEE International Symposium on Circuits and Systems (ISCAS); Singapore, 19–22 May 2024; pp. 1-5.
15. Marino, K.; Zhang, P.; Prasanna, V.K. ME-ViT: A Single-Load Memory-Efficient FPGA Accelerator for Vision Transformers. Proceedings of the 2023 IEEE 30th International Conference on High Performance Computing, Data, and Analytics (HiPC); Goa, India, 18–21 December 2023; pp. 213-223.
16. Dong, P.; Zhuang, J.; Yang, Z.; Ji, S.; Li, Y.; Xu, D.; Huang, H.; Hu, J.; Jones, A.K.; Shi, Y.; et al. EQ-ViT: Algorithm-Hardware Co-Design for End-to-End Acceleration of Real-Time Vision Transformer Inference on Versal ACAP Architecture. IEEE Trans. Comput. Aided Des. Integr. Circuits Syst. 2024.
17. Parikh, D.; Li, S.; Zhang, B.; Kannan, R.; Busart, C.; Prasanna, V. Accelerating ViT Inference on FPGA through Static and Dynamic Pruning. Proceedings of the 2024 IEEE 32nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM); Orlando, FL, USA, 5–8 May 2024; pp. 78-89.
18. Tian, S.; Szafranski, C.; Zheng, C.; Yao, F.; Louri, A.; Chen, C.; Zheng, H. VITA: ViT Acceleration for Efficient 3D Human Mesh Recovery via Hardware-Algorithm Co-Design. Proceedings of the 61st ACM/IEEE Design Automation Conference, DAC ’24; San Francisco, CA, USA, 23–27 June 2024.
19. Han, Y.; Liu, Q. HPTA: A High Performance Transformer Accelerator Based on FPGA. Proceedings of the 2023 33rd International Conference on Field-Programmable Logic and Applications (FPL); Gothenburg, Sweden, 4–8 September 2023; pp. 27-33.
20. Zhou, M.; Xu, W.; Kang, J.; Rosing, T. TransPIM: A Memory-based Acceleration via Software-Hardware Co-Design for Transformer. Proceedings of the 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA); Seoul, Republic of Korea, 2–6 April 2022; pp. 1071-1085.
21. Luo, Y.; Yu, S. H3D-Transformer: A Heterogeneous 3D (H3D) Computing Platform for Transformer Model Acceleration on Edge Devices. ACM Trans. Des. Autom. Electron. Syst.; 2024; 29, pp. 1-19. [DOI: https://dx.doi.org/10.1145/3649219]
22. Wang, H.Y.; Chang, T.S. Row-wise Accelerator for Vision Transformer. Proceedings of the 2022 IEEE 4th International Conference on Artificial Intelligence Circuits and Systems (AICAS); Incheon, Republic of Korea, 13–15 June 2022; pp. 399-402.
23. Nilsson, P.; Shaik, A.U.R.; Gangarajaiah, R.; Hertz, E. Hardware implementation of the exponential function using Taylor series. Proceedings of the 2014 NORCHIP; Tampere, Finland, 27–28 October 2014; pp. 1-4.
24. Qin, Z.; Qiu, Y.; Sun, H.; Lu, Z.; Wang, Z.; Shen, Q.; Pan, H. A Novel Approximation Methodology and Its Efficient VLSI Implementation for the Sigmoid Function. IEEE Trans. Circuits Syst. II Express Briefs; 2020; 67, pp. 3422-3426. [DOI: https://dx.doi.org/10.1109/TCSII.2020.2999458]
25. Xie, Y.; Joseph Raj, A.N.; Hu, Z.; Huang, S.; Fan, Z.; Joler, M. A Twofold Lookup Table Architecture for Efficient Approximation of Activation Functions. IEEE Trans. Very Large Scale Integr. (VLSI) Syst.; 2020; 28, pp. 2540-2550. [DOI: https://dx.doi.org/10.1109/TVLSI.2020.3015391]
26. Leboeuf, K.; Namin, A.H.; Muscedere, R.; Wu, H.; Ahmadi, M. High Speed VLSI Implementation of the Hyperbolic Tangent Sigmoid Function. Proceedings of the 2008 Third International Conference on Convergence and Hybrid Information Technology; Busan, Republic of Korea, 11–13 November 2008; Volume 1, pp. 1070-1073.
27. Chiluveru, S.R.; Gyanendra,; Chunarkar, S.; Tripathy, M.; Kaushik, B.K. Efficient Hardware Implementation of DNN-Based Speech Enhancement Algorithm With Precise Sigmoid Activation Function. IEEE Trans. Circuits Syst. II Express Briefs; 2021; 68, pp. 3461-3465. [DOI: https://dx.doi.org/10.1109/TCSII.2021.3082941]
28. Choi, K.; Kim, S.; Kim, J.; Park, I.C. Hardware-Friendly Approximation for Swish Activation and Its Implementation. IEEE Trans. Circuits Syst. II Express Briefs; 2024; 71, pp. 4516-4520. [DOI: https://dx.doi.org/10.1109/TCSII.2024.3394806]
29. Sadeghi, M.E.; Fayyazi, A.; Azizi, S.; Pedram, M. PEANO-ViT: Power-Efficient Approximations of Non-Linearities in Vision Transformers. Proceedings of the 29th ACM/IEEE International Symposium on Low Power Electronics and Design; Newport Beach, CA, USA, 5–7 August 2024; pp. 1-6.
30. Hong, Q.; Liu, Z.; Long, Q.; Tong, H.; Zhang, T.; Zhu, X.; Zhao, Y.; Ru, H.; Zha, Y.; Zhou, Z.
31. Li, L.; Zhang, S.; Wu, J. An Efficient Hardware Architecture for Activation Function in Deep Learning Processor. Proceedings of the 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC); Chongqing, China, 27–29 June 2018; pp. 911-918.
32. Liu, K.; Shi, W.; Huang, C.; Zeng, D. Cost effective Tanh activation function circuits based on fast piecewise linear logic. Microelectron. J.; 2023; 138, 105821. [DOI: https://dx.doi.org/10.1016/j.mejo.2023.105821]
33. Li, Y.; Cao, W.; Zhou, X.; Wang, L. A Low-Cost Reconfigurable Nonlinear Core for Embedded DNN Applications. Proceedings of the 2020 International Conference on Field-Programmable Technology (ICFPT); Maui, HI, USA, 9–11 December 2020; pp. 35-38.
34. Li, T.; Zhang, F.; Xie, G.; Fan, X.; Gao, Y.; Sun, M. A high speed reconfigurable architecture for softmax and GELU in vision transformer. Electron. Lett.; 2023; 59, e12751. [DOI: https://dx.doi.org/10.1049/ell2.12751]
35. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. Proceedings of the International Conference on Learning Representations (ICLR); Virtual Event, 3–7 May 2021.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Abstract
The Gaussian Error Linear Unit (GELU), a crucial component of the transformer model, poses a significant challenge for hardware implementation. To address this issue, this paper proposes internal symmetry piecewise approximation (ISPA) and error peak search strategy (EPSS) for high-precision and high-efficiency implementation of the GELU activation function. ISPA only approximates the positive axis of the erf in GELU and then leverages its internal symmetry to calculate the negative axis part. With ISPA, the mean square error (MSE) between the fitted result and the true value can reach