Medical endoscopic video processing requires real-time execution of color component acquisition, color filter array (CFA) demosaicing, and high dynamic range (HDR) compression under low-light conditions, while adhering to strict thermal constraints within the surgical handpiece. Traditional hardware-aware neural architecture search (NAS) relies on fixed hardware design spaces, making it difficult to balance accuracy, power consumption, and real-time performance. We propose a collaborative power-accuracy optimization method for hardware-aware NAS. First, we propose a hardware modeling framework that abstracts heterogeneous FPGA resources into unified cell units and establishes a power-temperature closed-loop model, ensuring that the handpiece surface temperature does not exceed clinical thresholds. Within this framework, we constrain the inter-stage latency balance of the pipeline to avoid the routing congestion and frequency degradation caused by deep pipelines. We then optimize the NAS search strategy using pipeline blocks combined with a hardware-efficiency reward function. Finally, color component acquisition, CFA demosaicing, dynamic range compression, dynamic-precision quantization, and a streaming architecture are integrated into the framework. Experiments demonstrate that the proposed method achieves 2.8 W power consumption at 47 °C on a Xilinx ZCU102 platform, with a 54% throughput improvement over baseline hardware-aware NAS, providing a deployment-ready lightweight network for medical edge devices such as endoscopes.
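To make the coupling between the power-temperature closed-loop model and the hardware-efficiency reward concrete, the following minimal Python sketch scores a candidate architecture under a hard thermal constraint and soft power/latency penalties. All names (CandidateMetrics, surface_temperature, hardware_efficiency_reward), the first-order linear thermal model, and the numeric constants (thermal resistance, ambient temperature, threshold, penalty weights) are illustrative assumptions for exposition, not the formulation used in the paper.

```python
# Sketch: power/temperature-constrained reward for hardware-aware NAS candidate scoring.
# Constants and the linear thermal model below are assumptions, not the paper's values.

from dataclasses import dataclass


@dataclass
class CandidateMetrics:
    accuracy: float    # validation accuracy of the candidate network (0..1)
    power_w: float     # estimated on-chip power draw in watts
    latency_ms: float  # end-to-end latency per frame in milliseconds


def surface_temperature(power_w: float,
                        t_ambient_c: float = 25.0,
                        r_theta_c_per_w: float = 7.5) -> float:
    """Steady-state surface temperature from a first-order thermal model
    T = T_ambient + R_theta * P (hypothetical thermal resistance)."""
    return t_ambient_c + r_theta_c_per_w * power_w


def hardware_efficiency_reward(m: CandidateMetrics,
                               t_max_c: float = 47.0,
                               latency_budget_ms: float = 33.3,
                               alpha: float = 0.05,
                               beta: float = 0.01) -> float:
    """Combine accuracy with power and latency penalties; reject candidates
    whose predicted surface temperature exceeds the thermal threshold."""
    if surface_temperature(m.power_w) > t_max_c:
        return float("-inf")  # hard thermal constraint: candidate is infeasible
    power_penalty = alpha * m.power_w
    latency_penalty = beta * max(0.0, m.latency_ms - latency_budget_ms)
    return m.accuracy - power_penalty - latency_penalty


if __name__ == "__main__":
    # Example: score one candidate operating near the reported 2.8 W point.
    candidate = CandidateMetrics(accuracy=0.91, power_w=2.8, latency_ms=20.0)
    print(hardware_efficiency_reward(candidate))
```

In this sketch the thermal check acts as a feasibility filter while power and latency enter the reward as weighted penalties; an actual search would evaluate this reward inside the NAS controller loop over pipeline-block candidates.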
Details
Weizhi Xian 1; Xuekai Wei 1; Qin Yi 1
1 College of Computer Science, Chongqing University, Chongqing 400044, China; [email protected] (C.Z.); [email protected] (G.W.); [email protected] (T.G.); [email protected] (J.Y.); [email protected] (W.X.); [email protected] (X.W.)
2 East China Institute of Digital Medical Engineering, Shangrao 334000, China
3 School of Computing and Data Engineering, NingboTech University, Ningbo 315100, China
4 School of Instrumentation Science and Opto-Electronics Engineering, Beihang University, Beijing 100083, China