Abstract
Computer vision algorithms, specifically convolutional neural networks (CNNs) and feature extraction algorithms, have become increasingly pervasive in many vision tasks. As algorithm complexity grows, so do computational and memory requirements, which poses a challenge to embedded vision systems with limited resources. Heterogeneous architectures have recently gained momentum as a path towards energy efficiency and faster computation, as they allow effective utilisation of various processing units, such as the Central Processing Unit (CPU), Graphics Processing Unit (GPU), and Field Programmable Gate Array (FPGA), tightly integrated into a single platform to enhance system performance. However, partitioning algorithms across the accelerators requires careful consideration of hardware limitations and scheduling. We propose two heterogeneous systems, one low-power and one high-power, together with a method for partitioning CNNs and a feature extraction algorithm (SIFT) onto the hardware. We benchmark feature detection and image classification algorithms on the heterogeneous systems and on their discrete-accelerator counterparts, and demonstrate that both systems outperform FPGA-only and GPU-only accelerators. Experimental results show an 18% runtime improvement over the GPU for the SIFT algorithm. For the MobileNetV2 and ResNet18 networks, the high-power system achieves 17.75%/5.55% runtime and 6.25%/2.08% energy improvements, respectively, against its discrete counterparts, while the low-power system achieves 6.32%/16.21% runtime and 7.32%/3.27% energy savings. These results show that effective partitioning and scheduling of imaging algorithms on heterogeneous systems is a step towards better efficiency than traditional FPGA-only or GPU-only accelerators.
Details
1 University of Strathclyde, Department of Electronic and Electrical Engineering, Glasgow, UK (GRID:grid.11984.35) (ISNI:0000 0001 2113 8138); STMicroelectronics (R&D) Ltd., Sensor Technology Group, Imaging Division, Edinburgh, UK (GRID:grid.11984.35)
2 Newcastle University, School of Computing, Newcastle upon Tyne, UK (GRID:grid.1006.7) (ISNI:0000 0001 0462 7212)
3 STMicroelectronics (R&D) Ltd., Sensor Technology Group, Imaging Division, Edinburgh, UK (GRID:grid.1006.7)