Project Number: 2021C-147    Date: 2022/04/06
Research Title: Creation of highly energy-efficient DNN circuit design technologies for edge computing
Researchers: Affiliation (at the time) / Position / Name
(Principal Investigator) Faculty of Science and Engineering, School of Fundamental Science and Engineering / Professor / 史 又華
(Collaborating Researcher) Faculty of Science and Engineering / Research Associate / 葉静浩
Summary of Research Results

Driven by the explosive growth of available data and powerful computing resources, deep neural networks (DNNs) have achieved remarkable breakthroughs in recent years. As DNN models become more diverse across applications, obtaining an optimal accelerator design for a specific neural network model while maintaining high energy efficiency under limited hardware resources has become an emerging challenge, yet few systematic approaches have been proposed to date. To address this design challenge, a model-defined, energy-efficient DNN accelerator design based on design space exploration and architecture optimization is proposed. Firstly, two dual data reuse approaches are proposed to improve on-chip data utilization efficiency. Secondly, a layer-wise design space exploration framework is developed to precisely determine the optimal tiling configuration and the corresponding data reuse strategy for target neural network models under on-chip hardware resource constraints, thereby minimizing data movement between off-chip DRAM and the on-chip global buffer (GLB). Thirdly, an energy-efficient accelerator design with on-chip dual data reuse, centered input feature map (ifmap)/weight buffers, distributed partial sum (psum) buffers, and optimal resource configuration is presented to reduce GLB accesses and improve energy efficiency. Compared with state-of-the-art accelerators, the proposed design improves energy efficiency by up to 2.7X for AlexNet and 3.6X for VGG.
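
The report does not detail the cost model behind the layer-wise design space exploration, but the general idea can be illustrated with a short sketch. The Python code below is a minimal, hypothetical example: it enumerates tiling factors for a single convolutional layer under an assumed GLB capacity constraint and selects the tiling and reuse strategy (an ifmap-stationary vs. weight-stationary choice, standing in for the two data reuse approaches) that minimizes estimated DRAM traffic. All names (ConvLayer, dram_traffic, explore_layer) and the simplified traffic formulas are illustrative assumptions, not the actual framework.

    import itertools
    from dataclasses import dataclass

    @dataclass
    class ConvLayer:
        M: int  # number of output channels
        C: int  # number of input channels
        H: int  # output feature-map height
        W: int  # output feature-map width
        R: int  # kernel height
        S: int  # kernel width

    def ceil_div(a, b):
        return -(-a // b)

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    def dram_traffic(layer, tm, tc, th, tw, reuse):
        """Estimated off-chip DRAM accesses (in words) for one tiling.

        Simplified, assumed model: 'ifmap' reuse keeps an ifmap tile
        resident in the GLB and refetches weights per spatial tile;
        'weight' reuse is the symmetric case. Psums are assumed to stay
        on chip until fully accumulated, so each ofmap element is
        written back exactly once.
        """
        n_m, n_c = ceil_div(layer.M, tm), ceil_div(layer.C, tc)
        n_h, n_w = ceil_div(layer.H, th), ceil_div(layer.W, tw)
        ifmap_tile = tc * (th + layer.R - 1) * (tw + layer.S - 1)
        weight_tile = tm * tc * layer.R * layer.S
        ofmap_words = layer.M * layer.H * layer.W
        if reuse == "ifmap":
            traffic = (n_c * n_h * n_w * ifmap_tile            # each ifmap tile fetched once
                       + n_h * n_w * n_m * n_c * weight_tile)  # weights refetched per spatial tile
        else:
            traffic = (n_m * n_c * n_h * n_w * ifmap_tile      # ifmaps refetched per channel tile
                       + n_m * n_c * weight_tile)              # each weight tile fetched once
        return traffic + ofmap_words

    def explore_layer(layer, glb_words):
        """Exhaustive layer-wise search over tilings and reuse strategies."""
        best = None
        for tm, tc, th, tw in itertools.product(
                divisors(layer.M), divisors(layer.C),
                divisors(layer.H), divisors(layer.W)):
            ifmap_tile = tc * (th + layer.R - 1) * (tw + layer.S - 1)
            weight_tile = tm * tc * layer.R * layer.S
            psum_tile = tm * th * tw
            if ifmap_tile + weight_tile + psum_tile > glb_words:
                continue  # tiling does not fit the on-chip buffer budget
            for reuse in ("ifmap", "weight"):
                cost = dram_traffic(layer, tm, tc, th, tw, reuse)
                if best is None or cost < best[0]:
                    best = (cost, (tm, tc, th, tw), reuse)
        return best

    # Example: a VGG-style 3x3 conv layer with a hypothetical 128K-word GLB
    layer = ConvLayer(M=128, C=64, H=112, W=112, R=3, S=3)
    print(explore_layer(layer, glb_words=128 * 1024))

In the actual framework such a search would presumably run once per layer of the target model (e.g., each convolutional layer of AlexNet or VGG), with the GLB budget fixed by the accelerator's resource configuration.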