Performance Models and Energy-Optimal Scheduling of DNNs on Many-Core Hardware with Dynamic Power Management

EasyChair Preprint 9651, 5 pages • Date: February 2, 2023

Abstract: Processing of deep neural networks (DNNs) at the edge may be limited by the power or energy constraints of the embedded hardware system used. It is therefore desirable for the compiler to create efficient executables for given DNN models that meet these specific constraints. Here, we consider a low-power many-core hardware with 152 processing elements (PEs), each containing an ARM M4F processor, 128 KB of SRAM, and a custom accelerator for DNN inference. Dynamic power management allows each core to switch between a high-speed and a low-power mode within tens of nanoseconds. For an energy-optimal parallelization of DNNs on this hardware, we first develop analytical performance models that predict the time and energy for executing a DNN layer with the custom accelerator. The models are fitted and validated using measurements on a prototype chip. In a second step, we develop concepts for the energy-optimal parallelization of DNNs under latency constraints and evaluate them using the performance models: by dynamically switching between the operating modes, more than 10% of energy can be saved compared to running in high-speed mode only. The presented methodology and concepts are easily transferable to other many-core edge processors.

Keyphrases: Deep Neural Networks, Edge Computing, Parallelization, many-core hardware, performance model, power management
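The mode-switching idea summarized in the abstract can be sketched with a toy two-mode energy model: under a latency constraint, it can be cheaper to spend part of the deadline in low-power mode and only the remainder in high-speed mode. All power and throughput figures below are hypothetical placeholders, not measurements from the prototype chip.

```python
# Hypothetical per-mode parameters (power in W, throughput in ops/s);
# the paper fits such values from measurements on the prototype chip.
P_HI, F_HI = 30e-3, 500e6   # high-speed mode
P_LO, F_LO = 8e-3, 150e6    # low-power mode


def energy_high_only(work_ops: float) -> tuple[float, float]:
    """Energy [J] and latency [s] when the layer runs in high-speed mode only."""
    t = work_ops / F_HI
    return P_HI * t, t


def energy_mode_switching(work_ops: float, deadline_s: float) -> tuple[float, float]:
    """Energy-minimal split between the two modes that still meets the deadline.

    Solves t_hi * F_HI + t_lo * F_LO = work_ops with t_hi + t_lo = deadline_s,
    spending as much of the deadline as possible in low-power mode.
    """
    t_hi = max(0.0, (work_ops - F_LO * deadline_s) / (F_HI - F_LO))
    t_lo = (work_ops - t_hi * F_HI) / F_LO
    return P_HI * t_hi + P_LO * t_lo, t_hi + t_lo
```

For a layer with slack (e.g. a deadline looser than the high-speed execution time), the mode-switching schedule trades latency headroom for lower energy, which is the effect the abstract quantifies at over 10% on the real hardware.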