Title |
Knowledge Distillation Based on Adaptive Quantization for Artificial Neural Network Model Compression |
Authors |
이조은(Jo Eun Lee); 한태희(Tae Hee Han)
DOI |
https://doi.org/10.5573/ieie.2020.57.9.37 |
Keywords |
Adaptive quantization; Knowledge distillation; Model compression |
Abstract |
Deep neural networks (DNNs) have been used in various applications such as image classification and computer vision. However, as the depth and complexity of neural networks increase, deploying them in resource-constrained environments such as embedded systems becomes difficult, which has motivated research on neural network compression. Such compression includes quantization, which reduces the precision of neural network parameters, and knowledge distillation, which trains a small network using the knowledge of a large one. This paper focuses on knowledge distillation combined with quantization to optimize the computational complexity and storage usage of a neural network model. We propose adaptive quantization-based knowledge distillation, which adjusts the precision of each data element according to the magnitude of its values. In experiments with ResNet models on the CIFAR10 and CIFAR100 datasets, the proposed method achieved higher average accuracy than the quantization-only method, and the model size decreased by an average of 69.29% compared to the full-precision model.
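The abstract describes combining a knowledge distillation loss with quantization whose precision adapts to the magnitude of the values being quantized. The sketch below is a minimal PyTorch-style illustration of that general idea, not the paper's implementation: the function names (`adaptive_quantize`, `distillation_loss`), the magnitude-threshold bit-allocation policy, and the temperature/weighting hyperparameters are assumptions, since the abstract does not specify the exact adaptive rule.

```python
import torch
import torch.nn.functional as F

def adaptive_quantize(w, low_bits=4, high_bits=8, threshold=0.1):
    """Element-wise quantization that assigns more bits to large-magnitude
    values and fewer bits to small ones (illustrative policy only)."""
    scale = w.abs().max().clamp(min=1e-8)
    bits = torch.where(w.abs() >= threshold * scale,
                       torch.full_like(w, float(high_bits)),
                       torch.full_like(w, float(low_bits)))
    levels = 2 ** (bits - 1) - 1                      # symmetric signed levels
    return torch.round(w / scale * levels) / levels * scale

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """Standard Hinton-style KD loss: KL divergence between softened
    teacher/student outputs plus cross-entropy with the true labels."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Hypothetical usage: quantize the student's weights, then distill from a
# full-precision teacher on a batch (x, y).
#   teacher_out = teacher(x).detach()
#   student_out = student(x)          # student weights passed through adaptive_quantize
#   loss = distillation_loss(student_out, teacher_out, y)
```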