Title Channel Pruning using Scaling Factor of Batch Normalization in Compact Networks
Authors 최재훈(Jaehoon Choi) ; 김대영(Daeyeong Kim) ; 양동원(DongWon Yang) ; 이준희(Junhee Lee) ; 김도경(Dokyoung Kim) ; 김창익(Changick Kim)
DOI https://doi.org/10.5573/ieie.2019.56.3.52
Page pp.52-60
ISSN 2287-5026
Keywords Convolutional Neural Network ; Pruning ; Classification ; Embedded Systems
Abstract Existing convolutional neural network-based deep learning models are difficult to apply to real-world applications due to their large model size and high computational complexity. To overcome these limitations, compact networks such as MobileNet and MobileNetV2 have been proposed. However, these compact networks still contain redundant parameters that can be pruned for deployment on embedded systems. Therefore, in order to classify objects into the small number of classes required in a real environment, it is necessary to study how to lighten deep learning models. In this paper, we propose a method to lighten the network by pruning channels of MobileNet and MobileNetV2, which are representative compact networks, for efficient object classification. We prune the channels of the compact networks based on the scaling factors of batch normalization, as in Network Slimming. We also construct a dataset consisting of people, vehicles, and other objects to evaluate classification performance on a small number of classes. Experiments on this dataset, as well as on the CIFAR and SVHN datasets, confirm that the proposed algorithm reduces model size and computational complexity while maintaining classification performance.
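The channel-selection criterion the abstract refers to (from Network Slimming) pools the batch-normalization scaling factors (gamma) of all layers, sets a global threshold at the desired pruning ratio, and removes channels whose |gamma| falls below it. A minimal stdlib-only sketch of that selection step is given below; the layer names, the `ratio` parameter, and the exact thresholding details are illustrative assumptions, not the authors' implementation:

```python
def channel_masks(bn_gammas, ratio=0.5):
    """Network Slimming-style channel selection (illustrative sketch).

    bn_gammas: dict mapping a BN layer name to its list of scaling
               factors (gamma), one per channel.
    ratio:     fraction of all channels, network-wide, to prune.
    Returns a dict mapping each layer name to a list of booleans
    (True = keep channel, False = prune channel).
    """
    # Pool |gamma| over every BN layer and sort ascending.
    all_g = sorted(abs(g) for gs in bn_gammas.values() for g in gs)
    # Global threshold: the |gamma| below which `ratio` of channels fall.
    idx = min(int(len(all_g) * ratio), len(all_g) - 1)
    thresh = all_g[idx]
    # Channels with small |gamma| contribute little to the layer output
    # and are marked for removal.
    return {name: [abs(g) >= thresh for g in gs]
            for name, gs in bn_gammas.items()}


# Toy usage: two hypothetical BN layers with per-channel gammas.
gammas = {"bn1": [0.9, 0.05, 0.4, 0.01], "bn2": [0.7, 0.02, 0.6]}
masks = channel_masks(gammas, ratio=0.5)
# Channels with the smallest |gamma| (0.01, 0.02, 0.05) are pruned.
```

After computing the masks, the pruned network is rebuilt with narrower convolution and BN layers and then fine-tuned to recover accuracy, as is standard in this line of work.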