
Nonconvex Regularization for Network Slimming: Compressing CNNs Even More

EasyChair Preprint no. 4380

14 pages
Date: October 12, 2020

Abstract

In the last decade, convolutional neural networks (CNNs) have evolved to become the dominant models for various computer vision tasks, but they cannot be deployed on low-memory devices due to their high memory requirements and computational costs. One popular, straightforward approach to compressing CNNs is network slimming, which imposes an $\ell_1$ penalty on the channel-associated scaling factors in the batch normalization layers during training. In this way, channels with low scaling factors are identified as insignificant and are pruned from the model. In this paper, we propose replacing the $\ell_1$ penalty with the $\ell_p$ and transformed $\ell_1$ (T$\ell_1$) penalties, since these nonconvex penalties have outperformed $\ell_1$ in yielding sparser, satisfactory solutions in various compressed sensing problems. In our numerical experiments, we demonstrate network slimming with the $\ell_p$ and T$\ell_1$ penalties on VGGNet and DenseNet trained on CIFAR 10/100. The results show that the nonconvex penalties compress CNNs better than $\ell_1$. In addition, T$\ell_1$ preserves the model accuracy after channel pruning, and $\ell_{1/2}$ and $\ell_{3/4}$ yield compressed models with accuracies similar to $\ell_1$ after retraining.
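The abstract describes adding a sparsity penalty to the batch-normalization scaling factors during training. As a rough, hypothetical sketch (not the authors' implementation), the snippet below shows how a transformed $\ell_1$ penalty on BN scale factors might be added to a PyTorch training loss; the names tl1_penalty and bn_sparsity_loss and the hyperparameters lam and a are illustrative assumptions.

import torch
import torch.nn as nn

def tl1_penalty(x, a=1.0):
    # Transformed L1 penalty: rho_a(t) = (a + 1)|t| / (a + |t|), applied elementwise.
    absx = x.abs()
    return ((a + 1.0) * absx / (a + absx)).sum()

def bn_sparsity_loss(model, lam=1e-4, a=1.0):
    # Sum the penalty over the scaling factors (gamma, i.e. .weight) of every
    # batch normalization layer; channels whose gamma is driven toward zero
    # become candidates for pruning.
    reg = 0.0
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            reg = reg + tl1_penalty(m.weight, a=a)
    return lam * reg

# Usage inside a training step (model, criterion, inputs, targets assumed given):
#   loss = criterion(model(inputs), targets) + bn_sparsity_loss(model)
#   loss.backward()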

Keyphrases: Batch Normalization, Channel Pruning, CIFAR 10/100, Convolutional Neural Networks, L1 regularization, Lp regularization, nonconvex optimization, sparse optimization

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@Booklet{EasyChair:4380,
  author = {Kevin Bui and Fredrick Park and Shuai Zhang and Yingyong Qi and Jack Xin},
  title = {Nonconvex Regularization for Network Slimming: Compressing CNNs Even More},
  howpublished = {EasyChair Preprint no. 4380},
  year = {EasyChair, 2020}}