Local Binary Convolutional Neural Networks: CNNs with fixed sparse binary kernels.


Abstract

In this work, we propose the local binary convolutional neural network (LBCNN) module, a ground-up redesign of the standard CNN module. The design of LBCNN is motivated by the local binary patterns (LBP) descriptor. Specifically, the dense, real-valued, learnable filters of a CNN are replaced by pre-defined sparse binary filters that are never updated during training. Learning then reduces to intelligently combining these fixed sparse binary filters, rather than learning the convolutional filters themselves. As a result, LBCNN needs 9x to 81x fewer learnable parameters than a comparable CNN, and its binary weights shrink the model size by 32x relative to floating-point weights. The much lower model complexity also makes LBCNN less prone to overfitting. Theoretical analysis shows that LBCNN can closely approximate the activations of a CNN. Experimentally, LBCNN reaches state-of-the-art performance on a range of visual datasets (MNIST, SVHN, and CIFAR-10) while enjoying these significant savings.

Overview


We draw inspiration from local binary patterns (LBP), which have been used very successfully for facial analysis.


Our LBCNN module is designed to approximate a fully learnable dense CNN module.
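The idea can be sketched in a few lines of NumPy: fixed binary kernels produce intermediate response maps, a non-linearity is applied, and a learnable 1x1 convolution (a linear combination of those maps) plays the role of the learned filters. This is a minimal illustration, not the paper's implementation; the function names and shapes are our own.

```python
import numpy as np

def lbcnn_forward(x, binary_kernels, combine_weights):
    """One LBCNN module: fixed binary convolutions, a non-linearity,
    then a learnable 1x1 convolution combining the response maps.

    x               : (channels, H, W) input
    binary_kernels  : (m, channels, k, k) fixed, never updated
    combine_weights : (out_channels, m) the only learnable weights
    """
    m, c, k, _ = binary_kernels.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    responses = np.zeros((m, H, W))
    # 'valid' convolution with each fixed binary kernel
    for i in range(m):
        for y in range(H):
            for z in range(W):
                responses[i, y, z] = np.sum(
                    x[:, y:y + k, z:z + k] * binary_kernels[i])
    responses = np.maximum(responses, 0.0)  # ReLU
    # learnable 1x1 convolution: linear combination of the m maps
    return np.tensordot(combine_weights, responses, axes=([1], [0]))

x = np.random.rand(3, 8, 8)
B = np.sign(np.random.randn(4, 3, 3, 3))  # toy +/-1 kernels (no sparsity here)
V = np.random.randn(2, 4)                 # learnable combination weights
out = lbcnn_forward(x, B, V)
print(out.shape)  # (2, 6, 6)
```

During training, only `combine_weights` would receive gradient updates; `binary_kernels` stays frozen, which is where the parameter savings come from.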


Binary convolutional kernels at different sparsity levels.
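Such kernels can be generated once, before training, and then left untouched. The sketch below assumes "sparsity" denotes the fraction of non-zero entries, with each non-zero entry set to +1 or -1 with equal probability; the function name is our own.

```python
import numpy as np

def make_binary_kernels(num_kernels, channels, size, sparsity, seed=0):
    """Generate fixed sparse binary convolution kernels.

    Each entry is zero with probability (1 - sparsity); the remaining
    entries are +1 or -1 with equal probability. The kernels are drawn
    once and never updated during training.
    """
    rng = np.random.default_rng(seed)
    shape = (num_kernels, channels, size, size)
    mask = rng.random(shape) < sparsity          # which entries are non-zero
    signs = rng.choice([-1.0, 1.0], size=shape)  # random +/-1 values
    return mask * signs

kernels = make_binary_kernels(num_kernels=8, channels=3, size=3, sparsity=0.5)
print(kernels.shape)  # (8, 3, 3, 3)
```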

(Figure: theorem stating that LBCNN can closely approximate the activations of a standard CNN.)


Contributions

  • Convolutional kernels inspired by local binary patterns.
  • A CNN architecture with fixed, randomized, sparse binary convolutional kernels.
  • A lightweight CNN with large computational and memory savings (up to 81x fewer learnable parameters and 32x smaller models).
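A quick parameter count makes the savings concrete. A standard convolutional layer learns c_in * c_out * k * k weights, while an LBCNN layer learns only the m * c_out weights of the 1x1 combination. Assuming the number of binary filters m equals the input channel count (an illustrative choice, not prescribed here), the ratio is k^2, which matches the 9x-81x range quoted above for 3x3 through 9x9 kernels.

```python
def cnn_params(c_in, c_out, k):
    """Learnable weights in a standard conv layer (bias omitted)."""
    return c_in * c_out * k * k

def lbcnn_params(m, c_out):
    """Learnable weights in an LBCNN layer: only the 1x1 combination
    of the m fixed binary response maps is learned."""
    return m * c_out

# Toy layer: 512 input/output channels, m = 512 fixed binary filters
print(cnn_params(512, 512, 3) // lbcnn_params(512, 512))  # 9  (3x3 kernels)
print(cnn_params(512, 512, 9) // lbcnn_params(512, 512))  # 81 (9x9 kernels)
```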

References

  1. Felix Juefei-Xu, Vishnu Naresh Boddeti, and Marios Savvides, "Local Binary Convolutional Neural Networks," arXiv, 2016.