The Lottery Ticket Hypothesis for Object Recognition
CVPR 2021
Sharath Girish
Shishira R Maiya
Kamal Gupta
Hao Chen
Larry Davis
Abhinav Shrivastava
[Paper]
[GitHub]
[Poster]

Abstract

Recognition tasks, such as object recognition and keypoint estimation, have seen widespread adoption in recent years. Most state-of-the-art methods for these tasks use deep networks that are computationally expensive and have huge memory footprints. This makes it exceedingly difficult to deploy these systems on low-power embedded devices. Hence, decreasing the storage requirements and the amount of computation in such models is paramount. The recently proposed Lottery Ticket Hypothesis (LTH) states that deep neural networks trained on large datasets contain smaller subnetworks that achieve performance on par with the dense networks. In this work, we perform the first empirical study investigating LTH for model pruning in the context of object detection, instance segmentation, and keypoint estimation. Our studies reveal that lottery tickets obtained from ImageNet pretraining do not transfer well to the downstream tasks. We provide guidance on how to find lottery tickets with up to 80% overall sparsity on different sub-tasks without incurring any drop in performance. Finally, we analyze the behavior of trained tickets with respect to various task attributes such as object size, frequency, and difficulty of detection.
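
For readers unfamiliar with the procedure, the sketch below illustrates the iterative magnitude pruning loop typically used to find lottery tickets: train, prune the smallest-magnitude weights, rewind the surviving weights to their initial values, and repeat. The toy model, data, and hyper-parameters here are illustrative assumptions, not the detection setup used in the paper.

```python
# Minimal sketch of iterative magnitude pruning (IMP) with weight rewinding.
# Toy model/data/hyper-parameters are assumptions for illustration only.
import copy
import torch
import torch.nn as nn

def magnitude_mask(model, masks, prune_frac):
    """Globally zero out the smallest-magnitude surviving weights."""
    scores = torch.cat([(p.abs() * masks[n]).flatten()
                        for n, p in model.named_parameters() if n in masks])
    surviving = scores[scores > 0]
    k = int(prune_frac * surviving.numel())
    threshold = torch.kthvalue(surviving, k).values if k > 0 else 0.0
    return {n: (p.abs() > threshold).float() * masks[n]
            for n, p in model.named_parameters() if n in masks}

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
init_state = copy.deepcopy(model.state_dict())           # weights to rewind to
masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
         if p.dim() > 1}                                  # prune weight matrices only
x, y = torch.randn(256, 16), torch.randint(0, 2, (256,))

for round_idx in range(5):                                # prune/rewind rounds
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(100):                                  # short training phase
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
        with torch.no_grad():                             # keep pruned weights at zero
            for n, p in model.named_parameters():
                if n in masks:
                    p.mul_(masks[n])
    masks = magnitude_mask(model, masks, prune_frac=0.2)  # prune 20% per round
    model.load_state_dict(init_state)                     # rewind to initialization
    with torch.no_grad():
        for n, p in model.named_parameters():
            if n in masks:
                p.mul_(masks[n])
    sparsity = 1 - sum(m.sum() for m in masks.values()) / sum(m.numel() for m in masks.values())
    print(f"round {round_idx}: sparsity {sparsity:.1%}, last loss {loss.item():.3f}")
```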



In the graphs below, we show the performance of directly pruning the entire network for various tasks, across backbones and sparsities. The pruned network either performs better than or maintains accuracy comparable to the dense network up to 80% sparsity.

ResNet-18 backbone
ResNet-50 backbone
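
The sketch below shows one way to directly prune an entire network to a given overall sparsity with one-shot global magnitude pruning in PyTorch. The ResNet-18 backbone and the 80% target are assumptions for illustration; the paper evaluates full detection, segmentation, and keypoint models built on such backbones.

```python
# Minimal sketch: one-shot global magnitude pruning to a target overall sparsity.
# The backbone choice and 80% target are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
import torchvision

model = torchvision.models.resnet18()  # randomly initialized backbone

# Collect all conv/linear weights and prune the smallest-magnitude ones globally.
parameters_to_prune = [
    (m, "weight")
    for m in model.modules()
    if isinstance(m, (nn.Conv2d, nn.Linear))
]
prune.global_unstructured(
    parameters_to_prune,
    pruning_method=prune.L1Unstructured,
    amount=0.8,  # 80% overall sparsity, matching the plots above
)

# Verify the achieved overall sparsity of the pruned layers.
zeros = sum(float((m.weight == 0).sum()) for m, _ in parameters_to_prune)
total = sum(m.weight.nelement() for m, _ in parameters_to_prune)
print(f"overall sparsity: {zeros / total:.2%}")
```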


Talk



Method



Paper

S. Girish, S. R. Maiya, K. Gupta,
H. Chen, L. Davis, A. Shrivastava.
The Lottery Ticket Hypothesis for Object Recognition.
CVPR 2021.
(hosted on CVF)


[Bibtex]


Acknowledgements

This template was originally made by Phillip Isola and Richard Zhang for a colorful ECCV project; the code can be found here.