The next PGR seminar is taking place this Friday, 25th April, at 2 PM in JC 1.33a.
Below are the title and abstract for Dhananjay's talk. Please do come along if you are able.
Title: Signal Collapse in One-Shot Pruning: When Sparse Models Fail to Distinguish Neural Representations
Abstract: The deep learning breakthrough in 2012, marked by AlexNet’s success on the ImageNet challenge, ushered in an era of increasingly large neural networks. Modern models now hold tens of millions to billions of parameters, enabling remarkable capabilities but creating serious challenges for deployment in real-world, resource-constrained environments. This has led to growing interest in model compression, with network pruning emerging as a widely adopted method to reduce computational and memory demands. Iterative pruning—based on repeated prune-retrain cycles—can retain accuracy but becomes infeasible at scale due to high computational cost. One-shot pruning, which removes parameters in a single step without retraining, offers a more scalable alternative but often results in severe accuracy degradation. For instance, pruning 80% of the parameters from RegNetX-32GF (a 100M+ parameter model) drops ImageNet accuracy from 80% to 1%, rendering the model unusable. This talk uncovers a new and fundamental bottleneck behind such failures: signal collapse, a previously overlooked phenomenon that disrupts the network’s ability to distinguish between inputs. To address this, a simple and efficient method called REFLOW is introduced, enabling sparse networks to recover strong performance without retraining or gradient computation. On RegNetX-32GF, REFLOW lifts accuracy from 1% to 73% at 80% sparsity—in under 15 seconds. These findings reframe the challenges of one-shot pruning and open new opportunities for practical and efficient deployment of deep learning models.
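For context, the snippet below is a minimal sketch of the one-shot setting the abstract describes: weights are removed in a single pass, with no retraining or gradient computation. It illustrates a standard global magnitude-pruning baseline, not REFLOW itself (the abstract does not detail that method); the function name, the global magnitude threshold, and the commented-out torchvision model loading are illustrative assumptions.

    import torch
    import torch.nn as nn

    def one_shot_magnitude_prune(model: nn.Module, sparsity: float = 0.8) -> nn.Module:
        """Zero out the smallest-magnitude weights in one step, with no retraining.

        A sketch of the one-shot pruning setting referred to in the abstract;
        REFLOW is not shown here.
        """
        # Gather the weights of all conv and linear layers.
        weights = [m.weight.data for m in model.modules()
                   if isinstance(m, (nn.Conv2d, nn.Linear))]
        all_mags = torch.cat([w.abs().flatten() for w in weights])

        # Global threshold: the magnitude below which `sparsity` of weights fall.
        k = max(1, int(sparsity * all_mags.numel()))
        threshold = torch.kthvalue(all_mags, k).values

        # Apply binary masks in place; pruned weights are set to zero.
        for w in weights:
            w.mul_((w.abs() > threshold).float())
        return model

    # Hypothetical usage with a torchvision RegNetX-32GF checkpoint:
    # from torchvision.models import regnet_x_32gf
    # model = regnet_x_32gf(weights="DEFAULT")
    # model = one_shot_magnitude_prune(model, sparsity=0.8)

As the abstract notes, applying this kind of single-pass pruning at 80% sparsity to a model of this size collapses ImageNet accuracy; the talk examines why and how REFLOW recovers it without retraining.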