Neural Pruning Effects on Decision Boundary Geometry
Keywords:
Neural Pruning, Decision Boundary Geometry, Model Compression, Robustness, Lottery Ticket Subnetworks, Structured Sparsity

Abstract
Neural pruning, the selective removal of parameters from trained neural networks, has become a central
method for model compression and efficiency optimization. However, its impact extends beyond
parameter count and latency reduction, influencing the structure and stability of decision boundaries that
determine class separability. This study examines how different pruning strategies (magnitude-based
unstructured pruning, structured neuron and filter removal, and lottery-ticket subnetwork identification)
affect decision boundary geometry across multiple neural architectures. Experimental results show that
moderate pruning preserves margin width and cluster separability, while aggressive sparsification
increases decision surface curvature and fragmentation, reducing robustness to perturbations. Structured
and lottery-ticket-based methods were found to maintain smoother boundaries relative to unstructured
pruning, highlighting the importance of representational alignment in preserving classifier stability. These
findings demonstrate that pruning must be evaluated not only in terms of computational efficiency but
also in terms of its geometric implications for reliability in real-world inference environments.
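
For concreteness, the following minimal sketch (not the study's own code; the model and pruning amounts are illustrative) shows how the two pruning families compared above can be applied with PyTorch's torch.nn.utils.prune utilities:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy network; architecture and sizes are placeholders for illustration only.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
)
conv = model[2]

# Magnitude-based unstructured pruning: zero the 50% of weights
# in this layer with the smallest absolute value.
prune.l1_unstructured(conv, name="weight", amount=0.5)

# Structured alternative: remove whole filters (dim=0) ranked by L2 norm.
# prune.ln_structured(conv, name="weight", amount=0.5, n=2, dim=0)

# Pruning is applied via a mask at forward time; bake it in permanently.
prune.remove(conv, "weight")

# Fraction of weights in this layer that are now exactly zero.
sparsity = (conv.weight == 0).float().mean().item()
print(f"conv sparsity after pruning: {sparsity:.2%}")
```

The commented-out ln_structured call highlights the distinction the abstract draws: unstructured pruning scatters zeros throughout the weight tensor, whereas structured pruning removes entire filters, which the reported results associate with smoother decision boundaries.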