Adversarial Perturbation Sensitivity in Production ML Models
Keywords:
Adversarial Robustness; Inference Stability; Production Machine Learning

Abstract
Adversarial perturbations pose a significant challenge to the stability and reliability of production
machine learning systems, where inference decisions influence automated workflows, user
interactions, and business-critical transactions. This study examines how small but deliberately
structured perturbations can destabilize model outputs even when they are imperceptible
in magnitude and do not alter the semantic interpretation of the input data. By evaluating model responses
across baseline inputs, random noise injections, and gradient-targeted adversarial perturbations, the
work identifies distinct divergence patterns in prediction confidence and decision boundaries. The
analysis further reveals that perturbation impact can propagate across batch processing pipelines and
multi-step user interaction workflows, amplifying adversarial effects beyond the initially perturbed
instance. Additionally, system resource variability, such as fluctuating CPU availability or memory
pressure, is shown to intensify perturbation sensitivity, indicating that the stability of the operational
environment is a key determinant of robustness. These findings emphasize the need for holistic adversarial
defense strategies that combine model-level resilience, runtime monitoring, infrastructure
determinism, and application workflow safeguards to ensure dependable real-world deployment of
machine learning models.