Adversarial Perturbation Sensitivity in Production ML Models

Authors

  • Marina Devon, Christopher Hale

Keywords

Adversarial Robustness; Inference Stability; Production Machine Learning

Abstract

Adversarial perturbations pose a significant challenge to the stability and reliability of production
machine learning systems, where inference decisions drive automated workflows, user interactions,
and business-critical transactions. This study examines how small but deliberately structured
perturbations can destabilize model outputs even when they are imperceptible in magnitude and do
not alter the semantic interpretation of the input. By evaluating model responses across baseline
inputs, random noise perturbations, and gradient-targeted adversarial perturbations, the work
identifies distinct divergence patterns in prediction confidence and decision boundaries. The
analysis further reveals that perturbation effects can propagate across batch-processing pipelines
and multi-step user interaction workflows, amplifying adversarial impact beyond the initially
perturbed instance. Additionally, system resource variability, such as fluctuating CPU availability
or memory pressure, was shown to intensify perturbation sensitivity, indicating that operational
environment stability is a key determinant of robustness. These findings underscore the need for
holistic adversarial defense strategies that combine model-level resilience, runtime monitoring,
infrastructure determinism, and application workflow safeguards to ensure dependable real-world
deployment of machine learning models.
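
The implementation behind these experiments is not included on this page; as a minimal, hypothetical sketch of the three input conditions the abstract compares, the PyTorch snippet below contrasts mean top-class confidence on baseline inputs, random noise of matched magnitude, and a one-step gradient-targeted (FGSM-style) perturbation. The toy model, batch shapes, and epsilon are illustrative placeholders, not the authors' setup.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    # Toy stand-in for a production classifier (placeholder; any nn.Module works).
    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 3))
    x = torch.randn(8, 20)          # batch of baseline inputs
    y = torch.randint(0, 3, (8,))   # labels used to target the loss gradient
    epsilon = 0.05                  # small perturbation budget (assumed "imperceptible")

    def fgsm_perturb(model, x, y, eps):
        """One signed-gradient ascent step on the loss (FGSM-style perturbation)."""
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).detach()

    def top_confidence(model, x):
        """Highest softmax probability per example in the batch."""
        with torch.no_grad():
            return F.softmax(model(x), dim=-1).max(dim=-1).values

    x_noise = x + epsilon * torch.randn_like(x).sign()  # random, same L-infinity size
    x_adv = fgsm_perturb(model, x, y, epsilon)

    for name, batch in [("baseline", x), ("random noise", x_noise), ("adversarial", x_adv)]:
        print(f"{name:13s} mean top-class confidence: "
              f"{top_confidence(model, batch).mean().item():.3f}")

Even at matched perturbation magnitude, the gradient-targeted inputs typically shift confidence far more than random noise, which is the kind of divergence pattern the abstract describes.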

Published

2022-10-23

How to Cite

Marina Devon, Christopher Hale. (2022). Adversarial Perturbation Sensitivity in Production ML Models. Journal of Green Energy and Transition to Sustainability, 1(2), 7–12. Retrieved from https://theeducationjournals.com/index.php/JGETS/article/view/270

Issue

Vol. 1 No. 2 (2022)

Section

Articles