Class Imbalance Sensitivity Analysis in Real-World ML Deployments
Keywords: Class Imbalance, Fraud Detection, Deployment Sensitivity

Abstract
Class imbalance is a pervasive challenge in machine learning systems deployed in real-world
enterprise environments, where high-value minority events, such as fraudulent transactions, occur
with low frequency relative to majority-class activity. This study conducts a deployment-focused
sensitivity analysis of imbalance effects in a cloud-based fraud detection pipeline, examining how
imbalance influences model representation geometry, gradient learning dynamics, confidence
calibration, and decision threshold stability. Using production inference logs and incremental
retraining cycles, the analysis reveals that imbalance compresses decision boundaries, suppresses
minority-class gradients, and restricts viable threshold tuning ranges, leading to operational fragility
in detection performance. The results further show that imbalance sensitivity varies over time and
across workflow routing channels, indicating that mitigation requires both algorithm-level correction
and deployment-aware adaptive policies. These findings emphasize that class imbalance is not only a
modeling issue but a systemic characteristic of real-world ML operations, requiring continuous
monitoring and dynamic calibration.