Regularization Robustness in Machine Learning Models with Limited Data

Authors

  • Sophia Caldwell, Benjamin Roark

Keywords

Data Scarcity, Model Regularization, Representation Stability

Abstract

Machine learning models trained under data scarcity often suffer from unstable representations, poor generalization, and memorization-driven failure modes. This article investigates the effectiveness of three categories of regularization strategies (structural, feature-space, and learning-dynamic) in mitigating these challenges. A multi-phase evaluation approach examines model behavior across varying levels of training-data availability and under incremental learning conditions. Structural regularization methods such as weight sharing and low-rank factorization produced the most consistent stability, while feature-space constraints enhanced representational coherence and transferability. Learning-dynamic strategies provided partial benefits but required adaptive control to avoid suppressing meaningful learning signals. The results indicate that robust generalization under data scarcity is best supported by regularization approaches that shape internal feature geometry rather than by simply constraining parameter magnitudes. The study offers practical insights for deploying models in real-world conditions where data availability is inherently limited.
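
To make the structural category concrete: the abstract names low-rank factorization as one of the most stable approaches, and a minimal sketch of that idea in PyTorch might look like the following. The LowRankLinear class, its dimensions, and the rank value are illustrative assumptions for exposition, not code from the paper.

    import torch
    import torch.nn as nn

    class LowRankLinear(nn.Module):
        """Illustrative structural regularization: factorize a dense
        weight matrix W as U @ V with a small inner rank, cutting the
        parameter count from in_features * out_features to roughly
        rank * (in_features + out_features)."""

        def __init__(self, in_features: int, out_features: int, rank: int):
            super().__init__()
            # Two thin linear maps replace one full-rank weight matrix.
            self.U = nn.Linear(in_features, rank, bias=False)
            self.V = nn.Linear(rank, out_features, bias=True)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.V(self.U(x))

    # Usage: a drop-in replacement for nn.Linear in a small-data setting.
    layer = LowRankLinear(in_features=256, out_features=128, rank=16)
    x = torch.randn(4, 256)
    print(layer(x).shape)  # torch.Size([4, 128])

The design choice here mirrors the abstract's claim: the constraint shapes the geometry of the learned map (it can only express rank-16 transformations) rather than merely penalizing parameter magnitudes, as a weight-decay term would.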

Published

2026-02-05

How to Cite

Sophia Caldwell, Benjamin Roark. (2026). Regularization Robustness in Machine Learning Models with Limited Data. Turquoise International Journal of Educational Research and Social Studies, 7(2), 11–15. Retrieved from https://theeducationjournals.com/index.php/tijer/article/view/408

Section

Articles