Federated Model Accuracy Variance Under Non-IID Data Regimes

Authors

  • Rowan Whitcombe, Stefan Haversley

Keywords

Federated Learning, Non-IID Data, Accuracy Variance, Distributed Optimization

Abstract

Federated learning enables collaborative model training across distributed data sources without centralizing
sensitive information, but its performance is highly sensitive to non-IID data conditions. When client data
distributions diverge, local update directions become misaligned, causing instability in global aggregation,
slower convergence, and uneven generalization across participants. This study evaluates the accuracy
variance characteristics of federated models under controlled non-IID regimes and compares the
effectiveness of aggregation stabilization strategies. Results show that standard FedAvg performs reliably
only under mild heterogeneity, while methods such as FedProx and FedDyn significantly reduce accuracy
variance and improve convergence consistency as non-IID severity increases. Variance dispersion across
clients proved to be a more sensitive indicator of training stability than global accuracy alone. The findings
underscore the importance of variance-aware evaluation frameworks and drift-mitigating optimization
techniques for deploying federated learning in enterprise and distributed cloud environments where data
heterogeneity is inherent.
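The variance-aware evaluation idea from the abstract can be sketched as follows: alongside FedAvg-style weighted aggregation, track the dispersion of per-client accuracies rather than global accuracy alone. The function names and toy accuracy figures below are illustrative assumptions, not the paper's actual experimental setup.

```python
import statistics

def fedavg(client_weights, client_sizes):
    """FedAvg aggregation: average client parameter vectors,
    weighted by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

def accuracy_dispersion(client_accuracies):
    """Mean accuracy plus across-client standard deviation --
    the variance-style stability indicator discussed above."""
    return (statistics.mean(client_accuracies),
            statistics.pstdev(client_accuracies))

# Hypothetical per-client accuracies under mild vs. severe heterogeneity.
mild_accs = [0.91, 0.90, 0.92, 0.89]
severe_accs = [0.95, 0.70, 0.88, 0.61]

mild_mean, mild_std = accuracy_dispersion(mild_accs)
severe_mean, severe_std = accuracy_dispersion(severe_accs)
```

Under this toy setup, the two regimes can have similar mean accuracy while the severe non-IID case shows a much larger across-client standard deviation, which is why dispersion is the more sensitive stability signal.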

Published

2021-08-10

How to Cite

Rowan Whitcombe, Stefan Haversley. (2021). Federated Model Accuracy Variance Under Non-IID Data Regimes. Journal of Artificial Intelligence in Fluid Dynamics, 2(2), 1–5. Retrieved from https://theeducationjournals.com/index.php/jaifd/article/view/256

Section

Articles