Dimensionality Reduction Biases in Embedding Feature Spaces

Authors

  • Adrian Wexford

Keywords

dimensionality reduction, embedding bias, feature representation

Abstract

Dimensionality reduction methods are widely used to convert high-dimensional scientific and sensor data into compact embedding spaces for interpretation, monitoring, and diagnostic decision support. However, these transformations introduce structural biases that shape how similarity, continuity, and clustering relationships are perceived in the reduced space. This study analyzes how PCA, t-SNE, UMAP, and autoencoder-based embeddings redistribute variance under different normalization conditions and multi-sensor fusion configurations. Results show that PCA preserves global structure but suppresses subtle regime transitions, while manifold learners such as t-SNE and UMAP exaggerate local separations. Autoencoder embeddings fall between these extremes, remaining comparatively stable across transitions, but can smooth abrupt state changes. All of these behaviors are strongly affected by preprocessing strategies, which can amplify acquisition artifacts or mask physically meaningful variance. The findings emphasize that embeddings are not neutral representations but selective transformations that must be aligned with interpretive and monitoring objectives to avoid analytical misinterpretation.
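The preprocessing sensitivity described in the abstract is straightforward to reproduce. The following is a minimal sketch, not taken from the paper: the two-regime multi-sensor data and its channel scales are simulated and purely hypothetical. Using scikit-learn, it shows how a single high-variance but uninformative channel dominates PC1 when the data are left unscaled, suppressing a subtle regime transition, and how standardization redistributes that variance and changes the apparent regime separation in a t-SNE embedding.

import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Hypothetical multi-sensor data: five low-variance channels carry a
# subtle two-regime shift; one high-variance channel is loud but
# regime-agnostic (e.g. a sensor reporting raw, unscaled counts).
n = 300
regime = rng.integers(0, 2, n)
low_var = rng.normal(regime[:, None] * 0.5, 0.1, (n, 5))
high_var = rng.normal(0.0, 100.0, (n, 1))
X = np.hstack([low_var, high_var])

for label, Z in [("raw", X), ("standardized", StandardScaler().fit_transform(X))]:
    # PCA: without scaling, PC1 is dominated by the loud channel and the
    # regime transition is suppressed; after scaling it reappears.
    pca = PCA(n_components=2).fit(Z)
    # t-SNE: the gap between regime centroids in the embedding shifts
    # with the same preprocessing choice.
    emb = TSNE(n_components=2, perplexity=30.0, random_state=0).fit_transform(Z)
    gap = np.linalg.norm(emb[regime == 0].mean(0) - emb[regime == 1].mean(0))
    print(f"{label:>12}: PC1 variance share = {pca.explained_variance_ratio_[0]:.2f}, "
          f"t-SNE regime-centroid gap = {gap:.1f}")

Comparing the printed PC1 variance shares across the two preprocessing conditions makes the abstract's point concrete: the geometry of the embedding is a function of the normalization choice as much as of the underlying data.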

Published

2024-11-15

How to Cite

Wexford, A. (2024). Dimensionality Reduction Biases in Embedding Feature Spaces. Journal of Artificial Intelligence in Fluid Dynamics, 3(2), 1–7. Retrieved from https://theeducationjournals.com/index.php/jaifd/article/view/341

Issue

Vol. 3 No. 2 (2024)

Section

Articles