Knowledge Representation Limits in Large-Scale Neural Symbolic Systems

Authors

  • Adrian Whitmore, Clara Voss

Keywords

Neural-Symbolic Models, Knowledge Representation, Compositional Reasoning

Abstract

Neural-symbolic systems aim to combine the perceptual generalization strengths of neural networks
with the structured reasoning capabilities of symbolic logic. However, this study finds that the internal
representations formed by large-scale neural components are inherently limited in their ability to
preserve symbolic identity, compositional structure, and rule invariance across transformations.
Through controlled evaluations of representational load, referential continuity, context perturbation,
domain transfer, and embedding drift at scale, we show that neural representations remain
context-dependent and correlation-driven, leading to systematic breakdowns when deeper logical
abstraction or cross-domain consistency is required. These findings indicate that performance on
symbolic tasks in familiar contexts does not imply stable knowledge representation. Achieving reliable
neural-symbolic reasoning therefore requires architectures that incorporate explicit symbolic binding
and structural grounding mechanisms rather than relying solely on distributed neural encoding.

Published

2023-04-04

Section

Articles