Is AGI achievable through parameter scaling, or is it mathematically bounded by an "Entropy Wall"? This paper formalizes the inherent limitations of artificial neural networks by synthesizing information theory and Gödel's incompleteness theorems. We prove that any closed-loop, static-weight model is strictly constrained by a finite "Entropy Wall," leading to inescapable "Inference Blind Spots"—logic patterns that can be neither proven nor captured by the model's internal mechanisms. While current AGI efforts focus on massive scaling, we demonstrate that biological intelligence transcends these formal boundaries by operating as an "Adaptive Open System." By introducing Metacognitive Correction Operators, the human brain bypasses the limitations of finite-parameter models. This work provides a mathematical framework for why closed architectures fail to achieve true reasoning and proposes "Open-System Intelligence" as the paradigm shift necessary to surpass the current boundaries of AI.
XUEZHENG WANG
www.synapsesocial.com/papers/69eb0c39553a5433e34b58ac — DOI: https://doi.org/10.5281/zenodo.19692019