This paper proposes a conceptual map of AI governance as a systems architecture rather than a policy framework. It argues that governance in complex AI systems emerges from the coupling of four distinct but interdependent fields: authority, structural constraint, execution control, and dynamic stability. Each field originates from a different disciplinary tradition — law and institutional governance, safety-critical systems engineering, software and runtime systems, and control theory. Current governance approaches tend to address these domains separately, producing frameworks that are internally coherent but structurally incomplete. The paper introduces a four-field model that organizes these perspectives and identifies governance failures primarily as coupling failures between them. Its goal is not to prescribe a governance system but to map the architectural terrain in which such systems must operate. The work is intended as a conceptual reconnaissance for researchers, system architects, and policymakers working at the intersection of AI governance, complex systems, and safety-critical infrastructures.
Ricardo Rubio Albacete
www.synapsesocial.com/papers/69d34e949c07852e0af982fc — DOI: https://doi.org/10.5281/zenodo.18906019