Prevailing approaches to artificial intelligence safety frame risk as a function of misaligned objectives, excessive autonomy, or superhuman intelligence. This paper advances a different thesis: artificial intelligence becomes dangerous not through intelligence itself, but through control over irreversible record finality. When AI outputs crystallize into binding records that constrain future action faster than human contestation can operate, domination emerges without malice, intent, or awareness. Building on a structural analysis of decision systems, the paper formulates a general architectural theory of non-dominating AI systems grounded in the strict separation of cognition from authority. It introduces Normative Causal Integrity (NCI) as a central safety invariant and identifies decoupling—the condition in which norms persist symbolically while losing causal power over outcomes—as the primary failure mode of advanced AI deployments. The proposed architecture preserves democratic governance by design through delayed finality, mandatory contestability, distributed authorization, and human accountability. This framework reframes AI safety as an institutional and architectural problem rather than an alignment or capability problem, enabling large-scale intelligence amplification without authority capture or loss of agency.
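To make the proposed invariant concrete, the following is a minimal sketch (in Python, with all names hypothetical, e.g. `FinalityGate` and `PendingRecord`) of how delayed finality, mandatory contestability, distributed authorization, and human accountability might compose into a single authority layer kept separate from the model that produced the proposal. It illustrates the architectural pattern described above under stated assumptions; it is not the paper's implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class PendingRecord:
    """An AI output held in a pending state; it is not yet a binding record."""
    proposal: str                                  # AI-generated output (cognition)
    created_at: datetime
    window: timedelta                              # mandatory contestation window
    approvals: set = field(default_factory=set)    # IDs of human authorizers
    contested: bool = False

class FinalityGate:
    """Authority layer, architecturally separate from the proposing model."""

    def __init__(self, quorum: int, window_hours: int = 48):
        self.quorum = quorum                       # distributed authorization threshold
        self.window = timedelta(hours=window_hours)

    def submit(self, proposal: str) -> PendingRecord:
        # Delayed finality: every output starts as a contestable pending record.
        return PendingRecord(proposal, datetime.utcnow(), self.window)

    def contest(self, record: PendingRecord) -> None:
        # Mandatory contestability: a single objection blocks finalization.
        record.contested = True

    def approve(self, record: PendingRecord, human_id: str) -> None:
        # Human accountability: each approval is attributable to a person.
        record.approvals.add(human_id)

    def finalize(self, record: PendingRecord, now: datetime) -> bool:
        # A record becomes binding only if the contestation window has
        # elapsed, a quorum of humans approved, and no one contested.
        window_elapsed = now - record.created_at >= record.window
        quorum_met = len(record.approvals) >= self.quorum
        return window_elapsed and quorum_met and not record.contested
```

In this sketch, the model can only ever call `submit`; `approve`, `contest`, and `finalize` are reserved for human principals, so cognition never directly produces a binding record.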
Spiros P. Kalalis
Epic Systems (United States)
www.synapsesocial.com/papers/6966f2fb13bf7a6f02c006d0 — DOI: https://doi.org/10.5281/zenodo.18209623