Artificial intelligence systems are increasingly deployed in high-risk and regulated domains such as finance, healthcare, defense, and public governance. Despite significant advances in model capability, existing AI architectures lack a formal and enforceable mechanism to define who holds decision authority, under what conditions decisions may be executed, and how accountability is preserved. This absence creates systemic legal, operational, and ethical risks. This paper introduces Decision Authority Infrastructure (DAI), a deterministic, governance-first framework that formally separates intelligence generation from decision authorization. The proposed approach enforces human-final authority, non-autonomous execution, uncertainty-aware AUTO-LOCK mechanisms, and deterministic decision replay for audit-grade traceability. Decisions are permitted only when authority, risk thresholds, and governance constraints are explicitly satisfied. Unlike prevailing AI safety paradigms that emphasize alignment, ethics, or probabilistic oversight, DAI treats decision authority as a computable systems layer that can be verified, audited, and regulated. The framework establishes Decision Authority as a foundational infrastructure requirement for deploying AI in irreversible and accountability-critical environments, and proposes it as a new scientific and engineering field within AI governance.
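To make the abstract's core gating rule concrete, the following is a minimal illustrative sketch, not the paper's implementation: it assumes hypothetical names (DecisionRequest, authorize) and placeholder thresholds, and shows how authority, risk, human-final approval, and an uncertainty-triggered AUTO-LOCK could be evaluated as a deterministic authorization gate.

```python
from dataclasses import dataclass
from enum import Enum


class Verdict(Enum):
    AUTHORIZED = "authorized"   # all governance constraints satisfied
    AUTO_LOCK = "auto_lock"     # uncertainty too high; escalate to a human
    DENIED = "denied"           # authority, risk, or approval constraint violated


@dataclass(frozen=True)
class DecisionRequest:
    actor_id: str          # who requests execution
    risk_score: float      # policy-derived risk estimate in [0, 1]
    uncertainty: float     # calibrated uncertainty estimate in [0, 1]
    human_approved: bool   # explicit human-final sign-off


def authorize(request: DecisionRequest,
              authorized_actors: frozenset,
              risk_threshold: float = 0.3,
              uncertainty_lock: float = 0.5) -> Verdict:
    """Deterministic gate: a decision may execute only when authority,
    risk thresholds, and human-final approval are all explicitly satisfied."""
    if request.uncertainty >= uncertainty_lock:
        return Verdict.AUTO_LOCK            # uncertainty-aware AUTO-LOCK
    if request.actor_id not in authorized_actors:
        return Verdict.DENIED               # no decision authority
    if request.risk_score > risk_threshold:
        return Verdict.DENIED               # risk threshold exceeded
    if not request.human_approved:
        return Verdict.DENIED               # human-final authority withheld
    return Verdict.AUTHORIZED
```

Because the gate is a pure function of its inputs, logging each DecisionRequest together with the returned verdict is enough to replay any decision deterministically for audit, in the sense the abstract describes.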
YASIN KALAFATOGLU
Chinese Academy of Governance
www.synapsesocial.com/papers/698435b9f1d9ada3c1fb4e2e — DOI: https://doi.org/10.5281/zenodo.18473123