The Governance of Human Capacity in the AI Age, Vol. 1: The Burnout of Possibility — Capability-Bandwidth Mismatch in the Early AI Era
Civilization Physics — Human Systems & AI Integration Series

This paper introduces Capability-Bandwidth Mismatch (CBM) as a structural condition defining the early AI era. CBM arises when generative AI expands the executable surface of work—plans, drafts, code, and parallel task streams—faster than humans and organizations can supply the governing bandwidth required to manage it. Governing bandwidth includes attention, working memory, judgment, verification, integration, and accountability. The result is a distinct form of cognitive strain driven not by scarcity of options but by overexposure to actionable possibilities that must be evaluated and coordinated.

The analysis begins by reframing productivity gains from generative AI. While empirical evidence shows increased speed and output in domains such as writing, customer support, and software development, these gains shift rather than eliminate workload. Execution becomes inexpensive, but governance becomes the bottleneck. Humans move from producing outputs to supervising, validating, and integrating them, creating a new distribution of effort in which cognitive control replaces manual execution as the limiting factor.

CBM is formally defined as the ratio between the AI-expanded option space and human governance capacity. As AI generates more candidate actions, the burden of selection, verification, and alignment increases. This mismatch produces a characteristic fatigue profile: continuous context-switching, verification under uncertainty, and responsibility for decisions across multiple parallel streams. Unlike traditional burnout, which emerges from prolonged stress, CBM can arise rapidly from structural overload in decision and coordination processes.
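The ratio definition above can be written out in notation; this is an illustrative sketch, and the symbols O(t), G(t), and the overload threshold are assumptions rather than the paper's own formalism:

```latex
% Illustrative formalization (symbols are assumed, not taken from the paper):
% O(t): AI-expanded option space at time t (candidate actions requiring governance)
% G(t): governing bandwidth at time t (attention, verification, integration capacity)
\mathrm{CBM}(t) = \frac{O(t)}{G(t)}, \qquad
\mathrm{CBM}(t) > 1 \;\Rightarrow\; \text{governance overload}
```

On this reading, generative AI raises O(t) rapidly while G(t) stays roughly fixed, so the ratio drifts above 1 unless governance work is explicitly budgeted.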
The paper identifies several interacting mechanisms underlying CBM:

- Expansion of executable surface — AI rapidly generates multiple viable actions and alternatives, increasing decision volume.
- Verification burden under uncertainty — fluent outputs require validation due to potential hallucination or inconsistency.
- Frame execution vs. frame preservation — AI executes within given structures, while humans remain responsible for maintaining coherence, goals, and accountability.
- Cognitive control bottlenecks — working-memory limits and task-switching costs constrain the ability to manage parallel processes.
- Trust calibration dynamics — overreliance reduces monitoring, while underreliance increases redundant checking; both raise cognitive load.

These mechanisms align with established human-factors research on automation, including increased monitoring demands, out-of-the-loop risks, and the paradox that automation can intensify the difficulty of the human tasks that remain.

The paper distinguishes CBM from related constructs. Unlike information overload, which concerns data volume, CBM emphasizes actionable options that require decision and integration. Unlike decision fatigue, CBM does not depend on depletion models but on structural limits of cognitive control. Unlike technostress, CBM is specific to generative AI's ability to expand the action surface while retaining probabilistic uncertainty in its outputs.

Empirical evidence supports both sides of the mismatch. Studies show that AI assistance improves speed and output quality on certain tasks, particularly for less experienced workers. At the same time, research in human–AI interaction and automation highlights increased monitoring demands, reduced sense of control, and altered cognitive engagement. In domains such as software development and design, rapid generation of alternatives increases the evaluation and integration burden, consistent with cognitive load theory and task-switching research.
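The interaction of verification burden and task-switching cost can be sketched as a minimal toy model. All parameter names and values here are illustrative assumptions, not estimates or methods from the paper:

```python
# Toy model of governance load under CBM.
# Load = per-option verification cost (scaled by output uncertainty)
#      + a switching cost that grows with the number of parallel streams.
# Every parameter below is a hypothetical placeholder, not a measured quantity.

def governance_load(n_options: int, uncertainty: float,
                    verify_cost: float = 1.0, switch_cost: float = 0.5) -> float:
    """Governing bandwidth consumed by n_options parallel AI outputs.

    uncertainty in [0, 1]: higher values force deeper verification per option.
    """
    verification = n_options * verify_cost * (1.0 + uncertainty)
    switching = switch_cost * max(0, n_options - 1)  # one stream needs no switching
    return verification + switching

# Cheap execution expands the option count; every added stream adds both
# verification work and coordination overhead.
loads = [governance_load(n, uncertainty=0.4) for n in (1, 4, 16)]
```

The point of the sketch is the structural asymmetry: generating another option is nearly free for the AI, but each one adds a nonzero verification and switching term to the human side of the ledger.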
The paper proposes a multi-level governance framework to address CBM:

- Foundational theory — treat AI outputs as probabilistic proposals requiring verification proportional to risk.
- Industry doctrine — define acceptable error tolerance, review depth, and escalation pathways.
- Organizational workflow — design processes that budget attention, separate generation from evaluation, and assign clear accountability.
- Personal cognitive governance — adopt practices such as batching, option limits, externalized task tracking, and predefined stopping rules.

These interventions treat governance work as primary rather than residual, aligning system design with human cognitive constraints.

The paper concludes that the central challenge of the AI era is not capability expansion but capacity governance. As AI amplifies what can be done, the limiting factor becomes what can be responsibly managed. Systems that fail to account for this shift risk unsustainable workloads, degraded decision quality, and loss of coherence. Within the Civilization Physics framework, CBM represents a general law of system imbalance: when execution scales faster than governance, instability emerges at the human level. Sustainable integration of AI therefore depends on protecting and structuring human cognitive bandwidth as a critical resource.

Keywords: Capability-Bandwidth Mismatch · Cognitive Load · Human-AI Interaction · Automation Paradox · Decision Complexity · Governance Framework · AI Productivity · Cognitive Control · System Design · Civilization Physics
Xiangyu Guo
www.synapsesocial.com/papers/69e07e242f7e8953b7cbf1e5 — DOI: https://doi.org/10.5281/zenodo.19563034