This paper examines a structural gap in contemporary AI governance: the separation between human oversight and identifiable authorship of consequential institutional decisions. While many regulatory frameworks emphasize oversight mechanisms, they often fail to ensure that decisional authority remains clearly attributable to accountable human actors. The paper introduces the concept of High-Impact Algorithmic Systems (HIAS) to identify algorithmic systems whose outputs materially structure consequential outcomes in domains such as public administration, credit allocation, employment, and regulatory enforcement. Unlike conventional “high-risk AI” classifications, the HIAS framework focuses on institutional function rather than technological label, highlighting how consequential algorithmic authority can exist even where systems are not formally categorized as artificial intelligence. Through a comparative analysis of ASEAN governance instruments and the EU AI Act, the study shows that many governance frameworks institutionalize oversight primarily as a risk-management mechanism while leaving the condition of identifiable authorship comparatively under-articulated. The paper argues that anchoring governance triggers to consequential institutional authority, rather than to system classification, would strengthen accountability in algorithmically structured decision systems without requiring major structural redesign of existing regulatory frameworks.
Iftikhar Mahmud
Film Independent
www.synapsesocial.com/papers/69aa70b8531e4c4a9ff5ac71 — DOI: https://doi.org/10.5281/zenodo.18863771