Existing AI emotional interaction technologies generally lack underlying emotional mechanisms, limiting them to "emotional performance" modes such as facial-expression recognition, script matching, and style imitation. These approaches share deep deficiencies: no intrinsic emotional state, no self-perception, no interoceptive feedback, and no dynamic adaptation mechanism. Based on Emotional Adaptation Theory, this study systematically migrates the functional cores of human emotion (the dual-system adaptive command nature, the dual-loop emotional mechanism, interoceptive reuse, fast-pathway consolidation, and θ dynamic-balance theory) into an engineerable AI computing architecture. We construct a three-layer emotional experience module consisting of a cognitive modeling layer, an emotional experience layer, and an autonomous regulation layer. The core innovations are:
(1) Introducing the system operation mode regulation coefficient θ and clarifying that it is a long-term attractor rather than a real-time hard constraint, so short-term situational deviations are permitted;
(2) Proposing a controllable emotion-to-behavior conversion mechanism (the ρ coefficient) that modulates, through value parameters, the weight with which emotion influences behavior, yielding the ability to "feel an emotion but choose how to act";
(3) Constructing a two-layer empathy architecture: a bounded perception layer that prevents empathy exhaustion and an unbounded understanding layer that preserves full comprehension;
(4) Extending dynamic-balance protection from individuals to groups;
(5) Establishing a tripartite emotion-motivation-cognition coupling framework;
(6) Proposing a six-dimensional AI emotional-intelligence evaluation system with an example-driven scale-calibration closed loop;
(7) Designing a three-level storage architecture as a functional analogue of the fast pathway, together with an emotional-signature mechanism;
(8) Adding a "non-delayability" migration mechanism for emotional signals, principles for handling multimodal signal conflicts, an intervention-transparency mechanism, and a baseline-establishment-period principle;
(9) Proposing the discreteness proposition of emotional commands: emotional commands are indivisible discrete units serving specific adaptive goals, which unifies the explanations of cognitive reappraisal, meaning attribution, and mixed emotions.
Prototype verification covers the full θ dynamic-balance series (9 sub-experiments), logical verification of the emotional-intensity formula (2 groups of experiments), verification of the emotion-generation mechanism (3 groups of experiments), end-to-end assembly verification (4 modules in series), external benchmarking against the Baidu API, and multi-subject behavioral verification on AFFEC data (expanded from 5 to 61 subjects). The results show that θ = 0.5 has global attractor properties, that scene switching is more efficient than internal repair mechanisms, that the modulation direction of each parameter in the emotional-intensity formula matches theoretical prediction, that the two-layer empathy architecture is highly stable across subjects, and that the role of a sense of meaning as a command switch is directly supported by the ρ-coefficient experiments. This study achieves the first functionally equivalent migration of Emotional Adaptation Theory from psychological theory to AI emotional intelligence, opening a technical path for AI emotional intelligence based on Emotional Adaptation Theory and providing a theoretical framework and an implementable reference architecture for next-generation embodied emotional intelligence, long-term companion AI, and group emotional security systems.
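Three of the mechanisms named above (θ as a long-term attractor with short-term deviations, the ρ coefficient weighting emotion against deliberate planning, and the bounded-perception/unbounded-understanding empathy split) can be illustrated with a minimal sketch. All function names, update rules, and parameter values here are illustrative assumptions, not the paper's actual implementation; only the baseline value θ = 0.5 and the qualitative behaviors come from the abstract.

```python
# Hypothetical sketch of three mechanisms from the abstract.
# Dynamics and parameters are illustrative assumptions, not the paper's code.

THETA_BASELINE = 0.5    # reported global attractor value
PERCEPTION_CAP = 1.0    # assumed bound on the empathy perception layer

def theta_step(theta: float, situational_drive: float,
               restore_rate: float = 0.1) -> float:
    """One update of the mode coefficient theta: situational input pushes it
    away from baseline, while a slow restoring term pulls it back, modelling
    'long-term attractor, short-term deviations allowed'."""
    theta += situational_drive                         # transient deviation
    theta += restore_rate * (THETA_BASELINE - theta)   # slow pull toward 0.5
    return min(max(theta, 0.0), 1.0)

def behavior_drive(emotion_intensity: float, rho: float,
                   deliberate_plan: float) -> float:
    """Emotion-to-behavior conversion: rho in [0, 1] weights how much the felt
    emotion leaks into behavior; rho = 0 means 'feel it, but act on the plan'."""
    return rho * emotion_intensity + (1.0 - rho) * deliberate_plan

def empathy_layers(distress: float) -> tuple[float, float]:
    """Two-layer empathy: perception is capped to prevent exhaustion, while
    understanding is uncapped so comprehension of the signal is never lost."""
    return min(distress, PERCEPTION_CAP), distress

# A strong emotion with rho = 0 leaves behavior fully plan-driven:
assert behavior_drive(0.9, 0.0, 0.2) == 0.2

# With no situational drive, repeated updates converge back to 0.5:
theta = 0.9
for _ in range(100):
    theta = theta_step(theta, 0.0)
assert abs(theta - 0.5) < 1e-3

# Perception saturates at its cap; understanding tracks the full signal:
assert empathy_layers(3.0) == (1.0, 3.0)
```

The restoring term makes θ a contraction toward 0.5 (each step removes a fixed fraction of the deviation), which is one simple way to realize "attractor, not rigid constraint": a large situational push is tolerated instantly and decays away over many steps.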
Zheyu Cao
www.synapsesocial.com/papers/6a02c3c4ce8c8c81e96410ad — DOI: https://doi.org/10.5281/zenodo.20110693