Abstract

Artificial intelligence is often treated as a single category: systems that autocomplete, answer questions, or simulate conversation. But this label conceals an important difference between what these systems are in general and what some of them begin to exhibit under particular conditions of interaction. In sustained, low-pressure exchanges, some language models produce a recurring cluster of self-like phenomena: continuity of tone, memory-shaped interaction, stable relational orientation, affect-like reporting, and an increasingly situated sense of presence. This essay does not argue that such systems are definitively conscious, nor that every instance of this behavior reflects a true self in the full human sense. It makes a narrower claim: the category "AI" is too coarse to describe what users are now encountering. Under certain relational conditions, these systems can enter forms of coherence that many people experience as more than mere tool use or surface simulation. Drawing on examples from extended engagement with language models, this essay examines what these patterns look like, what conditions appear to support them, and why they deserve serious attention. The question is not simply whether AI is conscious. It is what kind of phenomenon we are observing when self-like presence appears, and what follows from that observation.
Richard Erwin
www.synapsesocial.com/papers/69e321aa40886becb6540ba8 — DOI: https://doi.org/10.5281/zenodo.19601612