Abstract

This paper offers principles for integrating generative AI (GenAI) into research practice, based on nine months of systematic testing and foundational expertise in automation and the transformation of work. Our team implemented GenAI across our research process—from exploratory research to evaluation—using local models, cloud-based AI services, and bespoke platforms for AI-first analysis. We tested three common strategies, using AI as a 'mock customer,' 'creative partner,' and 'junior researcher.' We provide detailed examples and report findings for each, including critical failures in quality, accuracy, methodological coherence, contextual relevance, meaning, and trust. We also document the transformation of our work, expertise, skills, and identity from doing research to verifying outputs. We argue that AI integrations should fit AI into human processes, treat AI outputs as speculative texts requiring substantial human re-interpretation, distrust shortcuts, and avoid using AI for autonomous analysis or as a substitute for human research participants.

ELIZA is hopeless as a brain but, in the right social circumstances, acceptable as a human … In the same way the social organism can be more or less sensitive to artifacts in its midst; one might say that it is a matter of the alertness of our social immune system. To use a term from debates in social anthropology, it is a matter of the extent to which we are charitable to strangeness in other peoples.
— Harry Collins, Artificial Experts (1990, 15)
Erik St. Gray
Kevin Kochever
Niloofar Zarei
Silicon Valley University
www.synapsesocial.com/papers/6996a7b5ecb39a600b3ed9a5 — DOI: https://doi.org/10.1111/epic.70029