Large Language Models (LLMs) are increasingly used to support complex reasoning tasks, yet their fluent textual explanations often obscure underlying assumptions and intermediate reasoning steps, making it difficult for users to verify correctness or confidently rely on the results. Existing explanation, visualization, and knowledge-graph-based reasoning approaches remain largely read-only or model-centric, offering limited support for direct human intervention in the reasoning process. To address this gap, we present InteractiveKG, a visual analytics system that externalizes LLM-generated reasoning as persistent, editable knowledge graphs. InteractiveKG enables users to inspect, curate, and iteratively refine reasoning by directly manipulating nodes and edges, adjusting abstraction levels, and accessing contextual explanations within a unified human-in-the-loop workflow. We evaluated InteractiveKG through a controlled user study comparing text-only LLM outputs with graph-based interactions across error-correction and verification scenarios. Results show that InteractiveKG significantly improves users’ ability to identify and refine problematic reasoning, strengthens trust calibration, increases decision confidence, and enhances perceived control over the reasoning process. By transforming reasoning from transient text into a manipulable graph artifact, InteractiveKG demonstrates how transparency and user agency can be systematically integrated into LLM-assisted reasoning, highlighting the importance of interactive, user-controllable representations for trustworthy human–AI collaboration.
Minjung Kim, Yanjie Zhao, Jaeseong Ju
Seoul National University
Information Visualization
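The abstract describes reasoning externalized as an editable graph of claims connected by support relations, which users revise by manipulating nodes and edges. The paper's actual implementation is not shown on this page; the Python sketch below is an illustrative assumption only, using networkx and invented names (build_reasoning_graph, revise_node, the "supports" relation) to suggest what such an editable reasoning artifact might look like.

import networkx as nx

def build_reasoning_graph(steps):
    # steps: ordered (claim, premises) pairs extracted from an LLM answer.
    # Nodes are claims; a directed edge means "premise supports claim".
    g = nx.DiGraph()
    for claim, premises in steps:
        g.add_node(claim)
        for p in premises:
            g.add_edge(p, claim, relation="supports")
    return g

def revise_node(g, old_claim, new_claim):
    # Replace a problematic claim while preserving its incoming and
    # outgoing support edges -- one plausible form of direct node editing.
    g.add_node(new_claim)
    for premise, _ in list(g.in_edges(old_claim)):
        g.add_edge(premise, new_claim, relation="supports")
    for _, conclusion in list(g.out_edges(old_claim)):
        g.add_edge(new_claim, conclusion, relation="supports")
    g.remove_node(old_claim)
    return g

# Toy example: a user spots and corrects one faulty premise.
steps = [
    ("All birds fly", []),
    ("Penguins are birds", []),
    ("Penguins fly", ["All birds fly", "Penguins are birds"]),
]
g = build_reasoning_graph(steps)
g = revise_node(g, "All birds fly", "Most birds fly")
print(sorted(g.nodes()))  # faulty premise replaced, structure preserved

Under the same assumptions, the abstract's "adjusting abstraction levels" could correspond to collapsing subgraphs into summary nodes, but that operation is not sketched here.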
www.synapsesocial.com/papers/69d8946e6c1944d70ce0550d — DOI: https://doi.org/10.1177/14738716261435574