Large Language Models (LLMs) excel at many tasks but often struggle with complex, multi-step reasoning, leading to inconsistencies and hallucinations. To address this, we propose a neural-symbolic integration framework that enhances LLM reasoning by incorporating formal knowledge, such as logical rules, ontologies, and knowledge graphs, into the chain-of-thought (CoT) process. Our approach retrieves and integrates symbolic information to guide logical inference, yielding more accurate and interpretable outputs. Experiments on compositional reasoning benchmarks demonstrate significant improvements over standard LLM methods. This work highlights the potential of neural-symbolic integration for developing more reliable and explainable AI systems in high-stakes applications.
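To make the retrieve-and-integrate step concrete, here is a minimal Python sketch of how such a pipeline could look: triples relevant to a question are pulled from a small knowledge graph and prepended to a chain-of-thought prompt. The toy knowledge graph and the functions `retrieve_triples` and `build_cot_prompt` are illustrative assumptions, not the authors' actual implementation.

```python
# Hedged sketch of the retrieve-and-integrate idea from the abstract.
# The knowledge graph, retrieval heuristic, and prompt template below are
# all hypothetical placeholders, not the paper's actual components.

from typing import List, Tuple

Triple = Tuple[str, str, str]

# A toy knowledge graph stored as (subject, relation, object) triples.
KG: List[Triple] = [
    ("aspirin", "treats", "headache"),
    ("aspirin", "interacts_with", "warfarin"),
    ("warfarin", "is_a", "anticoagulant"),
]

def retrieve_triples(question: str, kg: List[Triple]) -> List[Triple]:
    """Retrieve triples whose subject or object appears in the question."""
    q = question.lower()
    return [t for t in kg if t[0] in q or t[2] in q]

def build_cot_prompt(question: str, facts: List[Triple]) -> str:
    """Prepend retrieved symbolic facts so the model reasons over them step by step."""
    fact_lines = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return (
        "Use only the facts below and reason step by step.\n"
        f"Facts:\n{fact_lines}\n"
        f"Question: {question}\n"
        "Give your reasoning, then a final answer."
    )

if __name__ == "__main__":
    question = "Can a patient on warfarin safely take aspirin?"
    facts = retrieve_triples(question, KG)
    prompt = build_cot_prompt(question, facts)
    print(prompt)  # This grounded prompt would then be sent to an LLM.
```

In practice the keyword lookup would be replaced by entity linking or dense retrieval over the knowledge graph, but the structure is the same: symbolic facts are fetched first, then injected as explicit premises that constrain the model's CoT reasoning.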
Ngoc-Khuong Nguyen, Viet-Ha Nguyen, Anh-Cuong Le
Journal of Intelligent & Fuzzy Systems
Ton Duc Thang University; Hai Phong University of Management and Technology
DOI: https://doi.org/10.1177/18758967251394597