Basic Information

This repository contains the source code, datasets, and experimental scripts for Graph Chain-of-Thought (Graph-CoT), a method introduced in an ACL 2024 paper that augments large language models (LLMs) by enabling stepwise reasoning on text-attributed graphs. It provides the Graph-CoT implementation and the Graph Reasoning Benchmark (GRBench), with ten real-world graphs spanning the academic, e-commerce, literature, healthcare, and legal domains. The repo includes instructions for environment setup and dependencies, data placement conventions for processed graphs and question-answer sets, and guidance for running Graph-CoT and related baselines. Model code is grouped into folders for Graph-CoT, open-source LLM baselines, and GPT variants, and the repo is intended to support reproducible research and experimentation on augmenting LLMs with structured graph knowledge.

App Details

Features
Implements the Graph-CoT iterative framework, where each iteration consists of three sub-steps: reasoning (the LLM proposes conclusions and identifies what further information is needed), interaction (the LLM generates graph queries such as node or neighbor lookups), and execution (the queries are run on the graph and the results are returned to the LLM). Ships GRBench, a benchmark of ten real-world text-attributed graphs with easy, medium, and hard questions requiring single-hop, multi-hop, or inductive reasoning. Provides code folders for Graph-CoT models and baselines including RAG variants, evaluation scripts supporting rule-based metrics (EM, BLEU, ROUGE) and model-based evaluation with GPT-4, and environment setup instructions with conda and dependency details.
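The reasoning/interaction/execution loop above can be sketched in a few lines of Python. This is a minimal illustration over a toy text-attributed graph, not the repository's actual API: the graph contents, the function names `retrieve_node` and `retrieve_neighbors`, and the stub stopping rule standing in for the LLM's reasoning step are all assumptions made for the example.

```python
# Toy text-attributed graph: node id -> (text attribute, neighbor ids).
GRAPH = {
    "p1": ("Paper: Graph reasoning with LLMs", ["a1"]),
    "a1": ("Author: Jane Doe", ["p1", "p2"]),
    "p2": ("Paper: Benchmarks for graph QA", ["a1"]),
}

def retrieve_node(node_id):
    """Graph query: return a node's text attribute (hypothetical helper)."""
    return GRAPH[node_id][0]

def retrieve_neighbors(node_id):
    """Graph query: return a node's neighbor ids (hypothetical helper)."""
    return GRAPH[node_id][1]

def graph_cot(question, start_node, max_iters=3):
    """One Graph-CoT-style loop: reasoning -> interaction -> execution."""
    context, frontier = [], [start_node]
    for _ in range(max_iters):
        # 1. Reasoning: the LLM would decide whether the collected context
        #    answers the question. Stub: stop once an author node is found.
        if any(text.startswith("Author") for text in context):
            break
        # 2. Interaction: the LLM would emit graph queries; here we expand
        #    the frontier with node and neighbor lookups.
        next_frontier = []
        for nid in frontier:
            # 3. Execution: run the queries and collect the results.
            context.append(retrieve_node(nid))
            next_frontier.extend(retrieve_neighbors(nid))
        frontier = next_frontier
    return context

ctx = graph_cot("Who wrote the paper on graph reasoning?", "p1")
```

In the real framework the stopping decision and the choice of queries are produced by the LLM at each iteration; the loop structure is what the sketch is meant to convey.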
Use Cases
This repository helps researchers and practitioners evaluate and develop methods that ground LLMs in structured textual graphs to reduce hallucination and improve multi-hop reasoning. It demonstrates an interaction-oriented approach that lets LLMs traverse graphs step-by-step rather than ingesting whole subgraphs, provides a standardized benchmark across five domains for controlled evaluation, and includes scripts to run models, baselines, and automated evaluations to enable reproducible comparisons. The provided processed data, downloadable graph environment files, and setup instructions streamline experimentation and study of LLMs as agents interacting with graph environments.
