AI_Gen_Novel

Basic Information

This repository explores the limits of AI for writing long-form online novels by combining large language models with multi-agent techniques. It documents the motivations, design ideas, and a prototype application that aims to automate web-novel generation using a cognitive model of the writing process: planning, translating, and reviewing. The project addresses known LLM weaknesses in planning, long-range memory, and originality by compressing long text into short memory representations, optimizing prompts, and orchestrating multiple agent roles. It adopts the core idea of RecurrentGPT, iteratively generating arbitrarily long text through language-based recurrent computation, and applies domain priors for web-novel structure. The README concludes that current LLMs are not yet sufficient for full-length novels and outlines future needs such as longer context windows, stronger reasoning, and reinforcement learning trained on human evaluation.
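
To make the RecurrentGPT-style recurrence concrete, here is a minimal sketch of the loop described above: at each step the model writes the next passage, then compresses the accumulated story into a short memory string that replaces the full history in the prompt. All names here (chat_llm, generate_novel, the prompt wording) are illustrative assumptions, not the repository's actual API.

```python
def chat_llm(prompt: str) -> str:
    """Placeholder for any chat-completion backend (cf. chatLLM in LLM.py)."""
    raise NotImplementedError

def generate_novel(outline: str, num_steps: int = 10) -> str:
    memory = "The story has not started yet."
    chapters = []
    for _ in range(num_steps):
        # Generate the next passage from the outline plus compressed memory,
        # so the prompt length stays bounded no matter how long the novel grows.
        passage = chat_llm(
            f"Outline: {outline}\n"
            f"Story so far (compressed): {memory}\n"
            "Write the next passage of the novel."
        )
        chapters.append(passage)
        # Compress old memory + new passage back into a short representation.
        memory = chat_llm(
            f"Previous memory: {memory}\nNew passage: {passage}\n"
            "Summarize both into a short memory of the key plot points."
        )
    return "\n\n".join(chapters)
```

Because only the compressed memory recurs between steps, the loop can in principle run for arbitrarily many iterations, which is the language-based recurrent computation idea borrowed from RecurrentGPT.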

App Details

Features
Code and workflow for prototyping automated novel generation with LLMs and multi-agent collaboration. A modular LLM interface lets users plug in their own model by implementing chatLLM in LLM.py. Scripts include demo.py, which auto-generates stories and saves the results to novel_record.md, and app.py, which launches a Gradio-based visual demo. Architectural features include compressing long text into compact memory representations, prompt engineering and multi-agent discussion to boost originality, and an iterative RecurrentGPT-style loop for recurrent text construction. The repository also provides a quick start via pip install -r requirements.txt and links to an online ModelScope studio for an interactive experience.
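
The page does not document the exact signature chatLLM must expose, so the single-string-in, string-out form below is an assumption; adapt it to whatever LLM.py actually expects. This sketch wires the hook to an OpenAI-compatible endpoint as one possible backend.

```python
# LLM.py (sketch): one possible chatLLM backend using the OpenAI client.
# The signature and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def chatLLM(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model the endpoint serves
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

With chatLLM implemented, python demo.py should auto-generate stories into novel_record.md, and python app.py should launch the Gradio demo, per the scripts described above.
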
Use Cases
The project provides a practical starting point for researchers, hobbyists, and developers interested in automated long-form creative writing. It packages runnable examples, a simple LLM adapter point, and a visual Gradio demo so users can reproduce the experiments and test different models or prompts. By documenting its design choices and limitations, it clarifies where current models fall short on long novels and suggests concrete directions for improvement, such as memory strategies and multi-agent iteration. The codebase can serve as a baseline for comparing LLMs, for prototyping multi-agent writing workflows, and for exploring extensions such as longer context, structured plot priors, or human-in-the-loop reinforcement learning.
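
As a hedged illustration of the multi-agent iteration mentioned above, the sketch below pairs a writer role with a critic role: the writer drafts, the critic flags clichés, and the writer revises. The role prompts and round count are invented for this example; the repository's actual agent orchestration may differ.

```python
def discuss_and_revise(chat_llm, premise: str, rounds: int = 2) -> str:
    # Initial draft from the writer role.
    draft = chat_llm(f"Write an original web-novel opening for: {premise}")
    for _ in range(rounds):
        # Critic role: surface clichés and weak plotting to push originality.
        critique = chat_llm(
            "You are a strict fiction editor. Point out cliches and weak "
            f"plotting in this draft:\n{draft}"
        )
        # Writer role: revise the draft against the critique.
        draft = chat_llm(
            f"Revise the draft to address this critique.\n"
            f"Draft:\n{draft}\nCritique:\n{critique}"
        )
    return draft
```

Passing in chat_llm as a parameter keeps the workflow backend-agnostic, matching the adapter-point design: any model implemented behind chatLLM can be compared on the same loop.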