Basic Information

This repository provides the official code and materials for the Agentic Process Automation (APA) work and implements ProAgent, an LLM-based system that generates and executes automated workflows from human instructions. It demonstrates how large language model agents can design workflow structures, make dynamic decisions, and coordinate specialized sub-agents to carry out multi-step automation. The README covers setup, dependency installation, the required OpenAI credentials (the experiments use GPT-4-0613), and integration with either a self-hosted n8n instance or replayed recorded cases. It describes three configuration modes (development, refine, production), example cases in ./apa_case, and a record system that saves runs under ./records for later refinement or reproduction. The codebase is intended for researchers and developers exploring agent-driven automation and reproducing the experiments in the associated paper.
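A minimal sketch of how the three configuration modes could be modeled in code. The mode names come from the README, but the `Mode` enum and `select_mode` helper are illustrative assumptions, not ProAgent's actual configuration code:

```python
from enum import Enum

class Mode(Enum):
    """The three configuration modes named in the README.
    This enum is illustrative; ProAgent's real configuration
    mechanism may look quite different."""
    DEVELOPMENT = "development"  # create new workflows with the agent
    REFINE = "refine"            # iterate on a previously recorded run
    PRODUCTION = "production"    # replay a finished workflow

def select_mode(name: str) -> Mode:
    """Map a mode string to a Mode, with a helpful error message."""
    try:
        return Mode(name)
    except ValueError:
        valid = ", ".join(m.value for m in Mode)
        raise ValueError(f"unknown mode {name!r}; expected one of: {valid}")

assert select_mode("refine") is Mode.REFINE
```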

App Details

Features
ProAgent centers on an APA paradigm in which LLM agents synthesize workflows and coordinate specialized agents to handle complex decision-making. The codebase offers three modes (development, refine, and production) for creating, refining, or replaying workflows. It integrates with n8n to connect real-world apps and includes utilities to export n8n credentials and workflows and load them from ./ProAgent/n8n_tester/credentials. A built-in n8n-compiler translates n8n workflows into the project's format, though compatibility notes warn about differences between n8n versions. Runs generate readable records in ./records, and an example case set is provided in ./apa_case. The system uses GPT-4-0613 and documents an HCI feature that lets ProAgent proactively ask humans for help via a function-call mechanism.
Use Cases
ProAgent helps by automating the design and execution of multi-step workflows that normally require human intelligence, shifting RPA toward agentic automation. It demonstrates practical integration patterns for LLM-based orchestration with app connectors through n8n, enabling experiments that either interact with live services or replay recorded runs for reproducibility. The three modes support iterative refinement and production replay, and the record system makes experiment auditing and reuse straightforward. Researchers can reproduce the paper's cases, evaluate agent decision-making, and test human-in-the-loop behaviors via the HCI function-call feature. The repository thus assists developers and researchers in building, testing, and iterating on agentic automation systems.
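The human-in-the-loop behavior can be sketched as an OpenAI-style function-calling tool. The tool name `ask_human_for_help`, its parameter schema, and the dispatcher below are assumptions for illustration rather than ProAgent's actual implementation:

```python
# A tool definition in the OpenAI function-calling format that lets
# an agent pause and ask a human operator a question. The name and
# schema are hypothetical, chosen for this sketch.
ask_human_tool = {
    "type": "function",
    "function": {
        "name": "ask_human_for_help",
        "description": "Pause the workflow and ask a human operator "
                       "a question when the agent is uncertain.",
        "parameters": {
            "type": "object",
            "properties": {
                "question": {
                    "type": "string",
                    "description": "The question to show the human.",
                },
            },
            "required": ["question"],
        },
    },
}

def handle_tool_call(name: str, arguments: dict, reply=input) -> str:
    """Dispatch a model-issued tool call. `reply` defaults to stdin
    prompting; tests or UIs can inject their own callable."""
    if name == "ask_human_for_help":
        return reply(arguments["question"] + " ")
    raise ValueError(f"unknown tool: {name}")
```

Injecting the `reply` callable keeps the dispatcher testable and lets the same code back a console prompt, a web form, or a recorded answer during replay.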