Basic Information

KwaiAgents is an open-source collection of agent-related research and tooling released by KwaiKEG at Kuaishou Technology that implements a generalized information-seeking agent system using large language models. The repository packages a lightweight system implementation called KAgentSys-Lite; a family of agent-capable language models (KAgentLMs) tuned with the Meta-Agent Tuning (MAT) method; a large instruction fine-tuning dataset, KAgentInstruct, with over 200k entries; and an evaluation suite, KAgentBench, with over 3,000 human-edited test cases. The project provides benchmark and human evaluation results comparing planning, tool-use, reflection, concluding, and profiling capabilities across multiple models. The README documents installation, model-serving options for GPU and CPU, command-line usage of the KAgentSys-Lite agent, and limitations of the lite system, such as a reduced tool set and the lack of a memory mechanism.
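Both GPU serving paths mentioned in the README (vLLM, FastChat) expose an OpenAI-compatible HTTP API, so a locally served KAgentLM can be queried with a standard chat-completions request. The sketch below only builds the request payload; the endpoint URL, port, and model name are illustrative assumptions, not values prescribed by the repository.

```python
import json

# Assumed local endpoint; vLLM and FastChat both serve an
# OpenAI-compatible /v1/chat/completions route. Port is hypothetical.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_chat_request(query: str, model: str = "kagentlms_qwen_7b_mat") -> str:
    """Build a chat-completions JSON payload for a locally served model.

    The default model name is an illustrative placeholder; use whatever
    name the serving backend registered at startup.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": query}],
        "temperature": 0.1,  # agent workloads usually favor low-temperature decoding
    }
    return json.dumps(payload)

body = build_chat_request("Who won the World Cup in 2022?")
# `body` can then be POSTed to ENDPOINT with any HTTP client.
```

The same payload shape works against a cloud LLM endpoint, which is why the agent CLI can target either backend.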

Features
- KAgentSys-Lite: a lighter implementation of the agent system that supports multi-iteration agent execution and configurable tools.
- KAgentLMs: MAT-tuned language models in multiple sizes and flavors that emphasize planning, reflection, and tool use.
- KAgentInstruct: a large instruction-tuning dataset aimed at agent behaviors.
- KAgentBench: a benchmark suite for automated evaluation over human-edited test cases.
- Step-by-step guides for serving models with vLLM and FastChat on GPU and llama.cpp on CPU.
- A CLI to run the agent, example scripts for custom tools, benchmark inference and evaluation scripts, and reported benchmark tables and human evaluation summaries for comparison.
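The repository's actual custom-tool API is not reproduced here; as a generic illustration of what a configurable tool involves, a tool can be modeled as an object exposing a name and description (shown to the planner) plus a callable body (invoked by the executor). All names below (`BaseTool`, `make_registry`, the `weather` tool) are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class BaseTool:
    """Hypothetical minimal tool interface: the planner sees name and
    description; the executor invokes `run` with a string argument."""
    name: str
    description: str
    run: Callable[[str], str]

def make_registry(*tools: BaseTool) -> Dict[str, BaseTool]:
    """Index tools by name so planner output can be dispatched."""
    return {t.name: t for t in tools}

# Example custom tool; the lookup table stands in for a real API call.
weather = BaseTool(
    name="weather",
    description="Return today's weather for a city.",
    run=lambda city: {"Beijing": "sunny", "Paris": "rainy"}.get(city, "unknown"),
)

registry = make_registry(weather)
print(registry["weather"].run("Beijing"))  # -> sunny
```

Keeping tools behind a uniform interface like this is what makes the tool set "configurable": adding a capability means registering one more object, not changing the agent loop.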
Use Cases
This repository helps researchers and engineers develop, deploy, and evaluate information-seeking agents by providing pretrained agent-oriented models, extensive instruction-tuning data, and a standardized benchmark. The included KAgentSys-Lite enables rapid experimentation with agent workflows and tool integration, with CLI examples for querying both cloud LLMs and locally served models. Deployment instructions cover both GPU and CPU inference backends, supporting reproduction and testing on varied hardware. The KAgentBench suite and its provided scripts let teams measure planning, tool-use, reflection, concluding, and profiling performance and compare models. Example custom tools and guidance on external dependencies accelerate integration and benchmarking in development and research workflows.
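The multi-iteration workflow such an agent system runs (plan, act via a tool, observe, conclude) can be sketched abstractly. Everything below, including the `stub_plan` function standing in for an LLM call, is an illustrative assumption rather than the repository's implementation.

```python
from typing import Callable, Dict, List

def run_agent(
    query: str,
    plan: Callable[[str, List[str]], dict],
    tools: Dict[str, Callable[[str], str]],
    max_iterations: int = 5,
) -> str:
    """Generic plan/act/observe loop. `plan` stands in for an LLM call:
    given the query and observations so far, it returns either
    {"tool": name, "input": arg} or {"conclude": answer}."""
    observations: List[str] = []
    for _ in range(max_iterations):
        step = plan(query, observations)
        if "conclude" in step:                       # agent decides it is done
            return step["conclude"]
        result = tools[step["tool"]](step["input"])  # act: invoke chosen tool
        observations.append(result)                  # keep evidence for next plan
    return "No conclusion reached within the iteration budget."

# Stub planner: search once, then conclude from the observation.
def stub_plan(query, observations):
    if not observations:
        return {"tool": "search", "input": query}
    return {"conclude": f"Based on search: {observations[-1]}"}

answer = run_agent(
    "capital of France",
    stub_plan,
    {"search": lambda q: "Paris is the capital of France."},
)
print(answer)  # -> Based on search: Paris is the capital of France.
```

The iteration cap is one way to bound runaway loops in an agent without a memory mechanism; benchmark harnesses typically impose a similar budget per test case.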