Basic Information

LLM Functions is a developer-oriented project for building LLM tools and agents from simple scripts in Bash, JavaScript, and Python. It uses function calling to connect language models directly to custom code, so models can execute system commands, run scripts, call APIs, and access documents. The repository provides a convention-based layout for tools and agents, auto-generates JSON function declarations from annotated comments, and integrates with the AIChat CLI to run agents and tools. It also includes components that expose tools via the Model Context Protocol (MCP) so external systems can consume them. The README documents prerequisites such as argc and jq and walks through building, linking, and running tools and agents with minimal setup.
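
To illustrate the convention, here is a minimal sketch of a Bash tool using argc's comment-tag annotations; the tool name, option, and weather endpoint are illustrative, not taken from the repository:

```sh
#!/usr/bin/env bash
set -e

# @describe Get the current weather for a given location.
# @option --location! The city and state, e.g. San Francisco, CA

main() {
    # Illustrative call; a real tool would query whatever API it wraps.
    # argc exposes each declared option as an argc_<name> shell variable.
    curl -fsSL "https://wttr.in/${argc_location}?format=3"
}

eval "$(argc --argc-eval "$0" "$@")"
```

The @describe and @option comment tags are what the build step parses to produce the JSON function declaration the model sees, which is how the script stays the single source of truth for its own interface.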

App Details

Features
The project features language-agnostic tool support: each tool is a script or module placed in the tools directory, and tool declarations are auto-generated from annotated comments into a functions.json file. Build tooling uses argc commands together with tools.txt and agents.txt manifests to assemble a bin directory and the functions.json. Agents are defined by a simple index.yaml that bundles a prompt, tools, and documents for retrieval-augmented workflows. There are example tools in Bash, JavaScript, and Python, plus commands to link a chosen web_search tool. Integration helpers include CLI linking into AIChat, environment-variable support for functions directories, and MCP server and bridge components to expose and consume tools over the Model Context Protocol.
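
A rough sketch of the build workflow, assuming the argc tasks named in the README (the tool and agent names here are placeholders):

```sh
# Declare which tools and agents to build; the entries are examples.
cat > tools.txt <<'EOF'
get_current_weather.sh
execute_command.sh
EOF

cat > agents.txt <<'EOF'
coder
EOF

argc build   # assemble bin/ and generate functions.json from the manifests
argc check   # verify prerequisites (argc, jq) and tool declarations
```

An agent definition is similarly small. The sketch below creates one under agents/; the field names follow the documented index.yaml layout as best understood and should be checked against the repository:

```sh
# Hypothetical agent layout; the instructions and document paths are placeholders.
mkdir -p agents/coder
cat > agents/coder/index.yaml <<'EOF'
name: coder
description: A coding assistant that can read and edit project files
instructions: You are a careful coding assistant. Use the available tools.
documents:
  - docs/style-guide.md
EOF
```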
Use Cases
LLM Functions reduces integration friction when exposing programmatic capabilities to LLMs, enabling developers to turn existing scripts into callable tools quickly. The auto-generation of function schemas from comments simplifies keeping model-visible interfaces in sync with code. The agent structure makes building reusable assistants straightforward by combining prompt templates, tools, and documents for RAG. CLI integration with AIChat and simple argc-driven build and check commands let you iterate locally and deploy agents into interactive sessions. MCP support allows tools to be served or consumed across processes and systems. Overall it accelerates prototyping, orchestration, and reuse of model-driven automation without deep platform-specific wiring.
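
As a concrete, hedged example of that local iteration loop, assuming the link task and AIChat's agent flag behave as the README describes:

```sh
# Make the built functions visible to AIChat, either by linking...
argc link-to-aichat

# ...or by pointing AIChat at this checkout via an environment variable
# (verify the exact variable name against your AIChat version).
export AICHAT_FUNCTIONS_DIR="$PWD"

# Then start an interactive session with an agent (name is illustrative).
aichat --agent coder
```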
