bespoke_automata

Basic Information

Bespoke Automata is a development framework for creating, testing, and deploying custom AI agents behind simple HTTP endpoints. The repository provides a node-graph GUI for designing 'brains' using a directed-graph interface powered by a modified litegraph. It combines large language models, running locally or remotely, with supporting instruments such as database IO, dictionaries, arrays, logic nodes, and API connectors, so that each designed brain can pursue goals set by its designer. The project includes instructions for running a local backend API that performs text and vision inference via llama-cpp-python, guidance on placing models in the expected folders, and steps to deploy saved brains as API endpoints served by the bespoke_manager server, so that each brain appears as its own route.
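The deployment model described above can be sketched from the client side. The host, port, and route pattern (your_ip:9999/brains/[brainname]) come from this description; the JSON payload shape and the brain name "demo_brain" are assumptions, so consult each brain's schema endpoint for its actual contract:

```python
import json
import urllib.request


def brain_url(host: str, brain_name: str) -> str:
    """Build the route at which bespoke_manager exposes a saved brain."""
    return f"http://{host}:9999/brains/{brain_name}"


def ask_brain(host: str, brain_name: str, payload: dict) -> dict:
    """POST a JSON payload to a deployed brain and return its JSON reply.

    The payload keys are an assumption; check the brain's schema
    endpoint for the fields it actually expects.
    """
    req = urllib.request.Request(
        brain_url(host, brain_name),
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


# Example (requires a running bespoke_manager; brain name is hypothetical):
# reply = ask_brain("127.0.0.1", "demo_brain", {"input": "hello"})
```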


Features
- Visual directed-graph editor: a node-graph GUI based on litegraph for designing and testing agent brains.
- Pluggable model support: uses llama-cpp-python for text and vision inference, with local GGUF models placed in models/text and models/vision.
- Instrument nodes: integration points for database IO, dictionaries, arrays, logic, and external APIs to extend agent capabilities.
- Deployment as endpoints: save graphs to bespoke_manager/graphs and run the manager to expose each brain at your_ip:9999/brains/[brainname], with a schema endpoint.
- Local API server: omni_api.py runs a Flask-based inference service on port 5000.
- GPU and platform options: notes and build flags for Metal on macOS and CUDA on Linux/Windows.
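The folder names and ports in the feature list imply a local layout like the following minimal sketch; the model filename is a placeholder and the commented commands assume a standard repo checkout:

```shell
# Expected model folders (GGUF files go inside; filenames are placeholders):
mkdir -p models/text models/vision
# e.g. cp ~/Downloads/some-text-model.gguf models/text/

# Start the local Flask inference API on port 5000:
#   python omni_api.py

# After saving a graph into bespoke_manager/graphs and starting the manager,
# each brain is served on port 9999 (payloads are brain-specific):
#   curl http://127.0.0.1:9999/brains/demo_brain
```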
Use Cases
Bespoke Automata helps developers and hobbyists assemble multi-component agents without building orchestration from scratch, by providing a reusable GUI and a deployment path. Designers can visually prototype agent behavior with nodes, incorporate local or remote LLMs and supporting instruments, and then publish each brain as a standalone API endpoint for integration or testing. The repo documents the required tooling (npm, yarn, Electron Forge, and Python packages) and gives platform-specific guidance for enabling hardware acceleration. Example demo brains and a demonstration video are provided to speed up onboarding. The system is intended for local or self-hosted workflows where offline or private model execution and tailored agent logic are required.