Self-Hosted AI Starter Kit

Basic Information

This repository is an open-source Docker Compose template designed to quickly create a local, self-hosted AI and low-code development environment. It bundles the n8n workflow automation platform with complementary components such as Ollama for local LLM hosting, Qdrant as a vector database, and PostgreSQL for persistent storage. The starter kit provides preconfigured network and storage settings, example workflows and AI nodes, and profiles for different hardware setups, including Nvidia and AMD GPUs, CPU-only machines, and guidance for Mac/Apple Silicon. The project is aimed at developers and teams who want to prototype AI workflows and agents locally, run models and vector search without sending sensitive data to third-party cloud services, and experiment with n8n's hundreds of integrations and advanced AI nodes. The README covers installation and upgrade steps, tips for accessing shared files from the n8n container, recommended learning resources, and community support information.
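Based on the README, getting the stack running is a clone-and-compose affair; a typical quickstart looks roughly like this (the repository URL and profile names reflect the current README and may differ between versions):

```shell
# Clone the starter kit and launch the stack with the profile
# matching your hardware (gpu-nvidia, gpu-amd, or cpu).
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile cpu up
```

Once the containers are up, the n8n editor is served locally (port 5678 by default) and the bundled Ollama and Qdrant services are reachable from inside the Compose network.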


Features
The kit centers on a Docker Compose configuration that brings together key self-hosted components: n8n for low-code workflows and AI nodes, Ollama for running local LLMs, Qdrant as a high-performance vector store, and PostgreSQL for durable storage. The Compose file provides profiles for gpu-nvidia, gpu-amd, and cpu-only deployments, plus Mac-specific instructions and an option to connect to an Ollama instance already running on the host. It mounts a shared folder so n8n can access host files at /data/shared, ships a ready-to-run example workflow you can open from the Chat view, and integrates with n8n's AI Agent, Text Classifier, and Information Extractor nodes. The README documents upgrade commands, recommended tutorials and templates, video walkthroughs, and community support channels. The project is licensed under Apache 2.0.
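The upgrade commands mentioned above follow the usual Compose pattern of pulling newer images and recreating the containers; a sketch, assuming the cpu profile (substitute gpu-nvidia or gpu-amd as appropriate):

```shell
# Pull newer images for the profile you use, then recreate and
# restart the containers with the same profile.
docker compose --profile cpu pull
docker compose create && docker compose --profile cpu up
```

Using the same `--profile` flag for both `pull` and `up` ensures only the services relevant to your hardware are refreshed and restarted.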
Use Cases
This starter kit speeds up proof-of-concept development by providing an integrated, local AI stack for building and testing workflows and agents. Developers can run LLMs locally via Ollama and perform semantic search with Qdrant while orchestrating logic and integrations through n8n's visual editor and more than 400 connectors. The included example workflows and AI nodes lower the barrier to tasks like scheduling agents, summarizing internal PDFs, enhancing Slack bots, and analyzing private financial documents, all without exposing data to the cloud. Hardware-specific Compose profiles and Mac guidance make the kit adaptable to different machines, and the shared folder mount simplifies working with local files. The documentation and templates help users learn core AI concepts, import sample workflows, and iterate quickly on self-hosted AI prototypes.
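For quick experiments outside n8n, the bundled Ollama service can be queried directly over its REST API. A minimal sketch, assuming Ollama is published on its default port 11434 and that a model such as llama3.2 has already been pulled (both are assumptions about your local setup):

```shell
# Send a one-shot generation request to the local Ollama API;
# "stream": false returns the full response in a single JSON body.
curl http://localhost:11434/api/generate \
  -d '{"model": "llama3.2", "prompt": "Summarize this document in one sentence.", "stream": false}'
```

This is the same endpoint n8n's Ollama nodes talk to internally, so it is a handy way to confirm the model server is healthy before wiring it into a workflow.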
