Awesome LLM Apps

Basic Information

A curated, searchable collection of example LLM-powered applications, tutorials, and starter templates demonstrating agentic patterns such as retrieval-augmented generation (RAG), AI agents, multi-agent teams, MCP, voice agents, memory-enabled apps, and fine-tuning workflows. The repository aggregates projects built on models from OpenAI, Anthropic, and Google (Gemini), as well as open-source models such as DeepSeek, Qwen, and Llama, and includes apps designed to run locally or in cloud environments. It is organized into focused sections: starter agents, advanced agents, autonomous game-playing agents, multi-agent teams, voice agents, MCP agents, RAG tutorials, memory tutorials, chat-with-X guides, and fine-tuning examples. Each listed project generally contains its own README, dependency list, and setup steps. The collection is intended as a practical resource for developers, researchers, and engineers who want hands-on examples, reproducible demos, and learning materials for building and experimenting with LLM-powered applications.

App Details

Features
Categorized, curated listings of LLM apps and tutorials grouped by use case and complexity, from starter AI agents to advanced multi-agent projects. Extensive RAG tutorials and examples cover local and hybrid retrieval setups. Memory-focused examples show stateful chat and personalized memory patterns. Chat-with-X tutorials demonstrate integrations such as chatting with GitHub, Gmail, PDFs, research papers, Substack, and YouTube. Collections of MCP and voice agent examples illustrate orchestration and multimodal interaction. Fine-tuning examples, including Llama 3.2 material, support model adaptation workflows. Each project typically includes its own README, dependency list, and run instructions. The repository provides contribution guidelines and READMEs in multiple languages, and emphasizes practical, reproducible demos over theory.
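
To make the RAG pattern mentioned above concrete, here is a minimal sketch of the kind of retrieve-then-generate loop the tutorials build on. It is not taken from any specific project in the repository; it assumes the official openai Python package, an OPENAI_API_KEY in the environment, and illustrative model names (text-embedding-3-small, gpt-4o-mini).

```python
# Minimal RAG sketch: embed a tiny in-memory "knowledge base", retrieve the
# chunk most similar to the question, and pass it to a chat model as context.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in document chunks; a real app would load and chunk its own sources.
documents = [
    "The repository groups example apps by pattern: RAG, agents, MCP, voice.",
    "Each project ships its own README with dependencies and run instructions.",
    "Fine-tuning examples show how to adapt open-source models such as Llama.",
]

def embed(texts):
    # Embedding model name is an assumption for illustration.
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in response.data]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def answer(question: str) -> str:
    # Retrieve: rank chunks by similarity to the question embedding.
    doc_vectors = embed(documents)
    query_vector = embed([question])[0]
    best = max(range(len(documents)), key=lambda i: cosine(query_vector, doc_vectors[i]))
    # Generate: give the retrieved chunk to the chat model alongside the question.
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{documents[best]}"},
            {"role": "user", "content": question},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(answer("Where do I find setup instructions for a given example app?"))
```

The repository's RAG tutorials layer real vector stores, chunking, and hybrid retrieval on top of this basic shape.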
Use Cases
A one-stop discovery hub for learning how to build, run, and extend LLM applications by example. Developers can clone ready-made projects, follow each project's setup commands, and install its requirements to reproduce demos locally or in cloud environments. The collection surfaces implementation patterns for RAG, agent teams, MCP orchestration, voice interfaces, memory management, and fine-tuning, so practitioners can compare approaches and adopt pieces for their own systems. The curated examples span domains such as travel agents, finance, medical imaging, meeting assistants, and content pipelines, making it easier to prototype domain-specific agents. The resource also supports contributors who want to share new apps or improve documentation within a common structure.
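
For the memory-management pattern listed above, the simplest baseline is just carrying conversation history across turns; the hedged sketch below shows only that baseline, not the dedicated memory layers the repository's memory tutorials typically use. It again assumes the official openai package and an illustrative model name.

```python
# Minimal "memory-enabled chat" baseline: keep the full message history so the
# model can refer back to earlier turns. Real memory apps add persistent stores.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a concise assistant."}]

def chat(user_message: str) -> str:
    # Append the user turn, call the model with the whole history, store the reply.
    history.append({"role": "user", "content": user_message})
    completion = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    reply = completion.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(chat("My name is Priya and I'm comparing RAG frameworks."))
    print(chat("What was my name again?"))  # Answerable only because history persists.
```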
