Fast LLM Agent MCP
Basic Information
This repository combines a theoretical overview with practical code samples for building and deploying agents based on large language models (LLMs) and Model Context Protocol (MCP) tools. It covers LLM architectures, prompt engineering, retrieval-augmented generation (RAG), fine-tuning methods, and agent frameworks and protocols such as MCP and A2A. The repo includes hands-on sample projects and agent implementations using AWS Strands and the Google Agent Development Kit, along with multi-agent workflows and containerized examples built with FastAPI and Streamlit. It also contains LLM project examples, such as an AI content detector, and MCP integrations with PraisonAI and Ollama. The material targets developers and architects who want conceptual background alongside runnable examples for experimenting with agents, tools, memory, and multi-agent orchestration.
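To give a flavor of what an MCP tool looks like, below is a minimal sketch (not taken from this repo) of exposing a single tool over MCP, assuming the official `mcp` Python SDK and its `FastMCP` helper; the server name and the `add` tool are illustrative placeholders, not the repository's actual examples.

```python
# Minimal MCP server sketch using the official `mcp` Python SDK (assumed installed).
# The server name "demo-tools" and the `add` tool are hypothetical examples.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-tools")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Serve the tool so an MCP-capable agent or client can discover and call it.
    mcp.run()
```

An agent framework with MCP support (for example, the ones covered in this repo) can then connect to such a server, list its tools, and invoke them as part of a multi-agent workflow.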