Features
- Unified LLM interface: consistent embed, complete, and chat methods across providers, including OpenAI, Anthropic, Google Gemini, Cohere, Mistral, HuggingFace, Ollama, Replicate, and others.
- Prompt management: PromptTemplate and FewShotPromptTemplate, with save/load from JSON or YAML.
- Output parsers: StructuredOutputParser and OutputFixingParser for schema-constrained responses.
- Vector search: integrations for Chroma, Hnswlib, Milvus, Pinecone, Pgvector, Qdrant, Weaviate, and Elasticsearch, with helpers to create schemas, add texts/files, and perform similarity search and HyDE.
- Assistant: a class that manages conversation threads, tool execution, and streaming handlers; ships built-in tools (calculator, database, filesystem, GoogleSearch, NewsRetriever, code interpreter, weather, Wikipedia, vectorsearch) and an extensible ToolDefinition API.
- Evaluation: Ragas helpers for RAG metrics.
Use Cases
Langchain.rb simplifies building production LLM applications in Ruby by abstracting provider differences, so code can switch backends without rewriting logic. It accelerates RAG development by integrating common vector databases and file parsers to index and retrieve documents, and its prompt templates and structured output parsing make model outputs predictable. The Assistant framework combines conversation state, tool orchestration, and streaming support for building chatbots and multi-step agents. Built-in tools, plus a clear pattern for defining custom ones, reduce plumbing when wiring up external APIs or database access. Evaluation helpers measure the faithfulness and relevance of RAG pipelines. The gem's documentation also covers installation, examples, logging, and development workflows, helping teams adopt and extend it in Rails or plain Ruby projects.