Easy LLM CLI


Basic Information

Easy LLM CLI is an open-source command-line AI agent and workflow tool designed to help developers and technical users work with codebases and automation tasks using large language models. It connects to multiple LLM providers, including Google Gemini and OpenAI, and supports any OpenAI-compatible custom LLM via environment configuration. The project provides both an interactive CLI (invoked via npx easy-llm-cli or a global elc install) and a programmatic API (ElcAgent) for embedding agent behavior in Node.js projects. It supports tool calling, multimodal inputs, large context windows, and MCP server extensions for connecting external capabilities. The README documents quickstart steps requiring Node.js v20 or later, examples for creating or analyzing projects, and configuration options for switching providers without changing workflows.
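
As a rough illustration of the programmatic entry point mentioned above, the sketch below shows what embedding ElcAgent in a Node.js script could look like. The constructor option keys and the run method are assumptions inferred from this description, not verified signatures; the project's programmatic API reference documents the actual interface.

```typescript
// Illustrative sketch only: the option keys and run() method are assumptions,
// not the verified ElcAgent API. See the project's programmatic API reference.
import { ElcAgent } from 'easy-llm-cli';

async function main(): Promise<void> {
  const agent = new ElcAgent({
    // Assumed to mirror the documented custom-LLM environment variables.
    apiKey: process.env.CUSTOM_LLM_API_KEY,
    endpoint: process.env.CUSTOM_LLM_ENDPOINT,
    model: process.env.CUSTOM_LLM_MODEL_NAME, // variable name assumed; the docs refer to "model settings"
  });

  // Ask a natural-language question about the current repository.
  const answer = await agent.run('Summarize the architecture of this codebase.');
  console.log(answer);
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```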

App Details

Features
The repository offers a CLI front end and a programmatic interface (ElcAgent) for integration into Node.js code. It supports multiple LLM providers as well as a custom LLM mode configured through environment variables such as CUSTOM_LLM_API_KEY, CUSTOM_LLM_ENDPOINT, and related model settings. The tool exposes MCP server extensions for adding local or external tools, options to exclude specific tools, and example extension entries. It advertises multimodal and tool-calling capabilities and reports compatibility test results across a wide range of models. Documentation includes a programmatic API reference, CLI command guides, troubleshooting, and example prompts for exploring or modifying repositories. Quickstart instructions and a global install via npm are also provided.
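
To make the environment-based custom LLM mode concrete, the small launcher below checks the documented variables before starting the interactive CLI. CUSTOM_LLM_API_KEY and CUSTOM_LLM_ENDPOINT are named above; CUSTOM_LLM_MODEL_NAME is an assumed name for the "model settings", and the full list of supported variables is in the project README.

```typescript
// Launch the interactive CLI only after the custom-LLM environment is set.
// CUSTOM_LLM_API_KEY and CUSTOM_LLM_ENDPOINT are documented; CUSTOM_LLM_MODEL_NAME
// is an assumed name for the model settings mentioned in the docs.
import { spawnSync } from 'node:child_process';

const required = ['CUSTOM_LLM_API_KEY', 'CUSTOM_LLM_ENDPOINT', 'CUSTOM_LLM_MODEL_NAME'];
const missing = required.filter((name) => !process.env[name]);

if (missing.length > 0) {
  console.error(`Missing environment variables: ${missing.join(', ')}`);
  process.exit(1);
}

// Equivalent to running `npx easy-llm-cli` in a shell with the same environment.
spawnSync('npx', ['easy-llm-cli'], { stdio: 'inherit', env: process.env });
```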
Use Cases
Easy LLM CLI helps developers accelerate common engineering tasks by enabling natural-language queries against large codebases, generating project scaffolds, and producing drafts for issues or migrations. It automates operational workflows such as querying pull requests, handling complex rebases, and building slide decks or web apps from repository data. Multimodal support and MCP servers let users connect system tools for actions like image conversion or PDF organization. The programmatic API lets applications run the agent to generate charts, execute commands, or integrate with CI workflows. Environment-based provider switching and custom LLM support make it possible to test or scale with different models without rewriting workflows.
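
As a hedged sketch of the CI-integration idea, a pipeline step could run a script like the one below: it reuses the assumed ElcAgent interface from the earlier example, picks up whichever provider configuration is present in the environment, and writes the agent's answer to a markdown report artifact. The option keys, run method, prompt, and output path are all illustrative assumptions.

```typescript
// Hypothetical CI step: ask the agent for a summary and save it as a build
// artifact. ElcAgent option keys and run() are assumed, as in the earlier sketch.
import { writeFileSync } from 'node:fs';
import { ElcAgent } from 'easy-llm-cli';

async function main(): Promise<void> {
  const agent = new ElcAgent({
    apiKey: process.env.CUSTOM_LLM_API_KEY,
    endpoint: process.env.CUSTOM_LLM_ENDPOINT,
    model: process.env.CUSTOM_LLM_MODEL_NAME, // assumed variable name
  });

  // Illustrative prompt; any repository-aware task would work here.
  const report = await agent.run(
    'List the files changed since the last release and draft migration notes.'
  );

  writeFileSync('agent-report.md', report); // collected later as a CI artifact
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});
```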
