ChatGPT Telegram Bot

Basic Information

This repository provides TeleChat, a full Telegram bot implementation that connects Telegram chats to a variety of large language models and related multimodal services. It lets users interact with models such as GPT-3.5/4/4o/5, DALL·E 3, the Claude 2/3 series, Gemini 1.5, Groq Mixtral, LLaMA2-70b and other supported backends directly from Telegram. The project bundles a plugin architecture and a separate submodule that handles API requests and conversation management. It offers extensive configuration via environment variables, support for conversation isolation and multi-user modes, and detailed deployment instructions for Docker, Replit, Koyeb, Zeabur and fly.io. The codebase targets operators who want an easy-to-deploy, extensible Telegram interface to multiple LLM providers and multimodal capabilities, while giving end users a conversational assistant inside Telegram.
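As a rough illustration of how environment-variable configuration like this is typically wired up, the sketch below loads settings at startup. The variable names (TELEGRAM_BOT_TOKEN, OPENAI_API_KEY, DEFAULT_MODEL, WHITELIST) and defaults are assumptions for the example, not the repository's documented settings.

```python
# Illustrative sketch only: variable names and defaults are assumptions,
# not TeleChat's documented configuration.
import os
from dataclasses import dataclass, field


@dataclass
class BotConfig:
    telegram_token: str                 # hypothetical TELEGRAM_BOT_TOKEN
    api_key: str                        # hypothetical OPENAI_API_KEY (or another provider's key)
    default_model: str = "gpt-4o"       # hypothetical DEFAULT_MODEL
    whitelist: list[str] = field(default_factory=list)  # hypothetical WHITELIST, comma-separated IDs


def load_config() -> BotConfig:
    """Read configuration from environment variables, failing fast on required ones."""
    return BotConfig(
        telegram_token=os.environ["TELEGRAM_BOT_TOKEN"],
        api_key=os.environ["OPENAI_API_KEY"],
        default_model=os.getenv("DEFAULT_MODEL", "gpt-4o"),
        whitelist=[u for u in os.getenv("WHITELIST", "").split(",") if u],
    )
```

Loading everything into one typed config object at startup keeps the rest of the bot free of scattered os.environ lookups and makes missing required values fail immediately at deployment time.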

App Details

Features
- Supports many mainstream models and backends, including multiple GPT, Claude, Gemini, Mixtral and LLaMA variants, plus image generation via DALL·E 3.
- Multimodal question answering over voice, audio, images and documents such as PDF, TXT, MD and code files.
- Model grouping for organizing available models, with quick in-chat model switching.
- Plugin system for search, URL summarization, arXiv paper summarization and a code interpreter (see the sketch after this list).
- Group chat topic mode, plus multi-user and global configuration modes.
- Message streaming with typewriter-like output, long-message merging and automatic splitting to respect Telegram limits.
- Whitelist/blacklist and admin controls.
- Inline mode for @-mentions, multi-language UI, follow-up question suggestions and asynchronous multi-threaded message processing.
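To make the plugin idea concrete, here is a hypothetical sketch of how a registry could expose tools such as URL summarization to the bot's message router. The decorator, registry and function names are assumptions for illustration and do not reflect TeleChat's actual plugin interface.

```python
# Hypothetical plugin registry, for illustration only; not TeleChat's actual plugin API.
from typing import Awaitable, Callable, Dict

PLUGINS: Dict[str, Callable[[str], Awaitable[str]]] = {}


def plugin(name: str):
    """Register an async tool under a name so the bot can route requests to it."""
    def decorator(func: Callable[[str], Awaitable[str]]):
        PLUGINS[name] = func
        return func
    return decorator


@plugin("url_summary")
async def summarize_url(url: str) -> str:
    # A real plugin would fetch the page and pass its text to the configured model.
    return f"Summary of {url} (placeholder)"


async def dispatch(name: str, argument: str) -> str:
    """Look up a registered plugin and run it; unknown names return an error message."""
    handler = PLUGINS.get(name)
    if handler is None:
        return f"No plugin registered under '{name}'"
    return await handler(argument)
```

In this shape, adding a new capability means writing one async function and decorating it; the dispatcher stays unchanged, which is the usual appeal of a registry-based plugin design.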
Use Cases
TeleChat lets Telegram users and operators access powerful LLM and multimodal capabilities without building an LLM stack from scratch. End users can ask questions, upload documents and images for context, generate images, and run web-augmented searches through enabled plugins. Operators get extensive environment configuration to control models, plugins, conversation persistence, language and access lists. Deployment recipes and prebuilt Docker images make hosting straightforward on multiple platforms, and the repository documents both one-click and manual deployment flows. The plugin architecture and the included API submodule let developers extend capabilities, add custom tools and adapt new model providers while preserving conversation isolation and configurable user preferences to manage cost and privacy.
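As a rough sketch of the conversation-isolation idea mentioned above, histories can be kept per chat (or per chat and topic) so that different users and group topics never share context. The keying scheme, trimming policy and class name below are assumptions, not the repository's implementation.

```python
# Illustrative per-chat conversation store; keying and trimming policy are assumptions.
from collections import defaultdict

MAX_TURNS = 20  # hypothetical cap to bound token usage per conversation


class ConversationStore:
    """Keep an isolated message history for each (chat_id, topic_id) pair."""

    def __init__(self):
        self._histories = defaultdict(list)

    def append(self, chat_id: int, topic_id: int | None, role: str, content: str) -> None:
        key = (chat_id, topic_id)
        self._histories[key].append({"role": role, "content": content})
        # Trim old turns so long chats stay within the model's context window.
        if len(self._histories[key]) > 2 * MAX_TURNS:
            self._histories[key] = self._histories[key][-2 * MAX_TURNS:]

    def history(self, chat_id: int, topic_id: int | None) -> list[dict]:
        return list(self._histories[(chat_id, topic_id)])

    def reset(self, chat_id: int, topic_id: int | None) -> None:
        self._histories.pop((chat_id, topic_id), None)
```

Keying on both chat and topic is one way a bot can support Telegram's group topic mode while still letting an operator cap history length to control token cost per conversation.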
