comfyui_LLM_party

Basic Information

ComfyUI LLM Party provides a comprehensive set of custom ComfyUI nodes and example workflows for building, orchestrating and running LLM- and VLM-driven pipelines inside the ComfyUI visual frontend. The project is aimed at users who want to integrate large language models, multimodal models, retrieval-augmented generation (RAG), and tool plugins into image-generation and streaming workflows. It enables fast assembly of single-agent assistants, multi-agent topologies (radial and ring interaction modes), and locally managed, industry-specific knowledge bases. The repository supports both API-based and local model usage (including GGUF, llama.cpp and ollama), offers MCP integration to expose external tools to models, provides image hosting and TTS support, and includes installation and configuration guidance for ComfyUI environments. Prebuilt workflows and node documentation help users connect models, tools and social-app connectors for real use cases.
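As an illustration of the API-plus-local flexibility described above, ollama exposes an OpenAI-compatible endpoint, so the same client call can target either a hosted provider or a local server. This is a minimal sketch, not code from the project; the model name and endpoint values are placeholders.

```python
# Minimal sketch: an OpenAI-compatible client call that can point at a
# hosted API or at a local ollama server. Not project code; the model
# name below is a placeholder.
from openai import OpenAI

client = OpenAI(
    api_key="ollama",                      # ollama ignores the key value
    base_url="http://localhost:11434/v1",  # ollama's OpenAI-compatible endpoint
)

resp = client.chat.completions.create(
    model="llama3.1",  # any model pulled into the local ollama instance
    messages=[{"role": "user",
               "content": "Suggest a Stable Diffusion prompt for a foggy harbor."}],
)
print(resp.choices[0].message.content)
```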

App Details

Features
- A broad node library of LLM and VLM loaders supporting API, local and GGUF models.
- A streaming output mode for the LLM API nodes, plus a reasoning_content output that separates a model's reasoning from its final response (see the sketch after this list).
- Built-in support for many API styles and providers, with compatibility layers for aisuite and ollama.
- VLM support for vision-capable models, distributed local model usage, and examples for generating Stable Diffusion prompts with LLMs.
- MCP tool integration that converts remote tools into LLM-usable tools.
- Nodes for image hosting and example workflows for media pipelines.
- Ready-made workflows and manager integration for easy installation.
- Agent orchestration, including single-agent pipelines, multi-agent radial and ring interaction modes, RAG and GraphRAG knowledge management, and social-app connectors and TTS for streaming and chat integrations.
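As a rough sketch of what streaming with reasoning/response separation looks like at the API level, the snippet below targets a DeepSeek-style endpoint whose streamed deltas carry a separate reasoning_content field. The endpoint, key and model name are placeholders, and this is an illustration of the pattern rather than the project's node code.

```python
# Streaming sketch that collects reasoning tokens and answer tokens into
# separate buffers, mirroring the reasoning_content output described above.
# Assumes a provider whose deltas expose reasoning_content (DeepSeek-style);
# endpoint, key and model name are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-...", base_url="https://api.deepseek.com/v1")

reasoning, answer = [], []
stream = client.chat.completions.create(
    model="deepseek-reasoner",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    # Reasoning tokens arrive in a separate field from the answer tokens.
    if getattr(delta, "reasoning_content", None):
        reasoning.append(delta.reasoning_content)
    elif delta.content:
        answer.append(delta.content)

print("reasoning:", "".join(reasoning))
print("answer:", "".join(answer))
```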
Use Cases
This project lowers the barrier to building integrated LLM workflows inside the ComfyUI visual environment. It lets creators quickly prototype assistants, multi-agent systems and LLM-driven image workflows without writing glue code, by providing ready-made nodes and example workflows. Support for local GGUF and transformer models, plus API adapters, makes it flexible for privacy-sensitive or offline setups, while MCP and tool nodes provide access to external capabilities. Streaming outputs and debugging-oriented interfaces help researchers and developers tune parameters and observe model behavior in real time. The provided configuration files, manager-based install options and community workflows help students, content creators and streamers assemble one-stop LLM + TTS + image pipelines and manage domain knowledge with the RAG features.
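For the privacy-sensitive or offline setups mentioned above, GGUF models can run entirely locally. The sketch below uses llama-cpp-python as one such backend; the model path is hypothetical, and this is an assumption-laden illustration rather than the project's loader code.

```python
# Hedged sketch of offline GGUF inference with llama-cpp-python, one of the
# local backends the project supports. The model path is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="models/qwen2.5-7b-instruct-q4_k_m.gguf",  # any local GGUF file
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Summarize retrieval-augmented generation in one sentence."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```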
