GenAI Processors

Basic Information

GenAI Processors is a lightweight Python library designed for developers to build modular, asynchronous, and composable pipelines for generative AI workloads. It defines the Processor abstraction, which processes streams of ProcessorPart objects that represent pieces of content such as text, images, audio, or JSON. The library targets both turn-based and streaming interactions and offers primitives for orchestrating concurrent tasks, handling streaming model output, and wrapping Gemini model calls. It includes example notebooks and runnable examples demonstrating real-time agents, research agents, and live commentary workflows. The project is provided as a Python package requiring Python 3.10+, and it is intended to be used as infrastructure for assembling, extending, and running AI processing components rather than as a consumer-facing application.
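
The core abstraction is easiest to see in a short sketch. The example below is illustrative rather than definitive: it assumes the module layout shown in the project's README (content_api.ProcessorPart, streams.stream_content, and the processor.part_processor_function decorator), and exact names or signatures may differ between releases.

    import asyncio

    from genai_processors import content_api, processor, streams


    # A minimal PartProcessor built from a function: it receives one
    # ProcessorPart at a time and yields transformed parts. The decorator
    # name follows the README examples and is assumed here.
    @processor.part_processor_function
    async def upper_case(part: content_api.ProcessorPart):
        yield content_api.ProcessorPart(part.text.upper())


    async def main():
        # stream_content wraps plain strings (or ProcessorPart objects)
        # into an asynchronous stream of parts.
        input_stream = streams.stream_content(["hello", "world"])

        # Applying a processor to a stream yields an async iterable of parts.
        async for part in upper_case(input_stream):
            print(part.text)


    if __name__ == "__main__":
        asyncio.run(main())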

App Details

Features
The README highlights modular Processor and PartProcessor units that can be chained or run in parallel to form complex data flows. Each ProcessorPart carries metadata such as MIME type, role, and custom attributes, and parts can hold multiple content types, including text, images, and audio. The library integrates with GenAI APIs via built-in processors such as GenaiModel for request/response usage and LiveProcessor for streaming. It is asynchronous and concurrent, built on Python asyncio, and provides stream-management utilities for splitting, concatenating, and merging asynchronous streams. It is extensible through inheritance and simple decorators, includes a core set of built-in processors plus a contrib area for community additions, and ships example notebooks and scripts demonstrating common patterns.
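
As a sketch of how composition might look in practice (again assuming the README's module names; the '+' operator for sequential chaining and '//' for parallel PartProcessors are described in the project's documentation, while the processors defined here are hypothetical):

    import asyncio

    from genai_processors import content_api, processor, streams


    @processor.part_processor_function
    async def strip_whitespace(part: content_api.ProcessorPart):
        # Hypothetical step: trim surrounding whitespace from each text part.
        yield content_api.ProcessorPart(part.text.strip())


    @processor.part_processor_function
    async def shout(part: content_api.ProcessorPart):
        # Hypothetical step: upper-case each text part.
        yield content_api.ProcessorPart(part.text.upper())


    # '+' composes processors into a sequential pipeline; the README also
    # describes '//' for running PartProcessors concurrently on each part.
    pipeline = strip_whitespace + shout


    async def main():
        parts = streams.stream_content(["  hello  ", "  world  "])
        async for part in pipeline(parts):
            print(part.text)


    if __name__ == "__main__":
        asyncio.run(main())
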
Use Cases
GenAI Processors helps developers reduce boilerplate when building generative AI pipelines by providing a consistent abstraction for content parts and processors, making it easier to compose, parallelize, and reuse processing steps. The asynchronous design enables efficient handling of network I/O and concurrent tasks, which is useful for real-time or streaming agents. Built-in processors for turn-based and live interactions simplify integration with Gemini model calls, and stream utilities make it straightforward to transform and route content parts. Example notebooks and runnable agents show how to build audio-in/audio-out live agents, research-oriented agents, and live-commentary setups, accelerating both prototyping and production of agentic workflows.
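
A turn-based Gemini call through the library might look like the following sketch. The GenaiModel location and its constructor arguments (api_key, model_name) follow the README examples but should be treated as assumptions, and a valid Gemini API key is required.

    import asyncio
    import os

    from genai_processors import streams
    from genai_processors.core import genai_model


    async def main():
        # Wrap a Gemini model as a Processor; keyword argument names are
        # assumed from the README examples.
        model = genai_model.GenaiModel(
            api_key=os.environ["GEMINI_API_KEY"],
            model_name="gemini-2.0-flash",
        )

        # Feed a single-turn prompt as a content stream and print the
        # response parts as they are streamed back.
        prompt = streams.stream_content(["Summarize asyncio in one sentence."])
        async for part in model(prompt):
            print(part.text, end="")


    if __name__ == "__main__":
        asyncio.run(main())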
