
Basic Information

This repository provides a Python library, CLI tools, documentation and examples that enable AI models to interact with and modify Jupyter notebooks. It equips agents with notebook-level tools such as adding code cells, inserting markdown and executing code, so models can operate on the entire notebook rather than on single cells only. The implementation integrates with JupyterLab and its Real-Time Collaboration (RTC) system through the Jupyter NbModel Client and Jupyter Kernel Client, and agents can also run independently of the Jupyter server by connecting over RTC. The LangChain agent framework orchestrates model-to-tool interactions. The project includes installation instructions, CLI examples for prompt-driven and explain-error agents, guidance for running a server, and development and testing workflows.
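
To make that architecture concrete, the sketch below shows how an agent process might attach to a running JupyterLab server through the Jupyter NbModel Client (for the shared notebook model) and the Jupyter Kernel Client (for execution). The import paths, call signatures, server URL, token and notebook path are assumptions inferred from this description rather than verbatim project code; consult the clients' own documentation for the exact API.

```python
# Minimal sketch, assuming NbModel/Kernel client APIs along the lines described
# above; the exact names and signatures may differ from the real libraries.
import asyncio

from jupyter_kernel_client import KernelClient                 # assumed import
from jupyter_nbmodel_client import (                           # assumed imports
    NbModelClient,
    get_jupyter_notebook_websocket_url,
)

SERVER_URL = "http://localhost:8888"   # JupyterLab with collaboration (RTC) enabled
TOKEN = "MY_TOKEN"                     # placeholder token
NOTEBOOK_PATH = "notebook.ipynb"       # placeholder notebook path


async def add_and_run_cell() -> None:
    # Connect an execution client to a kernel on the Jupyter server.
    with KernelClient(server_url=SERVER_URL, token=TOKEN) as kernel:
        # Attach to the notebook's shared (RTC) document model.
        ws_url = get_jupyter_notebook_websocket_url(
            server_url=SERVER_URL, token=TOKEN, path=NOTEBOOK_PATH
        )
        async with NbModelClient(ws_url) as notebook:
            # Append a code cell and execute it; collaborators connected to the
            # same notebook see the change live in JupyterLab.
            index = notebook.add_code_cell("print('hello from an agent')")
            notebook.execute_cell(index, kernel)


if __name__ == "__main__":
    asyncio.run(add_and_run_cell())
```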

App Details

Features
Notebook-focused tools and actions, including code execution, cell insertion (insert_cell) and markdown insertion, that operate at the whole-notebook level rather than only per cell. Real-Time Collaboration integration built on Jupyter RTC via the Jupyter NbModel Client and Jupyter Kernel Client, enabling live updates in JupyterLab. LangChain as the agent framework managing model-tool interactions. CLI commands for common workflows, such as a prompt agent and an explain-error agent. Documented support for multiple model providers, demonstrated with an Azure OpenAI example. Packaging, development install options, tests and a make server target for running a deployment. Documented notes and workarounds for pycrdt compatibility and JupyterLab collaboration prerequisites.
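
As a rough illustration of the model-tool wiring described above, the sketch below exposes two notebook-style actions as LangChain tools. The NotebookSession class and its methods are hypothetical placeholders standing in for the library's real notebook tooling; only the @tool decorator is an actual LangChain API.

```python
# Sketch of exposing notebook actions as LangChain tools.
# NOTE: NotebookSession and its methods are hypothetical placeholders; only the
# @tool decorator comes from LangChain itself.
from langchain_core.tools import tool


class NotebookSession:
    """Placeholder for an RTC-backed notebook connection (illustrative only)."""

    def add_code_cell(self, source: str) -> int:
        print(f"[notebook] adding code cell:\n{source}")
        return 0  # index of the new cell

    def run_cell(self, index: int) -> str:
        print(f"[notebook] executing cell {index}")
        return "ok"


session = NotebookSession()


@tool
def insert_code_cell(source: str) -> int:
    """Insert a new code cell at the end of the notebook and return its index."""
    return session.add_code_cell(source)


@tool
def execute_cell(index: int) -> str:
    """Execute the notebook cell at the given index and return its status."""
    return session.run_cell(index)


NOTEBOOK_TOOLS = [insert_code_cell, execute_cell]
```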
Use Cases
The project augments Jupyter workflows by letting AI models create, edit and run cells across an entire notebook, which can speed up prototyping, generate example code and assist with debugging. Built-in CLI examples show how to request code generation or ask an agent to explain errors in a notebook. Real-Time Collaboration support means changes made by agents are visible immediately in JupyterLab sessions, and running agents outside the Jupyter server over RTC enables integration with external clients and deployment scenarios. LangChain-based orchestration and multi-provider model support make it easier to connect different model backends, and the documented development, testing and packaging workflows help teams adopt and extend the library.
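
The following sketch suggests how notebook-style tools could be bound to one such backend, using the Azure OpenAI example mentioned in the features. The deployment name, API version and the add_markdown_cell tool are illustrative assumptions, not the project's own code; credentials are expected in the standard AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY environment variables.

```python
# Sketch: binding a notebook-style tool to an Azure OpenAI chat model via LangChain.
# The deployment name, API version and tool body are illustrative assumptions.
from langchain_core.tools import tool
from langchain_openai import AzureChatOpenAI


@tool
def add_markdown_cell(text: str) -> str:
    """Append a markdown cell with the given text to the notebook."""
    # Placeholder body; a real agent would write into the shared notebook model.
    print(f"[notebook] markdown cell added:\n{text}")
    return "added"


llm = AzureChatOpenAI(
    azure_deployment="gpt-4o",   # assumed deployment name
    api_version="2024-06-01",    # assumed API version
)
llm_with_tools = llm.bind_tools([add_markdown_cell])

# Ask the model for an action and dispatch any tool calls it returns.
reply = llm_with_tools.invoke(
    "Document this notebook: add a markdown cell summarising what it does."
)
for call in reply.tool_calls:
    if call["name"] == "add_markdown_cell":
        add_markdown_cell.invoke(call["args"])
```

A full agent would loop this exchange, feeding tool results back to the model, which is the kind of orchestration LangChain's agent machinery handles.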
