oreilly ai agents

Basic Information

This repository contains the code and notebooks that accompany the O'Reilly live course and video series AI Agents A-Z and Modern Automated AI Agents. It is intended as a practical, hands-on companion for learners who want to understand, implement, evaluate, and deploy AI agents, from prototypes to production. The content walks through foundational concepts and more advanced agent paradigms while demonstrating several third-party frameworks and tools. The README documents environment setup (Python 3.11 is required) and explains how to run the Jupyter notebooks. The collection includes introductory examples for frameworks such as SmolAgents, CrewAI, Autogen, OpenAI Agents and Swarm, and LangGraph, and shows how to build a simple custom framework. It also contains notebooks on evaluation, bias analysis, and plan-and-execute and reflection agents, along with notes on deployment examples such as Streamlit.
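To make the idea of a simple custom framework concrete, here is a minimal, self-contained sketch of a tool-calling agent loop in plain Python. It is not code from the repository: the fake_llm stub, the calculator tool, and the ACTION/FINAL message format are hypothetical stand-ins for a real model call and tool set.

```python
# Minimal tool-calling agent loop (illustrative sketch, not repo code).
# A real implementation would replace fake_llm with an actual LLM call.

def calculator(expression: str) -> str:
    """Toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}, {}))

TOOLS = {"calculator": calculator}

def fake_llm(history: list[str]) -> str:
    """Stand-in for a model: decides whether to call a tool or answer."""
    if not any(line.startswith("OBSERVATION:") for line in history):
        return "ACTION: calculator | 17 * 24"
    return "FINAL: The product is " + history[-1].split(":", 1)[1].strip()

def run_agent(task: str, max_steps: int = 5) -> str:
    history = [f"TASK: {task}"]
    for _ in range(max_steps):
        reply = fake_llm(history)
        if reply.startswith("FINAL:"):
            return reply.removeprefix("FINAL:").strip()
        # Parse "ACTION: tool | input", call the tool, record the observation.
        _, payload = reply.split(":", 1)
        tool_name, tool_input = (part.strip() for part in payload.split("|", 1))
        observation = TOOLS[tool_name](tool_input)
        history.append(f"OBSERVATION: {observation}")
    return "Gave up after max_steps."

print(run_agent("What is 17 times 24?"))
```

The frameworks covered in the notebooks add structure on top of a loop like this (tool schemas, memory, multi-agent coordination), but the core observe-act cycle is the same.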

App Details

Features
A comprehensive set of Jupyter notebooks covering many agent frameworks and patterns is included. The notebooks introduce SmolAgents, CrewAI, Microsoft Autogen, OpenAI Agents and Swarm, and LangGraph workflows, and include examples that use local LLMs. There are practical examples for deploying CrewAI with Streamlit, LangGraph ReAct agents, LangGraph chess agents, and plan-and-execute and reflection agent patterns, plus a section on building a simple custom framework called Squad Goals. Evaluation-focused notebooks cover rubric-based evaluation, alignment assessment, and experiments measuring tool-selection and positional bias across models. The repo documents environment setup with Python 3.11 and a requirements file, notes example scripts such as nova_apt.py, and includes a cautionary notebook that allows an agent to control the local machine.
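As a rough illustration of the plan-and-execute pattern mentioned above, the following sketch separates a planner that breaks a task into steps from an executor that carries them out in order. It is a generic sketch, not code from any of the listed frameworks; the plan and execute_step functions are deterministic stubs standing in for LLM calls.

```python
# Generic plan-and-execute loop (illustrative only; the planner and executor
# here are deterministic stubs standing in for LLM calls).

def plan(task: str) -> list[str]:
    """Planner: break the task into ordered sub-steps."""
    return [f"research: {task}", f"summarize findings for: {task}"]

def execute_step(step: str, context: list[str]) -> str:
    """Executor: carry out one sub-step, seeing earlier results as context."""
    return f"result of '{step}' (given {len(context)} prior results)"

def plan_and_execute(task: str) -> str:
    results: list[str] = []
    for step in plan(task):
        results.append(execute_step(step, results))
    # A reflection variant would add a critique pass here and re-plan if needed.
    return results[-1]

print(plan_and_execute("compare two agent frameworks"))
```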
Use Cases
The repository is designed to teach practitioners and students how to design, implement, test, and iterate on AI agents through hands-on code examples. Learners can reproduce the tutorials in an isolated Python 3.11 virtual environment and run the interactive notebooks to compare frameworks and model behaviors. The materials demonstrate deployment patterns, evaluation rubrics, and experiments that quantify tool-selection bias, all of which help in choosing architectures and LLMs. The content also contrasts open- and closed-source options and discusses cost and best practices, supporting informed decisions for projects and production use. Instructor notes and example projects, including a minimal agent framework and Streamlit demos, accelerate practical learning. A clear warning accompanies the notebooks that allow an agent to control the local machine.
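The bias experiments referenced above essentially come down to presenting the same options in different orders and counting which position (or tool) the model picks. A minimal version of that bookkeeping might look like the sketch below; ask_model is a hypothetical stand-in for a real LLM call, and the tool names are illustrative.

```python
# Sketch of a positional / tool-selection bias measurement: present the same
# tools in every ordering and count how often each *position* is chosen.
from collections import Counter
from itertools import permutations
import random

TOOLS = ["web_search", "calculator", "code_interpreter"]

def ask_model(question: str, options: list[str]) -> str:
    """Hypothetical model call; here it is biased toward the first option."""
    return options[0] if random.random() < 0.6 else random.choice(options)

def measure_positional_bias(question: str, trials_per_order: int = 50) -> Counter:
    position_counts: Counter = Counter()
    for order in permutations(TOOLS):
        for _ in range(trials_per_order):
            choice = ask_model(question, list(order))
            position_counts[order.index(choice)] += 1
    return position_counts  # an unbiased model picks each position about equally

print(measure_positional_bias("Which tool should handle '2 + 2'?"))
```

A pronounced skew toward position 0 in the resulting counts would indicate positional bias of the kind the evaluation notebooks investigate.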
