generative-ai-amazon-bedrock-langchain-agent-example

Basic Information

This repository provides a hands-on example and launchpad for developers building generative conversational agents with Amazon Bedrock, LangChain, and AWS services. It demonstrates a domain-focused financial services agent that can retrieve personalized account information, assist with loan applications, and answer natural language questions while citing sources. The sample ties together an AWS Amplify web frontend, an Amazon Lex chatbot for NLU, Lambda functions for orchestration, DynamoDB for customer data and conversation memory, Amazon Kendra for retrieval-augmented generation (RAG), and a foundation model hosted on Amazon Bedrock. The README includes architecture diagrams, sample prompts, a demo recording, and links to deployment, testing, and cleanup guides so teams can reproduce and adapt the pattern for other conversational agent use cases.
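For orientation, the sketch below shows how a fulfillment Lambda in this kind of architecture might bridge Amazon Lex and the LangChain agent. The event fields follow the Lex V2 Lambda input format; `run_agent` is a hypothetical helper (a version is sketched under Features below), not code from this repository.

```python
# Hypothetical Lex V2 fulfillment handler; run_agent is an assumed helper
# (see the sketch under Features) that invokes the LangChain agent.
def lambda_handler(event, context):
    session_id = event["sessionId"]        # Lex V2 session identifier
    user_input = event["inputTranscript"]  # raw user utterance

    answer = run_agent(user_input, session_id)

    # Close the dialog and return the agent's answer as plain text.
    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {
                "name": event["sessionState"]["intent"]["name"],
                "state": "Fulfilled",
            },
        },
        "messages": [{"contentType": "PlainText", "content": answer}],
    }
```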

App Details

Features
The project integrates a LangChain conversational agent with several AWS services to produce contextual responses with source attribution. LangChain memory backed by a DynamoDB chat message history preserves conversation context across turns. The agent calls an Anthropic Claude 3 Sonnet foundation model on Amazon Bedrock for generation and applies chain-of-thought prompting with an actions/observations loop. Amazon Kendra serves as the retrieval tool, performing semantic searches over documents and web content and supplying source attribution. Amazon Lex provides intent recognition and NLU, AWS Lambda handles intent fulfillment and business logic, and an AWS Amplify-hosted website demonstrates a web chat channel. The repo includes architecture diagrams, sample prompts, deployment and testing documentation, and notes on optional SMS and voice integration.
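As a rough illustration of how these pieces compose, the sketch below wires Bedrock, DynamoDB-backed memory, and a Kendra retriever together with LangChain. It substitutes a `ConversationalRetrievalChain` for the repository's full agent loop, and the table name and Kendra index ID are placeholders, so treat it as a minimal sketch rather than the repo's actual implementation.

```python
from langchain_aws import ChatBedrock
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_community.retrievers import AmazonKendraRetriever
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain

def run_agent(user_input: str, session_id: str) -> str:
    # Claude 3 Sonnet on Amazon Bedrock.
    llm = ChatBedrock(
        model_id="anthropic.claude-3-sonnet-20240229-v1:0",
        model_kwargs={"temperature": 0.0},
    )

    # Conversation memory persisted per session in DynamoDB.
    history = DynamoDBChatMessageHistory(
        table_name="ConversationTable",  # placeholder table name
        session_id=session_id,
    )
    memory = ConversationBufferMemory(
        chat_memory=history,
        memory_key="chat_history",
        return_messages=True,
        output_key="answer",
    )

    # Kendra index used for retrieval-augmented answers with sources.
    retriever = AmazonKendraRetriever(index_id="YOUR-KENDRA-INDEX-ID")

    chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=retriever,
        memory=memory,
        return_source_documents=True,
    )
    result = chain.invoke({"question": user_input})

    # Attach source attribution from the retrieved documents.
    sources = {doc.metadata.get("source") for doc in result["source_documents"]}
    return result["answer"] + "\n\nSources: " + ", ".join(s for s in sources if s)
```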
Use Cases
This example accelerates building production-style generative AI agents by showing end-to-end integration patterns and operational concerns on AWS. Developers can reuse the architecture to combine RAG with a foundation model, producing answers grounded in customer documents and web sources while preserving conversational state in DynamoDB for multi-turn interactions. The solution illustrates how to connect frontend channels, Lex NLU, Lambda orchestration, Kendra retrieval, and Bedrock model calls, and it demonstrates source attribution and chain-of-thought agent reasoning. Deployment, testing, and cleanup guides reduce setup friction and help teams validate behavior with the provided sample prompts and demo flows, enabling faster experimentation and adaptation to other domains.
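To see the multi-turn behavior described above, here is a usage sketch against the hypothetical `run_agent` helper: because both calls share a `session_id`, the second question is answered with the DynamoDB-stored history in context, so the pronoun resolves to the first turn's topic.

```python
# Two turns under the same session_id; the follow-up relies on the
# conversation history persisted in DynamoDB by the first call.
print(run_agent("What loan products do you offer?", session_id="demo-session-1"))
print(run_agent("Which documents do I need to apply for one?", session_id="demo-session-1"))
```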
