Basic Information

This repository implements a tiny Model Context Protocol (MCP) server intended for developers building interoperable AI models and tools that need secure, real-time messaging. It is an Express.js application that exposes an SSE endpoint and a message posting endpoint so LLMs or other services can connect and exchange context. The server bundles cryptographic tooling to generate SJCL P-256 key pairs, derive shared secrets, and perform AES-CCM encryption and decryption, enabling end-to-end encrypted exchanges between agents. The README includes a worked example of a Sonnet LLM thread demonstrating key exchange, derivation of a shared secret, encryption of a short message, and decryption. The project is packaged with npm scripts for development and production and is configurable via a PORT environment variable. It is licensed under MIT.

App Details

Features
The server provides a compact set of features focused on secure agent communication:
- Generates SJCL P-256 key pairs without exposing private keys.
- Derives shared secrets from a private key and a peer public key to enable symmetric encryption.
- Encrypts and decrypts messages with SJCL AES-CCM.
- Offers server-sent events for real-time connections and a POST API to send messages to specific connections.
- Runs on Express.js, with npm scripts for development, build, and production start.
- Powered by the Stanford JavaScript Crypto Library and documents environment configuration.
- Includes an end-to-end README example showing tool-use calls and tool results to illustrate the workflow.
Use Cases
This MCP server makes it easier for developers to add secure, standardized communication between AI models and external tools. By implementing key generation, shared-secret derivation, and AES-CCM encryption/decryption, it reduces the effort of building end-to-end encrypted agent interactions and avoids reinventing cryptographic plumbing. The SSE endpoint and message API enable real-time context streaming and targeted message delivery for multi-agent workflows. Example conversations in the README demonstrate how an LLM can call cryptographic tools to perform a secure key exchange and then encrypt and decrypt payloads. The project is lightweight, configurable via environment variables, and includes development and production workflows, so teams can run it locally or deploy it as part of a larger agent orchestration stack.
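On the consuming side, a client of the server's SSE stream needs to split the raw stream into events. The helper below is hypothetical (the project does not document a client library); it parses SSE-formatted text into `{ event, data }` records:

```javascript
// Hypothetical client-side helper: parse raw server-sent-events text
// into { event, data } records. Frames are separated by blank lines;
// the default event name is "message" per the SSE format.
function parseSseStream(raw) {
  return raw
    .split('\n\n')
    .filter((frame) => frame.trim() !== '')
    .map((frame) => {
      const record = { event: 'message', data: '' };
      for (const line of frame.split('\n')) {
        if (line.startsWith('event: ')) record.event = line.slice(7);
        else if (line.startsWith('data: ')) record.data += line.slice(6);
      }
      return record;
    });
}
```

A client in a multi-agent workflow would feed chunks received from the SSE endpoint through a parser like this, then dispatch each decoded event to the appropriate agent.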
