
Basic Information

my-neuro is a toolbox and desktop application for building a personalized AI companion that approximates a human-like character by combining voice, personality, appearance and memory. Inspired by Neuro-sama, the repository packages tools, scripts and guidance for deploying either fully local or mixed local/closed-API systems on Windows. It supports training or cloning TTS voices, swapping Live2D avatars, integrating visual recognition, and tuning LLM behavior via the included LLM-studio folder. The project targets both non-experts, through one-click .bat deployment, and experienced users who want to run or fine-tune open-source models locally. Hardware guidance and optional cloud API usage are documented, and the README walks through step-by-step Windows setup, conda environment commands, required model downloads, and optional RAG and diagnostic utilities.
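
As a rough illustration of the mixed local/closed-API setup the README describes, the sketch below switches a chat call between a locally hosted model and a cloud provider. It assumes both backends expose an OpenAI-compatible endpoint; the URL, model names and environment variables are placeholders for illustration, not values taken from the repository.

```python
# Illustrative only: a minimal dual-backend chat call, assuming both the local
# server and the cloud provider speak the OpenAI-compatible chat API.
# Endpoint URL, model names and environment variables are hypothetical.
import os
from openai import OpenAI

USE_LOCAL = os.getenv("MY_NEURO_LOCAL", "1") == "1"

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1" if USE_LOCAL else "https://api.example.com/v1",
    api_key="not-needed-locally" if USE_LOCAL else os.environ["CLOUD_API_KEY"],
)

reply = client.chat.completions.create(
    model="local-llm" if USE_LOCAL else "cloud-llm",
    messages=[
        {"role": "system", "content": "You are a cheerful desktop companion."},
        {"role": "user", "content": "Good morning! What did we talk about yesterday?"},
    ],
)
print(reply.choices[0].message.content)
```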

App Details

Features
Dual-model support for both open-source local models and closed-source API models. Low-latency local inference, with claimed conversational latency under one second. Subtitles synchronized with speech output. TTS cloning and training built on GPT-SoVITS, with guidance on audio input and resource requirements. MCP integration and support for real-time interruptions. Vision integration for image recognition with conditional activation. Live2D avatar replacement and a desktop pet executable for an interactive UI. Long-term memory and proactive, context-driven dialogue. Streaming integration (for example, Bilibili) and an Android app for mobile chat. Deployment helpers including Batch_Download.py, one-click .bat installers, Game-starts.bat, diagnostic_tool.py and explicit conda/Python setup instructions. Optional RAG module for enhanced memory at the cost of additional VRAM.
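
The optional RAG module is not detailed here, but the following sketch shows one common way such a memory lookup can work: embed stored facts once and retrieve the closest matches for each user query. The embedding model name and the example memories are assumptions made for illustration, not the project's actual implementation; loading the embedder consumes extra VRAM on GPU, which matches the trade-off noted above.

```python
# A rough sketch of a RAG-style memory lookup, not the project's own module:
# past facts are embedded once, and the most similar ones can be prepended to
# the prompt. Model name and memory contents are placeholders.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")

memories = [
    "User prefers being greeted in a playful tone.",
    "User's cat is named Mochi.",
    "User usually streams on Friday evenings.",
]
memory_vecs = embedder.encode(memories, convert_to_tensor=True)

def recall(query: str, k: int = 2) -> list[str]:
    """Return the k stored memories most similar to the query."""
    query_vec = embedder.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, memory_vecs, top_k=k)[0]
    return [memories[hit["corpus_id"]] for hit in hits]

print(recall("What's my pet called?"))
```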
Use Cases
The repository helps users build a customizable, persistent conversational agent that can act as a desktop companion, a streamer character, or a voice-enabled assistant. It enables cloning or training a personalized voice, swapping avatars, and preserving long-term user memory so the model remembers preferences and personality traits. Local inference options improve responsiveness and privacy, while the included LLM-studio folder and fine-tuning notes let advanced users adapt language behavior. One-click deployment and step-by-step instructions lower the barrier for non-technical users, and diagnostic tools and configuration files assist troubleshooting. The project documents hardware needs and API-key usage for closed models, and provides features useful for streaming, live interaction and mobile access.
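
To make the long-term memory idea concrete, here is a minimal, purely illustrative way to persist user preferences between sessions with a JSON file; the project's actual memory mechanism may differ, and the file path and field names below are invented for the example.

```python
# Illustrative persistence of user facts between sessions, assuming a plain
# JSON file on disk; file path and field names are made up for the example.
import json
from pathlib import Path

MEMORY_FILE = Path("memory.json")

def load_memory() -> dict:
    """Read remembered preferences, or start empty on first run."""
    if MEMORY_FILE.exists():
        return json.loads(MEMORY_FILE.read_text(encoding="utf-8"))
    return {"preferences": [], "facts": []}

def remember(memory: dict, category: str, item: str) -> None:
    """Append a new item and write the store back to disk."""
    memory.setdefault(category, []).append(item)
    MEMORY_FILE.write_text(json.dumps(memory, ensure_ascii=False, indent=2),
                           encoding="utf-8")

memory = load_memory()
remember(memory, "preferences", "Likes short, upbeat replies")
print(memory)
```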
