Basic Information

Sbnb Linux is a minimalist, immutable Linux distribution designed to boot bare-metal servers and bring up remote access and AI workloads quickly. It targets environments from home labs to distributed data centers and provides an operating environment that runs entirely in memory, establishes secure tunnels for remote management, and can host GPU-accelerated workloads inside containers or virtual machines. The project includes tooling and documentation to boot systems from USB or over the network with iPXE, attach Nvidia GPUs to guests using vfio-pci, and deploy popular AI stacks such as vLLM, SGLang, RAGFlow, LightRAG, and Qwen2.5-VL. It supports Confidential Computing via AMD SEV-SNP when the hardware permits and emphasizes immutable images, reproducible builds via Buildroot, and automation-friendly customization for infrastructure-as-code workflows.
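
One of the AI stacks named above, vLLM, publishes an OpenAI-compatible server container, so a typical deployment on a GPU-enabled Docker host looks roughly like the sketch below. This is a generic illustration rather than a command from the Sbnb documentation: it assumes the Nvidia container toolkit is available on the host, and the model name, port, and image tag are placeholder choices.

    # Generic vLLM deployment sketch; model, port, and image tag are assumptions.
    docker run --rm --gpus all --ipc=host \
      -p 8000:8000 \
      vllm/vllm-openai:latest \
      --model Qwen/Qwen2.5-7B-Instruct

    # The container then serves an OpenAI-compatible API on the host:
    curl http://localhost:8000/v1/models

The other stacks listed (SGLang, RAGFlow, LightRAG) generally ship their own container images and can be brought up in a similar way.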


App Details

Features
- Boots a Unified Kernel Image (UKI) and runs entirely in memory, giving an immutable, resilient host.
- Ships an initramfs with BusyBox, systemd, Tailscale for automatic secure tunnels, the Docker engine for containers, and QEMU/KVM for virtual machines.
- Supports USB and iPXE boot, provides scripts to create bootable media on Windows, macOS, and Linux (see the first sketch after this list), and exposes sbnb-cmds.sh for low-level boot-time customization.
- Demonstrates GPU passthrough to guests via vfio-pci for Nvidia GPUs (see the second sketch after this list).
- Is built with Buildroot using a br2-external tree for reproducible images.
- Offers a developer mode: sbnb-dev-env.sh launches a full development container.
- Applies CPU and security-processor microcode updates at boot.
- Supports AMD SEV-SNP confidential VMs on capable hardware.
- Uses A/B update logic with a watchdog that reverts failed updates.
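
As a rough illustration of the USB boot path, writing a raw image to a flash drive on Linux usually comes down to the sketch below. The archive name, compression format, and target device are placeholders, not the project's actual release artifact names; follow the Sbnb scripts or documentation for the exact procedure.

    # Placeholder file and device names; verify the target device with lsblk
    # before writing, since dd overwrites it completely.
    xz -dc sbnb-linux.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync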
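
The GPU passthrough item follows the standard Linux vfio-pci flow; the sketch below shows that generic flow rather than Sbnb's own tooling. The PCI address 0000:65:00.0, the 10de 2204 vendor:device pair, and the guest disk image name are placeholders you would replace with values from your own host.

    # Generic vfio-pci passthrough sketch; assumes the IOMMU is enabled in
    # firmware and on the kernel command line. Addresses and IDs are placeholders.
    lspci -nn | grep -i nvidia          # find the GPU's PCI address and IDs
    modprobe vfio-pci                   # make sure the vfio-pci driver is loaded
    echo 0000:65:00.0 > /sys/bus/pci/devices/0000:65:00.0/driver/unbind
    echo 10de 2204 > /sys/bus/pci/drivers/vfio-pci/new_id

    # Hand the GPU to a QEMU/KVM guest (disk image name is a placeholder).
    qemu-system-x86_64 \
      -enable-kvm -machine q35 -cpu host -m 16G -smp 8 \
      -device vfio-pci,host=0000:65:00.0 \
      -drive file=guest.qcow2,if=virtio \
      -nographic

Inside the guest, the passed-through GPU appears as an ordinary PCI device, so the usual Nvidia driver stack applies.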
Use Cases
Sbnb Linux shortens the path from bare-metal hardware to running AI workloads by providing a minimal, preconfigured runtime that is easy to deploy and manage remotely. Running in memory with immutable images reduces configuration drift and restores a known-good state after power cycles, which is useful for remote or unattended sites. Built-in Tailscale tunnels simplify secure access, while Docker and QEMU allow flexible workload placement, including GPU-accelerated containers and confidential-computing VMs for stronger isolation. The project includes step-by-step guides, infrastructure-as-code examples, monitoring integration references, and reproducible build instructions, so teams can automate provisioning, recover automatically from failed updates, and standardize AI infrastructure across home labs and production fleets.
