Fremont, CA - Apr 21, 2026

FOR IMMEDIATE RELEASE

eRacks Ships AI Rackmount Servers With Pre-Installed Open-Source AI Stack, No Vendor Lock-In

Every AILSA, AIDAN, AINSLEY, and AISHA ships with vanilla Ubuntu 26.04 LTS plus Ollama, Open WebUI, vLLM, PyTorch, CUDA, and Docker - ready to run Llama 4, Qwen 3, and DeepSeek V3.2 in 10 minutes

eRacks Open Source Systems, the rackmount Linux server builder serving businesses and research labs since 1999, today announced that every server in its AI rackmount lineup now ships with a complete, fully open-source AI software stack pre-installed and ready to run the latest large language models out of the box.

The stack combines Ubuntu 26.04 LTS, Ollama, Open WebUI, vLLM, PyTorch with CUDA, Docker with the NVIDIA Container Toolkit, and the standard Linux development toolchain. Every component is a stock upstream release - not a forked, rebranded, or custom-packaged variant. Customers can update any piece of the stack independently on the same cadence as the open-source community, without waiting for vendor firmware bundles or proprietary update channels.
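Because every component is a stock upstream release, each layer updates through its community's normal channel rather than a vendor bundle. A minimal sketch of what independent updates might look like on such a system (image tags and package names are illustrative; the exact set depends on the ordered configuration):

```shell
# Base OS and system packages through apt, on stock Ubuntu's own cadence
sudo apt update && sudo apt upgrade

# Ollama upgrades in place by re-running its upstream install script
curl -fsSL https://ollama.com/install.sh | sh

# Containerized services such as Open WebUI update via a Docker image pull
docker pull ghcr.io/open-webui/open-webui:main

# Python-level components (vLLM, PyTorch) via pip in their environment
pip install --upgrade vllm torch
```

Each command touches only its own layer, which is the practical meaning of "update any piece of the stack independently."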

"Our competitors in the boutique AI server space ship proprietary Linux forks and vendor-locked management tools," said Joe Wolff, founder and CEO of eRacks. "That trades convenience today for lock-in later. We believe the hardware should be ours to build well, and the software should be the customer's to own, audit, fork, and replace. Our customers' teams already know Ubuntu. They already know apt. That's the point."

The pre-installed stack is designed for a ten-minute path from "rack the server" to "chat with a local LLM." After powering on and setting a password over SSH, users open a browser to Open WebUI, sign up as the first administrator, select a model from a dropdown pre-sized to their GPU VRAM, and begin a conversation. The ready-to-run model list includes Meta's Llama 4 Scout (17B active parameters with a 10-million-token context window), Alibaba's Qwen 3 series, DeepSeek V3.2, and Mistral - spanning use cases from code generation to reasoning to document analysis.
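The path above can also be run entirely from a terminal. A hypothetical first session, assuming the default Ollama and Open WebUI setup (the hostname, port, and model tag below are illustrative, not guaranteed defaults):

```shell
# 1. Reach the freshly racked server over SSH (hostname is illustrative)
ssh admin@ai-server.local

# 2. Open WebUI is a browser app; visit it and sign up as the
#    first administrator (port depends on the shipped configuration):
#    http://ai-server.local:8080

# 3. Or stay on the command line and chat directly through Ollama.
#    Pick a model tag sized to the installed GPU's VRAM.
ollama pull llama4:scout
ollama run llama4:scout "Summarize the attached meeting notes."
```

Either route ends in the same place: a conversation with a locally hosted model, with no external API involved.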

Customers who want different configurations can request alternatives at order time at no additional cost: Debian 12, Rocky Linux 9, Proxmox VE, llama.cpp instead of Ollama, Hugging Face Text Generation Inference, SGLang, or JupyterLab with a research stack. The eRacks technical team pre-installs and tests any requested configuration before shipment.

The eRacks AI server lineup spans four models at four price points: AILSA (2U, low-profile GPU, from $5,995), AIDAN (2U dual-EPYC, up to 3 GPUs, from $9,995), AINSLEY (4U Threadripper, up to 4 GPUs, from $16,995), and AISHA (4U dual-Xeon, up to 8 GPUs, from $19,995). All ship with full warranty, US-based technical support, and no subscription or licensing fees tied to the AI software stack.

Further details are available at https://blog.eracks.com/2026/04/your-ai-server-your-software-stack/ and the full product lineup at https://eracks.com/products/ai-rackmount-servers/.

About eRacks Open Source Systems

eRacks Open Source Systems designs and builds custom Linux rackmount servers, NAS storage, and AI servers from its Santa Cruz, California facility. Founded in 1999, the company specializes in open-source-first hardware for organizations that require long-term operating system support, data sovereignty, and freedom from vendor lock-in. Customer segments include law firms, healthcare practices, financial services, research universities, media production studios, and IT consultancies.

Media Contact

Joe Wolff
eRacks Open Source Systems
joe@eracks.com
https://eracks.com