eRacks/AIDAN
Meet eRacks/AIDAN - "The RAMstack": The Smarter AI Server for Real Workloads
Tired of AI servers that cost more than your car and come preloaded with GPUs you can't upgrade, don't need, or can't even buy? Meet the RAMstack, a new kind of rackmount AI server—designed for developers, researchers, and AI engineers who value real memory, practical performance, and open-source flexibility over hype.
🧠 Big Brain, Not Big Budget
eRacks/AIDAN, aka "The RAMstack", is built to handle modern AI workloads—especially large language models (LLMs) and memory-intensive inference—by focusing on total system RAM, not just raw GPU horsepower. That means fewer compromises, less swapping, and better performance on workloads that demand fast access to massive in-memory datasets.
- Up to 3TB DDR5 ECC RAM
- Dual AMD EPYC CPU architecture for massive parallel memory access and enterprise-grade reliability
- Optimized for retrieval-augmented generation (RAG), vector search, and local LLMs
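As a rough illustration of why RAM capacity matters for vector search, here is a minimal sketch of brute-force in-memory cosine-similarity retrieval in plain Python. The corpus and query vectors are toy data; a real RAG deployment would hold millions of embeddings in memory, typically via a vector library or database.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" held entirely in RAM -- the access pattern that a
# large-memory system is designed to scale to millions of vectors.
corpus = {
    "doc1": [0.9, 0.1, 0.0],
    "doc2": [0.1, 0.9, 0.0],
    "doc3": [0.0, 0.2, 0.8],
}

def top_match(query):
    """Return the key of the corpus vector most similar to the query."""
    return max(corpus, key=lambda k: cosine(corpus[k], query))

print(top_match([1.0, 0.0, 0.0]))  # most similar document: doc1
```

Because every vector stays resident in RAM, each query is a single linear scan with no disk or network round trips, which is why total memory, not GPU count, bounds corpus size for this workload.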
💡 GPUs You Can Actually Get
Forget those $20k H100s. RAMstack is built around commercial off-the-shelf (COTS) GPUs—like NVIDIA RTX and ROCm-supported AMD cards—that are powerful, available, and replaceable. Perfect for fine-tuning, quantized inference, and mid-scale training.
- Up to 3 GPUs: Compatible with widely available RTX 4000- and 6000-series cards and AMD Instinct accelerators
- Modular GPU trays: swap or scale as you grow
- Full support for mixed precision inference and quantized models
- iKVM / BMC / IPMI / Redfish
🛠️ 100% Local. 100% Open.
Run your models locally, privately, and securely. RAMstack ships fully configured with a hardened Linux stack, and integrates seamlessly with the open-source AI ecosystem.
- Pre-installed with: Ubuntu Server, Docker, and Open-Source LLM frameworks (Ollama, LM Studio, vLLM, LangChain, and more)
- Designed for local-first AI workflows—no cloud lock-in, no vendor nonsense
- Enterprise-ready with remote management (IPMI/Redfish) and optional air-gapped configuration
🔧 Built for Builders
Whether you're serving LLMs to your team, testing new agents, or doing on-prem R&D, RAMstack gives you the hardware you need and the freedom you want. Less marketing fluff, more meaningful compute.
- 2U and 4U rackmount chassis options
- Tool-less drive bays and hot-swappable PSU
- NVMe storage optimized for fast dataset streaming
Ready to Own Your AI Stack?
RAMstack gives you the power of local AI without the cloud costs, GPU waitlists, or hardware overkill. Finally—a practical AI server for people who actually use it.
Features & Specifications
🔍 Summary
| Feature | Spec / Benefit |
|---|---|
| RAM Capacity | Up to 3TB ECC DDR5 |
| CPU | Up to 2 (Dual) AMD EPYC processors |
| GPU Support | Up to 3 COTS GPUs (RTX 4060–4090, A6000, etc.) |
| Form Factor | Rackmount 2U or 4U, tool-less serviceability |
| Local-Only | Zero cloud dependency |
| Software | 100% Open-Source |
| OS Options | Ubuntu, Rocky, Proxmox |
Default Configuration
- Chassis: eRacks/2U-3GPU-8SSD
- Power Supply: 1600W Redundant PSU 80+ Platinum
- Motherboard: eRacks-certified Dual AMD EPYC motherboard
- CPU: Dual AMD EPYC 9004 Series processors (configurable)
- Memory: Up to 3TB DDR5 ECC/REG RAM (24x 128GB modules)
- GPUs: Up to 3x NVIDIA RTX / AMD / Intel A/B Series GPUs
- SATA / SSD Drives: 2x 2TB 2.5" SATA/NVMe SSD
- OS Drive(s): 2x NVMe SSD 500GB-class, Mirrored
- Network Interfaces: 2x 10GbE Intel Network Ports, RJ45
- Operating System: Ubuntu Linux LTS
- Remote Management: iKVM / BMC / IPMI / Redfish
- Warranty: Standard 1yr full / 3yr limited warranty, 5yr Manufacturer's HDD Warranty
- AI Software: Ollama installed, Llama3:70B model, OpenWebUI, others on request
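A local Ollama install like the one shipped by default exposes an HTTP API on port 11434. The following is a minimal Python client sketch; the model name and prompt are illustrative, and the final call assumes the Ollama service is actually running.

```python
import json
import urllib.request

# Ollama's local API listens on port 11434 by default.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model, prompt):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model, prompt):
    """Send a prompt to the local Ollama server and return its reply."""
    data = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama service to be running):
# ask("llama3:70b", "Summarize the benefits of on-prem inference.")
```

Because the endpoint is plain HTTP on localhost, the same pattern works from any language or from OpenWebUI, with no cloud credentials involved.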
Configure eRacks/AIDAN
Choose the desired options and click "Add to Cart" or "Get a Quote". Please add any additional requests and information in the "Notes" field. Your quote request will be sent to your profile's email address if you are logged in; otherwise, enter an email address below (required only if not logged in).
eRacks Open Source Systems