System Requirements
No container-per-agent overhead. No external workflow engines. Governed AI agents on commodity hardware — from a single server to enterprise air-gap clusters.
Sidjua Free — AGPL-3.0
Full-featured open-source edition. Everything you need to run governed AI agents on a single node.
Required Software
Every component listed here is needed for SIDJUA V1 to operate. There are no hidden dependencies and no optional-but-actually-required footnotes.
Operating System
- Ubuntu 24.04 LTS (or later) — the only supported production OS
- Other Debian-based distributions may work but are untested and unsupported
- macOS and Windows are not supported for running SIDJUA itself (development contributions welcome)
Runtime
- Node.js ≥22 — ESM + CJS dual-module support required
- npm ≥10 — package management
- Docker ≥24 — runs the infrastructure services below
- Docker Compose v2 — orchestrates the service stack
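The runtime list above can be verified with a short preflight script. This is a sketch, not part of the SIDJUA tooling: it only checks that each binary exists and prints whatever version it reports, leaving the threshold comparison to the reader.

```shell
#!/usr/bin/env sh
# Preflight check for the runtime requirements listed above.
# Thresholds come from this page; the script itself is illustrative.

need() {
  # $1 = command name, $2 = human-readable requirement
  if command -v "$1" >/dev/null 2>&1; then
    printf '%-16s %s\n' "$1" "$("$1" --version 2>/dev/null | head -n 1)"
  else
    printf '%-16s MISSING (%s)\n' "$1" "$2"
  fi
}

need node   "Node.js >= 22"
need npm    "npm >= 10"
need docker "Docker >= 24"

# Compose v2 is a docker plugin, so probe it separately.
docker compose version >/dev/null 2>&1 \
  && echo "docker compose   OK (v2 plugin)" \
  || echo "docker compose   MISSING (Compose v2)"
```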
Infrastructure Services (Docker)
- PostgreSQL 16+ — shared state, raw knowledge storage, audit log. ~100 MB RAM
- Qdrant — vector search for the knowledge pipeline. ~300 MB RAM
- OpenBao — secrets management (Vault-compatible). API keys, tokens, credentials. ~25 MB RAM
All three ship as Docker containers in the provided docker-compose.yml. One command starts everything.
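Bringing the stack up and confirming the three services are healthy can look like the following. The service name, ports, and health endpoints used here (a `postgres` service, Qdrant's `/healthz` on 6333, OpenBao's `/v1/sys/health` on 8200) are illustrative defaults, not read from the repository's docker-compose.yml; adjust them to match the actual file.

```shell
# Start the stack in the background.
docker compose up -d

# Wait until PostgreSQL accepts connections. pg_isready ships in the
# postgres image, so run it inside the container; give up after ~30s.
tries=0
until docker compose exec -T postgres pg_isready -q; do
  tries=$(( tries + 1 ))
  [ "$tries" -ge 30 ] && { echo "postgres not ready after 30s" >&2; break; }
  sleep 1
done

# Qdrant and OpenBao both expose HTTP health endpoints.
curl -fsS http://localhost:6333/healthz           # Qdrant
curl -fsS http://localhost:8200/v1/sys/health     # OpenBao
```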
Not Required
- No Kubernetes — single-node Docker Compose is sufficient
- No Temporal or external workflow engine — SIDJUA has a built-in task pipeline
- No Redis, no Elasticsearch, no RabbitMQ
- No container per agent — all agents are Node.js child processes
- No GPU — LLM inference is API-based; local models are optional for air-gap mode
RAM Budget — Where the Memory Goes
Every component has a real cost. Here's what a V1 deployment actually consumes:
| Component | RAM |
|---|---|
| Ubuntu 24.04 OS baseline | ~800 MB |
| Docker daemon | ~150 MB |
| PostgreSQL 16 | ~100 MB |
| Qdrant | ~300 MB |
| OpenBao | ~25 MB |
| SIDJUA Core (Orchestrator + CLI) | ~200 MB |
| Base overhead (before any agents) | ~1.6 GB |
| Per Agent | Footprint |
|---|---|
| AgentProcess (basic) | ~50–80 MB |
| AgentProcess + loaded knowledge | ~80–120 MB |
| SQLite per agent (encrypted, on disk) | ~1–10 MB disk |
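The budget above reduces to simple arithmetic: a fixed base plus a per-agent increment. A back-of-envelope sketch using this page's own figures (1.6 GB base, upper-end 80 MB per basic agent):

```shell
# RAM estimate for N agents, using the figures from the tables above.
# All values in MB; numbers are this page's estimates, not measurements.
BASE=1600        # OS + Docker + PG + Qdrant + OpenBao + SIDJUA Core
PER_AGENT=80     # upper end of a basic AgentProcess

agents=50
total=$(( BASE + agents * PER_AGENT ))
echo "${agents} agents need roughly ${total} MB (~$(( total / 1024 )) GB)"
```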
V1 Hardware Tiers
Personal
- RAM 4 GB total
- CPU 2 cores
- Disk 10 GB free
- Agents 1–5
Small Team
- RAM 8 GB total
- CPU 4 cores
- Disk 20 GB free
- Agents 5–30
Business
- RAM 16 GB total
- CPU 4–8 cores
- Disk 40 GB free
- Agents 30–100+
Power User
- RAM 32 GB+
- CPU 8+ cores
- Disk 100 GB+ free
- Agents 100–200+
Why It's This Light
Most multi-agent platforms spin up a Docker container per agent, each carrying a full runtime, a workflow engine, and a database connection. SIDJUA doesn't. Agents are child processes sharing one Node.js runtime and encrypted SQLite databases. The governance layer, orchestrator, and task pipeline are built in.
| Metric | Container-per-Agent | SIDJUA V1 |
|---|---|---|
| Memory per agent | ~1 GB (Docker container) | 50–80 MB (Node.js process) |
| Workflow engine overhead | ~340 MB (Temporal/similar) | 0 MB (built-in) |
| Disk per agent | ~3 GB (image + layers) | 1–10 MB (SQLite + state) |
| Agents on 16 GB | 8–10 | 100+ (with full infra stack) |
| Agent cold start | ~30 seconds | ~2 seconds |
| External dependencies | Temporal, PostgreSQL, Redis | PG + Qdrant + OpenBao (all included) |
The Real Bottleneck
On modern hardware, agent capacity is rarely limited by RAM or CPU. The practical ceiling is your LLM API budget. Ten agents making parallel API calls can cost more per hour than the server they run on costs per month. SIDJUA tracks every API call, every token, every cent — so you know where the money goes before it's gone.
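To see why the API bill dominates, a back-of-envelope helps. The call rate, tokens per call, and per-million-token price below are all assumed placeholders for illustration, not provider quotes:

```shell
# Illustrative token-cost arithmetic. Every constant here is an
# assumption chosen for the example, not a real price or benchmark.
AGENTS=10
CALLS_PER_HOUR=60          # one call per agent per minute
TOKENS_PER_CALL=2000       # prompt + completion, assumed
USD_PER_MTOK=5             # assumed blended $/million tokens

tokens_per_hour=$(( AGENTS * CALLS_PER_HOUR * TOKENS_PER_CALL ))
# Work in cents to keep integer arithmetic: tokens * $/Mtok * 100 / 1e6
cents_per_hour=$(( tokens_per_hour * USD_PER_MTOK * 100 / 1000000 ))
echo "${AGENTS} agents burn ~${tokens_per_hour} tokens/h, ~${cents_per_hour} cents/h"
```

At these placeholder rates ten agents cost about $6/hour, which over a month dwarfs the price of the server they run on.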
Network Modes
Cloud Mode
Internet access to LLM providers (Anthropic, OpenAI, Google, etc.). Standard deployment. Each API call is ~1–5 KB request, ~5–50 KB response.
Air-Gap Mode
Zero internet. Local LLMs via Ollama or vLLM, local embedding models, local secrets. All governance, audit, and knowledge stays on your infrastructure. Nothing phones home.
Hybrid Mode
Internal network for governance and audit. Internet for LLM API calls only. Configurable per agent — your compliance agents stay local while research agents reach the cloud.
# Clone and start (includes PostgreSQL + Qdrant + OpenBao)
git clone https://github.com/GoetzKohlberg/sidjua.git
cd sidjua && docker compose up -d
# Configure governance and provision your first agents
sidjua apply
# Base footprint: ~1.6 GB RAM, ~3 GB disk. Ready.
Sidjua Enterprise — Commercial License
Everything in Sidjua Free plus scale, compliance depth, and infrastructure for regulated environments.
Additional Infrastructure
V2 Enterprise builds on V1's full stack and adds components for high availability, compliance certification, and multi-node deployment.
V2 Additional Requirements
- Kubernetes — multi-node orchestration, auto-scaling, rolling deployments
- OpenBao HA-Cluster — multi-node secrets with Shamir key splitting
- Prometheus + Grafana — enterprise monitoring, alerting, SLA tracking
- systemd integration — process management, watchdog, auto-restart
- LDAP / SAML / SSO provider — enterprise identity management
V2 Enterprise Features (Software)
- MOODEX — patented AI agent affective state monitoring
- CDP — Calibration Dialogue Protocol for agent alignment
- Tamper-Proof Audit — cryptographically signed, legally defensible logs
- Compliance Packs — EU AI Act, GDPR, NIS2 documentation generators
- SLA Engine — escalation based on service level agreements
- Brainpool Module — multi-LLM consensus for critical decisions
- Auditor Module — isolated Docker container with local LLM for compliance verification
- Encrypted Agent Communication — E2E encryption between agent processes
- Enterprise Tiering — T4–T7 agent tiers for deep organizational hierarchies
Enterprise Single-Node
- RAM 32 GB+
- CPU 8+ cores
- Disk 200 GB+ SSD
- Agents 50–200
Enterprise Cluster
- RAM 64 GB+ per node
- CPU 16+ cores per node
- Nodes 3+ (HA minimum)
- Agents 200–1000+
Real-World Sizing
V1 — Home Server (Intel i7, 16 GB)
50+ agents across 3 divisions. Full infra stack (PG + Qdrant + OpenBao). Active knowledge pipeline. Total SIDJUA footprint: ~4.5 GB. Remaining for agents: ~11 GB. API budget: ~$50–100/month.
V1 — Budget VPS (4 GB RAM)
3 agents (Researcher, Writer, Reviewer). Full infra stack consumes ~1.6 GB. Leaves ~2 GB for agents + headroom. API budget: ~$5–10/month. The entry point for anyone who wants governed AI on the cheap.
V2 — Enterprise Air-Gap (3 × 64 GB nodes)
500+ agents across 8 divisions. Local LLM inference (Ollama/vLLM), local embedding. OpenBao HA cluster, Prometheus monitoring, tamper-proof audit. Full EU AI Act compliance pack. No external API costs — everything on-prem.
Performance figures based on internal benchmarks on commodity hardware (Intel i7-3770, 16 GB RAM, Ubuntu 24.04, February 2026). Actual consumption varies with agent complexity, knowledge collection sizes, and concurrent load. Sidjua Free (AGPL-3.0) includes all listed V1 features. Sidjua Enterprise features require a commercial license. Patent Pending.