System Requirements

No container-per-agent overhead. No external workflow engines. Governed AI agents on commodity hardware — from a single server to enterprise air-gap clusters.

Sidjua Free — AGPL-3.0

Full-featured open-source edition. Everything you need to run governed AI agents on a single node.

Required Software

Every component listed here is needed for SIDJUA V1 to operate. There are no hidden dependencies and no optional-but-actually-required footnotes.

Operating System

  • Linux: Ubuntu 24.04 is the tested baseline (see the RAM budget and benchmark notes below)

Runtime

  • Node.js: one shared runtime hosts all agent processes
  • Docker with Docker Compose: runs the infrastructure services

Infrastructure Services (Docker)

  • PostgreSQL 16 (relational store)
  • Qdrant (vector database for knowledge collections)
  • OpenBao (secrets management)

All three ship as Docker containers in the provided docker-compose.yml. One command starts everything.
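
As an orientation, the three services could be declared in a Compose file along these lines. This is an illustrative sketch, not the shipped file: the repository's docker-compose.yml is authoritative, and the image tags, volume names, and environment values here are assumptions.

```yaml
services:
  postgres:
    image: postgres:16              # relational store
    environment:
      POSTGRES_PASSWORD: change-me  # placeholder; set your own secret
    volumes:
      - pg-data:/var/lib/postgresql/data
  qdrant:
    image: qdrant/qdrant            # vector database for knowledge collections
    volumes:
      - qdrant-data:/qdrant/storage
  openbao:
    image: openbao/openbao          # secrets management
    cap_add:
      - IPC_LOCK                    # lets the secrets store lock memory
volumes:
  pg-data:
  qdrant-data:
```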

Not Required

  • No container per agent: agents run as child processes of one shared Node.js runtime
  • No external workflow engine (Temporal or similar): orchestration is built in
  • No Redis

RAM Budget — Where the Memory Goes

Every component has a real cost. Here's what a V1 deployment actually consumes:

  Component                                  RAM
  Ubuntu 24.04 OS baseline                   ~800 MB
  Docker daemon                              ~150 MB
  PostgreSQL 16                              ~100 MB
  Qdrant                                     ~300 MB
  OpenBao                                    ~25 MB
  SIDJUA Core (Orchestrator + CLI)           ~200 MB
  Base overhead (before any agents)          ~1.6 GB

  Per Agent                                  RAM
  AgentProcess (basic)                       ~50–80 MB
  AgentProcess + loaded knowledge            ~80–120 MB
  SQLite per agent (encrypted, on disk)      ~1–10 MB disk
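
The budget above yields a quick back-of-envelope ceiling: subtract the ~1.6 GB base overhead from total RAM and divide by the per-agent cost. A minimal sketch in Python, using the worst-case ~120 MB per agent from the table (the function name and defaults are ours, not part of SIDJUA):

```python
def max_agents_by_ram(total_gb: float,
                      base_overhead_gb: float = 1.6,
                      per_agent_mb: float = 120) -> int:
    """Upper bound on agent count from RAM alone.

    Uses the worst-case ~120 MB per agent (process + loaded
    knowledge) from the budget table above.
    """
    free_mb = (total_gb - base_overhead_gb) * 1024
    return max(0, int(free_mb // per_agent_mb))

print(max_agents_by_ram(4))    # Personal tier, 4 GB -> 20 by RAM alone
print(max_agents_by_ram(16))   # Business tier, 16 GB -> 122 by RAM alone
```

By RAM alone the tiers could host more agents than their published counts; the listed ranges leave headroom, and in practice the API budget is usually the binding constraint (see "The Real Bottleneck" below).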

V1 Hardware Tiers

Personal

"My server, my rules"
  • RAM 4 GB total
  • CPU 2 cores
  • Disk 10 GB free
  • Agents 1–5
Old desktop repurposed as server, Raspberry Pi 5 (8 GB), cheap VPS. Solo dev, personal automation. Base overhead ~1.6 GB leaves ~2 GB for agents.

Small Team

"Startup with ambitions"
  • RAM 8 GB total
  • CPU 4 cores
  • Disk 20 GB free
  • Agents 5–30
Mini PC (Intel N100), mid-range VPS, refurbished server. One division, shared task queue, growing knowledge base.

Business

"Real departments, real governance"
  • RAM 16 GB total
  • CPU 4–8 cores
  • Disk 40 GB free
  • Agents 30–100+
Dell OptiPlex, Hetzner AX42, dedicated server. Multi-division setup, active knowledge pipeline, full audit trail.

Power User

"Everything the AGPL gives you"
  • RAM 32 GB+
  • CPU 8+ cores
  • Disk 100 GB+ free
  • Agents 100–200+
Dedicated rack server, beefy workstation. Large knowledge collections, many concurrent agents. Still single-node.

Why It's This Light

Most multi-agent platforms spin up a Docker container per agent, each carrying a full runtime, a workflow engine, and a database connection. SIDJUA doesn't. Agents are child processes sharing one Node.js runtime and encrypted SQLite databases. The governance layer, orchestrator, and task pipeline are built in.

  Metric                      Container-per-Agent            SIDJUA V1
  Memory per agent            ~1 GB (Docker container)       50–80 MB (Node.js process)
  Workflow engine overhead    ~340 MB (Temporal/similar)     0 MB (built-in)
  Disk per agent              ~3 GB (image + layers)         1–10 MB (SQLite + state)
  Agents on 16 GB             8–10                           100+ (with full infra stack)
  Agent cold start            ~30 seconds                    ~2 seconds
  External dependencies       Temporal, PostgreSQL, Redis    PG + Qdrant + OpenBao (all included)

The Real Bottleneck

On modern hardware, agent capacity is rarely limited by RAM or CPU. The practical ceiling is your LLM API budget. Ten agents making parallel API calls can cost more per hour than the server they run on costs per month. SIDJUA tracks every API call, every token, every cent — so you know where the money goes before it's gone.
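
That trade-off is easy to put in numbers. A rough estimator (every figure here, prices included, is a hypothetical placeholder; check your provider's current pricing):

```python
def monthly_api_cost(agents: int,
                     calls_per_agent_hour: float,
                     tokens_per_call: int,
                     usd_per_million_tokens: float,
                     hours_per_day: float = 8,
                     days: int = 30) -> float:
    """Rough monthly LLM spend for a fleet of agents."""
    calls = agents * calls_per_agent_hour * hours_per_day * days
    tokens = calls * tokens_per_call
    return tokens / 1_000_000 * usd_per_million_tokens

# 10 agents, 20 calls/hour each, 2,000 tokens per call,
# $5 per million tokens, 8 active hours a day:
print(monthly_api_cost(10, 20, 2000, 5.0))  # 480.0 USD/month
```

Under these assumed figures, ten busy agents cost an order of magnitude more per month than the budget VPS they run on.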

Network Modes

Cloud Mode

Internet access to LLM providers (Anthropic, OpenAI, Google, etc.). Standard deployment. Each API call is ~1–5 KB request, ~5–50 KB response.

Air-Gap Mode

Zero internet. Local LLMs via Ollama or vLLM, local embedding models, local secrets. All governance, audit, and knowledge stays on your infrastructure. Nothing phones home.

Hybrid Mode

Internal network for governance and audit. Internet for LLM API calls only. Configurable per agent — your compliance agents stay local while research agents reach the cloud.
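
Conceptually, per-agent mode selection is just a lookup from agent to network policy. The sketch below is illustrative only (it is not SIDJUA's actual configuration schema; the agent names and the restrictive default are invented):

```python
from enum import Enum

class NetworkMode(Enum):
    CLOUD = "cloud"        # internet LLM APIs allowed
    AIR_GAP = "air_gap"    # local inference only
    HYBRID = "hybrid"      # governance local, LLM calls remote

# Hypothetical per-agent assignment: compliance stays local,
# research may reach cloud providers.
AGENT_MODES = {
    "compliance-auditor": NetworkMode.AIR_GAP,
    "research-assistant": NetworkMode.CLOUD,
}

def allows_internet_llm(agent: str) -> bool:
    """Unknown agents default to the most restrictive mode."""
    return AGENT_MODES.get(agent, NetworkMode.AIR_GAP) in (
        NetworkMode.CLOUD, NetworkMode.HYBRID)

print(allows_internet_llm("research-assistant"))   # True
print(allows_internet_llm("compliance-auditor"))   # False
```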

Quick Install — Sidjua Free
# Clone and start (includes PostgreSQL + Qdrant + OpenBao)
git clone https://github.com/GoetzKohlberg/sidjua.git
cd sidjua && docker compose up -d

# Configure governance and provision your first agents
sidjua apply

# Base footprint: ~1.6 GB RAM, ~3 GB disk. Ready.

Sidjua Enterprise — Commercial License

Everything in Sidjua Free plus scale, compliance depth, and infrastructure for regulated environments.

Additional Infrastructure

V2 Enterprise builds on V1's full stack and adds components for high availability, compliance certification, and multi-node deployment.

V2 Additional Requirements

  • Kubernetes (single-node or multi-node cluster) or Docker Compose
  • For air-gap deployments: local LLM inference (Ollama or vLLM) and local embedding models

V2 Enterprise Features (Software)

  • Prometheus monitoring
  • Tamper-proof audit
  • MOODEX
  • OpenBao HA with Shamir unsealing
  • EU AI Act compliance documentation

Enterprise Single-Node

"Governed and certified"
  • RAM 32 GB+
  • CPU 8+ cores
  • Disk 200 GB+ SSD
  • Agents 50–200
On-prem server with full compliance stack. Prometheus monitoring, tamper-proof audit, MOODEX. Single-node Kubernetes or Docker Compose.

Enterprise Cluster

"Air-gapped, multi-node, zero trust"
  • RAM 64 GB+ per node
  • CPU 16+ cores per node
  • Nodes 3+ (HA minimum)
  • Agents 200–1000+
Multi-rack air-gap deployment. OpenBao HA with Shamir unsealing, Kubernetes cluster, local LLM inference, full EU AI Act compliance documentation.
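
Shamir unsealing means the OpenBao master key is split into n shares of which any k reconstruct it, so no single operator can unseal the vault. A toy illustration of k-of-n secret sharing over a prime field (educational sketch only; this is not OpenBao's implementation and must not be used for real keys):

```python
import random

PRIME = 2**127 - 1  # prime field large enough for the demo secret

def split(secret: int, k: int, n: int) -> list[tuple[int, int]]:
    """Split secret into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789, k=3, n=5)
print(combine(shares[:3]) == 123456789)  # any 3 of the 5 shares suffice
```

With fewer than k shares the interpolation yields no information about the secret, which is what makes a 3-of-5 unseal quorum meaningful for an air-gapped cluster.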

Real-World Sizing

V1 — Home Server (Intel i7, 16 GB)

50+ agents across 3 divisions. Full infra stack (PG + Qdrant + OpenBao). Active knowledge pipeline. Total SIDJUA footprint: ~4.5 GB. Remaining for agents: ~11 GB. API budget: ~$50–100/month.

V1 — Budget VPS (4 GB RAM)

3 agents (Researcher, Writer, Reviewer). Full infra stack consumes ~1.6 GB. Leaves ~2 GB for agents + headroom. API budget: ~$5–10/month. The entry point for anyone who wants governed AI on the cheap.

V2 — Enterprise Air-Gap (3 × 64 GB nodes)

500+ agents across 8 divisions. Local LLM inference (Ollama/vLLM), local embedding. OpenBao HA cluster, Prometheus monitoring, tamper-proof audit. Full EU AI Act compliance pack. No external API costs — everything on-prem.


Performance figures based on internal benchmarks on commodity hardware (Intel i7-3770, 16 GB RAM, Ubuntu 24.04, February 2026). Actual consumption varies with agent complexity, knowledge collection sizes, and concurrent load. Sidjua Free (AGPL-3.0) includes all listed V1 features. Sidjua Enterprise features require a commercial license. Patent Pending.