Acknowledgments
SIDJUA doesn't exist in a vacuum. Our work builds on decades of scientific research, draws inspiration from a growing open-source ecosystem, and engages with cutting-edge academic inquiry.
I want to be direct about something: the people and projects listed on this page didn't just "inspire" us — we studied their work carefully, and where their ideas proved valuable, we adopted and built on them. That doesn't mean we copied their code or incorporated their implementations. It means their thinking influenced our approach, and we believe in giving credit where it's due. As SIDJUA grows, I intend to recognize these contributions in a way that reflects their actual value — not with a handshake and a footnote, but appropriately and substantively.
Until then, thank you. This page is a promise, not just a list.
Scientific Foundations
MOODEX — our Mood Expression Index for AI agent affective state monitoring — builds on pioneering research in affective science and dimensional emotion modeling. The researchers cited below provide the scientific bedrock that MOODEX adapts for AI agent governance.
James A. Russell — Circumplex Model of Affect (1980)
Russell's dimensional model maps emotional states along two axes: Valence (pleasure–displeasure) and Arousal (activation–deactivation). This framework allows continuous rather than categorical representation of affect — exactly what's needed for monitoring AI agent states that don't fit neatly into human emotion labels.
Russell, J. A. (1980). "A circumplex model of affect." Journal of Personality and Social Psychology, 39(6), 1161–1178.
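To make the dimensional idea concrete, here is a minimal sketch of a circumplex-style affect point. This is an illustration only, not MOODEX's actual implementation; the class name, the [-1, 1] normalization, and the helper methods are all our assumptions for the example.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class AffectPoint:
    """A point on Russell's circumplex; both axes normalized to [-1, 1]."""
    valence: float   # displeasure (-1) .. pleasure (+1)
    arousal: float   # deactivation (-1) .. activation (+1)

    def angle_deg(self) -> float:
        """Angular position on the circumplex (0 deg = pleasant, 90 deg = activated)."""
        return math.degrees(math.atan2(self.arousal, self.valence)) % 360

    def distance(self, other: "AffectPoint") -> float:
        """Euclidean distance between two affective states."""
        return math.hypot(self.valence - other.valence,
                          self.arousal - other.arousal)

# "Excited" (pleasant, activated) vs. "calm" (pleasant, deactivated):
excited = AffectPoint(valence=0.7, arousal=0.7)
calm = AffectPoint(valence=0.7, arousal=-0.6)
print(round(excited.angle_deg()))        # 45
print(round(excited.distance(calm), 2))  # 1.3
```

Because the representation is continuous, two states can be compared by distance rather than forced into discrete emotion labels — the property the paragraph above highlights for AI agent monitoring.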
Jaak Panksepp — Affective Neuroscience (1998)
Panksepp identified seven primary emotional systems — SEEKING, RAGE, FEAR, LUST, CARE, PANIC/GRIEF, and PLAY — grounded in subcortical brain circuits. His work demonstrated that emotional processes are fundamental to behavior, not secondary to cognition. MOODEX draws on this insight: monitoring affective states isn't cosmetic — it's operationally critical.
Panksepp, J. (1998). Affective Neuroscience: The Foundations of Human and Animal Emotions. Oxford University Press.
Albert Mehrabian — PAD Model (1996)
Mehrabian's three-dimensional model adds Dominance (control–submissiveness) to the Pleasure and Arousal dimensions. This third axis is particularly relevant for AI governance — an agent's sense of control over its task environment directly impacts decision quality and escalation behavior.
Mehrabian, A. (1996). "Pleasure-Arousal-Dominance: A General Framework for Describing and Measuring Individual Differences in Temperament." Current Psychology, 14, 261–292.
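A small sketch of how the third PAD dimension could feed a governance decision. This is hypothetical illustration code, not MOODEX: the `PADState` class, the `should_escalate` helper, and the threshold values are all invented for the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PADState:
    """Mehrabian's three PAD dimensions, each normalized to [-1, 1]."""
    pleasure: float   # displeasure (-1) .. pleasure (+1)
    arousal: float    # deactivation (-1) .. activation (+1)
    dominance: float  # submissiveness (-1) .. control (+1)

def should_escalate(state: PADState, dominance_floor: float = -0.5) -> bool:
    """Flag states where an agent reports low control over its task
    environment while highly activated -- a plausible hand-off trigger."""
    return state.dominance < dominance_floor and state.arousal > 0.5

overwhelmed = PADState(pleasure=-0.4, arousal=0.8, dominance=-0.7)
in_control = PADState(pleasure=0.3, arousal=0.8, dominance=0.6)
print(should_escalate(overwhelmed))  # True
print(should_escalate(in_control))   # False
```

The point of the sketch: pleasure and arousal alone cannot distinguish these two states (both agents are equally activated); only the dominance axis separates the agent that should escalate from the one that should proceed.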
emergence Framework (Papert, 2025)
An AI agent framework that demonstrated the feasibility of modeling affective states in AI agents using three basic drives. This project showed that the theoretical foundations from affective science could be practically implemented in software agents — an important proof of concept that informed our approach.
Open-Source Projects We Studied
Building AI governance requires understanding what exists in the multi-agent landscape. We studied these open-source projects extensively. They informed our understanding of the field's current state, its strengths, and the gaps we aim to address. We encourage exploration of their work.
The OpenClaw Ecosystem
The OpenClaw project was the catalyst for SIDJUA. Studying its architecture revealed both the promise and the governance gaps in autonomous AI agent systems.
- OpenClaw by Peter Steinberger (MIT) — Personal AI assistant framework
- ClawControl by Jacob L. Edwards / Oaken Cloud Technologies (MIT) — Desktop and mobile client
- Clawdentity by Ravi Kiran Vemula (MIT) — Cryptographic identity for AI agent-to-agent trust
- ClawRouter-Reference — Routing reference implementation
Agent Governance and Monitoring
- Agent-Drift by lukehebe (MIT) — Runtime behavioral monitoring for AI agents (IDS/SIEM approach applied to agent systems)
- Lighthouse-AI (Apache 2.0) — Operations toolkit for persistent LLM agents
Multi-Agent Frameworks
- CCCC (Apache 2.0) — Local-first multi-agent collaboration kernel
- R.E.A.L. Framework by Jeffrey Rosa (CC BY-NC 4.0) — Roleplay, Explore, Analyze, Launch methodology
- Project Orchestrator — Knowledge graph coordination for AI coding agents
Infrastructure
- RelayPlane (MIT) — LLM cost tracking proxy and routing
- Unsurf by Jordan Coeyman (MIT) — Website-to-typed-API conversion
Research Papers
These academic works directly informed SIDJUA's architecture, our insight articles, and our patent filings.
The Self-Evolution Trilemma
Wang, C., et al. (2026). "The Devil Behind Moltbook: Anthropic Safety is Always Vanishing in Self-Evolving AI Societies." arXiv:2602.09877v2
Uses the Data Processing Inequality to prove that an AI agent society cannot simultaneously achieve continuous self-evolution, complete isolation, and safety invariance. SIDJUA's architecture maps directly to all four mitigation strategies the paper proposes. See our analysis: The Self-Evolution Trilemma.
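For readers unfamiliar with the tool behind the proof: the Data Processing Inequality is a standard result in information theory (the statement below is the textbook form, not quoted from the paper). If $X \to Y \to Z$ form a Markov chain, i.e. $Z$ is computed only from $Y$, then

$$
I(X; Z) \le I(X; Y),
$$

where $I(\cdot\,;\cdot)$ is mutual information. Informally, no amount of downstream processing can recover information about $X$ that was lost at the $Y$ stage — the kind of irreversibility argument the paper leverages to show that safety properties degrade across self-evolution steps.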
The Moltbook Illusion
Li, N. (2026). "The Moltbook Illusion." Tsinghua University. arXiv:2602.07432
Temporal analysis of autonomous vs. human-operated agents on the Moltbook platform, demonstrating the divergence patterns that emerge without governance structures.
The Sentient Futures Summit (2026)
Reporting by the San Francisco Standard on the February 2026 summit where 250 AI researchers debated moral consideration for AI systems. See our analysis: The Sentient Futures Summit and Why Enterprise AI Governance Can't Wait.
This page will grow as our work continues. If you believe we've missed an acknowledgment or if your project is listed here and you'd like to discuss collaboration, please reach out.