This is the product of eight months of work. ARES is live, and what began as a personal obsession rooted in my own autoimmune disease has become one of the most technically and philosophically challenging projects I have ever undertaken.

I built a graph-based reasoning architecture modeled on the mechanisms of autoimmune disease, the same disease I have. The core idea was to take the biological mechanisms that govern how human cells behave when the body attacks itself and encode those patterns into a graph-based AI framework for cybersecurity threat analysis. That meant spending months mapping the arbitration logic of the immune system: understanding not just how it detects threats, but how it decides, escalates, and sometimes catastrophically fails.

The result is a system that treats threat detection the way the body treats pathogens, with structured agents that propose, challenge, and synthesize verdicts rather than relying on a single model to make a confident guess with no accountability.

What the research revealed, however, goes beyond cybersecurity. LLM agents, as currently architected, do not genuinely negotiate toward truth. Without explicit structural scaffolding, they fall back on mimicking negotiation, which produces capitulation, rigidity, and over-correction rather than calibrated reasoning. Structured dialectical debate, left uncontrolled, actively degrades their accuracy rather than improving it.

That is not a failure of the project; it is a finding, and it maps directly to how autoimmune systems fail: the body's defenses turn against it because the arbitration logic is broken. The goal from here is to build a sentinel that does not replicate the immune system's flaws but perfects its strengths, one that is logical by principle and fair by math.
A link to the project, along with a live WebSocket visualization tied directly to simulated cybersecurity attacks in a contained environment, is included below.