Open-source local-first cognitive memory. Same AGM-compliant math as commercial state-of-the-art. Plus the thing nobody ships: when a fact changes, dependent beliefs are re-evaluated, not just flagged.
Every memory system maps to one of six levels, from native CLAUDE.md to OpenBrain's cross-tool Postgres. They all answer the same question:
Atlas answers a different question:
That's a Level 7 problem. Atlas runs on top of any of the 6 lower levels. Every memory system flags affected beliefs when a fact changes. Atlas is the only one that re-evaluates them.
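The distinction can be made concrete with a toy dependency graph. Everything here (the `Belief` class, the `ripple` function, the example beliefs) is illustrative, not the Atlas API: it only sketches what "re-evaluated, not just flagged" means.

```python
# Minimal sketch of re-evaluation vs. flagging. Names are illustrative,
# not the Atlas API.

class Belief:
    def __init__(self, name, value=None, derive=None, deps=()):
        self.name = name
        self.value = value
        self.derive = derive  # how this belief is recomputed from its deps
        self.deps = list(deps)

def ripple(changed, graph):
    """Re-evaluate every belief downstream of `changed`, in dependency order.

    `graph` is assumed topologically sorted. A flag-only system would stop
    at marking dependents stale; here each dependent is recomputed.
    """
    updated = {changed.name}
    for b in graph:
        if b.deps and any(d.name in updated for d in b.deps):
            old = b.value
            b.value = b.derive(*[d.value for d in b.deps])
            if b.value != old:
                updated.add(b.name)
    return updated

# Plant an upstream fact and a downstream belief derived from it.
price = Belief("unit_price", 40)
volume = Belief("monthly_units", 100)
revenue = Belief("monthly_revenue",
                 derive=lambda p, v: p * v, deps=(price, volume))
revenue.value = revenue.derive(price.value, volume.value)

# Change the upstream fact; the downstream belief is recomputed.
price.value = 55
changed = ripple(price, [price, volume, revenue])
print(sorted(changed))   # ['monthly_revenue', 'unit_price']
print(revenue.value)     # 5500
```

A flag-only system would leave `monthly_revenue` at its stale value with a marker on it; the cascade above is the Level 7 behavior the rest of this README measures.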
| | Atlas | Kumiho | Graphiti | Mem0 | Letta | Memori |
|---|---|---|---|---|---|---|
| Open-source | ✅ | ❌ | ✅ | ✅ | ✅ | ✅ |
| Local-first | ✅ | cloud | ✅ | partial | ✅ | ✅ |
| AGM K*2–K*6 | 100% | 100% | ❌ | ❌ | ❌ | ❌ |
| Hansson postulates | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Hash-chained ledger | SHA-256 | partial | ❌ | ❌ | ❌ | ❌ |
| Auto downstream reassessment | ✅ Ripple | flag-only | ❌ | ❌ | ❌ | ❌ |
| Domain ontology shipped | 8 types | ❌ | ❌ | ❌ | ❌ | partial |
| Multi-stream ingestion | 6 streams | SDK | ❌ | ❌ | ❌ | partial |
| Hermes / OpenClaw / MCP | all 3 | partial | ❌ | partial | ❌ | ❌ |
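The hash-chained ledger row can be sketched in a few lines: each entry commits to the previous entry's SHA-256 digest, so tampering with any entry breaks verification from that point on. The entry shape (`prev` / `payload` / `hash`) is illustrative, not the Atlas ledger schema.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(ledger, payload):
    """Append an entry whose hash covers both the payload and the prior hash."""
    prev = ledger[-1]["hash"] if ledger else GENESIS
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    ledger.append({"prev": prev, "payload": payload,
                   "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(ledger):
    """Recompute every hash from genesis; any edit anywhere fails the chain."""
    prev = GENESIS
    for e in ledger:
        body = json.dumps({"prev": prev, "payload": e["payload"]}, sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

ledger = []
append_entry(ledger, {"op": "assert", "belief": "unit_price=40"})
append_entry(ledger, {"op": "revise", "belief": "unit_price=55"})
print(verify(ledger))                               # True
ledger[0]["payload"]["belief"] = "unit_price=99"    # tamper with history
print(verify(ledger))                               # False
```

This is why "partial" chains are worth calling out: a chain only gives tamper-evidence if every entry's hash is actually recomputed and checked end to end.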
149-question deterministic subset (83 base templates × paraphrase variants), seed 42. The 200-question human-authored gold subset and LLM expansion to 1,000 follow. Every cell is measured against live Neo4j 5.26; none are predicted. Reproducible in ≤30 seconds with scripts/run_bmb.py.
| System | Overall | prop | contra | line | cross | hist | prov | forget |
|---|---|---|---|---|---|---|---|---|
| Vanilla (no memory) | 0.000 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |
| Graphiti | 0.711 | 0.33 | 0.00 | 1.00 | 0.00 | 1.00 | 1.00 | 0.00 |
| Atlas | 1.000 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 | 1.00 |
Atlas wins by +28.9 points over Graphiti, the closest open-source neighbor, on a benchmark we publicly release. All seven categories at 100%. The three where Graphiti also scores 1.00 (lineage, historical, provenance) are the ones a typed graph alone can answer. Mem0 / Letta / Memori scores will land once API keys are pinned in CI.
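The "seed 42" determinism above can be sketched as follows. The counts (83 base templates, paraphrase variants) come from the text; the selection scheme itself is an assumption for illustration, not the actual run_bmb.py logic.

```python
import random

def build_subset(templates, variants_per_template, seed=42):
    """Derive a benchmark subset deterministically from a fixed seed.

    Each base template always contributes its base question; a seeded RNG
    picks which paraphrase variants join it, so every run selects the
    identical question set.
    """
    rng = random.Random(seed)
    questions = []
    for t in range(templates):
        questions.append((t, 0))  # base question, variant index 0
        k = rng.randint(0, variants_per_template)
        questions.extend(
            (t, v) for v in rng.sample(range(1, variants_per_template + 1), k)
        )
    return questions

a = build_subset(83, 3)
b = build_subset(83, 3)
print(len(a), a == b)  # same seed, identical subset on every run
```

Seeded selection is what makes the head-to-head table reproducible: the scores compare systems on literally the same 149 questions, not a fresh sample per run.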
Below is the real Atlas pipeline running end-to-end against live Neo4j 5.26: not a screenshot, an asciicast (re-runnable in 4 seconds with scripts/demo_loop.py). Plant an upstream belief, plant a downstream belief that depends on it, change the upstream, watch Ripple cascade, write the strategic conflict to an Obsidian markdown queue, resolve via AGM revise, verify the SHA-256 ledger chain. Seven stages. Zero stubs.
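The "resolve via AGM revise" stage follows the classic Levi identity, K * φ = (K − ¬φ) + φ: contract away the negation of the incoming fact, then expand with the fact itself. The toy below models beliefs as literal strings with a leading `~` for negation; it is a sketch of the identity, not the Atlas implementation.

```python
def neg(p):
    """Negate a literal: 'x' <-> '~x'."""
    return p[1:] if p.startswith("~") else "~" + p

def contract(K, p):
    """Toy contraction: drop the literal itself (real AGM contraction
    must also satisfy Hansson-style minimal-change postulates)."""
    return {q for q in K if q != p}

def expand(K, p):
    """Expansion: add the new belief outright."""
    return K | {p}

def revise(K, p):
    """Levi identity: K * p = (K - neg(p)) + p."""
    return expand(contract(K, neg(p)), p)

K = {"competitor_raised_prices", "~we_should_raise_prices"}
K2 = revise(K, "we_should_raise_prices")
print(sorted(K2))
# success postulate (K*2): the incoming belief is in the revised set
print("we_should_raise_prices" in K2)   # True
```

The point of the K*2–K*6 row in the comparison table is exactly this: revision must land the new belief while removing what contradicts it, rather than storing both and flagging the clash.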
One-author corpus (live Obsidian vault + 5,000 Limitless transcripts + 300 Screenpipe rows + 5,000 Claude Code session logs):
# 1. Clone + start Neo4j
git clone https://github.com/RichSchefren/atlas && cd atlas
docker compose up -d
# 2. Install
python -m venv .venv && source .venv/bin/activate
pip install -e .
# 3. Verify the test suite (469 tests, ~12s)
PYTHONPATH=. pytest tests/ -v
# 4. Reproduce AGM compliance (49/49 at 100%, ~30s)
PYTHONPATH=. pytest tests/integration/test_agm_compliance.py -v
# 5. Reproduce BusinessMemBench head-to-head (~3s)
PYTHONPATH=. python scripts/run_bmb.py
# vanilla 0.000 · graphiti 0.675 · atlas 0.952
# 6. First real ingest from your own vault
ATLAS_VAULT_ROOT=~/Documents/Obsidian \
PYTHONPATH=. python scripts/first_real_run.py
# 7. Watch the loop close, end to end (terminal screencap source)
PYTHONPATH=. python scripts/demo_loop.py
Plug Atlas into your agent runtime via MCP (Claude Code), Hermes MemoryProvider, or OpenClaw memory plugin.
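For the MCP route, a Claude Code project-level `.mcp.json` entry might look like the following. The `atlas.mcp_server` module name and the Neo4j URI are assumptions for illustration; check the repo for the actual server entry point.

```json
{
  "mcpServers": {
    "atlas": {
      "command": "python",
      "args": ["-m", "atlas.mcp_server"],
      "env": { "NEO4J_URI": "bolt://localhost:7687" }
    }
  }
}
```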
Atlas implements the AGM correspondence proofs from Young Bin Park, *Graph-Native Cognitive Memory for AI Agents* (arXiv:2603.17244). Independent open-source implementation; not affiliated with Kumiho Inc.
Storage substrate forked from Graphiti by Zep AI (Apache 2.0). Trust-layer policy architecture ported from Bicameral by yhl999 (Apache 2.0). The SHA-256 hash chain is Atlas-original; Bicameral's chain was aspirational.