Milla Jovovich Launches Open-Source AI Memory System

Actress Milla Jovovich and developer Ben Sigman have released MemPalace, an open-source, locally runnable AI memory system that applies the method-of-loci (memory palace) architecture to agent memory. MemPalace claims the first-ever perfect 100% R@5 on the LongMemEval benchmark (with Haiku reranking) and 96.6% without any external API calls. The system stores memories in a hierarchical structure of wings, halls, rooms, closets, and tunnels; uses SQLite and ChromaDB for local persistence; ships under an MIT license; and includes AAAK, a compression dialect that reportedly compresses long passages ~30x so models can load months of context. The codebase targets Python 3.9+ and provides connectors for Claude, ChatGPT, Cursor, and MCP. The design prioritizes retrieval accuracy and local privacy, positioning MemPalace as a practical memory layer for conversational agents and personal AI.
What happened
Milla Jovovich and software developer Ben Sigman published MemPalace, an open-source AI memory system that maps the ancient method of loci (memory palace) to agent memory. The project ships under an MIT license, runs fully locally on SQLite and ChromaDB, and provides integrations for Claude, ChatGPT, Cursor, and MCP. The authors report a 100% R@5 score on LongMemEval with Haiku reranking and 96.6% R@5 with zero external API calls, which they describe as the first perfect LongMemEval score.
Technical context
Long-term memory for LLM-based agents is a persistent bottleneck: token limits and brittle retrieval reduce continuity in multi-session assistants. MemPalace tackles this by replacing flat vector stores with a strict hierarchical corpus architecture. Data are stored at multiple granularities and cross-referenced rather than left as undifferentiated embeddings. The project also introduces AAAK, a shorthand compression dialect that reportedly compresses passages roughly 30x, allowing models to preload far more context prior to interaction.
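To see why a ~30x compression ratio matters for multi-session continuity, here is a back-of-envelope budget calculation. The context window, reserved tokens, and daily history volume are illustrative assumptions, not figures from the MemPalace project; only the 30x ratio comes from the article.

```python
# Back-of-envelope context budgeting under a claimed ~30x compression ratio.
# CONTEXT_WINDOW, RESERVED, and TOKENS_PER_DAY are assumed for illustration.

CONTEXT_WINDOW = 200_000      # tokens available to the model (assumed)
RESERVED = 50_000             # tokens kept free for the live conversation (assumed)
COMPRESSION_RATIO = 30        # claimed AAAK compression factor
TOKENS_PER_DAY = 8_000        # raw conversational history per day (assumed)

budget = CONTEXT_WINDOW - RESERVED

# Days of history that fit raw vs. compressed into the same budget.
raw_days = budget / TOKENS_PER_DAY
compressed_days = budget / (TOKENS_PER_DAY / COMPRESSION_RATIO)

print(f"raw history loadable:        ~{raw_days:.0f} days")
print(f"compressed history loadable: ~{compressed_days:.0f} days")
```

Under these assumptions, raw history fills the budget in under three weeks, while the compressed form stretches the same budget to over a year, which is consistent with the "months of context" framing.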
Key implementation details
The architecture uses five conceptual layers: wings (projects and people), halls (memory types such as facts, events, and preferences), rooms (topics), closets (compressed summaries pointing to raw content), and tunnels (cross-references between rooms). The team reports that progressively constraining search scope (wing, then hall, then room) lifts retrieval accuracy from 60.9% to 73.1%, and that adding hall and room context raises it to 94.8%, roughly 34 percentage points above flat search. The codebase targets Python 3.9+, relies on local storage (SQLite and ChromaDB), and provides reranking via a Haiku module. MemPalace is MIT-licensed and available on GitHub.
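The scoped-search idea can be sketched in a few lines. This is a minimal illustration, not the project's API: the record fields mirror the wing/hall/room hierarchy described above, but the sample memories and the toy word-overlap score are invented stand-ins for what the real system reportedly does with ChromaDB embeddings and SQLite persistence.

```python
# Hedged sketch of hierarchically scoped retrieval (wing -> hall -> room).
from dataclasses import dataclass

@dataclass
class Memory:
    wing: str    # project or person
    hall: str    # memory type: facts / events / preferences
    room: str    # topic
    text: str

STORE = [
    Memory("work", "facts", "deploy", "The API is deployed on Fridays."),
    Memory("work", "preferences", "style", "User prefers concise answers."),
    Memory("home", "events", "travel", "Trip to Kyoto booked for April."),
]

def score(query: str, text: str) -> float:
    # Toy relevance: word overlap; a real system would use vector similarity.
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / max(len(q), 1)

def search(query, wing=None, hall=None, room=None, k=5):
    # Narrowing the candidate pool before scoring is the step the authors
    # report improves retrieval accuracy over flat search.
    pool = [m for m in STORE
            if (wing is None or m.wing == wing)
            and (hall is None or m.hall == hall)
            and (room is None or m.room == room)]
    return sorted(pool, key=lambda m: score(query, m.text), reverse=True)[:k]

hits = search("when is the API deployed", wing="work", hall="facts")
print(hits[0].text)  # -> "The API is deployed on Fridays."
```

The design choice worth noting is that scope filters act as hard constraints before any similarity scoring, so irrelevant wings never compete for the top-k slots.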
Why practitioners should care
If reproducible, the combination of structured storage and compact, agent-native compression materially reduces the 'forgetting' problem for assistants, making months of contextual state available without cloud APIs or subscriptions. The local-first approach lowers operational cost, shrinks the attack surface for data exfiltration, and reduces regulatory complexity for privacy-sensitive deployments. AAAK-style compression could also be integrated into retrieval chains to improve context utilization across models without model fine-tuning.
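One way such compression slots into a retrieval chain is the closet pattern described earlier: scan cheap compressed summaries, and expand the raw passage only for hits. The schema, table name, and sample rows below are assumptions for illustration, not the project's actual storage layout.

```python
# Hedged sketch: "closet" rows pair a compact summary with a pointer to
# the raw passage, so retrieval touches full text only on a match.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE closets (
    id INTEGER PRIMARY KEY,
    summary TEXT,   -- compressed (AAAK-style) representation
    raw TEXT        -- original passage, loaded only on expansion
)""")
conn.execute(
    "INSERT INTO closets (summary, raw) VALUES (?, ?)",
    ("usr pref: concise; no emoji",
     "Across several sessions the user repeatedly asked for short "
     "answers and complained about emoji in replies."))
conn.commit()

def retrieve(term: str):
    # Match against summaries first; expand raw text only for hits.
    rows = conn.execute(
        "SELECT raw FROM closets WHERE summary LIKE ?", (f"%{term}%",))
    return [r[0] for r in rows]

print(retrieve("concise")[0])
```

Because the summary column is what gets scanned, context spent on retrieval scales with the compressed size, while full fidelity is preserved behind the pointer.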
What to watch
Independent replication of the LongMemEval claims and open benchmarking details (datasets, prompts, reranker configs) are the immediate priorities. Monitor the GitHub repo for issues, reproducibility notes, and community adapters for common agent frameworks. Evaluate AAAK's fidelity on real user data (lossy compression tradeoffs) and test retrieval latency and index growth for large-scale personal archives. Also watch integration breadth (local LLMs, agent toolkits) and security/privacy audits.
Scoring Rationale
MemPalace addresses a widely recognized practical problem—long-term memory for agents—and claims a benchmark breakthrough while remaining open-source and local-first. If reproducible, it will be relevant for practitioners building persistent assistants and agent systems.