arXiv 2026-04-07

Rashomon Memory: Towards Argumentation-Driven Retrieval for Multi-Perspective Agent Memory

The paper and the problem

A new arXiv preprint, "Rashomon Memory: Towards Argumentation-Driven Retrieval for Multi-Perspective Agent Memory" (arXiv:2604.03588), tackles a concrete practical problem for long-lived AI agents: how to remember the same event in multiple, conflicting ways. A concession made during a negotiation can be a trust-building investment for one strategic goal and a contractual liability for another. Which interpretation should an assistant surface later? The authors frame this as a retrieval problem, not a storage problem: agents must select the perspective that best supports the goals and constraints active at decision time.

What the authors propose

Rather than collapsing experiences into a single canonical record, the paper proposes storing multiple "narratives" or perspective-tagged traces and using argumentation-driven retrieval to surface the most relevant interpretation. Retrieval becomes an argumentative process: candidate perspectives are retrieved, weighed against current objectives and constraints, and selected through a form of internal debate or scoring that privileges the rationale most salient to the task. The manuscript lays out a conceptual framework and design directions for implementing such systems in multi-goal, multi-stakeholder settings.
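The paper is conceptual, but the loop it describes — store perspective-tagged traces, retrieve candidates for an event, score them against active objectives, and pick a winner — can be sketched minimally. Everything below (the `Narrative` class, the goal tags, and the overlap-based scoring) is an illustrative assumption for this article, not the authors' implementation:

```python
from dataclasses import dataclass

@dataclass
class Narrative:
    """One perspective-tagged trace of an event (hypothetical schema)."""
    event_id: str
    perspective: str      # e.g. "trust-building" vs. "liability"
    rationale: str        # why this reading of the event holds
    goal_tags: frozenset  # goals this perspective supports

def retrieve(narratives, event_id, active_goals):
    """Toy argumentation-driven retrieval: among rival perspectives on
    the same event, the one whose rationale supports the most currently
    active goals wins the internal debate."""
    candidates = [n for n in narratives if n.event_id == event_id]
    return max(candidates, key=lambda n: len(n.goal_tags & active_goals))

memory = [
    Narrative("concession-42", "trust-building",
              "Signals goodwill for the long-term partnership.",
              frozenset({"build_rapport", "long_term"})),
    Narrative("concession-42", "liability",
              "Creates a contractual obligation we must track.",
              frozenset({"compliance", "risk"})),
]

# With compliance goals active, the "liability" reading is surfaced.
best = retrieve(memory, "concession-42", active_goals={"compliance", "risk"})
print(best.perspective)  # → liability
```

A real system would replace the tag-overlap score with something closer to formal argumentation semantics (attacks and defenses between perspectives), but the key design point survives even in this toy: the same `event_id` legitimately maps to several records, and selection is deferred to query time.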

Why it matters

This is timely work for developers building assistants that must operate over extended horizons—customer-support bots, negotiation agents, enterprise automation—and for researchers studying interpretable, goal-aware memory. The proposal also has policy and safety implications: agents that can justify which "truth" they act on make auditing easier, but they also complicate liability and compliance. The preprint is available on arXiv for further scrutiny (https://arxiv.org/abs/2604.03588). Who decides which memory wins in that internal debate — and how transparent that choice must be — are the next questions for engineers and regulators alike.
