India’s #1 Memory Layer for AI Agents, outperforming existing architectures on the LoCoMo and LongMemEval benchmarks.
Xmem introduces Memory-as-a-Service (MaaS): a universal, scalable memory layer designed to power the next generation of AI systems by enabling persistent context, structured knowledge, and memory-aware reasoning across use cases. Whether it’s temporal memory for long-running agents, medical memory for maintaining patient context, enterprise memory for teams and workflows, or developer memory for coding agents, Xmem provides a unified solution that helps AI systems remember, learn, and evolve over time.
Xmem is built as a modular ecosystem, with each repository serving a specific role in the stack:
- **Xmem**: The core architecture and backend powering the system, including memory infrastructure, retrieval logic, and benchmark implementations.
- **Xmem Landing**: The main frontend repository, deployed at www.xmem.in, featuring user-facing experiences like `/context` and `/scanner`.
- **Xmem MCP**: The primary MCP (Model Context Protocol) implementation for Xmem, built on top of the SDK to enable seamless integration with agents.
- **Xmem SDK**: Multi-language SDK available in Python, Go, and TypeScript (latest). Install via npm: `npm i xmem-ai`.
- **Xmem Extension**: A Chrome extension built on top of the SDK, bringing Xmem’s memory capabilities directly into the browser.
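To make the idea of a memory layer concrete, here is a purely illustrative TypeScript sketch of what a persistent-memory client for an agent might look like. All names here (`MemoryRecord`, `InMemoryClient`, `store`, `recall`) are hypothetical stand-ins and are not the actual `xmem-ai` SDK API; a real service would back this with persistent storage and semantic retrieval rather than keyword matching.

```typescript
// Hypothetical sketch only: NOT the real xmem-ai API.
// Shows the general shape of a memory client an agent might use.
interface MemoryRecord {
  id: string;
  text: string;
  timestamp: number;
}

class InMemoryClient {
  private records: MemoryRecord[] = [];

  // Persist a piece of context for later recall.
  store(text: string): MemoryRecord {
    const rec: MemoryRecord = {
      id: String(this.records.length + 1),
      text,
      timestamp: Date.now(),
    };
    this.records.push(rec);
    return rec;
  }

  // Naive keyword recall; a production memory service would use
  // semantic (embedding-based) retrieval instead.
  recall(query: string): MemoryRecord[] {
    const q = query.toLowerCase();
    return this.records.filter((r) => r.text.toLowerCase().includes(q));
  }
}

const mem = new InMemoryClient();
mem.store("Patient prefers morning appointments");
mem.store("Project deadline is Friday");
console.log(mem.recall("patient").length); // 1
```

The point of the interface split is that an agent only needs `store`/`recall` semantics; whether memory lives in the browser extension, the MCP server, or the backend is an implementation detail behind the same surface.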
Xmem is building what we believe is the next fundamental layer in AI memory. If compute and models defined the last wave, persistent, intelligent memory will define the next. We’re opening this up to builders, researchers, and hackers who want to shape that future with us.
We award API credits (Claude, Gemini, ChatGPT, Grok) as bounties for meaningful contributions.
Whether you're improving the core architecture, building new integrations, enhancing the SDK, or experimenting with novel memory use cases, we welcome all contributors to help grow this ecosystem together.
