AI agents are stateless — they forget everything between sessions. MemoryPrizm adds persistent, semantic memory and context engineering so your agents learn, coordinate, and get smarter over time.
Every session starts from zero. Agents repeat mistakes, lose context, and can't coordinate. Stuffing prompts with context doesn't scale — you need real memory infrastructure.
Agents rediscover the same solutions, ask the same questions, and make the same mistakes — session after session.
Multiple agents on the same project can't share discoveries. Each operates in total isolation.
Cross-session knowledge is lost. Stuffing prompts with context wastes tokens and doesn't scale beyond a single project.
From persistent storage to semantic recall to intelligent compaction — everything your agents need for long-term memory.
Search by meaning, not keywords. "How do we deploy?" finds deployment procedures even if they never use the word "deploy." Powered by vector embeddings with recency boosting.
Agents share a memory layer with scoped recall and cross-agent merging. The only system built for coordinated agent fleets.
Related memories are clustered and synthesized into fewer, richer memories. Reduces noise and cuts token costs by 40-60%.
Project-aware memory with cross-project search. Keep memories organized without losing the ability to connect knowledge.
Token-budgeted context blocks optimized for system prompts. Prioritizes directives, then relevant memories, then recency. Reduces token costs by 40-60%.
One-line integration with Claude Code and any MCP-compatible agent. 9 tools for full memory lifecycle management.
Initialize, store, recall. Works as a TypeScript library, HTTP server, or MCP server for Claude Code and any MCP-compatible agent framework.
```typescript
import { configure, createMemory, recall } from '@memoryprizm/core';

// Connect to your database
configure({
  mongoUri: 'mongodb://localhost/myapp',
  openaiApiKey: process.env.OPENAI_API_KEY,
});

// Store a memory
await createMemory('agent-1', {
  content: 'User prefers TypeScript',
  type: 'preference',
  containers: ['my-project'],
});

// Recall by meaning
const results = await recall('agent-1', {
  query: 'What language do they like?',
});

console.log(results.memories[0].content); // "User prefers TypeScript"
```
Works anywhere you run AI agents — locally, in the cloud, or at the edge. Self-host with MongoDB or use our managed platform.
Add @memoryprizm/core to your project, or spin up the HTTP server with one command.
Point to your MongoDB instance and optionally add an OpenAI key for semantic search.
Agents call createMemory() with facts, lessons, preferences, and directives. Embeddings are generated automatically.
Query by natural language. The engine returns ranked results using vector similarity, metadata filters, and recency boosting.
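To give a feel for the ranking step, here is a minimal TypeScript sketch of blending cosine similarity with a recency boost. The toy embeddings, the 0.2 blend weight, and the 30-day half-life are illustrative assumptions, not the actual engine:

```typescript
// Illustrative only: rank stored memories against a query embedding by
// cosine similarity plus an exponential-decay recency boost.
interface StoredMemory {
  content: string;
  embedding: number[];
  ageDays: number; // days since the memory was created
}

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Fresher memories score higher at equal similarity (assumed 30-day half-life).
const recencyBoost = (ageDays: number, halfLife = 30): number =>
  Math.pow(0.5, ageDays / halfLife);

function rank(query: number[], memories: StoredMemory[]): StoredMemory[] {
  const score = (m: StoredMemory) =>
    cosine(query, m.embedding) + 0.2 * recencyBoost(m.ageDays);
  return [...memories].sort((a, b) => score(b) - score(a));
}
```

With this weighting, similarity dominates but ties break toward fresher memories, which matches the behavior described above.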
| Capability | MemoryPrizm | Mem0 | Letta | Zep |
|---|---|---|---|---|
| Multi-agent memory sharing | Yes | No | No | No |
| AI-powered compaction | Yes | No | No | No |
| Container/project scoping | Yes | No | No | No |
| MCP integration | Yes | No | No | No |
| Token-budgeted context | Yes | No | Partial | No |
| Self-hosted option | Yes | Yes | Yes | Yes |
| Open source license | MIT | Apache 2.0 | Apache 2.0 | Proprietary |
Claude Code, Cursor, and Copilot sessions share memory. Debugging insights, architecture decisions, and coding preferences persist across every session.
Support agents remember past interactions, preferences, and issue history. No more "Can you tell me your account number again?"
Research agents, planning agents, and execution agents share discoveries. What one agent learns, every agent knows.
Build AI assistants that accumulate knowledge about users over time — preferences, routines, goals, and relationships.
Self-host forever for free. Or let us handle the infrastructure.
AI agent memory is persistent storage that lets LLM-based agents retain knowledge across sessions. Without memory, agents are stateless — they forget everything between conversations, repeat mistakes, and can't learn from experience. MemoryPrizm provides semantic memory infrastructure so agents store facts, lessons, preferences, and directives, then recall them by meaning using vector embeddings.
MemoryPrizm is the only memory framework with multi-agent memory sharing, AI-powered compaction (merging redundant memories to save tokens), container-based project scoping, and native MCP integration for Claude Code. It's MIT licensed (not Apache 2.0 or proprietary), and designed from the ground up for coordinated agent fleets rather than single-agent chat history.
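The clustering idea behind compaction can be sketched as greedy merging by cosine similarity. This is a toy illustration with made-up embeddings and a hypothetical threshold; it approximates synthesis by concatenation, whereas the real engine produces AI-synthesized merged memories:

```typescript
// Illustrative only: greedily cluster near-duplicate memories and merge
// each cluster into a single combined memory.
interface Mem {
  content: string;
  embedding: number[];
}

function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, v, i) => s + v * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function compact(memories: Mem[], threshold = 0.9): Mem[] {
  const merged: Mem[] = [];
  for (const m of memories) {
    const near = merged.find((c) => cosine(c.embedding, m.embedding) >= threshold);
    if (near) {
      // Merge: concatenate here; a real system would synthesize richer content.
      near.content += "; " + m.content;
    } else {
      merged.push({ ...m });
    }
  }
  return merged;
}
```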
Context engineering is the practice of assembling the right information into an AI agent's context window at the right time. MemoryPrizm's context assembly engine builds token-budgeted context blocks that prioritize directives, then semantically relevant memories, then recent context — reducing token costs by 40-60% while improving response accuracy.
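As an illustration of tiered, token-budgeted packing, here is a minimal sketch. The tier names, the 4-characters-per-token estimate, and the types are assumptions for the example, not the MemoryPrizm engine:

```typescript
// Illustrative only: pack memories into a context block tier by tier
// (directives first, then relevant, then recent) until the budget runs out.
type Tier = "directive" | "relevant" | "recent";

interface Memory {
  content: string;
  tier: Tier;
}

// Crude token estimate: roughly 4 characters per token.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

function assembleContext(memories: Memory[], budget: number): string {
  const order: Tier[] = ["directive", "relevant", "recent"];
  const lines: string[] = [];
  let used = 0;
  for (const tier of order) {
    for (const m of memories.filter((x) => x.tier === tier)) {
      const cost = estimateTokens(m.content);
      if (used + cost > budget) return lines.join("\n"); // budget exhausted
      lines.push(m.content);
      used += cost;
    }
  }
  return lines.join("\n");
}
```

The key design point is that lower-priority tiers are dropped first when the budget is tight, so directives always survive.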
Yes. MemoryPrizm is fully open source under the MIT license. Self-host with your own MongoDB instance and optionally add an OpenAI API key for semantic search. No vendor lock-in, no usage limits, no fees. The managed cloud platform is for teams that want zero-ops memory infrastructure.
Install @memoryprizm/core via npm, call configure() with your MongoDB URI, then use createMemory() and recall(). It takes 3 lines of code. MemoryPrizm also ships as an HTTP server (@memoryprizm/server) and an MCP server (@memoryprizm/mcp) for Claude Code integration.
MemoryPrizm works with any framework. Use the TypeScript library directly, call the HTTP API from any language, or connect via MCP. It's framework-agnostic by design — your agents talk to MemoryPrizm through a simple API, regardless of what orchestration layer you use.
Join developers building the next generation of AI systems with persistent memory.