Open Source — MIT Licensed

The Memory Layer
for AI Agents

AI agents are stateless — they forget everything between sessions. MemoryPrizm adds persistent, semantic memory and context engineering so your agents learn, coordinate, and get smarter over time.

Get Started Free View on GitHub
$ npm install @memoryprizm/core

LLM Agents Are Stateless. That's Broken.

Every session starts from zero. Agents repeat mistakes, lose context, and can't coordinate. Stuffing prompts with context doesn't scale — you need real memory infrastructure.

🔄

No Learning

Agents rediscover the same solutions, ask the same questions, and make the same mistakes — session after session.

🚫

No Coordination

Multiple agents on the same project can't share discoveries. Each operates in total isolation.

💡

No Context

Cross-session knowledge is lost. Stuffing prompts with context wastes tokens and doesn't scale beyond a single project.

Complete AI Agent Memory & Context Engineering Stack

From persistent storage to semantic recall to intelligent compaction — everything your agents need for long-term memory.

🔍

Semantic Memory Recall

Search by meaning, not keywords. "How do we deploy?" finds deployment procedures even if they never use the word "deploy." Powered by vector embeddings with recency boosting.

🤖

Multi-Agent Sharing

Agents share a memory layer with scoped recall and cross-agent merging. The only system built for coordinated agent fleets.

🗜️

AI Compaction

Related memories are clustered and synthesized into fewer, richer memories. Reduces noise and cuts token costs by 40-60%.
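Conceptually, a compaction pass can be pictured as clustering memories by embedding similarity and then merging each cluster into one synthesized memory. The sketch below is illustrative only, not MemoryPrizm's actual internals; the greedy clustering strategy, the 0.9 similarity threshold, and the join-based merge are all assumptions:

```typescript
// Sketch: cluster memories whose embeddings are similar, then merge each
// cluster into a single synthesized memory (here: joined content).
type Memory = { content: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function compact(memories: Memory[], threshold = 0.9): Memory[] {
  const clusters: Memory[][] = [];
  for (const m of memories) {
    // Greedy assignment: join the first cluster whose seed memory is
    // similar enough, otherwise start a new cluster.
    const home = clusters.find(c => cosine(c[0].embedding, m.embedding) >= threshold);
    if (home) home.push(m); else clusters.push([m]);
  }
  // Merge each cluster into one richer memory. A production system would
  // re-synthesize content with an LLM and re-embed the result.
  return clusters.map(c => ({
    content: c.map(m => m.content).join("; "),
    embedding: c[0].embedding,
  }));
}
```

Fewer, denser memories mean fewer tokens spent re-injecting near-duplicates into every prompt, which is where the cost savings come from.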

📦

Container Scoping

Project-aware memory with cross-project search. Keep memories organized without losing the ability to connect knowledge.

Context Engineering & Assembly

Token-budgeted context blocks optimized for system prompts. Prioritizes directives, then relevant memories, then recency. Reduces token costs by 40-60%.
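The prioritization idea can be sketched as packing items into a token budget tier by tier. This is a minimal illustration, not MemoryPrizm's assembly engine; the tier names mirror the ordering described above, and the word-count token estimate is a crude stand-in for a real tokenizer:

```typescript
// Sketch: pack context items into a token budget, directives first, then
// semantically relevant memories, then recent context.
type Item = { text: string; tier: "directive" | "relevant" | "recent" };

const TIER_ORDER = { directive: 0, relevant: 1, recent: 2 };

function estimateTokens(text: string): number {
  return text.split(/\s+/).length; // crude word-count proxy
}

function assembleContext(items: Item[], budget: number): string[] {
  const block: string[] = [];
  let used = 0;
  const sorted = [...items].sort((a, b) => TIER_ORDER[a.tier] - TIER_ORDER[b.tier]);
  for (const item of sorted) {
    const cost = estimateTokens(item.text);
    if (used + cost > budget) continue; // skip items that don't fit
    block.push(item.text);
    used += cost;
  }
  return block;
}
```

Because directives are admitted first, they survive even under tight budgets, while lower-priority items are dropped when space runs out.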

🔌

MCP Native

One-line integration with Claude Code and any MCP-compatible agent. 9 tools for full memory lifecycle management.
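For Claude Code, registering an MCP server typically means adding an entry to your MCP configuration. The snippet below is a hedged sketch: the `@memoryprizm/mcp` package name comes from the docs further down this page, but the environment variable names (`MONGODB_URI`, `OPENAI_API_KEY`) are assumptions based on the library's `configure()` options:

```json
{
  "mcpServers": {
    "memoryprizm": {
      "command": "npx",
      "args": ["-y", "@memoryprizm/mcp"],
      "env": {
        "MONGODB_URI": "mongodb://localhost/myapp",
        "OPENAI_API_KEY": "sk-..."
      }
    }
  }
}
```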

Add Memory to AI Agents in 3 Lines of Code

Initialize, store, recall. Works as a TypeScript library, HTTP server, or MCP server for Claude Code and any MCP-compatible agent framework.

Read the Docs Examples
app.ts
import { configure, createMemory, recall }
  from '@memoryprizm/core';

// Connect to your database
configure({
  mongoUri: 'mongodb://localhost/myapp',
  openaiApiKey: process.env.OPENAI_API_KEY,
});

// Store a memory
await createMemory('agent-1', {
  content: 'User prefers TypeScript',
  type: 'preference',
  containers: ['my-project'],
});

// Recall by meaning
const results = await recall('agent-1', {
  query: 'What language do they like?',
});

console.log(results.memories[0].content);
// "User prefers TypeScript"

From Stateless to Stateful Agents in Minutes

Works anywhere you run AI agents — locally, in the cloud, or at the edge. Self-host with MongoDB or use our managed platform.

Install

Add @memoryprizm/core to your project, or spin up the HTTP server with one command.

Configure

Point to your MongoDB instance and optionally add an OpenAI key for semantic search.

Store

Agents call createMemory() with facts, lessons, preferences, and directives. Embeddings are generated automatically.

Recall

Query by natural language. The engine returns ranked results using vector similarity, metadata filters, and recency boosting.
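The ranking step above can be pictured as blending vector similarity with a recency boost. The sketch below is illustrative, not MemoryPrizm's actual scoring; the one-week half-life and the 0.8/0.2 blend weights are assumptions:

```typescript
// Sketch: rank memories by cosine similarity to the query embedding,
// boosted by recency via exponential decay.
type Scored = { content: string; embedding: number[]; createdAt: number };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function rank(query: number[], memories: Scored[], now: number): Scored[] {
  const halfLifeMs = 7 * 24 * 3600 * 1000; // one-week half-life (assumed)
  const score = (m: Scored) => {
    // Recency decays from 1 (just created) toward 0 as memories age.
    const recency = Math.pow(0.5, (now - m.createdAt) / halfLifeMs);
    return 0.8 * cosine(query, m.embedding) + 0.2 * recency;
  };
  return [...memories].sort((a, b) => score(b) - score(a));
}
```

When two memories match a query equally well, the fresher one wins, which keeps recall results anchored to the agent's current work.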

MemoryPrizm vs Mem0 vs Letta vs Zep

Capability | MemoryPrizm | Mem0 | Letta | Zep
Multi-agent memory sharing | Yes | No | No | No
AI-powered compaction | Yes | No | No | No
Container/project scoping | Yes | No | No | No
MCP integration | Yes | No | No | No
Token-budgeted context | Yes | No | Partial | No
Self-hosted option | Yes | Yes | Yes | Yes
Open source license | MIT | Apache 2.0 | Apache 2.0 | Proprietary

Who Needs AI Agent Memory Infrastructure

AI Coding Assistants

Agents that learn your codebase

Claude Code, Cursor, and Copilot sessions share memory. Debugging insights, architecture decisions, and coding preferences persist across every session.

Customer Support

Agents that know your customers

Support agents remember past interactions, preferences, and issue history. No more "Can you tell me your account number again?"

Agent Fleets

Coordinated multi-agent systems

Research agents, planning agents, and execution agents share discoveries. What one agent learns, every agent knows.

Personal AI

Assistants that truly know you

Build AI assistants that accumulate knowledge about users over time — preferences, routines, goals, and relationships.

Start free, scale as you grow

Self-host forever for free. Or let us handle the infrastructure.

Self-Hosted

$0
forever
  • Unlimited memories
  • Full source code (MIT)
  • Your infrastructure
  • Community support
Clone Repo

Free Cloud

$0
per month
  • 1,000 memories
  • 100 recalls / day
  • 1 project
  • Community support
Coming Soon

Enterprise

Custom
contact us
  • Unlimited everything
  • SSO / SAML
  • SOC2 compliance
  • Dedicated support
  • Custom SLA
Contact Sales

Frequently Asked Questions

What is AI agent memory and why do agents need it?

AI agent memory is persistent storage that lets LLM-based agents retain knowledge across sessions. Without memory, agents are stateless — they forget everything between conversations, repeat mistakes, and can't learn from experience. MemoryPrizm provides semantic memory infrastructure so agents store facts, lessons, preferences, and directives, then recall them by meaning using vector embeddings.

How is MemoryPrizm different from Mem0, Letta, or Zep?

MemoryPrizm is the only memory framework with multi-agent memory sharing, AI-powered compaction (merging redundant memories to save tokens), container-based project scoping, and native MCP integration for Claude Code. It's MIT licensed (not Apache 2.0 or proprietary) and designed from the ground up for coordinated agent fleets rather than single-agent chat history.

What is context engineering?

Context engineering is the practice of assembling the right information into an AI agent's context window at the right time. MemoryPrizm's context assembly engine builds token-budgeted context blocks that prioritize directives, then semantically relevant memories, then recent context — reducing token costs by 40-60% while improving response accuracy.

Can I self-host MemoryPrizm?

Yes. MemoryPrizm is fully open source under the MIT license. Self-host with your own MongoDB instance and optionally add an OpenAI API key for semantic search. No vendor lock-in, no usage limits, no fees. The managed cloud platform is for teams that want zero-ops memory infrastructure.

How do I add memory to my AI agent?

Install @memoryprizm/core via npm, call configure() with your MongoDB URI, then use createMemory() and recall(). It takes 3 lines of code. MemoryPrizm also ships as an HTTP server (@memoryprizm/server) and an MCP server (@memoryprizm/mcp) for Claude Code integration.

Does MemoryPrizm work with LangChain, CrewAI, or other agent frameworks?

MemoryPrizm works with any framework. Use the TypeScript library directly, call the HTTP API from any language, or connect via MCP. It's framework-agnostic by design — your agents talk to MemoryPrizm through a simple API, regardless of what orchestration layer you use.

Give your agents a brain

Join developers building the next generation of AI systems with persistent memory.

Star on GitHub Talk to Founders