MemPalace Setup Guide: Install & Configure in 5 Minutes
MemPalace installs with a single pip command and gives any MCP-compatible AI client — including Claude Code, ChatGPT, and Cursor — persistent memory that runs entirely on your local machine. Follow this step-by-step guide to install MemPalace, connect it to your favourite AI client, and store your first memory. No cloud account needed: everything runs locally.
Prerequisites
Before you install MemPalace, make sure you have the following on your system:
- Python 3.9+ — MemPalace uses modern Python features. Run `python --version` to check. We recommend 3.11+ for best performance.
- pip package manager — ships with Python. Verify with `pip --version`. If you use conda or poetry, those work too.
- An MCP-compatible client — Claude Code (CLI), Claude Desktop, ChatGPT, Cursor, Windsurf, or any editor supporting the Model Context Protocol.
Install MemPalace
To install MemPalace, open your terminal and run the following command. This single line pulls in everything you need:
```bash
pip install mempalace
```

The command installs MemPalace along with its core dependencies:
- ChromaDB — the vector database that powers semantic search across your memories. Stores embeddings locally; no cloud needed.
- SQLite — lightweight relational storage for metadata, relationships, and the structural hierarchy of your palace.
- Sentence Transformers (all-MiniLM-L6-v2) — a compact 384-dimensional embedding model that runs on CPU. Converts text into vectors for similarity search.
Tip: First-time installation may take 2–3 minutes because pip downloads the Sentence Transformers model (~80 MB). Subsequent installs are near-instant.
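As a rough sketch of what "semantic search across your memories" means in practice, here is a toy cosine-similarity ranking in plain Python. This is an illustration only: MemPalace's real retrieval goes through ChromaDB, and the function names here are invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def rank_memories(query_vec, memories):
    """Sort (vector, text) pairs by similarity to the query, best first."""
    return sorted(memories, key=lambda m: cosine(query_vec, m[0]), reverse=True)
```

A vector database does the same comparison, but with an index so it never scans every memory.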
Initialize Your Palace
To initialize your palace, run the command below to create the directory structure that MemPalace uses to organise your memories:
```bash
mempalace init
```

This creates a `.mempalace/` directory in your home folder with the following layout:

```
.mempalace/
├── palace.db   # SQLite metadata
├── chroma/     # Vector embeddings
└── wings/
    └── default/
        ├── hall_facts/
        ├── hall_events/
        ├── hall_discoveries/
        ├── hall_preferences/
        └── hall_advice/
```

Think of it like a real memory palace: Wings are top-level containers (one per person or project), Rooms are specific topics within a wing, and Halls are corridors that categorise memories by type. We cover the full architecture in the Core Concepts section below.
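That layout can be sketched with `pathlib` in a few lines. Treat this as a hypothetical reconstruction of what `mempalace init` produces, not its actual implementation; only the directory and hall names come from the tree above.

```python
from pathlib import Path

# Hall names are taken from the layout above; everything else is illustrative.
HALLS = ["hall_facts", "hall_events", "hall_discoveries",
         "hall_preferences", "hall_advice"]

def init_palace(root: Path) -> Path:
    """Create the .mempalace/ skeleton under `root`."""
    palace = root / ".mempalace"
    (palace / "chroma").mkdir(parents=True, exist_ok=True)  # vector embeddings
    for hall in HALLS:
        (palace / "wings" / "default" / hall).mkdir(parents=True, exist_ok=True)
    (palace / "palace.db").touch()  # SQLite metadata file
    return palace
```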
Connect to Claude Code
To connect MemPalace to Claude Code, add the MCP server configuration below to your Claude Code settings file (`~/.claude.json` or your project's `.mcp.json`):
```json
{
  "mcpServers": {
    "mempalace": {
      "command": "mempalace",
      "args": ["mcp"]
    }
  }
}
```

That's it. Restart Claude Code and MemPalace will appear as an available MCP tool. Claude can now read, write, and search your memories automatically during conversations.
How it works: the `mempalace mcp` command starts a local MCP server that communicates with Claude Code over stdio. All data stays on your machine — nothing is sent to external servers.
Connect to ChatGPT / Cursor
To connect MemPalace to other AI clients, follow the quick setup instructions below. MemPalace works with any MCP-compatible client:
ChatGPT Desktop
Open Settings → MCP Servers → Add Server. Enter mempalace as the command and mcp as the argument. ChatGPT will auto-detect the available tools.
Cursor
Go to Settings → MCP and add a new server with the same config as Claude Code. Cursor supports MCP natively, so MemPalace tools will appear in the Composer and Agent panels.
Claude Desktop
Edit `claude_desktop_config.json` and add the same `mcpServers` block shown in Step 3. Restart the app to activate.
Store Your First Memory
To store your first memory, choose one of two methods: bulk import from conversation exports, or real-time storage through the MCP interface.
Option A: Import conversation history
Export your conversation history from Claude, ChatGPT, or any supported client, then run:
```bash
mempalace mine conversations ./path/to/export.json
```

MemPalace will parse the conversations, extract facts, events, preferences, and discoveries, then file each memory into the appropriate hall.
Option B: Through the MCP interface
Once connected, simply chat with your AI assistant. MemPalace automatically captures important information during your conversations. You can also explicitly ask your assistant to “remember” something:
```
You: "Remember that our production database is on us-east-1
and the staging DB is on eu-west-2."

Claude: "Stored in hall_facts under the 'infrastructure' room."
```

Search Your Memories
To search your memories, query your palace from the command line or let your AI assistant search automatically.
CLI search
```bash
mempalace search "database configuration"
```

Returns semantically relevant memories ranked by similarity. You can filter by wing, room, or hall:
```bash
mempalace search "database config" --wing=work --hall=hall_facts
```

AI-assisted search
When connected via MCP, your AI assistant searches automatically whenever context is needed. You can also ask directly:
```
You: "What did we decide about the caching strategy last week?"

Claude: [searches palace] "On March 30 you decided to use Redis for
session cache and CloudFront for static assets. The TTL for sessions
is 4 hours. (Source: hall_events/infrastructure)"
```

Core Concepts
MemPalace borrows from the ancient method of loci — a mnemonic technique where you associate memories with physical locations in an imagined building. Here's how the metaphor maps to the system:
Wings
Top-level containers. Create one wing per person, project, or domain. Example: a “work” wing and a “personal” wing, or one wing per client.
Rooms
Specific topics within a wing. A “work” wing might have rooms for “infrastructure”, “frontend”, and “team-processes”.
Halls
Memory-type corridors that exist in every room. Five built-in halls categorise what kind of information each memory represents:
- `hall_facts` — stable truths (API keys, config, preferences)
- `hall_events` — things that happened (decisions, incidents)
- `hall_discoveries` — insights and learnings
- `hall_preferences` — user likes, dislikes, style
- `hall_advice` — recommendations and best practices
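To make the hall taxonomy concrete, here is a toy router that files a memory into one of the five halls. Only the hall names come from MemPalace; the keyword cues and the function itself are invented for illustration, and the real extraction pipeline is certainly more sophisticated than string matching.

```python
# Illustrative keyword cues per hall; not MemPalace's actual classifier.
HALL_CUES = {
    "hall_events": ("decided", "happened", "incident", "deployed"),
    "hall_preferences": ("prefer", "dislike", "style"),
    "hall_discoveries": ("learned", "realized", "turns out", "insight"),
    "hall_advice": ("should", "recommend", "best practice"),
}

def route_to_hall(memory: str) -> str:
    """Pick a hall for a memory based on simple keyword cues."""
    text = memory.lower()
    for hall, cues in HALL_CUES.items():
        if any(cue in text for cue in cues):
            return hall
    return "hall_facts"  # stable truths are the default bucket
```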
Closets & Drawers
Closets hold compressed summaries in AAAK format (Assertion → Assumption → Action → Knowledge), distilled to fit in limited context windows. Drawers store the original verbatim files for deep retrieval when full detail is needed.
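One plausible shape for a closet entry is a simple record per AAAK summary. The class and field layout below are illustrative, not MemPalace's actual schema; only the four field names follow from the acronym.

```python
from dataclasses import dataclass

@dataclass
class AAAKSummary:
    """Hypothetical record for one compressed closet entry."""
    assertion: str   # what was claimed
    assumption: str  # what it rests on
    action: str      # what was done about it
    knowledge: str   # the distilled takeaway

    def render(self) -> str:
        """Compact one-line form, sized for a small context window."""
        return " | ".join([self.assertion, self.assumption,
                           self.action, self.knowledge])
```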
4-Layer Memory Stack
MemPalace uses a tiered retrieval system that balances context window efficiency with recall depth. Only the layers that are needed get loaded, keeping token usage minimal.
| Layer | What | Size | When loaded |
|---|---|---|---|
| L0 | Identity — who you are, core prefs | ~50 tokens | Always |
| L1 | Critical facts (AAAK summaries from closets) | ~120 tokens | Always |
| L2 | Room recall — topic-specific context | Variable | On demand |
| L3 | Deep semantic search — full drawer retrieval | Variable | On demand |
L0 and L1 are injected into every conversation automatically (under 200 tokens total). L2 activates when the conversation topic matches a known room. L3 fires only when the user or AI explicitly requests a deep search. This design keeps MemPalace's overhead in normal conversations minimal while preserving full recall when it's needed.
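The loading policy above can be sketched as a small function. The signature and flags are assumptions made for illustration; only the layer semantics come from the table.

```python
from typing import List, Optional

def load_context(identity: str, critical_facts: str,
                 room_context: Optional[str] = None,
                 deep_results: Optional[str] = None,
                 topic_matches_room: bool = False,
                 deep_search_requested: bool = False) -> List[str]:
    """Assemble the context layers that would be injected into a turn."""
    layers = [identity, critical_facts]          # L0 + L1: always injected
    if topic_matches_room and room_context is not None:
        layers.append(room_context)              # L2: topic matched a known room
    if deep_search_requested and deep_results is not None:
        layers.append(deep_results)              # L3: explicit deep search only
    return layers
```

The point of the structure is that the two always-on layers stay tiny, so the expensive layers cost tokens only when they are actually used.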
Troubleshooting
Running into issues? Here are solutions to the most common problems:
Windows Unicode crash (GitHub issue #47)
On some Windows systems, MemPalace crashes when processing text with non-ASCII characters. Fix: set the environment variable PYTHONIOENCODING=utf-8 before running any MemPalace command, or upgrade to v0.4.2+ where this is patched.
```powershell
# Windows PowerShell
$env:PYTHONIOENCODING = "utf-8"
mempalace init
```

Python version compatibility
MemPalace requires Python 3.9 or higher. If you see SyntaxError or ModuleNotFoundError, check your Python version. On systems with multiple Python installations, use python3 and pip3 explicitly.
```bash
python3 --version   # Should be 3.9+
pip3 install mempalace
```

ChromaDB dependency issues
ChromaDB may fail to install on older systems due to native dependencies. If you see build errors related to hnswlib or chroma-hnswlib, install the build tools first:
```bash
# macOS
xcode-select --install

# Ubuntu/Debian
sudo apt-get install build-essential python3-dev

# Then retry
pip install mempalace
```

MCP connection not working
If your AI client does not detect MemPalace tools, verify the command is on your PATH:
```bash
which mempalace   # Should print a path
mempalace mcp     # Should start the MCP server
```

If `which mempalace` returns nothing, the package was installed in a virtual environment that is not active, or your PATH does not include pip's script directory.
Frequently Asked Questions
How long does it take to install MemPalace?
The full installation takes about 5 minutes. Running `pip install mempalace` downloads the core dependencies, ChromaDB and Sentence Transformers (SQLite support ships with Python itself). The first-time download of the all-MiniLM-L6-v2 model may add 1–2 minutes on slower connections.
Does MemPalace work offline?
Yes. After the initial setup, MemPalace works entirely offline. All embeddings are computed locally using Sentence Transformers, and data is stored in local ChromaDB and SQLite databases. No API keys or cloud services required.
Which AI clients are compatible with MemPalace?
MemPalace works with any MCP-compatible client including Claude Code (CLI), Claude Desktop, ChatGPT, Cursor, Windsurf, and other editors that support the Model Context Protocol. The MCP standard is growing fast, so more clients are added regularly.
Can I use MemPalace with multiple AI assistants at the same time?
Yes. MemPalace uses a local database that can be accessed by multiple MCP clients simultaneously. Your memories are shared across all connected AI assistants, so context learned in Claude is available in Cursor and vice versa.
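SQLite's write-ahead-log (WAL) journal mode is the standard mechanism that lets one local writer coexist with many readers. Whether MemPalace enables it is not stated here, so treat this snippet as a sketch of the general technique rather than a description of MemPalace's internals; the function name and path are hypothetical.

```python
import sqlite3

def open_shared(db_path: str) -> sqlite3.Connection:
    """Open a SQLite file so several local processes can read while one writes."""
    conn = sqlite3.connect(db_path, timeout=5.0)  # wait briefly if a writer holds the lock
    conn.execute("PRAGMA journal_mode=WAL")       # readers no longer block the writer
    return conn
```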
How do I add memory to Claude Code?
To add persistent memory to Claude Code, install MemPalace with pip install mempalace, then add the MCP server configuration to your Claude settings. MemPalace provides 19 MCP tools that Claude Code can use to store, search, and manage memories automatically. See Step 3 of this guide for the exact configuration.
Does MemPalace work with ChatGPT?
Yes, MemPalace works with ChatGPT through MCP integration. It also works with Claude (via Claude Code or the API), Cursor, local models like Llama and Mistral, and any LLM that supports the Model Context Protocol or can read structured text.
How much disk space does MemPalace use?
MemPalace uses ChromaDB for vector storage and SQLite for metadata, both running locally. With AAAK compression achieving 30x compression ratios, a typical 6-month conversation history (~19.5 million tokens) compresses to approximately 650K tokens of stored data, using roughly 50–100MB of disk space.
Ready to go deeper?
Now that your palace is set up, see how MemPalace compares to other AI memory solutions.