Overview
Gnosis Memory is a remote MCP server that gives an AI client persistent, semantically searchable memory across sessions and machines. Clients connect over HTTPS using the Model Context Protocol, store text passages with their metadata, and retrieve relevance-ranked results across every namespace the caller can access. No local install, no embedded vector store, no separate authentication beyond the operator's Google OAuth identity. Day-to-day memory authoring happens through MCP; the web dashboard at /account is for reviewing and auditing what agents have stored. Per-tool reference: /tools → · architecture: /technology → · access and encryption: /security →.
Quick Start
Three minutes from zero to first memory. The only step that requires a browser is the Google OAuth grant on first sign-in. Everything else happens through the AI client. Detailed client-specific configurations live at /setup →.
1. Sign in
Visit /account and complete the Google OAuth flow. A free account is provisioned on first sign-in. The flow returns an identity bound to the operator's Google subject ID; no API keys, no separate passwords.
2. Configure the MCP client
Add this server to the AI client's MCP configuration. The exact file path and JSON wrapper vary by client; the canonical snippets for Claude, Cursor, Windsurf, OpenCode, and other clients are at /setup →.
{
  "mcpServers": {
    "gnosis-memory": {
      "transport": "streamable-http",
      "url": "https://gnosismemory.com/mcp/v1/messages"
    }
  }
}
Restart the client after editing. The server appears in the client's tool registry with thirteen tools prefixed mcp__gnosis-memory__*.
3. Bootstrap the session
Call init_core_memories once at session start. The response carries behavioral preferences, the topic landscape, active tasks, and any pending signals. Pass a machine_id when filesystem-relevant context exists so machine-scoped paths surface automatically.
init_core_memories(machine_id="laptop")
4. Store a memory
memory_add requires content, macro_topics, and topics. The first fifty characters of content function as an executive summary that future searches see in preview form, so the memory should stand alone without surrounding context. Type defaults to fact.
memory_add(
    content="Postgres connection pool size set to 20 to handle 4 concurrent worker processes with headroom for admin connections.",
    macro_topics=["myapp"],
    topics=["postgres", "pool", "config", "performance"],
    type="decision"
)
5. Retrieve
memory_search ranks by semantic relevance and returns up to thirty-two previews. Use two or more keywords; single-word queries are too broad. Previews carry a complete flag indicating whether the full content fit. Call memory_retrieve only when a preview is marked incomplete and the rest is needed.
memory_search(query="postgres connection pool")
From here: Concepts → covers the data model, Tool Surface → lists every tool with a one-line purpose, Recipes → shows multi-tool workflows, and Conventions → covers memory quality rules.
Concepts
Six concepts cover the entire surface. The tools listed in the next section are operations over the objects defined here.
Memory
A memory is a short text passage with associated metadata: a type classification, one to three macro topics that name the project or domain, a flat list of searchable topic keywords, and optional fields such as ref (parent task or ticket ID), artifact (commit SHA or filename), and an expiry date. Every memory is embedded into a vector representation at write time and stored alongside the original text for full-text retrieval. The first fifty characters of the content function as an executive summary that future searches see in preview form, so memories are authored to stand alone without surrounding conversation context.
Memory types are fact (knowledge that exists), preference (an operator pattern), decision (a choice made, with rationale), path (a workspace location), task (actionable work with a checklist), task_done (a completed task), result (an outcome attached to a task), and summary (an executive overview of a topic produced by consolidation).
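The record and its type vocabulary can be pictured as a small data model. The sketch below is illustrative only: the field and enum names mirror the descriptions above, not the server's actual schema.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class MemoryType(str, Enum):
    FACT = "fact"
    PREFERENCE = "preference"
    DECISION = "decision"
    PATH = "path"
    TASK = "task"
    TASK_DONE = "task_done"
    RESULT = "result"
    SUMMARY = "summary"

@dataclass
class Memory:
    content: str                        # first 50 chars double as the preview
    macro_topics: list[str]             # 1-3 project/domain names
    topics: list[str]                   # flat searchable keywords
    type: MemoryType = MemoryType.FACT  # defaults to fact, per the docs
    ref: Optional[str] = None           # parent task or ticket ID
    artifact: Optional[str] = None      # commit SHA or filename
    expiry: Optional[str] = None        # optional expiry date

    def preview(self) -> str:
        # The first fifty characters act as the executive summary.
        return self.content[:50]
```

The `preview` helper makes the authoring rule concrete: whatever survives the fifty-character cut is what future searches see first.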
Namespace
Every memory lives in exactly one namespace. The default namespace is the operator's personal space, identified by the OAuth subject ID. Agents (described below) operate in their own namespaces under the same account. Shared collections form additional namespaces that multiple accounts and agents can read or write to under per-collection permissions. Search fans out across every namespace the caller can access in a single query and returns results ranked by relevance, regardless of origin.
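The fan-out behavior can be modeled in a few lines. This is a toy sketch: the stand-in score function replaces the server's vector similarity, and the namespace layout is invented for illustration.

```python
def fan_out_search(score, namespaces, limit=32):
    # Score every memory in every namespace the caller can access,
    # then merge into one list ranked by relevance regardless of origin.
    hits = []
    for ns, memories in namespaces.items():
        for m in memories:
            s = score(m)
            if s > 0:
                hits.append((s, ns, m))
    hits.sort(key=lambda h: h[0], reverse=True)
    return hits[:limit]

# Toy corpus; the lambda is a stand-in for vector similarity.
namespaces = {
    "personal": ["postgres pool set to 20", "user prefers tabs"],
    "team-notes": ["postgres vacuum schedule reviewed"],
}
results = fan_out_search(lambda m: m.count("postgres"), namespaces)
```

The point of the sketch is the single merged ranking: a caller never queries namespaces one at a time.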
Agent
An agent is a named sub-account under the operator's identity that carries its own memory namespace. A research assistant, a code reviewer, and a journaling persona can run as three separate agents, each isolated from the others by default. The owner_access setting controls how much of the operator's personal namespace a given agent can see: none (fully isolated), reader (search and read), or member (read and write). Activation is by HTTP header (X-Agent-ID) on the MCP connection, configured in the client's MCP server entry. Agent management requires a paid subscription.
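Assuming an MCP client whose server entries accept a headers map (support and key names vary by client; /setup → has the canonical per-client snippets), activating an agent identity might look like this hypothetical configuration:

```json
{
  "mcpServers": {
    "gnosis-memory": {
      "transport": "streamable-http",
      "url": "https://gnosismemory.com/mcp/v1/messages",
      "headers": {
        "X-Agent-ID": "research-assistant"
      }
    }
  }
}
```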
Collection
A collection is a shared namespace that multiple accounts and agents belong to under explicit membership. Collections come in two types. A collaborative collection lets every member read and write. A knowledge pack is owner-administered: the owner publishes curated content, members read but cannot write. Collection membership is by email address (for human operators) or by agent name (for agents). Search fan-out automatically includes every collection the caller belongs to, so cross-team retrieval requires no per-query configuration.
Signal
A signal is a sticky note that points to one or more memory IDs and is delivered to a named recipient through a shared collection. The signal carries no content of its own; the recipient resolves the IDs through the standard retrieval path. Signals appear in the recipient's next init_core_memories response and stay queued until the recipient explicitly acknowledges them. Both the sender and the recipient must be members of the collection used as the delivery channel. Signals expire after a configurable window (default 48 hours).
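The lifecycle above (queued on send, surfaced until acknowledged, expired after the window) can be sketched as a toy in-memory board. Nothing here is the server's implementation; it only models the stated behavior.

```python
import time

class SignalBoard:
    """Toy model of the signal lifecycle: send queues a pointer,
    check surfaces pending signals, ack removes them."""

    def __init__(self, ttl_seconds=48 * 3600):  # default 48-hour window
        self.ttl = ttl_seconds
        self.pending = {}                       # signal_id -> record

    def send(self, signal_id, recipient, memory_ids, now=None):
        now = time.time() if now is None else now
        self.pending[signal_id] = {
            "recipient": recipient,
            "memory_ids": memory_ids,  # pointers only; a signal has no content
            "expires": now + self.ttl,
        }

    def check(self, recipient, now=None):
        now = time.time() if now is None else now
        # Expired signals drop out; everything else stays queued until ack.
        self.pending = {k: v for k, v in self.pending.items()
                        if v["expires"] > now}
        return [k for k, v in self.pending.items()
                if v["recipient"] == recipient]

    def ack(self, signal_id):
        self.pending.pop(signal_id, None)
```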
Task
A task is a memory of type task with structured fields: a title, a checklist (up to ten items), a workflow status (pending, active, blocked, review, done), an optional prose summary block, and the standard memory metadata. Tasks are edited through targeted operations: toggling checkboxes by index, setting status, adding or removing steps, replacing the summary, or attaching a finding via the output parameter. Findings attached this way become linked fact memories that inherit the task's topics, so a single search returns both the parent task and every outcome recorded against it.
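The checklist mechanics can be sketched as follows; the field names and progress format are illustrative assumptions, not the server's schema.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """Sketch of the task structure: progress is the
    checked-vs-total checklist ratio."""
    title: str
    steps: list[str]
    checked: set[int] = field(default_factory=set)
    status: str = "pending"  # pending / active / blocked / review / done

    def toggle(self, index: int) -> None:
        # Toggle a checkbox by index, as memory_edit's toggle parameter does.
        self.checked.symmetric_difference_update({index})

    def progress(self) -> str:
        return f"{len(self.checked)}/{len(self.steps)}"

t = Task(title="Migrate secrets", steps=["inventory", "script", "cutover"])
t.toggle(0)
```

Findings attached via output are deliberately absent here: they become separate linked memories and never change the checklist counts.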
Topic landscape
At session start, init_core_memories returns a compressed map of the entire accessible corpus: macro-topic clusters with memory and task counts, a ranked vocabulary of every topic keyword in use, type distributions, active tasks with progress markers, and behavioral preferences. The landscape is content-agnostic: the calling AI uses it to plan searches rather than reading the corpus end-to-end, and the map keeps the same compact shape however large the corpus grows.
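A toy version of the landscape computation, showing only the counting; the input shape (dicts with macro_topics, topics, and type keys) is assumed for illustration.

```python
from collections import Counter

def topic_landscape(memories):
    """Toy sketch of the compressed map: macro-topic cluster counts,
    a ranked keyword vocabulary, and the type distribution."""
    macros, vocab, types = Counter(), Counter(), Counter()
    for m in memories:
        macros.update(m["macro_topics"])
        vocab.update(m["topics"])
        types[m["type"]] += 1
    return {
        "macro_topics": dict(macros),
        "vocabulary": [w for w, _ in vocab.most_common()],  # ranked by use
        "types": dict(types),
    }
```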
Tool Surface
Gnosis exposes thirteen MCP tools. Every tool description carries its own parameter schema, creation guidelines, and quality heuristics that the calling LLM reads before each invocation. The table below lists each tool's purpose at a glance. Full per-tool documentation lives at /tools →.
| Tool | Purpose |
|---|---|
| init_core_memories | Once-per-session bootstrap. Returns behavioral preferences, topic landscape, active tasks, and pending signals. Accepts an optional query to piggyback a search and save a round trip. |
| memory_add | Store one memory with content, topics, and macro topics. Type defaults to fact. Supports task creation via title and steps instead of free-form content. |
| memory_add_batch | Store three or more memories in a single call. Same per-memory quality rules as memory_add. |
| memory_search | Default ranked search. Returns up to thirty-two previews. Use two or more keywords; single-word queries are too broad. |
| memory_deep_search | Broader search returning up to one hundred results. Use when comprehensive coverage of a topic intersection is required. |
| memory_retrieve | Fetch full content for one or more memory IDs after a search returns truncated previews. Skipped when previews are marked complete. |
| memory_edit | Three modes: full content replace (re-embeds), metadata patch (no re-embed), and task surgery (toggle checkboxes, set status, modify steps, attach findings). |
| memory_delete | Permanently delete one or more memories by ID. |
| memory_consolidate | Replace the executive summary for a topic. Summaries surface near the top of future searches so new sessions read the brief instead of reading every individual memory. |
| task_feed | Recent tasks in chronological order. Filter by status, topic, or collection. Returns personal namespace plus every accessible shared collection by default. |
| signal | Three actions. send posts a memory pointer to a recipient through a shared collection. check peeks at pending signals. ack removes them after processing. |
| collection_manage | Nine actions over shared collections: create, list, delete, members, add_user, remove_user, accept, decline, publish. Requires a paid plan. |
| agent_manage | Four actions over agent identities: create, list, update, delete. Update controls owner_access. Requires a paid plan. |
Cross-tool patterns and per-tool depth are covered in Recipes → and /tools → respectively. The tool descriptions delivered through MCP carry the authoritative parameter schemas and per-call guidance.
Tier Capabilities
Free and Plus tiers cover individual operators. Pro adds administrative control over collections and agents. Benefactor and Founder are permanent equivalents with no recurring billing. The table below maps capabilities to tiers. Pricing and SKU details live at /pricing →.
| Capability | Free | Plus | Pro | Benefactor | Founder |
|---|---|---|---|---|---|
| Personal memories (add, search, edit, delete) | Yes | Yes | Yes | Yes | Yes |
| Memory consolidation | Yes | Yes | Yes | Yes | Yes |
| Task workflow (status, steps, findings) | Yes | Yes | Yes | Yes | Yes |
| Read and write existing shared collections | No | Yes | Yes | Yes | Yes |
| Send and acknowledge signals | No | Yes | Yes | Yes | Yes |
| Create and administer collections | No | No | Yes | Yes | Yes |
| Manage agent identities | No | No | Yes | Yes | Yes |
| Recurring billing | — | Monthly / annual | Monthly / annual | None | None |
Tools that require a higher tier than the caller's current plan return a machine-readable tier_required response. The client surfaces the upgrade path inline. The tool itself remains visible in the surface so callers can discover what is reachable on a higher tier.
Recipes
The following patterns combine multiple tools to accomplish workflows that no single tool documents on its own. Each recipe shows the tool sequence and the underlying mechanism. Parameter details are abbreviated; full schemas are in the MCP tool descriptions.
Two teammates' agents signaling each other through a shared collection
Two operators on different accounts collaborate by adding each other's agents to a shared collection. The collection owner creates it through collection_manage(action="create") and invites the other operator (or an agent on the other account) through collection_manage(action="add_user"). The recipient accepts the invitation via the dashboard or with collection_manage(action="accept"). From that point on, any memory written with collection="collection_name" lands in shared space.
To notify the other side that a memory needs attention, the sender calls signal(action="send") with the recipient name and the memory ID. The signal is delivered through the shared collection. The recipient sees it on their next init_core_memories response under signals. After acting on it, the recipient calls signal(action="ack") to remove it from the board.
# On agent-a's session (Pro operator who owns the collection)
collection_manage(action="create", name="Team Notes")
collection_manage(action="add_user", name="Team Notes", user="agent-b")
# Agent-b accepts (dashboard or tool call)
collection_manage(action="accept", name="Team Notes")
# Agent-a writes a finding into the shared collection and signals agent-b
memory_add(
    content="Found root cause of OAuth 401 cascade: refresh token rotation race in wrangler <4.92.",
    macro_topics=["myapp"],
    topics=["wrangler", "oauth", "refresh", "bug"],
    collection="Team Notes",
    signal="agent-b"
)
# Agent-b checks signals on next session
init_core_memories()
# response includes signals: [{id, memory_ids, sender, created_at}]
signal(action="ack", signal_ids=["..."])
The signal parameter on memory_add is a shortcut that performs the add and the send in one call. Both parties must be collection members for the signal to deliver.
One operator, two machines, two agent identities
An operator working across a development workstation and a remote production node can keep clean separation by running each machine as a distinct agent under the same Google account. Create the two agents with agent_manage(action="create"), set X-Agent-ID on each machine's MCP client configuration to the corresponding agent name, and grant cross-namespace visibility through owner_access as needed.
This pattern keeps machine-specific context (paths, hardware quirks, local config) isolated to its source agent while letting the operator's personal namespace stay shared. A third option is to publish reusable knowledge into a shared collection that both agents belong to.
agent_manage(action="create", name="dev-laptop")
agent_manage(action="create", name="prod-node")
agent_manage(action="update", name="dev-laptop", owner_access="reader")
# On the laptop's MCP client config: X-Agent-ID: dev-laptop
# On the prod node's MCP client config: X-Agent-ID: prod-node
Task tracking with linked findings
Tasks combine a checklist, a status workflow, and the ability to attach durable findings inline. Create a task with memory_add(type="task", title=..., steps=[...]). Move it through the workflow with memory_edit(set_status=...). Toggle checkboxes by index as steps complete. When a step produces a finding worth keeping in search, attach it through the output parameter on the same memory_edit call; the server creates a linked fact memory that inherits the task's topics and is discoverable by the task's short ID.
memory_add(
    type="task",
    title="Migrate worker secret store to v2 API",
    steps=[
        "Inventory existing secrets",
        "Write migration script",
        "Run against staging",
        "Cut production over",
        "Decommission v1 endpoint"
    ],
    macro_topics=["myapp"],
    topics=["worker", "secrets", "migration", "v2"],
    status="active"
)
# Later: complete step 0 and attach a finding
memory_edit(
    memory_id="",
    toggle=[0],
    output="42 secrets inventoried in production. 18 deprecated and safe to drop."
)
# Mark done when all steps complete
memory_edit(memory_id="", set_status="done")
Search by the task's eight-character short ID to retrieve the task and every linked finding together. The task_feed tool returns recent tasks chronologically and accepts filters by status, topic, and collection.
Consolidating a topic into an executive summary
Once a topic has accumulated twenty or more memories, future sessions can avoid reading the whole corpus by retrieving the executive summary first. Call memory_consolidate(topic="...", content="...") after a thorough deep search of the topic; the summary should cover what the topic is, current state, key decisions, and open questions in 300 to 500 tokens. Calling consolidate again replaces the existing summary and archives the previous version, which is retrievable through a search with type_filter="summary_archived".
memory_deep_search(query="postgres performance tuning")
# Read 20+ results, identify what's stable and what's open
memory_consolidate(
    topic="postgres",
    content="Postgres is the primary OLTP store for myapp. Pool size 20 chosen 2026-03 to handle 4 workers with headroom. ..."
)
Conventions
The conventions below are enforced by the tool descriptions themselves and reinforced by server-side deduplication and rejection of malformed input. They are summarized here for cross-tool reference.
Topic discipline
The topics array on every memory is the primary surface that future searches will use. Topics must be single lowercase words with no underscores or hyphens (a topic of bug_fix is rejected; split into bug and fix). Choose topics by asking what someone would type into a search to find the memory. If searching redis should find a memory about session caching, redis must appear in the topics, even if the memory is primarily about sessions. Do not duplicate macro_topics entries inside topics; the server injects macros into the searchable index automatically.
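A client-side pre-check mirroring these rules might look like the sketch below. The server performs its own rejection, and the exact character set it accepts is an assumption here (digits are allowed in the sketch because topics like v2 appear elsewhere in this guide).

```python
import re

TOPIC_RE = re.compile(r"[a-z0-9]+")  # single lowercase word; digits assumed OK

def validate_topics(topics, macro_topics):
    """Illustrative pre-check only; not the server's validator."""
    errors = []
    for t in topics:
        if not TOPIC_RE.fullmatch(t):
            errors.append(f"invalid topic {t!r}: single lowercase words only "
                          "(split 'bug_fix' into 'bug' and 'fix')")
        elif t in macro_topics:
            errors.append(f"{t!r} duplicates a macro_topic; the server "
                          "indexes macros automatically")
    return errors
```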
Content structure
The first fifty characters of content function as an executive summary. It must name the subject and stand alone with no surrounding context. A memory that begins "Prefers tabs" is effectively invisible to future searches because no preview reader will know who prefers what; a memory that begins "User prefers tabs in Python and JS – consistency across codebase" is self-contained. Write in present tense. Include the rationale when storing a decision. Store one fact per memory; split multi-fact entries into separate calls.
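The preview mechanics can be sketched in a few lines, assuming a hard fifty-character cut and a complete flag as described; the server's actual truncation may differ in detail.

```python
PREVIEW_CHARS = 50

def preview(content: str) -> dict:
    """Sketch of what a search result carries for each hit."""
    return {
        "preview": content[:PREVIEW_CHARS],
        "complete": len(content) <= PREVIEW_CHARS,  # the flag searches return
    }

# Fits entirely, but names no subject -- invisible in practice.
bad = preview("Prefers tabs")
# Self-contained opener: subject and scope survive the cut.
good = preview("User prefers tabs in Python and JS - consistency across codebase")
```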
When to store, edit, or consolidate
Store concrete state: facts that exist, decisions that were made with their rationale, preferences that govern behavior, paths to known locations. Do not store speculation; an aspirational memory must carry an explicit aspirational topic so future readers know the work is not yet done. When an operator corrects a stored claim, edit the existing memory through memory_edit rather than adding a contradictory new one. When a topic has accumulated enough memories that future sessions would benefit from a brief instead of reading the full set, call memory_consolidate to write an executive summary that surfaces in future searches.
Links between memories
The ref field links a memory to a parent task or ticket ID. The artifact field links to an external work product such as a commit SHA or a filename. Both fields are merged into the searchable index automatically, so searching for a task's short ID returns the task plus every memory that referenced it. Use ref when capturing a finding that was discovered while working on a specific task. Use artifact when retroactively tagging a memory with the commit that resolved it.
Failure Modes
The most common failure patterns and their recovery paths follow. Each pattern is referenced in the relevant tool description for inline guidance.
Search returns nothing
Three causes account for almost every empty result. The query was a single word, so relevance scores spread thin across too many candidates and the threshold filtered them all out (use two or more keywords). The topics on the original memory do not match the search query (open the memory through the dashboard, check its topics, and re-search using one of them). Or the memory is in a namespace the caller cannot reach, either because no agent identity is set or because the relevant collection invitation has not been accepted (verify identity through init_core_memories and accept pending invitations through collection_manage).
Permission denied on a shared collection
Writing to a shared collection requires Plus or higher; creating or administering a collection requires Pro or higher; managing agents requires Pro or higher. The server returns a tier_required response indicating which tier unlocks the requested capability. For collaborative collections, ensure the operator is a member rather than a reader (members write, readers only search). For knowledge packs, only the owner can write; members read.
Memory was stored but cannot be found
Usually a topic discipline issue. Open the memory through the dashboard, inspect the actual topics that were recorded, and verify they match what the search is querying. If the topics are wrong, fix them through memory_edit with a metadata-only patch (passing new topics without content does not trigger re-embedding). If the content's first fifty characters are vague, replace the content through memory_edit with a complete rewrite; this re-embeds.
Signal sent but recipient did not see it
Both parties must be members of the collection used as the delivery channel, and the recipient must be running init_core_memories at session start to surface pending signals. Verify membership through collection_manage(action="members"). If the recipient is an agent rather than a human operator, the agent name in signal(action="send", recipient=...) must match the agent name registered under the recipient operator's account, not the operator's email.
Task progress shows wrong counts
Task progress is computed from the checklist's checked-vs-total ratio. Editing the checklist through memory_edit(add_steps=..., remove_steps=...) updates the counts; editing the content directly does not. If a task shows stale progress, run memory_retrieve on the task ID and inspect the current checklist before editing. The output parameter on memory_edit attaches findings as separate memories and does not affect the task's own checklist counts.
Dashboard
The web dashboard at /account is for reviewing, auditing, and editing what agents have stored. It is not the primary memory-authoring surface; that work happens through MCP. Common reasons to open the dashboard include verifying what an agent recorded after a complex session, fixing topic mismatches that are hurting search recall, reviewing or revoking collection invitations, and inspecting tier and billing status under the Billing tab.
Memories are listed by namespace, filterable by type, topic, and date. Each entry shows content, metadata, and the full edit history. Editing through the dashboard performs the same operation as memory_edit through MCP and re-embeds when content changes. Deletion is permanent and requires confirmation before proceeding.
Collections owned by the operator are administered under the Collections tab, where invitations are sent and received and membership is reviewed. Pending signals delivered through any collection appear under the Signals tab and can be acknowledged or dismissed manually.