Cosmo on Claude Code

Same brain, different surface. Sat 25 Apr 2026.

The question. The Telegram-facing Cosmo agent reads ~/cosmo-memory/ on every turn. When you're talking to Claude Code in the terminal, should it have the same memory, the same way? If yes, how, exactly, without slowing every keystroke?
Contents
  1. What "parity" actually means
  2. The two surfaces are different by nature
  3. The three real options
  4. The decision and why
  5. What's shipped right now
  6. What's deferred and when
  7. How to test it

1. What "parity" actually means

Two things should hold true whether you reach Cosmo via Telegram or via Claude Code in the terminal:

  1. It knows you. The same persistent context — where you live, current training phase, active projects, etc. — is available without you having to re-explain.
  2. It learns from you. When you mention something worth remembering, it gets captured for next time. Same way, regardless of surface.

What parity does not mean: identical implementation. The two surfaces have different ergonomics, different latency expectations, different tool access. Forcing them to use the same plumbing would make at least one of them worse.

2. The two surfaces are different by nature

| | Telegram → Cosmo | Claude Code (terminal) |
| --- | --- | --- |
| You can | send a message; that's it | send a message AND directly tell Claude what to read |
| The agent can | only what's pre-loaded into its prompt | call Read, Grep, Bash on the actual filesystem |
| Latency budget | ~30 sec — you sent a message and walked off | ~3 sec feels slow — you're typing live |
| Pre-loading memory | essential — agent has no other way to know things | nice but not essential — Claude can Read what it needs |
| Per-message router call | worth it — only chance to inject context | wasteful — would burn 2-3 sec per turn for context Claude can fetch on demand |

This is the key insight. The Telegram side has a router because the agent is otherwise blind. Claude Code is not blind. Claude Code can grep, read, find. Pre-loading routed topics every turn would just be redundant work.

3. The three real options

Claude Code has four mechanisms that could be used to inject memory. Verified against the docs this morning:

| Mechanism | What it does | Latency cost | Right tool here? |
| --- | --- | --- | --- |
| SessionStart hook | Runs a script at session start. Output as JSON with an additionalContext field gets injected as a system message. | ~1× per session | yes — for proactive items |
| UserPromptSubmit hook | Runs before each message. Can also output additionalContext. | 2-3 sec per message if it calls a router | no — too much per-keystroke pain |
| CLAUDE.md @import | @~/path/file.md in CLAUDE.md inlines that file at session start. Resolved before Claude reads it. | 0 — happens at startup | yes — for the always-loaded part |
| Skills | Markdown files Claude triggers on relevance. Not always-resident. | 0 unless triggered | no — we want always-resident, not on-demand |

4. The decision and why

For now: CLAUDE.md @import only. Future, once the new system has interesting things to surface daily: SessionStart hook on top.

The CLAUDE.md @import handles the always-loaded part with zero per-message latency. Two lines added to the cosmo CLAUDE.md:

@~/cosmo-memory/INDEX.md

@~/cosmo-memory/SCHEMA.md

Claude Code resolves these at session start, so when you open a new session in ~/cosmo, those two files are already in context. INDEX tells Claude what topic files exist; SCHEMA tells Claude the rules for editing them. About 7KB combined. Loaded once. Free.

For specific topic content — say you ask Claude something about Phase 9 training — Claude can Read ~/cosmo-memory/topics/running-phase-9.md directly, since the INDEX told it the file exists and the SCHEMA told it where it lives. This is actually better than the Telegram side's router, because Claude reads the exact file it needs, not 1-3 best-guesses pre-loaded blindly.

For writing — when something worth remembering comes up in conversation — there's a soft instruction in CLAUDE.md telling Claude to append it to episodes/YYYY-MM/<today>.md. Same write path as the Telegram extractor. Dream pass picks it up the next morning, promotes to a topic file.
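The shared write path can be sketched as a small helper. This is an illustration of the episodes/YYYY-MM/&lt;today&gt;.md convention described above, not the actual extractor code; the `append_episode` name and the exact line format are assumptions.

```python
from datetime import date
from pathlib import Path


def append_episode(memory_root: Path, note: str) -> Path:
    """Append a date-stamped line to today's episode file (the shared write path)."""
    today = date.today()
    episode = memory_root / "episodes" / f"{today:%Y-%m}" / f"{today:%Y-%m-%d}.md"
    episode.parent.mkdir(parents=True, exist_ok=True)  # first write of the month
    with episode.open("a") as f:
        f.write(f"- {today:%Y-%m-%d}: {note}\n")
    return episode
```

Appending rather than overwriting means both surfaces (and the Telegram extractor) can write to the same daily file without coordination; the dream pass sorts it out later.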

What we're not doing: a per-message router. The math is simple — at 2-3 sec per message, a 100-message Claude Code session would spend 3-5 minutes waiting on router calls. The router exists on the Telegram side because the agent has no other way to find context. Claude Code doesn't have that constraint.

5. What's shipped right now

Committed today on the memory-v2 branch as "step 4 partial":

  1. Two @import lines in the cosmo CLAUDE.md, pulling in ~/cosmo-memory/INDEX.md and ~/cosmo-memory/SCHEMA.md at session start.
  2. A soft instruction in the same CLAUDE.md telling Claude to append anything worth remembering to episodes/YYYY-MM/<today>.md.

That's it. Two lines and a soft instruction. The next time you open a new Claude Code session in ~/cosmo, it should know where its memory lives and how to use it.

6. What's deferred and when

Things that need building later, on the Claude Code side:

| Capability | Why deferred | When |
| --- | --- | --- |
| SessionStart hook injecting today's morning brief | Inbox doesn't exist yet (build step 7). Nothing to inject. | After step 7 |
| SessionStart hook injecting active task summaries | Tasks dir doesn't exist yet (build step 5). | After step 5 |
| Stop hook hardening the write path | Soft instruction is good enough for now. Could harden if Claude Code skips it consistently. | If/when it proves needed |
| Shared dashboard | Lives outside both surfaces — see memory-v2 plan step 1.5. | Next, after Telegram smoke test passes |

7. How to test it

From any new Claude Code session in ~/cosmo:

What's in your cosmo-memory? Describe the structure.

If parity is working, Claude should describe the directory layout (topics, plans, tasks, inbox, episodes) and the SCHEMA rules without needing to run any tools. It already loaded both files via @import at session start.

To test the write path:

Just so you know, my Phase 9 training switches to gym focus from Mon 28 Apr 2026.
Append that to episodes.

Claude should write to ~/cosmo-memory/episodes/2026-04/2026-04-25.md with a date-stamped line per the SCHEMA rule. The dream pass (build step 3) will eventually promote that into topics/running-phase-9.md on its next run.
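For reference, the appended line would look something like this — an illustrative guess at the date-stamped format, since the exact rule lives in SCHEMA.md:

```markdown
- 2026-04-25: Phase 9 training switches to gym focus from Mon 28 Apr 2026.
```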

Why this doc exists. The full memory-v2 build plan is at cosmo-memory-v2.pages.dev. This page exists because the Claude Code parity question came up mid-build and deserved its own short writeup — easier to listen to in the car than to scroll through terminal output.

Sat 25 Apr 2026. Doc lives at plans/claude-code-parity.html.
