Every Claude Code and Copilot session starts cold, burning tokens to re-learn your work. Hydrate captures project context automatically and injects it into the next session - in Claude Code, VS Code Copilot, Mistral Vibe, or any MCP client.
"Start a session in Claude. Continue it in Copilot. Your context comes with you."
Models are not the problem. Amnesia is. With Hydrate, the tokens a session would burn re-discovering your project are freed for code, reasoning, and actual work.
Without Hydrate:

$ claude "add a dark mode toggle"
Claude: What should the default colour scheme be?
Claude: What do you mean by "toggle" - a button? a switch?
Claude: Which pages should it affect?
[ nothing written ]

With Hydrate:

$ claude "add a dark mode toggle"
Claude: edits src/styles/globals.css
Claude: edits src/components/Nav.tsx
Claude: Done. Toggle wired to prefers-color-scheme. Persists in localStorage. Icon updates on change.
Hydrate runs entirely on your machine. No daemon. No cloud dependency. A local server, a local SQLite store, and four lightweight hook binaries.
Hydrate installs four Claude Code hooks: UserPromptSubmit, Stop, PreToolUse, and PostToolUse. They capture each session's decisions, tool calls, and output automatically. You never invoke them by hand.
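In Claude Code, hooks are registered in settings.json. A sketch of what two of the four entries might look like - the binary names hydrate-inject and hydrate-extract are placeholders for illustration, not Hydrate's real ones, and the installer writes the actual entries for you:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      { "hooks": [{ "type": "command", "command": "hydrate-inject" }] }
    ],
    "Stop": [
      { "hooks": [{ "type": "command", "command": "hydrate-extract" }] }
    ]
  }
}
```

This is also why uninstalling is trivial: removing these entries restores stock Claude Code behaviour.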
Between sessions, an extractor pulls concrete, scoped facts from the transcript - "pricing tiers are Free/Pro/Team", "auth uses Bearer via TASK_API_TOKEN", "errors follow RFC 7807" - and writes them to a local SQLite store scoped to the project. Nothing leaves your machine on the free tier.
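A minimal sketch of such a project-scoped fact store, using Python's sqlite3 with an in-memory database. The table layout here is an assumption for illustration, not Hydrate's actual schema:

```python
import sqlite3

# Illustrative fact store: the real schema is Hydrate's own;
# table and column names here are assumptions.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE facts (
        id      INTEGER PRIMARY KEY,
        project TEXT NOT NULL,   -- facts are scoped per project
        fact    TEXT NOT NULL,   -- one concrete, scoped statement
        source  TEXT             -- e.g. the session that produced it
    )
""")
facts = [
    ("task-api", "pricing tiers are Free/Pro/Team", "session-014"),
    ("task-api", "auth uses Bearer via TASK_API_TOKEN", "session-014"),
    ("task-api", "errors follow RFC 7807", "session-015"),
]
db.executemany(
    "INSERT INTO facts (project, fact, source) VALUES (?, ?, ?)", facts
)

# A later session recalls only facts scoped to the current project.
rows = db.execute(
    "SELECT fact FROM facts WHERE project = ?", ("task-api",)
).fetchall()
print([r[0] for r in rows])
```

Because the store is a single local SQLite file, every tool that can read it shares the same memory.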
On the next session - in any supported tool - the most relevant facts are injected into the context before the model sees your prompt. A 6,000-token CLAUDE.md re-read becomes 200 tokens of targeted facts. The model starts loaded, not blank.
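The ranking itself is Hydrate's; as a toy illustration of what "most relevant facts" means, here is a naive keyword-overlap scorer over the example facts above:

```python
# Purely illustrative relevance scoring - Hydrate's real ranking is its own.
def score(prompt: str, fact: str) -> int:
    prompt_words = set(prompt.lower().split())
    return sum(1 for w in fact.lower().split() if w in prompt_words)

facts = [
    "pricing tiers are Free/Pro/Team",
    "auth uses Bearer via TASK_API_TOKEN",
    "errors follow RFC 7807",
]
prompt = "return an RFC 7807 error when auth fails"

# Inject only the best-matching facts, not the whole store.
ranked = sorted(facts, key=lambda f: score(prompt, f), reverse=True)
print(ranked[0])  # prints: errors follow RFC 7807
```

Injecting a handful of ranked facts instead of a whole CLAUDE.md is where the 6,000-to-200 token reduction comes from.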
Whichever path your session takes, it reads from and writes to the same Hydrate fact store. A decision captured in Claude Code is already there when you open Copilot.
Native hooks - zero configuration. Works on day one with your existing Claude subscription. The UserPromptSubmit hook injects facts; the Stop hook extracts them.
VS Code extension with @hydrate chat participant plus three Language Model Tools Copilot auto-invokes. 86-95% measured token reduction across ten scenarios.
MCP server wired into Vibe via hydrate vibe install. Receives Claude Code and Copilot project memory on the first tool call. Contributes facts back to the shared store.
One binary (hydrate-mcp) speaks MCP over stdio. Every MCP-capable client - Cursor, Cline, Zed, Gemini CLI, Claude Desktop - gets hydrate_recall and hydrate_save_fact.
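MCP clients speak JSON-RPC 2.0 over stdio, so invoking hydrate_recall is a single tools/call request. The "query" argument name below is an assumption for illustration; the real parameter schema is whatever hydrate-mcp advertises via tools/list:

```python
import json

# Shape of an MCP tool invocation on the wire (JSON-RPC 2.0 over stdio).
# The argument name "query" is assumed, not confirmed from hydrate-mcp.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "hydrate_recall",
        "arguments": {"query": "auth conventions"},
    },
}
print(json.dumps(request))
```

Any client that can emit this request - Cursor, Cline, Zed, and the rest - gets the same shared memory with no per-client integration work.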
Developers can burn 40% of a daily usage window re-explaining the project to the model at the start of each session. Hydrate gives that budget back - and it unlocks a second saving: separating design from implementation.
Use an expensive model (Opus, Sonnet) to architect a feature and capture the plan as facts. Then execute with Haiku or a local LLM: even Haiku ships correctly when it starts loaded with Sonnet's architectural intent.
The benchmark result: a Hybrid workflow (Sonnet for design, Haiku for execution) ships 7/7 sessions at $0.20 per session - 12% of the cost of running Opus throughout, with identical output quality.
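The arithmetic behind that 12% figure:

```python
# $0.20 per session at 12% of Opus-throughout implies
# Opus costs roughly $1.67 per shipped session.
hybrid = 0.20
opus = hybrid / 0.12
print(round(opus, 2))  # prints: 1.67
```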
Haiku reliability (multi-session scenario, with Hydrate): 2 → 7/7 sessions shipping
Opus cost reduction (simple scenario, skips re-exploration): -57%, identical output
Copilot token reduction (10-scenario protocol, VS Code extension): 86-95% tokens saved
Hybrid workflow (Sonnet design + Haiku execution): $0.20 per shipped session
$67 of real API spend. Every number is reproducible. Full methodology at gethydrate.dev/benchmarks →
We ran a full cross-vendor memory round trip. Alice and Bob built a Go API in Claude Code. Carol joined on Mistral Vibe and started her first tool call with their full project memory already loaded.
Alice (Claude Code)
Builds a REST API. Hydrate captures 9 conventions and preferences into the shared store.
Carol (Mistral Vibe)
Opens Vibe on a fresh machine. First hydrate_recall call returns Alice's full project context. Adds a logging convention.
Alice (Claude Code)
Returns to Claude Code. Carol's logging convention is already in Alice's injected context. Full round trip: two vendors, one shared store.
Register during beta and your Pro rate is locked at $5/mo forever, even if retail launches higher.
Free
Pro ($9/mo at retail)
Team (post-beta)
Enterprise
Hydrate is scaffolding - and it knows it. The hook binaries are lightweight. They write to a local SQLite store. There is no daemon, no shared state, no lock file, no cloud dependency on the free tier.
If Anthropic ships native cross-session memory tomorrow, the hooks become no-ops. Delete the binaries. Remove a few lines from your settings.json. Claude Code works exactly as it did before.
What you keep is the team knowledge: conventions, preferences, architectural decisions. Those are stored as plain text in a local SQLite file (and optionally as git commits on the Team tier). They are yours regardless of what Hydrate does next.
Local SQLite store
Facts live in ~/.hydrate/. Nothing leaves your machine on Free or Pro tier.
Local server
Dashboard and API run at localhost:49849 by default. Check ~/.hydrate/server.port if yours differs.
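A sketch of the port-discovery logic a script might use, assuming server.port holds a bare port number:

```python
from pathlib import Path

# Default to 49849; override from ~/.hydrate/server.port when present.
# (File format assumed to be a bare port number.)
def server_port(port_file: Path = Path.home() / ".hydrate" / "server.port") -> int:
    try:
        return int(port_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return 49849

print(f"http://localhost:{server_port()}")
```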
Team sync via git
On the Team tier, facts are stored as git commits. Full history, no server required. Any git remote works.
Works with your existing subscription
No separate API key for the memory layer. Day one on whatever Claude or Copilot plan you already have.
The beta rolls out in groups of 50 on a first-come-first-served basis. Register before 6 May and your future Pro rate locks at $5/mo forever.
Press, researchers, and newsletter writers covering the launch can skip the queue via early access - see gethydrate.dev.