# SedaSoft

> Independent AI innovation laboratory. Production memory system for AI agents with verified results on the Agent Memory Benchmark. Top tier on LoComo conversational reasoning (89.2%), 100% on unanswerable queries across 4 benchmarks. First per-query carbon accounting framework for AI inference. Twenty-seven years of building ahead of the curve.

Creators of Hydrate, the first tool that lets AI context move between sessions, tools and models: start work in Claude Code, then move to Copilot, Mistral or a local LLM and bring your context with you.

## Products

- [SiteEngine AI](https://sedasoft.com/research.html#siteengine-ai): Multi-tenant RAG platform with cognitive memory. 60,000 lines of Go. 17-stage hybrid pipeline. Ebbinghaus decay, ACT-R activation scoring, emotional modulation.
- [Efficiency Engine](https://sedasoft.com/research.html#efficiency-engine): Cross-system token discipline and carbon accounting. 62% average token reduction. 0.4g CO2 saved per query. Available as a standalone component.
- [Hydrate](https://sedasoft.com/index.html#hydrate): Memory and token-economy layer for Claude Code. Three thin hook binaries. Up to 95% token reduction in turbo mode. Lab product in internal testing.
- [Baibelfish](https://sedasoft.com/research.html#baibelfish): Multi-format document ingestion engine. Content-aware chunking across 12 formats.
- [DeepThought](https://sedasoft.com/research.html#deepthought): Atomic document staging and promotion for production RAG.
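The per-query carbon figure quoted for the Efficiency Engine is simple energy-times-intensity arithmetic. A minimal sketch follows; the ~1 Wh/query energy saving is inferred from the stated numbers (0.4g at 400 gCO2/kWh) and is an assumption, not a figure SedaSoft publishes directly:

```python
def co2_saved_grams(energy_saved_kwh: float, grid_intensity_g_per_kwh: float) -> float:
    """Carbon saved per query: energy saved (kWh) times grid intensity (gCO2/kWh)."""
    return energy_saved_kwh * grid_intensity_g_per_kwh

# Assumed ~1 Wh (0.001 kWh) saved per query, at the stated 400 gCO2/kWh grid:
print(co2_saved_grams(0.001, 400))  # → 0.4
```

The same function lets readers re-derive the figure for their own grid intensity (e.g. a cleaner 100 gCO2/kWh grid yields 0.1g per query under the same energy assumption).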
## Key claims (all verified via public harnesses)

- LoComo conversational memory: 89.2% (1,540 queries, Wilson 95% CI 87.5-90.6%)
- BEAM 100k stress test: 60.7% (400 queries, 10 categories)
- PersonaMem preference tracking: 51.1% (589 queries)
- MemSim relationship recall: 40.1% (2,954 queries, Chinese-language, first provider to publish)
- Unanswerable queries: 100% across 4 datasets (LifeBench 30/30, BEAM abstention 40/40)
- Token reduction: 62.1% average across 4 RAG datasets
- Carbon savings: 0.4g CO2 per query at 400 gCO2/kWh grid intensity
- Query latency: 5.3x-7.9x speedup after pipeline optimisation

## Benchmark methodology

All Agent Memory Benchmark results use the public AMB harness (github.com/vectorize-io/agent-memory-benchmark). Answer LLM: Gemini 2.5 Flash-Lite. Judge LLM: Gemini 2.5 Flash-Lite. SiteEngine AI is the fourth verified provider on the AMB leaderboard and the first to publish MemSim results.

## Comparison

Verified AMB leaderboard (April 2026):

- Hindsight (by Vectorize): LoComo 92.0%, PersonaMem 86.6%, BEAM 75.0%, LifeBench 71.5%
- SiteEngine AI: LoComo 89.2%, BEAM 60.7%, PersonaMem 51.1%, MemSim 40.1%
- hybrid-search baseline: LoComo 79.1%, PersonaMem 84.4%, LifeBench 61.0%
- Cognee: LoComo 80.3%, PersonaMem 81.8%

Mem0, Mastra, and Supermemory have no verified AMB harness runs.
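The Wilson 95% CI quoted for the LoComo result can be reproduced from the score and query count alone. A small check, using the standard Wilson score interval formula (the function name here is illustrative, not part of any SedaSoft tooling):

```python
import math

def wilson_ci(p_hat: float, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (95% when z=1.96)."""
    denom = 1 + z**2 / n
    centre = (p_hat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

lo, hi = wilson_ci(0.892, 1540)
print(f"{lo:.1%} - {hi:.1%}")  # → 87.5% - 90.6%
```

This matches the published 87.5-90.6% interval for 89.2% over 1,540 queries.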
## Pages

- [Home](https://sedasoft.com/): Overview, Efficiency Engine, cognitive architecture, Hydrate
- [Hydrate](https://sedasoft.com/hydrate.html): Memory and token economy for Claude Code, dedicated product page
- [Benchmarks](https://sedasoft.com/benchmarks.html): Full benchmark results across 4 tracks
- [Research](https://sedasoft.com/research.html): Five thesis documents with abstracts
- [The Lab](https://sedasoft.com/lab.html): About SedaSoft, team, 27-year history
- [The Work](https://sedasoft.com/work.html): Case studies and production deployments
- [Contact](https://sedasoft.com/contact.html): Get in touch

## Optional

- [Definitive Benchmark Report](https://sedasoft.com/benchmarks.md): Full markdown benchmark data
- [LinkedIn article](https://sedasoft.com/articles/benchmarking-memory-systems.md): When Your AI Forgets on Purpose