Claude Managed Agents launched in public beta on April 8th at $0.08 per session-hour. That sounds cheap until you do the math: a 24-agent fleet running 8-hour daily tasks costs $15.36/day in session fees alone, before a single token of inference. But the session fee isn't the real cost. The real cost is what happens to your agent's memory when you want to leave.
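That math, spelled out (the fleet size, hours, and per-hour rate are the figures quoted above):

```python
# Session-fee arithmetic for a managed-agent fleet, using the figures above.
RATE_PER_SESSION_HOUR = 0.08   # $ per session-hour, public-beta price
AGENTS = 24                    # fleet size
HOURS_PER_DAY = 8              # daily task duration per agent

daily_session_fees = AGENTS * HOURS_PER_DAY * RATE_PER_SESSION_HOUR
print(f"${daily_session_fees:.2f}/day")        # $15.36/day, before any inference tokens
monthly_session_fees = daily_session_fees * 30
print(f"${monthly_session_fees:.2f}/month")    # $460.80/month in session fees alone
```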
Every managed agent service stores your agent's accumulated knowledge — conversation history, learned preferences, entity relationships, extracted facts — inside their infrastructure, in their format, behind their API. When your sales agent has spent three months learning your customer base, that knowledge isn't sitting in a Postgres table you can export. It's compressed into proprietary representations that only work inside the harness that created them.
This is database vendor lock-in all over again, except the data is harder to recreate.
What This Means
The agent memory market has fragmented into four distinct storage architectures, each with a different lock-in profile.
Vector-based systems like Mem0 store memories as compressed semantic representations. Their free tier gives you 10K memories and 1K retrievals per month, enough to get hooked. The graph memory feature that makes cross-conversation entity linking actually useful is paywalled at $19/month. Your memories live in Mem0's proprietary compressed format, and exporting requires API calls with no guarantee of a standardized data format.
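The defensive move is to pull your own copy out before you depend on the service. The sketch below is not Mem0's actual SDK; `fetch_page` is a hypothetical stand-in for whatever paginated retrieval call a vendor exposes. The pattern applies to any hosted memory API:

```python
# Hypothetical export loop: page through whatever the vendor API returns and
# persist it as plain JSON you control, before the service becomes load-bearing.
import json

def export_memories(fetch_page, path):
    """fetch_page(cursor) -> (list_of_memory_dicts, next_cursor_or_None)."""
    memories, cursor = [], None
    while True:
        page, cursor = fetch_page(cursor)
        memories.extend(page)
        if cursor is None:
            break
    with open(path, "w") as f:
        json.dump(memories, f, indent=2)   # your copy, in a format you control
    return len(memories)
```

Run this on a schedule and the vendor's export guarantees stop mattering: the worst case is losing the memories written since the last dump.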
Temporal knowledge graphs like Zep build evolving entity-relationship models where facts invalidate each other as data changes. Zep uses a credit system where objects over 350 bytes consume multiple credits — pricing opacity that creates stickiness once you're integrated. The custom entity types you define become domain-specific to Zep's graph structure. Migrating means rebuilding your entire entity model from scratch.
Library-based approaches like LangMem store everything in your own Postgres. You own the data. But the trade-off is severe framework coupling — LangMem is effectively LangGraph-only, the documentation is thin enough that you end up reading source code, and you're betting on LangGraph's internal APIs remaining stable.
Framework-agnostic APIs like MemoClaw avoid lock-in on the integration side but create it on the embedding side — they use OpenAI embeddings exclusively. Switching embedding providers means rebuilding every stored vector. Their 8K character limit per memory also constrains what you can store.
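One mitigation, whatever the provider: store the raw text alongside every vector, so a provider switch becomes a batch re-embedding job rather than data loss. A minimal sketch, where `embed` is a hypothetical stand-in for any embedding provider's call:

```python
# If you stored only vectors, switching embedding providers loses everything;
# if you stored the source text too, the switch is just this loop.

def reembed_all(store, embed):
    """store: list of {'text': ..., 'vector': ...} records; embed: text -> vector."""
    for record in store:
        record["vector"] = embed(record["text"])   # rebuild under the new provider
    return store
```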
None of these is a clean win. But at least with self-hosted options, you can see the lock-in coming.
The Bigger Picture
Claude Managed Agents' memory system is still in research preview — you need separate access to even try it. Multi-agent coordination and self-evaluation, the features that would make the $0.08/hour worthwhile for complex workloads, are also behind a research access gate. You're paying production prices for beta infrastructure.
The open alternatives are responding to this directly. LangChain's Deep Agents Deploy stores agent memory in standard formats — AGENTS.md files, skills, and structured data you can query through documented API endpoints. Their pitch is blunt: "by choosing an open harness you are choosing to own your memory, and not have it be locked into a proprietary harness or tied to a single model."
Other projects like Multica take a different approach entirely — Go backend, PostgreSQL 17, task lifecycle management with Kanban boards and skill sharing across agents. No container-level sandboxing, which limits security isolation, but also no memory lock-in because everything sits in Postgres tables you control.
Cabinet goes even further: file-based knowledge, Git-backed history, persistent memory through cron-scheduled tasks. Your agent's entire brain is a directory of files you can git clone. The constraint is that your machine needs to stay online — there's no hosted option.
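The file-plus-Git pattern is simple enough to approximate in a few lines. The layout below is an illustrative assumption rather than Cabinet's actual schema; the point is that "agent memory" can be an append-only directory with version history:

```python
# Append a note to a per-topic memory file and commit it, so every memory
# write has history and the whole "brain" is git-clonable. The memory/
# directory layout is an assumption, not any specific project's format.
import datetime
import pathlib
import subprocess

def remember(repo_dir, topic, note):
    repo = pathlib.Path(repo_dir)
    (repo / "memory").mkdir(parents=True, exist_ok=True)
    path = repo / "memory" / f"{topic}.md"
    stamp = datetime.date.today().isoformat()
    with path.open("a") as f:
        f.write(f"- {stamp}: {note}\n")
    subprocess.run(["git", "-C", str(repo), "add", "memory"], check=True)
    subprocess.run(["git", "-C", str(repo), "commit", "-q", "-m", f"memory: {topic}"],
                   check=True)
```

With this shape, "what did my agent learn last week" is a `git log`, and migrating to a different runtime is a clone.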
The pattern across all of these: memory portability is the feature, not the afterthought. Managed services treat memory as an implementation detail of the platform. Open runtimes treat it as data you own.
What Happens Next
The memory portability problem will force standardization within 12 months. Right now, every framework stores agent memory differently — vectors in proprietary formats, temporal graphs with custom schemas, compressed representations behind paywalled APIs. This fragmentation is unsustainable once companies start running multi-model agent fleets.
MCP, now donated to the Linux Foundation's Agentic AI Foundation, is the most likely vehicle for standardization. It already defines how agents connect to external tools. A memory layer spec would be the natural extension — standard formats for storing and retrieving agent knowledge that work across any harness.
Until that happens, the safest bet is to keep agent memory in infrastructure you control. Postgres, flat files, Git-backed storage — boring technologies that don't disappear when a startup pivots. The agent runtime is a replaceable commodity. The memory your agents accumulate is not.
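To make the boring-technology point concrete: a minimal portable memory store, with SQLite standing in for Postgres. The table and column names are illustrative assumptions, not any framework's schema. Every memory is a plain row you can query, export, or migrate with no vendor in the loop:

```python
# A vendor-free memory store: plain rows in SQLite (the same schema ports
# to Postgres unchanged). Names here are illustrative, not a standard.
import json
import sqlite3

def open_store(path):
    db = sqlite3.connect(path)
    db.execute("""CREATE TABLE IF NOT EXISTS memories (
        id INTEGER PRIMARY KEY,
        agent TEXT NOT NULL,
        kind TEXT NOT NULL,          -- 'fact', 'preference', 'entity', ...
        body TEXT NOT NULL,          -- the memory itself, as plain text
        created_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
    return db

def remember(db, agent, kind, body):
    db.execute("INSERT INTO memories (agent, kind, body) VALUES (?, ?, ?)",
               (agent, kind, body))
    db.commit()

def export_json(db, agent):
    rows = db.execute("SELECT kind, body, created_at FROM memories WHERE agent = ?",
                      (agent,)).fetchall()
    return json.dumps([{"kind": k, "body": b, "created_at": t} for k, b, t in rows])
```

Nothing here disappears when a startup pivots; the export path is one SELECT statement.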
Final Thoughts
The managed agent services will compete on convenience, security, and the performance optimizations that come from owning the full stack — Anthropic claims a 10-point improvement on structured tasks compared to raw prompting. That's a real advantage. But it's an advantage you rent, not own.
The developers who separate their agent's intelligence (the model) from their agent's knowledge (the memory) from their agent's infrastructure (the runtime) will be the ones who can actually switch providers when the economics shift. And the economics always shift.