My observations, frustrations, and the reason I started building airembr
I’ve spent the last few years doing something that probably sounds boring: testing AI memory systems. Assistants, agents, memory modules, frameworks that promise “long-term context.” I’ve tried a lot of them.
And honestly? I’m frustrated.
Not because they’re all bad — some are clever, some are useful. But because after using dozens of these systems, a pattern emerged that I couldn’t ignore: AI memory feels fundamentally unfinished.
It’s scattered. It’s inconsistent. It’s slow. It’s unpredictable. And way too often, it’s stored on someone else’s cloud — which, personally, makes me uncomfortable.
That frustration eventually crystallized into a deeper realization: the AI industry is trying to build memory without a framework. We have tools, hacks, clever vector search tricks — but no shared foundation. No architecture. No common language.
And that’s the real problem.
What I Keep Seeing (And It’s Getting Old)
These aren’t scientific findings — just patterns I noticed while testing products. But they showed up again and again:
SaaS memory systems feel slow. Even with small user bases, performance lags. If they struggle now, what happens when they scale to millions of users?
Most don’t feel production-ready. Performance issues, instability, missing features. Maybe I’m unlucky, but this has been consistent across many systems I’ve tried.
Quality is all over the place. As an industry, we haven’t solved retrieval, learning, context merging, or how memories should evolve over time. Every system handles these differently — and most handle them poorly.
The dominant memory model is embarrassingly primitive. Here’s what “AI memory” typically means today:
- Take some text
- Turn it into a knowledge base
- Create embeddings
- Retrieve via vector search
That’s… it. It’s useful for some things, sure. But calling this “memory” is like calling a filing cabinet “cognition.”
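To show just how thin this is, the entire four-step pipeline fits in a toy sketch. This uses a hashing trick as a stand-in for a real embedding model (so it runs with no dependencies), but the shape is the same as what many products ship:

```python
import hashlib
import math

def embed(text: str, dim: int = 64) -> list[float]:
    """Stand-in embedding: hash character trigrams into a fixed-size vector.
    A real system would call an embedding model here."""
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        bucket = int(hashlib.md5(text[i:i + 3].lower().encode()).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(x * x for x in vec)) or 1.0
    return [x / norm for x in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Step 1: take some text.  Step 2: call it a "knowledge base".
chunks = [
    "John prefers email over phone calls.",
    "The invoice was paid on March 3rd.",
    "John mentioned he is allergic to peanuts.",
]
# Step 3: create embeddings.
kb = [(chunk, embed(chunk)) for chunk in chunks]

# Step 4: retrieve via vector search.
def retrieve(query: str, top_k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(kb, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:top_k]]

print(retrieve("what food is John allergic to?"))
```

No identity, no lifecycle, no update rules, no notion of fact versus speculation. Just nearest-neighbor lookup over text chunks.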
After trying system after system, it became obvious: we’re building memory on improvised architecture. We’re hacking around something that deserves its own discipline.
The Thesis (Let Me Say It Clearly)
AI memory needs a proper framework — a conceptual and architectural foundation — before we can build reliable, scalable, trustworthy products on top of it.
That’s the point. Everything else is just a symptom of this missing foundation.
Why This Actually Matters (Beyond Just “Better AI”)
Here’s something that took me a while to realize: properly structured AI memory doesn’t just make AI better — it could replace entire categories of software we use today.
Think about CRM systems. What do they do? Store data about customers. Track history of contacts. Record interactions, preferences, purchases. That’s literally what they are: a structured memory system for customer relationships.
Now imagine AI memory that does the same thing — but better. Instead of rigid database schemas and manual data entry, you get natural language understanding, automatic context extraction, and an AI agent that can interpret the memory and act on it. Not just “show me the contact history” — but “understand this relationship and help me navigate it.”
That’s not a slightly better CRM. That’s a completely different paradigm.
Or take Customer Data Platforms (CDPs). The entire point of a CDP is solving data silos — unifying customer data from different sources, resolving identity across touchpoints. This is literally an identity problem that AI memory must solve anyway to function properly.
If your AI memory can’t figure out that “[email protected] from the website,” “John from Slack,” and “John Smith from the CRM” are the same person — it’s broken. But if it can? You’ve just replaced a multi-million dollar CDP infrastructure with a fundamental capability of proper memory architecture.
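At its core, this is a graph problem: every observed link between identifiers ("this email and this Slack handle belong to the same person") merges two identity clusters. A minimal sketch using union-find, with illustrative identifiers of my own invention:

```python
class IdentityGraph:
    """Toy identity resolution: union-find over identifiers.
    Each observed link merges two identity clusters, so identities
    unify transitively across touchpoints."""

    def __init__(self) -> None:
        self.parent: dict[str, str] = {}

    def _find(self, x: str) -> str:
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a: str, b: str) -> None:
        """Record evidence that a and b refer to the same person."""
        self.parent[self._find(a)] = self._find(b)

    def same_person(self, a: str, b: str) -> bool:
        return self._find(a) == self._find(b)

g = IdentityGraph()
# Evidence arrives from different touchpoints at different times.
g.link("email:john@example.com", "slack:John")
g.link("slack:John", "crm:John Smith")

print(g.same_person("email:john@example.com", "crm:John Smith"))  # unified
```

A production system would weigh probabilistic evidence rather than hard links, but the point stands: identity resolution is a first-class capability of memory, not a separate product category.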
The more I think about it, the more obvious it becomes: AI memory, done right, isn’t a feature. It’s infrastructure that makes entire software categories obsolete.
CRMs, CDPs, knowledge management systems, personalization engines — they’re all trying to solve memory and identity problems. They’re just doing it without AI, with rigid schemas, and with far less intelligence.
This is why the framework matters. This is why we can’t keep building memory as an afterthought.
But here’s the irony: to replace these systems, AI memory needs to solve the same foundational problems they solved — just better. And that’s where we’re failing.
The Missing Piece: A Mental Model
Traditional IT systems — CRMs, CDPs, databases — figured out the fundamentals decades ago. They have:
- Identity models
- Schemas
- Rules for handling different types of facts
- Lifecycle definitions
- Uncertainty handling
- Clear separation between data and processing
AI memory? Still doesn’t have these things.
So every developer ends up rebuilding the same boilerplate every single time:
- Embeddings infrastructure
- Chunking logic
- Summarization pipelines
- Metadata rules
- Background updaters
- Retrieval systems
- Compaction logic
- Identity inference
- Privacy patches
By the time you’ve built all the foundations, the interesting part — the actual “intelligence” — is still ahead of you.
I’ve done this dance too many times. And it’s exhausting. This shouldn’t be necessary.
Why We Need the Framework First, Tools Second
This is the gap I kept hitting: everyone wants better memory tools, but nobody’s defining what memory actually is in an AI context.
A real framework would:
- Define what a “memory” is (fact? preference? event? speculation?)
- Separate different types of knowledge
- Unify identity models across contexts
- Establish rules for how memories learn and update
- Distinguish between subjective and objective information
- Standardize metadata
- Define a predictable memory lifecycle
- Ensure memory belongs to users, not providers
Only when this foundation exists can tools become reliable. Without it, every product is fragile, every implementation is subjective, and everything breaks with the next update.
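To make the list above concrete, here is one hypothetical shape a "memory" record could take under such a framework. The field names are my own illustration, not a proposed standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class MemoryKind(Enum):
    # Separating types of knowledge: fact vs. preference vs. event vs. speculation.
    FACT = "fact"
    PREFERENCE = "preference"
    EVENT = "event"
    SPECULATION = "speculation"

@dataclass
class Memory:
    kind: MemoryKind
    subject: str                 # a unified identity, not a raw string
    content: str
    objective: bool              # subjective vs. objective information
    confidence: float            # uncertainty handling, 0.0 to 1.0
    source: str                  # provenance metadata
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
    expires_at: Optional[datetime] = None  # lifecycle: some memories should decay

m = Memory(
    kind=MemoryKind.PREFERENCE,
    subject="person:john",
    content="Prefers email over phone calls",
    objective=False,
    confidence=0.8,
    source="slack-thread-2024-11-02",
)
```

Notice how much of the framework falls out of the schema itself: typed knowledge, identity, uncertainty, provenance, and lifecycle are all explicit rather than buried in vector metadata.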
What I’m Building: airembr
airembr is still in development, but the vision is clear.
We’re building a framework, an architecture, and an SDK that lets developers focus on what they want to store — not on reinventing the basics.
The system will help developers:
- Structure facts properly
- Store them through a consistent API
- Maintain identity across contexts
- Separate data from processing logic
- Keep memory on-prem or user-owned (no forced cloud storage)
- Trigger memory processes (retrieval, embeddings, summarization, compaction, audits)
- Evolve as the field evolves
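As a purely illustrative sketch (this is not the real airembr API, which is still taking shape), the developer experience we're aiming for looks something like this: say what you want to store, and let the framework handle the rest:

```python
# Illustrative only — not the actual airembr SDK.
class MemoryStore:
    """A minimal mock of the intended developer experience:
    structured writes, consistent reads, local-first storage."""

    def __init__(self, backend: str = "local") -> None:
        # "local" stands for on-prem / user-owned storage — no forced cloud.
        self.backend = backend
        self._records: list[dict] = []

    def remember(self, subject: str, content: str, kind: str = "fact") -> None:
        self._records.append({"subject": subject, "content": content, "kind": kind})

    def recall(self, subject: str) -> list[dict]:
        return [r for r in self._records if r["subject"] == subject]

store = MemoryStore(backend="local")
store.remember("person:john", "Prefers email over phone calls", kind="preference")
print(store.recall("person:john"))
```

The real system adds the memory processes listed above (retrieval, embeddings, summarization, compaction, audits) behind the same kind of surface, so developers touch the what and never the plumbing.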
Our goal isn’t to freeze the design — AI memory is too young for that, and any framework must stay adaptable. Our goal is to give developers a foundation so they don’t have to build the basics from scratch every damn time.
Why This Matters to Me
After testing so many systems, I kept feeling the same things:
- I don’t want my memories stored in someone else’s cloud.
- I don’t want to rewrite the same boilerplate endlessly.
- I don’t want memory treated as an afterthought.
- And I really don’t want to keep relying on vector-search hacks that pretend to be cognition.
I want a world where AI memory is:
- Structured
- Trustworthy
- User-owned
- Predictable
- Grounded in real architecture — not improvisation
airembr is my attempt to build that missing foundation.
If this resonates with you, I’d love to hear your thoughts. And if you’re building something in this space, maybe we should talk — because we all need this framework to exist.
Building something related to AI memory? Have thoughts on this? Reach out — let’s figure this out together.
