Introducing Memory: Private Recall, Team Memory, and Governed Knowledge
Today we are announcing Memory — a memory system for knowledge work that captures, searches, and recovers the context behind your work across devices and apps. It starts with private recall, then extends into team memory and governed organizational knowledge without locking you into a proprietary vendor stack.
TL;DR
- Memory captures from sources you choose across desktop, mobile, and web, and helps you search and recover the context behind your work.
- It now spans three layers: private recall for individuals, work-scoped memory for teams, and a governed enterprise layer for reusable knowledge artifacts.
- Ask questions in plain English and get answers with citations back to source events or promoted knowledge artifacts.
- Privacy-first: local-first storage, EU-hosted infrastructure, governed sharing, and portable data that avoids vendor lock-in.
- Currently in pilot — request early access.
The Problem: Work Context Is Scattered and Easily Lost
Think about everything your team touched last week: email threads, chat messages, documents, browser research, notes, photos, voice memos, and handovers across devices. Now try to recover one specific detail — the source behind a decision, the link someone shared, or the exact wording agreed in a discussion three weeks ago.
Usually, that context is not truly gone. It is just scattered across apps, devices, and platforms, each with its own search, retention rules, and access boundaries. What individuals lose as recall, teams lose as continuity, and organizations lose as reusable knowledge.
Note-taking tools, browser history, and chat search each recover a fragment, but none of them creates a governed memory layer that spans the way modern work actually happens.
Memory: One Product, Three Layers
Memory takes a layered approach to knowledge work. Instead of forcing people to change how they work, it captures from the sources they choose and then helps them move from private context to shared, governed knowledge:
- Memory Personal — private recall for individuals, with user-controlled capture and local-first storage.
- Memory Work — workspace-scoped memory with source policies, sensitivity controls, and private-by-default raw capture.
- Memory Enterprise Layer — curated knowledge artifacts promoted through review workflows into a governed organizational layer.
This is the key shift in the product. Memory is no longer just about helping one person remember more. It is about preserving context at the right layer, keeping raw activity private where it should stay private, and turning validated insights into assets teams and organizations can actually reuse.
Portable Memory, Not Vendor Lock-In
If the idea of AI systems that remember things for you sounds familiar, it is because major LLM vendors have started adding memory features to their own products.
But there is a fundamental difference, and it matters more as memory becomes part of real work.
When a proprietary assistant remembers something about you or your team, that memory usually lives on the vendor's infrastructure, in the vendor's format, governed by the vendor's terms. You may not be able to export it meaningfully, use it with another model provider, inspect it deeply, or host it on your own infrastructure. The more useful that memory becomes, the harder it is to leave.
This is the definition of lock-in.
Memory is built on the opposite principle. Data starts in local databases on user devices, syncs through EU-hosted infrastructure designed for customer control, and remains exportable in standard formats. The knowledge base stays yours, and the AI layer remains something you point at your data rather than something that absorbs it into a closed system.
When you ask Memory a question, it can use your configured AI provider to answer, but the underlying memory and promoted knowledge remain under your control. That makes it easier to change providers, self-host, or evolve your architecture over time.
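To make the separation concrete, here is a minimal sketch of that architecture. Everything in it is illustrative, not Memory's actual API: retrieval runs against data you hold, and the LLM provider is just a swappable function that receives only the retrieved context.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class MemoryEvent:
    """One captured event; in this sketch just a source label and text."""
    source: str
    content: str

def keyword_retrieve(events: List[MemoryEvent], query: str) -> List[MemoryEvent]:
    """Toy retrieval: return events sharing at least one word with the query."""
    words = set(query.lower().split())
    return [e for e in events if words & set(e.content.lower().split())]

def answer(query: str, events: List[MemoryEvent],
           llm: Callable[[str], str]) -> str:
    """Retrieve local context first, then hand only that context to the
    provider. The provider never owns the underlying store."""
    context = "\n".join(f"[{e.source}] {e.content}"
                        for e in keyword_retrieve(events, query))
    return llm(f"Context:\n{context}\n\nQuestion: {query}")

# Changing providers means changing this one function; the store is untouched.
fake_llm = lambda prompt: "stub answer based on: " + prompt.splitlines()[1]

events = [MemoryEvent("chat", "We agreed to ship v2 on Friday"),
          MemoryEvent("mail", "Budget review moved to March")]
print(answer("When do we ship v2?", events, fake_llm))
```

The design point is the function boundary: because the model only ever sees retrieved snippets, swapping or self-hosting the AI layer never requires migrating the memory itself.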
Proprietary Memory vs. Memory by ContentCloud
| Proprietary LLM Memory | Memory by ContentCloud |
| --- | --- |
| Data stored on provider's servers | Local-first storage + EU-hosted infrastructure designed for customer control |
| Opaque storage format | Open SQLite format |
| No meaningful export | Full export and portability |
| Locked to one LLM provider | Works with any LLM provider |
| Provider controls retention | You control retention |
| Closed source | Governed promotion into shared knowledge layers |
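"Open SQLite format" has a practical consequence worth spelling out: a full export needs nothing proprietary. The sketch below uses an invented `events` schema (Memory's real schema is not shown here) to illustrate that exporting an SQLite store to JSON is a standard-library one-liner in spirit.

```python
import json
import sqlite3

# Build a toy local-first store in memory; on a real device this would be
# a file on disk. The `events` schema is purely illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, source TEXT, "
    "captured_at TEXT, content TEXT)"
)
conn.executemany(
    "INSERT INTO events (source, captured_at, content) VALUES (?, ?, ?)",
    [
        ("chat", "2025-01-10T09:30:00Z", "Decision: ship v2 on Friday"),
        ("browser", "2025-01-10T10:02:00Z", "Research link on EU hosting"),
    ],
)

# Because the store is plain SQLite, a complete export is one query away.
rows = conn.execute(
    "SELECT id, source, captured_at, content FROM events ORDER BY id"
).fetchall()
export = [
    {"id": r[0], "source": r[1], "captured_at": r[2], "content": r[3]}
    for r in rows
]
print(json.dumps(export, indent=2))
```

Any tool that reads SQLite or JSON can consume the result, which is what keeps the exit door open.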
From Personal Recall to Organizational Continuity
We believe one of the most valuable assets in the AI era is not just information, but recoverable context: what people read, discussed, decided, and handed over across the real tools of work.
At the individual level, that means better recall. At the team level, it means fewer lost decisions, smoother handovers, and less repeated work. At the organizational level, it means turning context into governed knowledge artifacts that survive staff changes and tool fragmentation.
Memory is designed to support that full progression. It does not replace your existing tools. It sits underneath them, helping people recover context privately first, then selectively promote validated knowledge into shared layers where it can create lasting value.
Privacy and Governance Are Built Into the Architecture
We did not want a memory product that forced organizations to choose between usefulness and control. Memory is built so privacy and governance shape the architecture from the start:
- Local-first storage — each device keeps its own SQLite database, with offline capture and sync when connected.
- Private-by-default raw memory — raw events remain user-scoped unless explicitly promoted through governed workflows.
- Workspace controls — Memory Work adds source policies, channel boundaries, and retention rules at the workspace level.
- Sensitivity controls — credentials, PII, and other sensitive signals can be classified and quarantined before reaching shared layers.
- EU-hosted infrastructure and self-hosting options — to support GDPR-aligned deployments and stronger operational control.
- Full data export — portable data in open formats so memory remains yours rather than the vendor's.
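The private-by-default and quarantine steps above can be sketched as a simple triage pass. The regexes here are stand-ins, not Memory's actual detectors; the point is the flow: raw events stay user-scoped, sensitive hits are held back, and only the remainder becomes eligible for governed review.

```python
import re
from typing import Dict, List, Tuple

# Illustrative patterns only; real sensitivity classification would be
# far richer than two regexes.
SENSITIVE_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),
    "email_pii": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
}

def triage(raw_events: List[str]) -> Tuple[List[str], Dict[str, List[str]]]:
    """Split raw capture into promotion candidates and a quarantine bucket,
    so nothing sensitive can reach a shared layer without review."""
    promotable: List[str] = []
    quarantined: Dict[str, List[str]] = {}
    for event in raw_events:
        hits = [name for name, pat in SENSITIVE_PATTERNS.items()
                if pat.search(event)]
        if hits:
            quarantined[event] = hits   # held back, private by default
        else:
            promotable.append(event)    # eligible for governed promotion
    return promotable, quarantined

promotable, quarantined = triage([
    "Decision: adopt EU region for hosting",
    "staging password is hunter2",
    "contact alice@example.com about the review",
])
```

Even the promotable list is only a candidate set: in the model described above, a human review workflow still sits between it and the shared knowledge layer.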
For organizations subject to data residency requirements, internal governance rules, or stricter security review, Memory can also be self-hosted. The product is designed so useful shared knowledge comes from reviewed artifacts, not unrestricted access to raw user activity.
Currently in Pilot
Memory is available today through a pilot program. We are onboarding a limited number of early users and teams who want to help shape the product across all three layers: personal recall, work memory, and the enterprise layer for governed knowledge artifacts.
If this resonates with you — whether you need stronger private recall, better continuity for teams, or a more portable path than proprietary AI memory systems — we would love to hear from you.
Join the Memory Pilot
Explore Memory across private recall, team memory, and governed knowledge workflows.
Memory is developed by ContentCloud, the AI division of EWORX S.A. Deployments are designed around EU-hosted infrastructure and GDPR-aligned implementation options.