Quite obviously, what today’s AI lacks most is memory.

In principle, it is a powerful knowledge tool — if used properly. Excellent for researching specific topics, looking at ideas from different perspectives, structuring texts, or having things explained. With sustained use, however, conversations become long, cluttered, and inefficient. Over time, everything slows down — and much of it has to be explained again and again.

There was no grand vision at the beginning, no overarching architectural idea. It was probably something along the lines of: “Ugh. That sucks.” So I started experimenting and tinkering, and eventually arrived at a structured, file-based architecture. From there, I simply kept building. It became clear fairly quickly: once memory is made explicit, in the form of files, rules, engines, and logics, the ways AI can be used change fundamentally. The architecture makes it possible to build a fully individualized assistant system, and with it a stable working environment. The modules that emerge can be dedicated to virtually any purpose, as long as that purpose can be meaningfully expressed in language and the implementation does not run up against ethical or legal constraints.
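To make the idea of “explicit memory” concrete, here is a minimal sketch of what file-based memory can mean in practice: plain files on disk are gathered into a context block that is prepended to every conversation. The file names, layout, and helper function here are my own illustrative assumptions, not the actual MetaMemoryWorks implementation.

```python
from pathlib import Path

# Hypothetical layout: each memory file holds one stable aspect of the user.
MEMORY_DIR = Path("memory")
MEMORY_DIR.mkdir(exist_ok=True)

# Seed two example files; in a real setup these would persist between sessions
# and be maintained over time rather than rewritten per conversation.
(MEMORY_DIR / "profile.md").write_text("Prefers concise answers. Trains 3x/week.\n")
(MEMORY_DIR / "rules.md").write_text("Always cite sources. Never invent numbers.\n")

def build_context(memory_dir: Path) -> str:
    """Concatenate all memory files into one block that can be
    prepended to a model prompt as stable, explicit memory."""
    parts = []
    for f in sorted(memory_dir.glob("*.md")):
        parts.append(f"## {f.stem}\n{f.read_text().strip()}")
    return "\n\n".join(parts)

context = build_context(MEMORY_DIR)
print(context)
```

The point of the sketch is only the shift in where memory lives: instead of re-explaining yourself inside an ever-longer chat, the stable facts sit in files the assistant reads at the start of every session.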

Many of the approaches, interpretations, and much of my understanding of how such systems work come less from classical computer science than from linguistics — more specifically semantics — as well as from cybernetics and ethnography. AI revives old questions in a new guise: How does language structure reality? How does meaning arise, and how does it operate in context? Engaging with these questions is what makes it possible not to constantly mythologize AI. It remains a simulacrum of thinking, knowledge, and meaning. It performs this simulation so convincingly that it often goes unnoticed — and often doesn’t matter. But if you want to understand these systems and work with them deliberately, you are operating largely in the dark if you don’t take this into account.

What irritates me is the sheer amount of noise surrounding many proposed solutions. Much of it revolves around optimizing prompts or building additional access layers to information. That can certainly be useful for specific tasks. For long-term, coherent work, however, it didn’t get me very far. Unpopular opinion, perhaps: prompting reaches its limits very quickly and tends to become self-deception rather than genuine output optimization. I was after something else — a reliable, functional working environment in which things don’t constantly fall apart. Instead of writing endless prompts, “hey, do the thing” is now enough — and the AI delivers. 🙂

This approach also produces a rather striking side effect: because the system can be implemented entirely through language, it can be individualized extremely easily and quickly. And because many activities are structured similarly across different domains for many people, it becomes possible to prepare assistants for specific areas that then only need to be adapted to the individual user.

That is how MetaMemoryWorks came into being. 🙂 Not a product in the classical sense, but rather a method for transforming AI into a personal assistant system. What surprised me most was not that individual things worked better, but how strongly the effects reinforced one another. Training became more consistent. Nutrition more precise. Decisions clearer. Conversations more structured. Things that previously existed side by side began to interlock.

What stands out is not surface or spectacle, but scope: when memory, goals, rules, and history remain stable, it’s not just the quality of individual responses that changes, but behavior over weeks and months. And that makes a real difference.

Private use is free, because the underlying problem is not an exclusive one. Professional use is licensed, because maintenance, further development, and responsibility require resources. Together, this keeps the project workable.