In April 2026, Andrej Karpathy published a short essay on Wikis for LLMs. His argument: the enterprises that win with AI will be the ones that build, govern, and refine the curated knowledge their AI systems draw on, not the ones with access to the best models.
For a CIO in a regulated industry, that argument lands differently than it does for a startup. The shift from model-centric to memory-centric AI is the most important reframing of enterprise AI in the past year, with direct implications for how you spend your 2026 AI budget.
For three years, enterprise AI strategy has been organized around models. Pick the best one, prompt it carefully, and upgrade when something better ships. Performance, in most board decks, is a function of model capability.
That framing is starting to break.
CIOs running these systems in production are finding that two enterprises using the same frontier model can produce wildly different AI outcomes. The variable is what the model has been told about the business: the policies, definitions, prior decisions, escalation paths, and institutional rules that turn a generic LLM into something that behaves like a useful colleague.
Karpathy gave that body of knowledge a name: a wiki for the LLM. We’ve been building it under a different name — resolutions — inside our IT operations agent for months. The terminology matters less than the architectural conclusion both arrive at: the durable AI advantage is the curated memory the system draws on, not the model it calls.
Consider a recurring problem in IT operations: a Windows build update fails on an end-user device. A model-centric approach treats every occurrence as a fresh prompt, and the work product evaporates the moment the ticket closes. A memory-centric approach captures the resolution as a structured, durable artifact that records the context, troubleshooting steps, and escalation criteria. Every subsequent occurrence draws on it, so the model applies a vetted answer instead of re-deriving one.
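To make "structured, durable artifact" concrete, here is a minimal sketch of what a resolution might look like as data. The field names are illustrative assumptions, not the schema of any particular product:

```python
from dataclasses import dataclass

# Illustrative sketch of a resolution artifact. The point is that
# context, steps, and escalation criteria are captured as structured,
# versioned data rather than left in a closed ticket's free-text history.
@dataclass
class Resolution:
    issue: str                      # what this resolution covers
    context: str                    # environment details that scope when it applies
    steps: list[str]                # vetted troubleshooting steps, in order
    escalation_criteria: list[str]  # conditions under which a human takes over
    version: int = 1                # bumped on every reviewed revision
    reviewed_by: str | None = None  # empty until a human approves it

win_update_failure = Resolution(
    issue="Windows build update fails on end-user device",
    context="Managed laptops, update delivered via the standard patch pipeline",
    steps=[
        "Check free disk space on the system drive",
        "Clear the update download cache",
        "Re-trigger the update and verify the build number",
    ],
    escalation_criteria=["Same device fails twice", "Failure rate exceeds 5% of fleet"],
)
```

The structure is what makes reuse cheap: the next occurrence retrieves this record instead of paying for fresh reasoning.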
The economic difference is significant. Inference costs drop because known problems stop consuming frontier compute, and resolution times drop because the system is executing rather than reasoning. For a regulated environment, the bigger payoff is auditability: you can point a regulator at the exact knowledge artifact that produced an outcome.
There is a specific risk in memory-centric AI that we’ve watched play out in real systems, and it deserves a CIO’s attention before it shows up in your environment. We call it error baking.
When AI systems enrich tickets, documents, or workflows by drawing on prior outputs, any error embedded in those prior outputs gets reused, reinforced, and amplified. A resolution that was 80% correct becomes the source material for the next resolution, which is now 75% correct, which trains the next one, and so on. There is no single moment of failure, just a subtle compounding drift.
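The arithmetic behind that drift is easy to see in a toy model. Assume, purely for illustration, that each generation of derived knowledge retains 95% of its source's accuracy:

```python
# Toy model of error baking: each new resolution is derived from the
# previous one and inherits its errors plus a few of its own.
# The 5% per-generation loss is an illustrative assumption, not a measurement.
accuracy = 0.80
for generation in range(1, 6):
    accuracy *= 0.95  # errors in the source propagate forward and compound
    print(f"generation {generation}: {accuracy:.0%} correct")

# generation 1: 76% correct
# generation 2: 72% correct
# ...after five generations the knowledge base is down to about 62% correct,
# and no single step along the way looked like a failure.
```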
The fix is governance at the memory layer, not better models: a reviewed, version-controlled knowledge base the AI is allowed to draw on, kept separate from the raw outputs it generates. Without that separation, your AI gets worse over time, in ways that are nearly impossible to detect from outside the system. With it, the system improves with every resolved incident, because every resolved incident becomes a vetted asset the next one builds on.
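One way to picture that separation is a staging area for raw AI outputs and a vetted store the AI is allowed to read, with a human review gate between them. A minimal sketch, with hypothetical names:

```python
# Sketch of governance at the memory layer: the AI reads only from the
# vetted store; its raw outputs land in staging and are promoted only
# after human review. Names here are illustrative.
class MemoryLayer:
    def __init__(self):
        self.staging = {}  # raw AI outputs: candidate knowledge, unreviewed
        self.vetted = {}   # reviewed, version-controlled knowledge base

    def propose(self, key: str, draft: str) -> None:
        """AI-generated output lands in staging, never directly in vetted."""
        self.staging[key] = draft

    def approve(self, key: str, reviewer: str) -> None:
        """A human review promotes a draft and records provenance."""
        draft = self.staging.pop(key)
        prior = self.vetted.get(key, {"version": 0})
        self.vetted[key] = {
            "content": draft,
            "version": prior["version"] + 1,
            "reviewed_by": reviewer,
        }

    def retrieve(self, key: str) -> dict | None:
        """The AI only ever reads from the vetted store."""
        return self.vetted.get(key)

memory = MemoryLayer()
memory.propose("windows-update-failure", "Clear the update cache, then retry.")
memory.approve("windows-update-failure", reviewer="it-ops-lead")
print(memory.retrieve("windows-update-failure"))  # version 1, with reviewer on record
```

Note what falls out of the structure: every entry the AI reads carries a version number and a named reviewer, which is exactly the artifact the audit question below asks for.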
For a CIO in a regulated industry, this is the difference between an AI investment that compounds and an AI investment that decays.
The memory layer is not documentation, and it is not a side project. It is infrastructure. It belongs in the same conversation as your data warehouse, access controls, and audit logs — because functionally, it is all three.
Three questions a CIO should be asking now: Where does our AI’s institutional knowledge live today — in prompts, in chat histories, in individual employees’ heads, in scattered Confluence pages? Who owns the curation, review, and version control of that knowledge? Can we point an auditor at the specific artifact that produced a given AI output?
In financial services, healthcare, and government IT, the answer to that last question is going to determine which AI workloads are allowed in production at all.
The competitive advantage in enterprise AI will not belong to the organizations that access the most capable models. Those models are becoming a commodity, available to your competitors on the same terms they’re available to you.
The advantage will belong to the organizations that own — and govern — what their models know.
That asset compounds, a regulator can inspect it, and a competitor cannot replicate it by signing a different vendor contract.
The model is rented. The memory is yours.