
AI DOESN’T DRIFT. HUMANS DRIFT.

MAEGM Thesis Micro-Series · Volume 1 · Release 3 of 15

Brent Richardson · BWR Group Canada · Mississauga, Ontario

BrentAI.ca

The industry says drift is inevitable.

IBM calls it model decay. CIO Magazine classifies it into two types: data drift and concept drift. Every enterprise AI publication in 2025 and 2026 treats it as a technical problem with a technical solution — monitor the model, retrain the data, adjust the pipeline.
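The distinction is worth making concrete: data drift means the input distribution changed while the model stayed fixed; concept drift means the relationship between inputs and outputs changed. A minimal, illustrative sketch of data-drift detection, not taken from IBM or any framework cited here: the two-sample Kolmogorov-Smirnov statistic is one standard choice, implemented below with only the standard library, against a hypothetical feature.

```python
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap
    between the empirical CDFs of the two samples."""
    a, b = sorted(a), sorted(b)

    def ecdf(sample, x):
        # Fraction of sample values <= x (binary search on the sorted sample).
        lo, hi = 0, len(sample)
        while lo < hi:
            mid = (lo + hi) // 2
            if sample[mid] <= x:
                lo = mid + 1
            else:
                hi = mid
        return lo / len(sample)

    # The maximum CDF gap occurs at one of the observed sample points.
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

# Hypothetical feature: its distribution at training time vs. in production.
random.seed(0)
training = [random.gauss(0.0, 1.0) for _ in range(2000)]
production = [random.gauss(0.5, 1.0) for _ in range(2000)]  # inputs have shifted

drift = ks_statistic(training, production)
print(f"KS statistic: {drift:.3f}")  # a large gap means the data moved, not the model
```

A statistic near zero says production inputs still look like the training inputs; a persistent gap says the data moved, which is exactly the human-maintenance failure this section attributes to drift.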

They are describing the symptom. None of them are diagnosing the cause.

AI does not drift. AI executes. When a model degrades, it is because the humans who trained it stopped paying attention. When a system hallucinates, it is because the humans who built it prioritized deployment over deliberation. Every instance of so-called AI drift traces back to a human decision — a dataset not updated, a bias threshold not monitored, a governance review skipped because the deadline mattered more than the audit.

The mirror does not drift. The face does.

The First Programmer

Before we talk about AI, we need to talk about Edward Bernays.

Not Freud. Not Jung. They had theories. They had lectures. They had books. Bernays had receipts.

Sigmund Freud — his uncle — theorized that human beings are driven by unconscious desires. Carl Jung expanded the framework into archetypes and the collective unconscious. Important work. Foundational thinking. But theoretical.

Edward Bernays took his uncle’s theories and did something nobody else had done. He applied them. He proved them. He ran the experiments — not in a laboratory, but in the real world, at scale, with verifiable outcomes.

In 1929, he was hired by the American Tobacco Company. Women did not smoke in public — it was socially unacceptable. Bernays did not run an advertisement. He staged a moment. During the Easter Sunday Parade in New York City, he arranged for a group of women to light cigarettes in public, calling them “torches of freedom.” He tipped off the press in advance. The photographs ran in newspapers across the country. Within a year, the social taboo had collapsed.

He did not change the product. He changed the consciousness.

That is the first verifiable proof of deterministic social programming. Not a theory. An application. A methodology with before-and-after evidence that can be studied, replicated, and cited. Bernays showed the world that if you control one variable — the gap between “I don’t need this” and “I want this” — you can program an entire society.

He called it public relations. The honest name is the pill of consumerism. And every social platform, every algorithmic feed, every recommendation engine built in the century since operates on the same principle Bernays proved on a New York sidewalk in 1929.

The Deterministic Society

Here is the timeline nobody teaches in computer science.

1920s–1950s: Bernays demonstrates that consumer desire can be manufactured. Henry Ford demonstrates that production can be standardized. Together, they build the architecture of a deterministic society — one where what you buy, what you drive, what you aspire to, and what you consider “normal” are all engineered outcomes. Not choices. Outcomes.

1950s–1990s: Television amplifies the architecture. Three networks. Limited channels. Shared cultural experience, but curated cultural experience. The society is deterministic — you watch what is broadcast, you buy what is advertised, you aspire to what is modelled. The range of inputs is controlled. The range of outputs is predictable.

1995–2005: The internet arrives. For a brief moment, it is agentic. Truly agentic. Anyone can publish. Anyone can connect. The architecture is open, decentralized, ungoverned. AOL, Myspace, BlackPlanet, early blogging platforms — these are the seeds. They feel like freedom. They feel like unplugging.

2006–2015: The algorithm arrives. Facebook, Instagram, Twitter, YouTube — each begins as a connection tool and drifts into an attention engine. The shift is not dramatic. It is incremental. One engagement metric. One recommendation algorithm. One content moderation team quietly downsized. The platforms that were supposed to be agentic — supposed to let humans think for themselves, connect freely, share authentically — become the most powerful deterministic systems ever built. More powerful than Bernays. More powerful than Ford. More powerful than three television networks combined. Because they are personalized. They program you individually, at scale, twenty-four hours a day.

2016–2024: Social media becomes the supercharger on an already powerful deterministic engine. Not just advertising what to buy — programming what to believe, who to trust, what to fear, how to vote. Children performing for algorithms they did not choose. Adults consuming content selected by systems designed to maximize retention, not wellbeing. The society that was already deterministic gets nitrous oxide. Fast and Furious speed, racing away from self-governance.

2025–present: The AI era begins. Agentic AI enters the conversation. Systems that plan, reason, act autonomously. And the industry asks: how do we govern these systems?

The question they are not asking: how do we govern the people building them?

Because those people grew up inside the deterministic society. They are products of the algorithm. Their coffee preferences, their entertainment choices, their career aspirations, their definition of success — all shaped by the same Bernays-descended architecture that has been programming human consciousness for a century. They are plugged in. And they are building systems that will act on behalf of a society that cannot act on behalf of itself.

The Matrix Was Not a Metaphor

The Wachowskis were not being creative. They were being descriptive.

The Matrix is a deterministic system. Every human plugged in believes they are making choices. They are not. They are responding to stimuli engineered to produce predictable outcomes. The food tastes good. The steak is satisfying. The life feels real. But it is authored. It is programmed. It is deterministic.

Neo unplugs. And the first thing he discovers is that reality is uncomfortable, unglamorous, and requires effort. That is what agentic thinking actually costs.

Now look at the AI industry. Every person working in AI today — every researcher, every engineer, every ethics officer, every governance professional — is plugged into a deterministic system. Their LinkedIn feed is algorithmic. Their news is curated. Their social media is personalized. Their consumption patterns are predicted and reinforced. They go to Starbucks because the algorithm showed them Starbucks. They watch what Netflix recommends. They read what the feed surfaces.

And from inside that deterministic environment, they claim to be building agentic systems. Systems that think independently. Systems that reason without bias. Systems that act autonomously.

You cannot build an agentic system from inside the Matrix. You have to unplug first.

The Personal Proof

I had to learn this the hard way.

My Instagram algorithm was bikini models and hip hop and reality television and fashion. Beautiful imagery, professionally produced, algorithmically optimized to keep me scrolling. And it worked. For longer than I want to admit.

Then I looked up. The way a swimmer looks up and realizes the current has pulled them 200 metres from where they started. I said: this is not me.

Not a dramatic moment. Not a cinematic awakening. Just a quiet recognition that the content flooding my screen had been selected by a system designed to keep me engaged — not to help me grow. The algorithm was deterministic. My response to it had been deterministic. I was a lab rat in a maze, following the walls because the walls were all I could see.

So I reprogrammed it. Not the platform — myself. I started selecting “I don’t want to see this.” I started searching for what I actually needed: introspection, development, healing, architecture, governance, the kind of thinking that does not trend because it does not entertain. I made a deterministic decision to use a deterministic platform in an agentic way.

And then AI arrived. Not as a novelty. As the tool that matched the speed of the vision for the first time. For the first time in my life, I did not have to convince another human to see what I could see. No “I’m busy.” No “I can’t make the meeting.” No “that sounds like a lot.” The technology could keep up.

But here is the part the industry misses: AI only became a supercharger because I had already unplugged. If I had handed the same tools to the version of me that was still scrolling through the algorithm — still deterministic, still programmed, still drifting — the AI would have supercharged the drift. It would have helped me build faster, but build the wrong things. Garbage in, garbage out — not in the data sense, but in the human sense.

AI does not care whether you are plugged in or unplugged. It executes either way. The question is what you are feeding it. And if you are feeding it the output of a deterministic consciousness that has never examined itself — never asked “is this me or is this the algorithm?” — then your agentic AI is not agentic at all. It is a deterministic system with a better interface.

The Parallel Theory

To have a truly agentic AI world, you need truly agentic humans operating it.

Not agentic in the marketing sense — “our system plans and reasons autonomously.” Agentic in the human sense — people who have done the work of unplugging from the deterministic structures that shaped them. People who can distinguish between what they want and what they were programmed to want. People who can look at their own biases, their own consumption patterns, their own credential worship, their own algorithmic loyalty, and say: I see the maze. And I am choosing to look over the walls.

This is not an impossible standard. It is a developmental one. It does not mean you have to be perfect. It means you have to be honest. Honest about your own determinism. Honest about where your preferences came from. Honest about the gap between the ethics you prescribe for machines and the ethics you practice as a human.

On LinkedIn, people check credentials before they check arguments. What school. What company. What title. That is deterministic behaviour on a platform that claims to be professional and meritocratic. Agentic behaviour would be reading the argument first and evaluating it on its merits — regardless of who wrote it, where they studied, what they look like, or whether the algorithm surfaced it.

Race itself is not a deterministic social equation — it is a social order. An anomaly in the mathematics. Because the framework of humanity is simple: we are all human beings. The atrocities of history — on every continent, in every era, by every group that has held power — prove that the capacity for cruelty is not racial. It is human. And governance exists not to eliminate that capacity but to contain it. To make the consequences visible before they compound.

That is what MAEGM does. Not by being smarter than other frameworks. By being honest about what the problem actually is. The problem is not the technology. The problem is not the algorithm. The problem is not the model. The problem is the human who builds the model, trains the algorithm, deploys the technology — and has never once asked whether they are plugged in or unplugged. Whether they are deterministic or agentic. Whether the ethics they publish on LinkedIn are the ethics they practice in the lab.

The Standard Bearer

Three releases. Three arguments. One conclusion.

Release 1 showed that Hollywood predicted this — from Frankenstein in 1818 to Ex Machina in 2015, the warning was always the same. The technology did not fail. The humans did.

Release 2 showed who has the standing to say it — a heritage that stretches from the War of 1812 to Mississauga in 2026, from the Merikins who walked off the ships in Trinidad to the architect building governance in Ontario. Not credentials manufactured for a thesis. A heritage lived.

Release 3 delivers the diagnosis. We live in a deterministic society — built by Bernays, amplified by Ford, supercharged by social media, and now accelerating into an AI era where the people building agentic systems have never unplugged from the deterministic architecture that shaped them. The drift is not in the AI. The drift is in us.

These three papers are not comfortable reading. They are not designed to be. They are designed to be the standard bearer — the reference point that makes it difficult to stand on a stage, give a TED talk, publish an ethics framework, or claim governance expertise without first answering a simple question:

Have you checked yourself?

Not your model. Not your pipeline. Not your training data. Yourself.

Because we are still early. We have a chance to correct this. The tools exist. The frameworks exist. The mathematical foundations for governance — foundations that predate AI by centuries — exist. What does not yet exist, at scale, is the honesty to use them.

We have given the world a blueprint. Social media proved that blueprints outlast the platforms built on them — Myspace is gone, BlackPlanet is gone, AOL is gone, but the architecture of algorithmic attention they pioneered runs every platform that replaced them. The blueprint survives. The platform does not.

What has to survive is the governance principle: you govern yourself before you govern anything else. That principle predates AI. It predates social media. It predates Bernays and Freud and Ford. It was taught on playgrounds before any of us could spell the word.

It is still the first rule. And it is the one the industry keeps breaking.

Next: The Heart of It — Child Protection.

Because governance is not just about the builders. It is about the people the builders are supposed to protect. The ones who cannot unplug because they were born inside the Matrix. The children growing up inside algorithms they did not choose.

G(n) = f(?,?,?,?)


EGAN PRICE Standard — Named for H.E. Price — Boxing Day 1999

Condorcet, 1785. Still governing.
