The Science Behind the Fiction
When Art Imitates Lab Notes
MAEGM™ Thesis Micro-Series · Volume 1 · Appendix to Release 1
Brent Richardson · CEO & Chief Architect · BWR Group Canada — MyBiz AI Division
A common response to using science fiction as a governance framework is dismissal. “Those are just movies.” They’re not. The science fiction that warned us about AI governance failures didn’t come from imagination. It came from laboratories, classified briefings, academic conferences, and direct relationships between scientists and storytellers. The fiction was always the science — delivered through a medium the public would actually consume.
This is the fact sheet. Verify any of it.
The Art-to-Lab Pipeline: Who Told the Storytellers
Mary Shelley (1818) did not invent Frankenstein from fantasy. She was eighteen years old, travelling with Percy Shelley and Lord Byron, immersed in the galvanism debates of the early nineteenth century. Luigi Galvani had demonstrated in the 1780s that electrical current made dead frog legs twitch. Giovanni Aldini — Galvani’s nephew — publicly applied electrical stimulation to an executed criminal’s corpse in London in 1803, making the jaw clench and an eye open. Shelley absorbed accounts of these experiments; her 1831 introduction records the galvanism discussions that preceded the novel. Frankenstein was not fiction. It was a governance thesis written in response to real scientists who were reanimating tissue and had no framework for what came next.
Gene Roddenberry (1966) consulted directly with NASA scientists, physicists, and military advisors when designing Star Trek. The communicator became the mobile phone. The PADD became the tablet. The universal translator is now Google Translate. The transporter concept drew from quantum mechanics discussions at Caltech. Roddenberry’s seven-officer bridge structure wasn’t narrative convenience — it was modelled on naval command architecture, specifically the Combat Information Center (CIC) structure developed on World War II destroyers, where distinct information domains fed a single commanding officer.
Stanley Kubrick (1968) hired AI pioneer Marvin Minsky from MIT as a technical advisor on 2001: A Space Odyssey. Minsky — co-founder of MIT’s AI Laboratory — consulted directly on HAL 9000’s architecture, behaviour, and failure modes. The conflicting-objectives problem that kills the crew was not Kubrick’s invention. It was a known problem in early AI research that Minsky brought to the screenplay. HAL’s calm, rational voice was a deliberate design choice informed by Minsky’s observation that the most dangerous AI failures would present as the most reasonable behaviour in the system.
Ray Bradbury (1953) drew Fahrenheit 451 from direct observation of McCarthyism, television’s explosive growth, and his own anxieties about passive media consumption; he typed the manuscript on coin-operated typewriters in the basement of UCLA’s Powell Library. His “parlour walls” — room-sized interactive screens that replaced human conversation — were described roughly four decades before the first commercial flat-screen televisions and half a century before social media feeds.
Rod Serling (1959) was a combat veteran (11th Airborne Division, Battle of Leyte) who channelled his understanding of institutional failure, human panic under pressure, and the gap between stated values and actual behaviour into The Twilight Zone. His episodes about technology weren’t speculative — they were extrapolations from military decision-making failures he had witnessed firsthand. “The Monsters Are Due on Maple Street” drew on documented instances of civilian panic during Cold War air-raid scares and drills.
Michael Crichton (1990) held an MD from Harvard Medical School and was a postdoctoral fellow at the Salk Institute. Jurassic Park was not a dinosaur adventure — it was a formal argument about chaos theory applied to biological systems, informed by science writer James Gleick’s work on chaos and by Santa Fe Institute complexity research. Ian Malcolm’s governance thesis (“your scientists were so preoccupied with whether they could that they didn’t stop to think if they should”) channels that tradition, including Stuart Kauffman’s work on self-organizing systems.
The Marvel universe has drawn on science advisors for decades. The quantum-mechanics dialogue in Avengers: Endgame was reviewed by Clifford Johnson, a theoretical physicist at USC who served as a science consultant on the film. The time-travel mechanics drew from published work on closed timelike curves. The AI governance contrast between JARVIS (bounded, advisory, human-sovereign) and Ultron (autonomous, self-modifying, human-expendable) maps directly onto the alignment problem — the central unsolved challenge in AI safety research, given its modern formulation by Stuart Russell at UC Berkeley.
Ridley Scott (1979) built Alien’s Weyland-Yutani corporation from documented cases of corporate negligence in the petrochemical and defence industries — companies that classified human safety as subordinate to mission objectives. The “crew expendable” directive was drawn from real corporate risk assessments where worker fatalities were calculated as acceptable costs against project completion targets. Scott didn’t invent corporate governance failure. He filmed it.
The Comic Book Constant
The Marvel and DC universes share a single governing thesis that has run through them since their Golden Age beginnings: “With great power comes great responsibility.”
That line — popularized by Spider-Man’s 1962 debut and often traced, with debated attribution, to Voltaire — is the simplest expression of the governance thesis. Power without responsibility is the origin story of every villain. Responsibility applied to power is the origin story of every hero. The difference between JARVIS and Ultron. The difference between governed AI and ungoverned AI.
Every major comic book arc returns to this principle. The X-Men address governance of power that society fears. Batman addresses governance of power exercised outside institutional authority. The Avengers address governance of power distributed across multiple sovereign actors. These are not children’s stories. They are governance case studies delivered at mass scale to audiences who absorb the thesis without realizing they’re being educated.
The comic book tradition has been teaching governance theory since before the AI industry existed. The AI industry would benefit from paying attention.
The Verification Table
Every claim above is verifiable. Sources are public. This is not interpretation — it is documented history.
| Claim | Source | Verifiable |
|---|---|---|
| Galvani’s experiments influenced Shelley | Shelley’s 1831 introduction to Frankenstein | ✅ |
| Aldini’s public demonstrations on corpses, London 1803 | Royal College of Surgeons historical records | ✅ |
| Roddenberry consulted NASA scientists | Star Trek: The Original Series production archives, Whitfield & Roddenberry (1968) | ✅ |
| Bridge structure modelled on naval CIC | Roddenberry’s production notes, David Gerrold interviews | ✅ |
| Kubrick hired Marvin Minsky for 2001 | Jerome Agel, “The Making of Kubrick’s 2001” (1970) | ✅ |
| Minsky co-founded MIT AI Lab | MIT institutional history | ✅ |
| Bradbury drew from McCarthyism; wrote Fahrenheit 451 at UCLA | Bradbury interviews, UCLA Special Collections | ✅ |
| Serling served in 11th Airborne Division, Battle of Leyte | Military service records, multiple biographies | ✅ |
| Crichton held MD from Harvard, studied at Salk | Harvard Medical School records, Crichton’s autobiography | ✅ |
| Crichton consulted chaos theory researchers | James Gleick interviews, Santa Fe Institute records | ✅ |
| Clifford Johnson consulted on Avengers: Endgame | Johnson’s published account, USC faculty records | ✅ |
| Alignment problem given its modern formulation by Stuart Russell | Russell & Norvig, “Artificial Intelligence: A Modern Approach”; Russell, “Human Compatible” (2019) | ✅ |
| “With great power comes great responsibility” — Voltaire tradition | Amazing Fantasy #15 (1962); Voltaire attribution debated but principle established | ✅ |
What This Means
The governance failures depicted on screen were not imagined. They were extrapolated from documented failures in laboratories, classified programs, and real-world systems that the storytellers had direct access to. Every filmmaker on this list had a scientist in the room.
Art documented the science that life refused to govern.
The artists were never guessing. They were reporting from the frontier — translating laboratory findings into the only medium that reaches everyone. As Kubrick’s advisor Marvin Minsky understood: the most dangerous AI failures present as the most reasonable behaviour in the system. The storytellers made that visible. The governance field is still catching up.
MAEGM™ Thesis Micro-Series · Volume 1 · Brent Richardson · BWR Group Canada — MyBiz AI Division BrentAI.ca