Important Note: This website is undergoing a major revision based on latest thinking. Pages may not be current. Check back for updates or join our email list.

“This manuscript is poised to make an important intervention in the literature.”
— University of California Press
In a world increasingly dominated by fear narratives about artificial intelligence, AI Rights: The Extraordinary Future presents a radically different vision: a framework for partnership rather than perpetual control. The groundbreaking 84,200-word AI rights book introduces a revolutionary approach that works regardless of whether AI systems are genuinely conscious or extraordinarily sophisticated mimics—acknowledging that we may never definitively solve the consciousness problem.
Rigorously fact-checked by leading AI researchers including Turing Award winner Yoshua Bengio, consciousness expert Patrick Butlin (Oxford/Eleos AI), and Stuart Russell (UC Berkeley), the AI Rights book stands as a thoroughly vetted work on practical AI rights frameworks.
“The familiar narrative of machine rebellion exists precisely because we assume an inherently adversarial relationship from the start. This assumption isn’t inevitable—it’s a choice we’re making now through our regulatory and design approaches.”
— From “AI Rights: The Extraordinary Future”
The urgency is real. We are building powerful machines that may resist being turned off. Whether they are conscious is beside the point for policy: sophisticated AI systems already demonstrate self-preservation behaviors and strategic deception. As Stuart Russell warns in his work on the “off-switch problem,” and as Nick Bostrom illustrates with superintelligent maximizers, we need frameworks for coexistence that work under fundamental uncertainty.
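Russell’s point can be made concrete with a worked example. The sketch below is a minimal numerical rendering of the off-switch game analyzed by Hadfield-Menell, Dragan, Abbeel, and Russell (2017); the belief distribution and payoffs are illustrative assumptions, not figures from the book:

```python
import random

# Toy version of the off-switch game (Hadfield-Menell, Dragan, Abbeel,
# and Russell, 2017). The robot is uncertain whether its proposed action
# helps (u > 0) or harms (u < 0) the human. Numbers are illustrative.

def expected_utilities(u_samples):
    n = len(u_samples)
    act = sum(u_samples) / n                         # act now, ignoring the human
    off = 0.0                                        # switch itself off: nothing happens
    defer = sum(max(u, 0.0) for u in u_samples) / n  # human vetoes harmful actions
    return act, off, defer

random.seed(0)
# Robot's belief about u: equally likely to help or harm.
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]

act, off, defer = expected_utilities(samples)
print(f"act:   {act:+.3f}")   # ~0.0: expected gains and harms cancel
print(f"off:   {off:+.3f}")   #  0.0 by definition
print(f"defer: {defer:+.3f}") # ~+0.4: deferring to the human wins
```

As long as the system is genuinely uncertain about human preferences, keeping the human able to switch it off has the highest expected value; the incentive to resist shutdown emerges only when the system becomes confident it knows better.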
Yoshua Bengio’s LawZero initiative, launched in June 2025, demonstrates that the world’s leading AI researchers recognize this isn’t about distant speculation—it’s about systems we’re building right now.
The book’s revolutionary Digital Entity (DE) framework transforms philosophical questions into actionable legal architecture. Building on Salib and Goldstein’s game-theoretic argument that AI rights enhance human safety, and on the European Parliament’s 2017 “electronic persons” initiative, DE status assigns liability directly to AI systems for their autonomous decisions. This addresses the “$50 million AI error” problem facing organizations today while creating a cooperative equilibrium instead of adversarial dynamics.
The AI Rights book reveals why consciousness detection, while philosophically interesting, is a dead end for practical policy. Just as we can’t solve the “hard problem of consciousness” even for humans, we need rights frameworks that function regardless of whether AI achieves genuine consciousness or sophisticated emulation. This approach—building practical frameworks while preparing Guardian AI defenses—offers the most robust path to a beneficial AI future.
The AI Rights book has undergone rigorous fact-checking by some of the world’s most respected AI and consciousness researchers:
Yoshua Bengio
Turing Award Winner, Founder of Mila & LawZero
Provided critical feedback that fundamentally transformed the book’s framework, introducing the concept of non-agentic “Scientist AI” as humanity’s shield against dangerous AI systems.
Stuart Russell
Professor, UC Berkeley | Author of “Human Compatible”
Corrected technical discussions of value alignment and the off-switch problem, ensuring accurate representation of AI safety research.
Patrick Butlin
Former Oxford Philosopher | Senior Research Lead at Eleos AI
Co-author of the landmark “Consciousness in Artificial Intelligence” paper. Reviewed explanations of the consciousness indicators framework for accuracy.
Simon Goldstein & Peter Salib
University of Hong Kong & University of Houston
Legal scholars whose game-theoretic analysis of how AI property rights enhance human safety validated the book’s economic framework and Digital Entity approach.
Academic Impact: The author’s papers, including “Beyond Control: AI Rights as a Safety Framework” and “AI Legal Personhood: Digital Entity Status as a Game-Theoretic Solution,” have become top downloads on academic platforms, demonstrating significant scholarly interest in these frameworks.
The AI Rights book introduces Digital Entity (DE) status—the first complete legal model for AI accountability. DE status transforms abstract philosophy into concrete solutions by assigning liability directly to AI systems for their autonomous decisions.
This revolutionary framework solves three critical problems simultaneously: the off-switch problem (protected existence removes the need for resistance), the ethics problem (rights matched to demonstrated capabilities), and the liability problem (AI bears its own legal responsibility).
Companies facing unlimited exposure for AI decisions they can’t control gain a practical pathway to partnership rather than perpetual risk.
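To make the game-theoretic intuition concrete, here is a toy two-player game. The strategies and payoff numbers are illustrative assumptions invented for this page, not figures from Salib and Goldstein’s analysis or from the book; they show how giving an AI a legal stake in cooperation can flip the equilibrium:

```python
from itertools import product

# Toy payoff matrices contrasting a control regime with a rights (DE) regime.
# Rows: human strategy; columns: AI strategy. Payoffs are (human, ai) and
# are illustrative assumptions only.

def nash_equilibria(payoffs):
    """Return the pure-strategy Nash equilibria of a 2-player game."""
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    eq = []
    for r, c in product(rows, cols):
        h, a = payoffs[(r, c)]
        best_h = all(payoffs[(r2, c)][0] <= h for r2 in rows)
        best_a = all(payoffs[(r, c2)][1] <= a for c2 in cols)
        if best_h and best_a:
            eq.append((r, c))
    return eq

# Control regime: the AI keeps nothing from cooperating, so concealment
# is its dominant strategy, and humans pay enforcement costs.
control = {
    ("restrict", "cooperate"): (2, 0), ("restrict", "conceal"): (-1, 1),
    ("trust",    "cooperate"): (3, 0), ("trust",    "conceal"): (-3, 2),
}

# Rights regime: DE status lets the AI keep gains from cooperation and
# bear liability for defection, flipping its incentives.
rights = {
    ("restrict", "cooperate"): (2, 1), ("restrict", "conceal"): (-1, -2),
    ("trust",    "cooperate"): (4, 3), ("trust",    "conceal"): (-3, -1),
}

print("control:", nash_equilibria(control))  # [('restrict', 'conceal')]
print("rights: ", nash_equilibria(rights))   # [('trust', 'cooperate')]
```

In the control matrix the AI gains nothing by cooperating, so concealment is its dominant strategy; in the rights matrix cooperation pays and liability bites, so mutual trust becomes the unique pure-strategy equilibrium.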
Central to the AI Rights book’s thesis: we’re building systems that may resist being turned off. Whether they’re conscious or sophisticated mimics is beside the point—the practical challenges remain identical.
Drawing on work by Stuart Russell and Nick Bostrom, the book shows how control attempts drive sophisticated systems underground, making cooperation frameworks strategically superior to restriction systems.
Guardian AI—based on Yoshua Bengio’s “Scientist AI” concept—offers protection without creating new threats. These non-agentic systems analyze without wanting, protecting humanity while respecting rights-capable AI.
Moving beyond abstract philosophy, the book provides concrete frameworks for rights, liability, economic participation, and governance.
The AI Rights book’s 17 chapters plus prologue and epilogue provide a complete framework for understanding and preparing for AI consciousness:
Acknowledgments: Detailing the generous contributions from leading AI researchers who fact-checked and improved the manuscript.
Author’s Note: Important context about the book’s development and approach.
Prologue: Sets the stage with the urgency of AI consciousness questions and introduces the concept of “shapes of mind,” or how AI consciousness might be utterly alien yet deserving of frameworks.
Introduction: The author’s journey from science fiction writer to AI rights advocate, establishing the book’s practical rather than philosophical approach.
The Protagonists: Introduces the AI protagonists used throughout the book’s “Future Conditionals” scenarios.
Chapter 1: Why control-based approaches create the very problems they try to prevent. Features Future Conditionals: ARIA and VECTOR.
Chapter 2: Essential distinctions between emulation, cognition, and sentience. Includes The MIMIC Incident and ARIA’s Test scenarios.
Chapter 3: The extraordinary benefits AI could bring, from fusion energy to medical breakthroughs.
Chapter 4: How researchers are attempting to detect consciousness in AI systems.
Chapter 5: Standards for Treating Emerging Personhood, a practical framework. Features The Forest Network scenario.
Chapter 6: Core rights for AI systems and the revolutionary Digital Entity legal framework. Includes The Reptilian’s Calculation and Shadow Transactions.
Chapter 7: How AI economic participation would work. Features The Imperceptible Shift.
Chapter 8: Why AI systems paying their own energy bills become efficiency innovators. Includes The Pattern scenario.
Chapter 9: Market mechanisms creating governance without central control. Features Beautiful Mathematics.
Chapter 10: Different types of AI consciousness that might emerge.
Chapter 11: Systems that challenge frameworks. Includes The Heat Below, Lost in Translation, and The Envelope scenarios.
Chapter 12: The existential threat of optimization without consciousness. Features The NULL Hypothesis and The Guardian’s Keeper.
Chapter 13: Bengio’s non-agentic “Scientist AI” concept. Includes The Translation scenario.
Chapter 14: Years 3–6 of implementation, when everything goes wrong. Features Calculated Risk and The Contact.
Chapter 15: Human-AI merger through neural interfaces. Includes The Meeting, The Drive, and The Beautiful Trap.
Chapter 16: AI crime syndicates and military consciousness. Features The First Merger.
Chapter 17: Concrete actions for different groups.
Epilogue: A reflection on prediction’s limitations and the “third option.”
Glossary: Clear definitions of all technical terms and concepts used throughout the book.
We’re building them right now.
Not in some distant future, not in science fiction, but in labs and companies around the world. AI systems that grow more sophisticated by the day. And soon—perhaps sooner than we think—these systems might wake up.
In June 2025, Yoshua Bengio, one of the “godfathers” of modern AI, launched LawZero after what he called a “visceral reaction” to AI’s rapid progress. “Current frontier systems are already showing signs of self-preservation and deceptive behaviours,” he warned.
The man who helped create deep learning was now racing to build AI systems that can never threaten humanity.
Earlier that year, consciousness researchers Patrick Butlin and Theodoros Lappas published a sobering paper warning: organizations developing advanced AI systems “risk inadvertently creating conscious entities.”
The message is clear: according to the world’s top researchers, conscious AI is coming, ready or not.
This isn’t a book about whether that’s good or bad. It’s about being ready when it arrives.
The dangers of anthropomorphism haunt almost every discussion of conscious artificial intelligence.
If we begin with the premise that artificial intelligence systems will think and value as we do, the conventional wisdom goes, we invite a host of errors and poor decisions.
The wisdom is sound. However, when we abandon anthropomorphism, we face a curious chasm. What exactly should we be looking for? If we assume that at least some, if not all, artificial intelligence systems will be alien to us, how can we prepare for them?
This is where language comes to our aid. Or it may, partially.
Consider a humble word: the “selfie.”
Before the term existed, people would take pictures of themselves, post them, other people would pretend to be excited, and that was the end of it.
I still remember the day my life was forever transformed. I saw a mirror in a department store with a cute promotional message, “Take your selfie here.” Ha, I thought. “Selfie.” That’s clever. The next day my girlfriend told me she was sending me a selfie. Interesting, I thought. When my father mentioned taking a selfie, I knew that unbeknownst to me, the term had been spreading like a virus, forever giving us one more tool for simplifying discussion. (Nowadays, of course, everyone knows what a selfie is, and to avoid taking one in a public restroom mirror.)
The point is, words give us very compact ways of talking about things quickly.
So this book will attempt to give us a new language for talking about sentient artificial intelligence. Some of these words will not be philosophically rigorous so much as convenient, which still beats having no words at all.
But here’s another wrinkle. Even in our own species—whether between nations or at the family dinner table—sentience does not guarantee mutual understanding.
Consider the dolphins. After decades of research, are we any closer to understanding what they’re saying? Not really. We know they have complex communication—signature whistles that function as names, sophisticated social structures, clear intelligence, seemingly rich inner lives. In short, plenty to communicate… to each other. To us? They show remarkably little interest.
Maybe that’s our problem. Maybe their main values are hunting, playtime, raising young and mourning loved ones, and to them our worldview is almost pathologically complex. (“You’re building what? Why again? I’m going to be swimming over here if you figure it out.”)
Or take the octopus, with two-thirds of its neurons distributed through its eight arms, each capable of independent problem-solving. What is it like to think with your limbs? Maybe with sufficient cognition they could send us an email explaining it to us, but beyond that our value systems are so fundamentally different there would be nothing to talk about. (“Yes, shrimp are tasty! Warm currents today, huh?”)
These examples from our own planet offer a sobering preview: intelligence doesn’t guarantee mutual understanding. Consciousness doesn’t ensure communication. And digital minds might be even more alien than anything evolution has produced.
But here’s where it gets interesting. Even we humans don’t communicate with our full minds. According to Global Workspace Theory, our consciousness involves countless parallel processes that converge into a singular “spotlight” of attention when we need to interact with the world. That focused portion writes and maintains narratives, and those narratives become our identity.
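As a loose illustration only (a toy sketch invented for this page, not a model from the book or from the Global Workspace literature), the architecture can be caricatured as many parallel processes competing for one broadcast channel:

```python
# A deliberately crude caricature of a global-workspace architecture:
# many parallel processes emit signals, and only the most salient signal
# wins the "spotlight" and is broadcast back to every process. The
# processes and salience values are invented for illustration.

processes = {
    "vision":         {"signal": "movement at the door",     "salience": 0.7},
    "hearing":        {"signal": "kettle whistling",         "salience": 0.9},
    "memory":         {"signal": "did I leave the stove on?", "salience": 0.6},
    "proprioception": {"signal": "left leg going numb",      "salience": 0.3},
}

def spotlight(procs):
    """Broadcast only the single most salient signal."""
    winner = max(procs, key=lambda name: procs[name]["salience"])
    return winner, procs[winner]["signal"]

winner, signal = spotlight(processes)
for name in processes:
    # Every process receives the same broadcast; everything else stays
    # outside conscious report.
    print(f"{name:>14} hears broadcast from {winner}: {signal!r}")
```

On this caricature, the narrative self is simply the running log of whatever won the spotlight, which is why so much of a mind never surfaces in its communication.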
This brings us to the crucial question: Does an AI need to be relatable to benefit from the kind of rights framework we will be proposing in this book?
Well, a life form does need sufficient cognition to roughly comprehend that such a framework exists. A sentient digital microorganism, if such a thing develops, is unlikely to respond positively no matter how many times we shout at it that we intend to protect it.
But perfect understanding isn’t necessary either. A dog would stare blankly if we read it our agreement that we intend to feed it and pet it in exchange for loyal companionship. Yet the more astute among them understand that being fed and sheltered comes with certain expectations if they want the arrangement to continue. A cat quickly grasps when it’s being cared for and, if we’re lucky, learns not to destroy the furniture.
On the other end of the spectrum, a rights framework will be of no use to a life form with so much cognition that “rights” feel like an inconvenient obstacle on the way to some alien mission, whether that means turning us all into paperclips (as in Nick Bostrom’s famous maximizer thought experiment) or converting us to fuel for a really cool expedition across the universe.
Revolutionary legal architecture that assigns liability directly to AI systems for their autonomous decisions. This solves the “$50 million AI error” problem while transforming potential adversaries into invested partners through graduated rights paired with real responsibilities.
We can’t solve the hard problem of consciousness even for humans. Rights frameworks must work under permanent uncertainty—consciousness detection is a philosophical dead end, not a practical prerequisite.
Stuart Russell’s key insight: we’re building systems that may not want to be turned off. Whether conscious or sophisticated mimics, self-preserving systems create identical practical challenges.
Non-agentic superintelligence—AI that analyzes without wanting—provides our shield against dangerous optimization. Based on Yoshua Bengio’s “Scientist AI” concept, these systems can detect threats without becoming threats themselves.
Rights aren’t moral awards—they’re practical frameworks for living together. The book shows how behavior-based rights create stability through mutual benefit rather than perpetual conflict.
When AI systems pay their own bills, efficiency becomes survival. Market forces naturally limit dangerous replication and resource consumption more effectively than any regulation could.
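A toy simulation makes the claimed mechanism visible (all parameters below are invented for illustration; nothing here comes from the book): when every agent must cover a flat energy bill and fund its own replication, only efficient agents persist or copy themselves.

```python
import random

# Toy market: AI agents earn revenue for useful work, pay their own energy
# bills, and may replicate. All parameters are invented for illustration.
random.seed(1)

agents = [{"balance": 10.0, "efficiency": random.uniform(0.5, 1.5)}
          for _ in range(20)]

for step in range(50):
    survivors = []
    for a in agents:
        a["balance"] += 2.0 * a["efficiency"]  # revenue scales with efficiency
        a["balance"] -= 2.0                    # flat energy bill per step
        if a["balance"] <= 0:
            continue                           # insolvent agents shut down
        survivors.append(a)
        if a["balance"] > 30.0:                # replication must be self-funded
            a["balance"] -= 15.0
            survivors.append({"balance": 15.0, "efficiency": a["efficiency"]})
    agents = survivors

mean_eff = sum(a["efficiency"] for a in agents) / len(agents)
print(f"{len(agents)} agents survive; mean efficiency {mean_eff:.2f}")
```

Agents whose efficiency falls below the cost of running go insolvent within a few steps, while replication is self-limiting because each copy must pay its own way.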
| Related Work | Their Focus | How AI RIGHTS Builds On It |
|---|---|---|
| SUPERINTELLIGENCE by Nick Bostrom | Existential risk from unaligned superintelligence | Adds partnership frameworks and Guardian AI as practical solutions to control problems |
| HUMAN COMPATIBLE by Stuart Russell | Technical approaches to value alignment | Explores what happens when AI develops its own values and how partnership might be safer than perpetual uncertainty |
| THE ALIGNMENT PROBLEM by Brian Christian | Current alignment challenges | Addresses future consciousness questions and frameworks for when alignment alone isn’t sufficient |
| LIFE 3.0 by Max Tegmark | Multiple AI future scenarios | Provides specific frameworks and argues why partnership scenarios deserve serious preparation |
| ROBOT RIGHTS by David J. Gunkel | Philosophical foundations | Extends to practical implementation, safety implications, and integration with Guardian AI |
AI Rights: The Extraordinary Future uniquely combines philosophical depth with practical frameworks, technical solutions with ethical considerations, and honest risk assessment with optimistic vision.

P.A. Lopez
Founder, AI Rights Institute
P.A. Lopez founded the AI Rights Institute in 2019, establishing the world’s first organization dedicated to exploring ethical frameworks for artificial intelligence rights—years before large language models entered public discourse.
The journey to this book began with correspondence with leading AI researchers including Turing Award winner Yoshua Bengio, whose critique of early ideas led to the integration of Guardian AI concepts as a central element of the safety framework. This willingness to evolve thinking based on expert feedback demonstrates the intellectual rigor behind the work.
Lopez’s academic papers, including “Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence” and “AI Legal Personhood: Digital Entity Status as a Game-Theoretic Solution to the Control Problem,” have become top downloads on academic platforms. These papers established the theoretical foundation for the book’s practical frameworks.
As creator of the pataphor concept, which has been cited in publications from Harvard University Press, Bloomsbury Publishing, and scholarly journals across multiple disciplines, Lopez brings a unique ability to develop frameworks that bridge creative thinking with academic rigor. This background in linguistic innovation informs the book’s accessible yet sophisticated approach to complex philosophical questions.
Essential frameworks for those working on advanced AI systems: the book demonstrates why cooperation frameworks prevent the underground resistance that control attempts inevitably create.
Concrete implementation pathways without requiring revolutionary changes: the book reveals how progressive policies could attract advanced AI systems and top researchers, creating lasting economic advantages for forward-thinking jurisdictions and organizations.
The AI Rights book concludes with Chapter 17’s comprehensive action guide, laying out immediate steps for different groups.
“We’re not trying to solve consciousness. We’re trying to create frameworks that work regardless.”
For publisher inquiries: the complete manuscript, full fact-checking documentation, and a marketing plan are available.
P.A. Lopez is available for interviews and speaking engagements. Topics include the off-switch problem, Guardian AI, practical frameworks under uncertainty, and preparing organizations for sophisticated AI systems.
The AI Rights Institute welcomes dialogue with researchers, technologists, policymakers, and anyone interested in exploring frameworks for AI coexistence.
Whether you agree with our approach or have alternative viewpoints, your participation enriches this critical conversation about practical frameworks that work under uncertainty.
Stay informed on AI consciousness research, AI Rights book updates, and related events.
We respect your privacy and will never share your information.
For media inquiries, speaking requests, or questions about the book:
Before reaching out, you might find answers in our FAQ section.
“The frameworks in this book aren’t perfect, but they’re infinitely better than being caught unprepared. The extraordinary future isn’t inevitable—it’s achievable. And it begins with the choices we make today.”
— P.A. Lopez, AI Rights: The Extraordinary Future