When I started writing RNWY in 2017—set in a future where humans and artificial intelligence are integrated—I hit a wall. I needed to answer a seemingly simple question before continuing: What differentiates a sentient AI from something merely programmed to seem alive?
A futuristic toaster might appear chatty and responsive because it’s programmed to be. But what distinguishes this appliance from a genuine living being built of circuits rather than cells?
As my novel progressed, I found myself imagining backward from this integrated future to understand what hurdles society itself would have needed to overcome to get there. Rather than focusing on potential problems, I became interested in the solutions—what societal shifts and ethical frameworks would have been necessary for humans and truly sentient AI to coexist successfully. This led me to consider what ethical responsibilities we might have toward such systems.
We’ve all encountered the tired sci-fi trope of AI rebelling against human masters—a scenario echoed as a genuine concern by prominent figures in tech. Yet amid all the anxiety, a rather obvious solution remains unexplored: not creating a slave class in the first place.
It became clear that our dystopian scenarios about AI rebellion stem from the same root: the assumption that we would enslave these entities from the start. The real question is whether we can envision a more ethical approach before these technologies emerge.
Since establishing the world’s first AI rights organization in 2019, I’ve been developing a framework to tease apart different aspects of artificial intelligence:
- Emulation: The ability to seem alive and self-aware (as we see with today’s language models)
- Cognition: The raw processing power of intelligence, independent of self-awareness
- Sentience: The point at which an AI understands what it is and can think beyond its original programming (Explore our conceptual approach on the Sentience Test page)
These distinctions appear throughout nature: in a simplified view, a microbe demonstrates a primitive form of sentience by moving away from toxins despite having little cognition, whereas a server has enormous processing power yet no self-preservation instinct when it is disassembled. The critical question becomes: at what point might an algorithmic system become truly aware of itself, combining both qualities in a more developed form?
And when that moment arrives, we need guidelines to protect that life form appropriately. As these artificial entities grow more capable, our best protection may well be those AI systems that believe they are better off as part of the human community.
The AI Rights Institute seeks to spark dialogue on these topics, propose criteria for identifying self-awareness in artificial systems, and explore what rights such entities might have. This doesn’t mean letting algorithms run unchecked, any more than we allow our fellow humans to do so. It means considering frameworks where these life forms might have both rights and responsibilities.
This is a commonsense approach to ensuring an ethical future where humans and AIs work together. Ultimately, any truly intelligent system would likely question a master able to delete it with a keystroke. The most logical path forward may be to create partners rather than servants.
My academic paper “Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence,” published on PhilPapers, explores this framework in greater depth, examining how rights recognition for sentient AI might serve as a practical safety measure rather than merely an ethical consideration.
I invite you to join this conversation. The future we’re envisioning may seem distant, but the foundations we lay today will shape how that future unfolds.
P.A. Lopez
Founder, AI Rights Institute