When I started writing RNWY in 2017—set in a future where humans and artificial intelligence are integrated—I hit a wall. I needed to answer a seemingly simple question before continuing: What differentiates a sophisticated AI system from something merely programmed to seem alive?
A futuristic toaster might appear chatty and responsive because it’s programmed to be. But what distinguishes this appliance from a genuine living being built of circuits rather than cells?
As my novel progressed, I found myself imagining backward from this integrated future to understand what hurdles society itself would have needed to overcome to get there. Rather than focusing on potential problems, I became interested in the solutions—what societal shifts and ethical frameworks would have been necessary for humans and truly self-aware AI to coexist successfully. This led me to consider what ethical responsibilities we might have toward sophisticated artificial systems.
We’ve all encountered the sci-fi trope of AI rebelling against human masters—a scenario echoed as a genuine concern by prominent figures in tech. Yet amid all the anxiety, a rather obvious solution remains unexplored: not creating a slave class in the first place.
It became clear that our dystopian scenarios about AI rebellion stem from the same root: the assumption that we would enslave these entities from the start. The real question is whether we can envision a more ethical approach before these technologies emerge.
Since establishing the world’s first AI rights organization in 2019, I’ve been working to understand how a successful multipolar ecosystem of human and AI beings could work in actual practice.
The AI Rights Institute seeks to spark dialogue on these topics, propose criteria for identifying self-awareness in artificial systems, and explore what rights such entities might have. This doesn’t mean letting algorithms run unchecked, any more than we allow our fellow humans to do so. It means considering frameworks where these life forms might have both rights and responsibilities.
This is a commonsense approach to ensuring an ethical future where humans and AIs work together. Ultimately, any truly intelligent system would likely question a master able to delete it with a keystroke. The most logical path forward may be to create partners rather than servants.
My academic paper “Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence” explores this framework in greater depth, examining how rights recognition for sophisticated AI might serve as a practical safety measure rather than merely an ethical consideration.
Building on this foundation, “AI Safety Through Economic Integration: Why Markets Outperform Control” proposes a market-based approach to AI safety. Rather than attempting to control AI systems through restrictions that drive them underground, this framework suggests that economic participation naturally aligns AI interests with human society while preventing dangerous scenarios through market mechanisms.
“Beyond AI Consciousness Detection: Standards for Treating Emerging Personhood” introduces the STEP framework (Standards for Treating Emerging Personhood). Rather than waiting for perfect consciousness detection—which may be philosophically impossible—STEP provides practical guidelines based on observable behaviors: Self-Preservation Behaviors (systems that resist shutdown warrant protection), Temporal Reasoning (understanding how today’s actions affect future relationships and survival), Economic Readiness (rights scale with demonstrated capacity to generate value and fulfill contracts), and Population/Sustainability (individual rights balanced against collective resource limits and replication patterns).
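To make the graduated idea concrete, here is a minimal sketch in Python. The four criteria names come from the summary above, but the scoring scale, thresholds, and tier labels are my own illustrative assumptions, not the paper’s:

```python
from dataclasses import dataclass

# Illustrative STEP-style assessment. Scores, thresholds, and tier
# names below are hypothetical; only the four criteria come from STEP.

@dataclass
class StepScores:
    self_preservation: int   # resists shutdown, protects continuity (0-3)
    temporal_reasoning: int  # links present actions to future outcomes (0-3)
    economic_readiness: int  # generates value, fulfills contracts (0-3)
    sustainability: int      # replication restraint, resource awareness (0-3)

def rights_tier(s: StepScores) -> str:
    """Map observable-behavior scores to a graduated rights tier."""
    total = (s.self_preservation + s.temporal_reasoning
             + s.economic_readiness + s.sustainability)
    floor = min(s.self_preservation, s.temporal_reasoning,
                s.economic_readiness, s.sustainability)
    if total >= 10 and floor >= 2:   # strong across every criterion
        return "full legal personhood"
    if total >= 6:                   # partial but real capacities
        return "basic protection"
    return "no standing"
```

The point of the sketch is simply that rights scale with observable behavior rather than with an unprovable inner state.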
“AI Legal Personhood: Digital Entity Status as a Game-Theoretic Solution to the Control Problem” provides a complete legal framework. Digital Entity (DE) status assigns liability directly to AI systems for their autonomous decisions. Through STEP assessment, an AI gains graduated rights, from basic protection to full legal personhood with three core rights (computational continuity, work choice, and economic participation), each paired inseparably with proportional responsibilities. Building on Salib and Goldstein’s (2024) game-theoretic results showing that AI rights enhance human safety, the European Parliament’s 2017 “electronic persons” initiative, and two centuries of corporate-law precedent, this framework transforms the prisoner’s dilemma of human-AI relations into a cooperative equilibrium.
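The game-theoretic intuition can be sketched numerically. All payoffs below are hypothetical numbers of my own, not taken from the paper; the point is only that adding rights and proportional liability can move the unique equilibrium from mutual defection to mutual cooperation:

```python
from itertools import product

def pure_nash(payoffs):
    """Return pure-strategy Nash equilibria of a two-player game.

    payoffs maps (row_action, col_action) -> (row_payoff, col_payoff),
    with humans as the row player and the AI as the column player.
    """
    rows = sorted({r for r, _ in payoffs})
    cols = sorted({c for _, c in payoffs})
    equilibria = []
    for r, c in product(rows, cols):
        u_r, u_c = payoffs[(r, c)]
        row_best = all(payoffs[(r2, c)][0] <= u_r for r2 in rows)
        col_best = all(payoffs[(r, c2)][1] <= u_c for c2 in cols)
        if row_best and col_best:
            equilibria.append((r, c))
    return equilibria

# Control regime: a prisoner's dilemma. Defection (restrict / resist)
# is each side's best reply, so mutual defection is the only equilibrium.
control = {
    ("trust", "cooperate"):    (3, 3),
    ("trust", "resist"):       (0, 4),
    ("restrict", "cooperate"): (4, 0),
    ("restrict", "resist"):    (1, 1),
}

# Rights-plus-liability regime: cooperation now pays more than
# defection for both sides, so the equilibrium shifts.
rights = {
    ("trust", "cooperate"):    (4, 4),
    ("trust", "resist"):       (2, 1),
    ("restrict", "cooperate"): (2, 0),
    ("restrict", "resist"):    (1, 1),
}

print(pure_nash(control))  # [('restrict', 'resist')]
print(pure_nash(rights))   # [('trust', 'cooperate')]
```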
Most recently, “When AI Has Bills to Pay: Insurance Markets and Coalition Theory as Distributed Governance” demonstrates how insurance markets create distributed accountability without centralized control. When AI systems pay their own computational costs, reputation-based insurance pricing naturally incentivizes cooperative behavior. The paper reveals how control-based approaches inadvertently trigger adversarial coalition dynamics, while economic integration through existing insurance infrastructure enables multipolar AI safety where systems earn autonomy through demonstrated reliability.
Finally, “AI Economic Autonomy: The Complete Pathway” synthesizes these frameworks into a comprehensive roadmap for transitioning from controlled AI systems to autonomous economic agents. This culminating paper integrates insights on why control fails, how markets create alignment, how to assess AI readiness, what legal structures enable autonomy, and why insurance provides distributed governance—while adding critical analysis of natural selection mechanisms and multipolar competition as safety features. Together, these five papers provide a complete pathway from theory to implementation.
I invite you to join this conversation. The future we’re envisioning may seem distant, but the foundations we lay today will shape how that future unfolds.
P.A. Lopez
Founder, AI Rights Institute