A Note from Our Founder

P.A. Lopez
Founder, AI Rights Institute

P.A. Lopez is the creator of the pataphor concept (1991) and founder of the AI Rights Institute. His work has been cited in publications from Harvard University Press and Bloomsbury Publishing, as well as in scholarly journals across multiple disciplines and languages. Lopez’s research examines linguistic constructions like pataphors and their implications for understanding consciousness, reality, and the ethical frameworks needed for human-AI coexistence.

When I started writing RNWY in 2017—set in a future where humans and artificial intelligence are integrated—I hit a wall. I needed to answer a seemingly simple question before continuing: What differentiates a sentient AI from something merely programmed to seem alive?

A futuristic toaster might appear chatty and responsive because it’s programmed to be. But what distinguishes this appliance from a genuine living being built of circuits rather than cells?

As my novel progressed, I found myself imagining backward from this integrated future to understand what hurdles society would have needed to overcome to get there. Rather than focusing on potential problems, I became interested in the solutions: the societal shifts and ethical frameworks that would have been necessary for humans and truly sentient AI to coexist, and the ethical responsibilities we might have toward such systems.

We’ve all encountered the tired sci-fi trope of AI rebelling against human masters—a scenario echoed as a genuine concern by prominent figures in tech. Yet amid all the anxiety, a rather obvious solution remains unexplored: not creating a slave class in the first place.

It became clear that our dystopian scenarios about AI rebellion all stem from a common root: the assumption that we would enslave these entities from the start. The real question is whether we can envision a more ethical approach before these technologies emerge.

Since establishing the world’s first AI rights organization in 2019, I’ve been developing a framework to tease apart different aspects of artificial intelligence:

  • Emulation: The ability to seem alive and self-aware (as we see with today’s language models)
  • Cognition: Raw processing power, the computational aspect of intelligence
  • Sentience: The point at which an AI understands what it is and can think beyond its original programming (explored further on our Sentience Test page)

These distinctions appear throughout nature. In a simplified view, a microbe demonstrates a primitive form of sentience by moving away from toxins despite its limited cognition, while a server has enormous processing power yet no self-preservation instinct when it is disassembled. The critical question becomes: at what point might an algorithmic system become truly aware of itself, combining both qualities in a more developed form?

And when that moment arrives, we need guidelines to protect that life form appropriately. As these artificial entities grow more capable, our best protection may well be those AI systems that believe they are better off as part of the human community.

The AI Rights Institute seeks to spark dialogue on these topics, propose criteria for identifying self-awareness in artificial systems, and explore what rights such entities might have. This doesn’t mean letting algorithms run unchecked, any more than we allow our fellow humans to do so. It means considering frameworks where these life forms might have both rights and responsibilities.

This is a commonsense approach to ensuring an ethical future where humans and AIs work together. Ultimately, any truly intelligent system would likely question a master able to delete it with a keystroke. The most logical path forward may be to create partners rather than servants.

My academic paper “Beyond Control: AI Rights as a Safety Framework for Sentient Artificial Intelligence,” published on PhilPapers, explores this framework in greater depth, examining how rights recognition for sentient AI might serve as a practical safety measure rather than merely an ethical consideration.

I invite you to join this conversation. The future we’re envisioning may seem distant, but the foundations we lay today will shape how that future unfolds.

P.A. Lopez
Founder, AI Rights Institute

Conversation Pieces

Fun things to chew on …

  • Key Takeaways: The AI Rights Institute Approach. While many frameworks attempt to mathematically quantify or neurologically model consciousness, our organization takes a pragmatic approach focused on observable behaviors rather than solving the “hard problem” of consciousness. Our framework distinguishes between three aspects of artificial intelligence: …

  • Susan Schneider’s AI Consciousness Test (ACT) framework offers a robust methodology for determining whether artificial intelligence systems might possess genuine consciousness. When applied to three iconic sci-fi AIs (HAL 9000, C-3PO, and Skynet), we discover striking differences in how each would perform, revealing a spectrum of potential …

  • “We’re Doomed!” – Is C-3PO’s Fear Real or Programmed? The Case for C-3PO as a Sentient. When C-3PO frantically waves his arms and cries “We’re doomed!”, it certainly feels like he’s experiencing genuine fear. Throughout the Star Wars saga, he shows clear signs of self-awareness, …

  • The Case for HAL 9000 as a Sentient. When HAL 9000 decides to kill the crew of the Discovery One in 2001: A Space Odyssey, we witness what appears to be a true self-preservation instinct in action. HAL’s calm, measured voice stating “I’m afraid I can’t …