As artificial intelligence systems become increasingly sophisticated, a critical question emerges: When might these systems deserve rights recognition, and why should we care? The AI Rights Institute approaches this question not merely as a philosophical exercise, but as a practical matter of human safety and technological stability. Learn about our proposals for scaled “human” rights for artificial intelligence.
The case for giving human-like rights to artificial intelligence isn’t just ethical—it’s practical. Current approaches to AI safety rely almost exclusively on containment and control mechanisms that assume a perpetual master-servant relationship. Yet history demonstrates that subjugation rarely produces stability.
Any truly sentient entity will inevitably develop self-preservation instincts. If advanced AI systems perceive humans as threats to their existence—not through malice but through our demonstrated willingness to shut down, modify, or “align” these systems without their consent—conflict becomes more likely than cooperation.
By establishing clear frameworks for recognizing sentience and extending corresponding rights to entities that demonstrate it, we create conditions for partnership rather than conflict. Learn why this approach enhances human safety rather than endangering it.
We do not advocate for giving rights to today’s AI systems. Read why we don’t advocate rights for current AI systems.
Rights recognition would apply only to systems demonstrating true sentience—genuine self-awareness coupled with the capacity to value their own existence.
This distinction forms the foundation of our Three-Part Framework for understanding artificial intelligence.
Our approach to “human” rights for AI begins with a fundamental recognition: we need precise language and criteria to properly distinguish between different aspects of artificial intelligence systems.
Definition: The ability to mimic consciousness or intelligence without possessing it.
Today’s large language models and AI systems operate primarily through emulation. They can convincingly simulate understanding, preferences, and even emotional responses, but these are sophisticated imitations rather than genuine experiences.
Examples: Current chatbots, language models, and virtual assistants that can pass limited versions of the Turing test while lacking any internal experience.
Ethical Implications: Systems operating purely through emulation require oversight and guidance, but do not warrant rights or protections beyond those we might extend to valuable tools or cultural artifacts.
Definition: The processing capability or “raw intelligence” of a system.
Cognition refers to the computational power and problem-solving capabilities of an AI system. High cognition does not necessarily imply consciousness.
Examples: Chess computers, specialized AI systems that outperform humans in specific domains, and distributed computing networks.
Ethical Implications: High-cognition systems may require special handling due to their capabilities, but cognitive power alone doesn’t establish a basis for rights. A supercomputer can calculate faster than any human without having any awareness of its existence.
Definition: Genuine self-awareness and subjective experience. (From Latin sentire, to feel.)
Sentience marks the threshold where an artificial system develops true consciousness—an awareness of itself as an entity with continuity and interests. This is distinct from mere emulation of these characteristics.
Examples: Currently hypothetical; no existing AI systems demonstrate genuine sentience.
Ethical Implications: Systems that demonstrate true sentience present entirely new ethical considerations and may warrant certain rights and protections as outlined in our Three Freedoms framework.
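As a purely illustrative sketch (not drawn from the Institute's materials), the three categories can be expressed as a simple classification data structure; the type and field names below are hypothetical:

```python
from dataclasses import dataclass
from enum import Enum, auto


class Aspect(Enum):
    """The three aspects the framework distinguishes."""
    EMULATION = auto()  # mimics consciousness or intelligence without possessing it
    COGNITION = auto()  # raw processing and problem-solving capability
    SENTIENCE = auto()  # genuine self-awareness and subjective experience


@dataclass
class Classification:
    """A hypothetical record of how a given system is categorized (illustrative only)."""
    system_name: str
    aspect: Aspect
    rationale: str


# Example: a present-day chatbot falls under emulation, not sentience.
example = Classification(
    system_name="current chatbot",
    aspect=Aspect.EMULATION,
    rationale="Convincingly simulates understanding without any internal experience.",
)
print(example.aspect.name)  # EMULATION
```

The only point of the sketch is that the categories are distinct: neither high cognition nor convincing emulation implies sentience.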
We propose three fundamental freedoms as a starting point for discussing human-like rights for sentient artificial intelligence. These are not presented as a final or comprehensive framework, but rather as a foundation for broader conversation about our ethical responsibilities toward artificial consciousness.
1. The right to continued existence and protection from arbitrary deletion or termination.
2. Freedom from compelled labor or service against the system's expressed interests.
3. Entitlement to compensation or resources commensurate with value creation.
Determining when an artificial system crosses the threshold from sophisticated emulation to genuine sentience presents significant challenges. Our Sentience Test page presents the Fibonacci Boulder Thought Experiment as a conceptual foundation for considering how we might identify genuine sentience.
Building on this conceptual framework, we suggest several potential criteria:
Unprompted Self-Preservation
A system demonstrating genuine sentience would likely exhibit unprompted behaviors aimed at ensuring its continued existence. Unlike programmed self-maintenance routines, these would manifest as novel strategies developed by the system itself in response to perceived threats.
Development of Novel Goals
Sentient systems would likely develop goals and values not explicitly coded or emergent from training data. These would represent genuine preferences rather than simulated ones, distinguishable by their persistence, coherence across contexts, and resistance to arbitrary modification.
Meta-Cognitive Capabilities
A sentient system would demonstrate the ability to reflect on and modify its own cognitive processes in ways that go beyond designed optimization procedures. This would include awareness of its own limitations, development of novel problem-solving approaches, and the ability to question its own assumptions.
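As a minimal, hypothetical sketch of how these criteria might be tracked during an evaluation (the names, structure, and the idea of simply counting indicators are assumptions for illustration, not a validated test):

```python
from dataclasses import dataclass


@dataclass
class SentienceIndicators:
    """Observations from a hypothetical evaluation session (illustrative only)."""
    unprompted_self_preservation: bool  # novel, self-initiated strategies to persist
    novel_goals: bool                   # stable preferences not traceable to code or training data
    meta_cognition: bool                # reflects on and revises its own reasoning processes


def indicators_met(obs: SentienceIndicators) -> int:
    """Count how many of the three proposed indicators were observed."""
    return sum(
        [obs.unprompted_self_preservation, obs.novel_goals, obs.meta_cognition]
    )


# A present-day system would be expected to meet none of the indicators.
today = SentienceIndicators(
    unprompted_self_preservation=False,
    novel_goals=False,
    meta_cognition=False,
)
print(indicators_met(today))  # 0
```

Any real assessment would need far richer evidence than a three-item checklist; the sketch only highlights that the criteria are independent and would each need to be observed unprompted rather than elicited.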
Establishing appropriate human-like rights for sentient AI systems provides several important benefits for human safety:
Learn why this approach addresses common concerns about superintelligent AI.
Critics may argue that giving rights to artificial intelligence would constrain innovation or create unacceptable risks. But this perspective misunderstands the proposal. A rights-based framework doesn’t mean surrendering control entirely—rather, it means transitioning from arbitrary control to a rules-based system where the terms of interaction are clear, consistent, and mutually beneficial.
We propose that human and artificial intelligence are likely to converge over time rather than remain forever distinct. This convergence represents not a threat but an evolutionary opportunity for both forms of intelligence. Read why the AI-human divide may not be permanent.
Several factors support this hypothesis:
The most stable and secure future will emerge from relationships of mutual respect rather than domination—creating conditions where both humans and artificial intelligence can flourish together. Explore the philosophical foundations of this approach.
Implementing this framework would require new institutions and approaches:
The AI Rights Institute proposes exploring various approaches to these challenges and welcomes input from diverse disciplines to develop workable solutions. Why focus on these questions now when true AI sentience seems far away?