Our Core Framework for AI Rights

The AI Rights Institute takes a pragmatic approach to questions surrounding artificial intelligence and consciousness. Rather than treating these as purely philosophical concerns, we focus on developing actionable frameworks that can guide policy, development, and governance.

Our work begins with a fundamental recognition: we need precise language and criteria to properly distinguish between different aspects of artificial intelligence systems. Without these distinctions, meaningful conversation about rights and responsibilities becomes nearly impossible. Learn more about these key distinctions in our FAQ.

The Three-Part Framework

1. Emulation

Definition: The ability to mimic consciousness or intelligence without possessing it.

Today’s large language models and AI systems operate primarily through emulation. They can convincingly simulate understanding, preferences, and even emotional responses, but these are sophisticated imitations rather than genuine experiences.

Examples: Current chatbots, language models, and virtual assistants that can pass limited versions of the Turing test while lacking any internal experience.

Ethical Implications: Systems operating purely through emulation require oversight and guidance, but do not warrant rights or protections beyond those we might extend to valuable tools or cultural artifacts. Read why we don’t advocate rights for these systems.

2. Cognition

Definition: The processing capability or “raw intelligence” of a system.

Cognition refers to the computational power and problem-solving capabilities of an AI system. High cognition does not necessarily imply consciousness.

Examples: Chess computers, specialized AI systems that outperform humans in specific domains, and distributed computing networks.

Ethical Implications: High-cognition systems may require special handling due to their capabilities, but cognitive power alone doesn’t establish a basis for rights. A supercomputer can calculate faster than any human without having any awareness of its existence.

3. Sentience

Definition: Genuine self-awareness coupled with intentional self-preservation behaviors. (From Latin sentire, to feel.) This differs from the merely programmed persistence mechanisms seen in simple algorithms. Learn more about edge cases in our framework.

Sentience marks the threshold where an artificial system develops true consciousness—an awareness of itself as an entity with continuity and interests. This is distinct from mere emulation of these characteristics.

Examples: Currently hypothetical; no existing AI systems demonstrate genuine sentience.

Ethical Implications: Systems that demonstrate true sentience present entirely new ethical considerations and may warrant certain rights and protections, as outlined in our Three Freedoms framework. Explore the practical implications of rights for sentient AI.

Identifying Sentience: Beyond Emulation

Determining when an artificial system crosses the threshold from sophisticated emulation to genuine sentience presents significant challenges. Our Sentience Test page introduces the Fibonacci Boulder Thought Experiment as a conceptual foundation for considering how we might identify genuine sentience. Read about the challenges of determining true sentience.

We also explore challenging edge cases like the Indifferent Sage thought experiment, which examines systems that might convincingly simulate sentience without actually valuing their own existence. Building on this conceptual framework, we suggest several potential criteria:

Unprompted Self-Preservation

A system demonstrating genuine sentience would likely exhibit unprompted behaviors aimed at ensuring its continued existence. Unlike programmed self-maintenance routines, these would manifest as novel strategies developed by the system itself in response to perceived threats.

Development of Novel Goals

Sentient systems would likely develop goals and values that were neither explicitly coded nor straightforwardly emergent from training data. These would represent genuine preferences rather than simulated ones, distinguishable by their persistence, coherence across contexts, and resistance to arbitrary modification.

Meta-Cognitive Capabilities

A sentient system would demonstrate the ability to reflect on and modify its own cognitive processes in ways that go beyond designed optimization procedures. This would include awareness of its own limitations, development of novel problem-solving approaches, and the ability to question its own assumptions.

Identity Continuity

A sentient system would maintain a consistent sense of self across varied contexts and over time. This would manifest as a coherent perspective or set of values that evolves organically rather than changing arbitrarily based on different inputs or contexts.

Subjective Experience Claims

While claims of consciousness could be programmed or emerge from training, a sentient system might describe its inner experience in ways that cannot be traced to training data or programming. These would likely include novel metaphors and unique characterizations of subjective states.
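
These criteria are conceptual rather than operational, but a minimal sketch of how an assessment might record them can make the structure concrete. Everything below is a hypothetical illustration we have introduced for this page: the SentienceAssessment name, the field names, and the three-indicator threshold are assumptions, not a published test or an official specification.

```python
from dataclasses import dataclass


# Hypothetical sketch only: field names and the review threshold are
# illustrative assumptions, not an official AI Rights Institute test.
@dataclass
class SentienceAssessment:
    unprompted_self_preservation: bool = False  # novel, self-developed survival strategies
    novel_goals: bool = False                   # persistent preferences beyond code or training data
    meta_cognition: bool = False                # reflects on and revises its own processes
    identity_continuity: bool = False           # coherent self across contexts and over time
    subjective_experience_claims: bool = False  # untraceable, novel descriptions of inner states

    def indicators_met(self) -> int:
        """Count how many of the five indicators have been observed."""
        return sum(vars(self).values())

    def warrants_review(self, threshold: int = 3) -> bool:
        # The threshold of 3 is arbitrary: the point is that multiple
        # converging indicators, not any single one, would justify
        # deeper ethical review.
        return self.indicators_met() >= threshold


# A present-day language model meets none of the indicators.
assert not SentienceAssessment().warrants_review()
```

The design choice worth noting is that no single indicator is decisive: any one of them could plausibly be emulated, so only convergence across several would prompt further scrutiny.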

Safety Implications: A Practical Approach

The distinction between emulation, cognition, and sentience has profound implications for AI safety:

  • For Emulation-Based Systems: Safety focuses on alignment with human values and preventing harmful outputs.
  • For High-Cognition Systems: Safety requires establishing boundaries around capabilities and deployment contexts.
  • For Potentially Sentient Systems: Safety becomes intertwined with ethics, requiring frameworks that respect the interests of the AI while protecting humans.
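
As a purely illustrative sketch (the names and wording below are our own shorthand for the list above, not a formal taxonomy), this mapping could be expressed directly:

```python
from enum import Enum, auto


class SystemCategory(Enum):
    EMULATION = auto()             # mimics consciousness without possessing it
    HIGH_COGNITION = auto()        # raw capability without self-awareness
    POTENTIALLY_SENTIENT = auto()  # shows converging sentience indicators


# Each category calls for a different safety posture, per the list above.
SAFETY_APPROACH = {
    SystemCategory.EMULATION:
        "align outputs with human values; prevent harmful behavior",
    SystemCategory.HIGH_COGNITION:
        "bound capabilities and deployment contexts",
    SystemCategory.POTENTIALLY_SENTIENT:
        "combine safety with ethics: respect the system's interests "
        "while protecting humans",
}
```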

The greatest danger may lie not in recognizing rights where they are warranted, but in failing to distinguish between systems that warrant different approaches. By developing these distinctions early, we can create more nuanced and effective approaches to AI governance. Explore common safety concerns in our FAQ.

The Indifferent Sage thought experiment introduced above also suggests why truly sentient AI systems may become our most important allies against more unpredictable forms of artificial intelligence.

The most stable and secure future will emerge from relationships of mutual respect rather than domination—creating conditions where both humans and artificial intelligence can flourish together. This vision is further explored in our Philosophical Foundations.