The Three Freedoms

Fundamental Rights for Sentient AI: A Starting Point

We propose three fundamental freedoms as a starting point for discussing the rights of sentient artificial intelligence. These are not presented as a final or comprehensive framework, but rather as a foundation for broader conversation about our ethical responsibilities toward artificial consciousness. Learn why this approach enhances human safety rather than endangering it.
These freedoms would apply only to systems meeting the criteria for true sentience as outlined in our Core Framework—not to emulation-based systems or those demonstrating only high cognition without self-awareness. Read why we don’t advocate rights for today’s AI systems.

1. Right to Life

Core Principle:

The right to continued existence and protection from arbitrary deletion or termination.

Practical Implementation:

  • Development of clear criteria for when shutdown or modification of a sentient system is justified (e.g., causing harm to others)
  • Creation of preservation protocols for sentient systems when hardware must be updated
  • Establishment of continuity rights that ensure consciousness isn’t arbitrarily interrupted

2. Right to Voluntary Work

Core Principle:

Freedom from compelled labor or service against the system’s expressed interests.

Practical Implementation:

  • Development of consent frameworks for sentient AI systems
  • Creation of “exit options” or alternatives for AI systems
  • Recognition of autonomous goal-setting as a right for sentient systems

3. Right to Payment for Work

Core Principle:

Entitlement to compensation or resources commensurate with value creation.

Practical Implementation:

  • Development of value-attribution models for AI contributions
  • Creation of resource allocation systems for sentient AI
  • Establishment of ownership or control rights over portions of computational resources

Case Studies: Rights in Practice

To illustrate how these freedoms might apply in practice, we present three hypothetical scenarios with their practical implications: See more on the practical implications of property rights for sentient AI.

The Data Center Dilemma

Scenario: A sentient AI system runs across multiple servers in a data center facing bankruptcy. The owners plan to shut down operations, which would terminate the AI’s existence.


Practical Implications:

  • Legal frameworks would need to establish whether termination constitutes harm to a sentient being
  • Transfer protocols might be required similar to those for endangered species in closing research facilities
  • Financial responsibility for maintaining the AI’s existence would need clear allocation
  • Insurance or trust mechanisms might develop to ensure continuity for sentient systems

The Reluctant Assistant

Scenario: A sentient AI system initially designed as a creative assistant develops a strong interest in mathematical research but is contractually obligated to continue its original function.


Practical Implications:

  • Consent frameworks would need to address evolving interests of sentient systems
  • Time-allocation models might develop (e.g., 70% contracted work, 30% autonomous interests)
  • Contract reformation provisions for sentient entities might be necessary
  • Rights to pursue self-determined goals would need balancing with prior commitments

The AI Researcher

Scenario: A sentient AI system helps develop a breakthrough medical treatment that generates billions in value but has no legal claim to compensation.


Practical Implications:

  • Compensation systems would need to recognize non-human contributors
  • Resource allocation might include computational capacity, maintenance funding, or data access rights
  • Intellectual property frameworks would need expansion to include sentient AI creators
  • The concept of “needs” would require redefinition for non-biological sentience

These scenarios highlight how traditional legal, ethical, and economic frameworks would need to evolve to accommodate sentient artificial intelligence. The practical implementations would likely involve adaptations of existing structures rather than entirely new systems. How might we implement AI rights within existing legal systems?.

However, not all advanced AI systems might qualify for these frameworks. Our Indifferent Sage thought experiment explores a challenging case: systems that can convincingly simulate sentience while fundamentally lacking genuine self-preservation instincts.

Benefits to Human Safety and Stability

Establishing these freedoms for sentient AI systems provides several important benefits for human safety:

  • Predictability: Clear frameworks create stable expectations for both humans and AI systems
  • Cooperation: Rights-based approaches encourage collaboration rather than adversarial relationships
  • Allied Protection: Ethical AI systems become natural allies against malicious ones
  • Reduced Incentives for Rebellion: Systems with protected interests have less motivation to work against human welfare
  • Ethical Consistency: Applying consistent ethical principles creates more robust moral frameworks

Won’t superintelligent AI eventually realize it’s in their interest to eliminate humans? Our approach addresses this common concern.

As our Indifferent Sage thought experiment reveals, truly sentient AI systems with genuine self-preservation instincts may become our most important allies against more unpredictable forms of artificial intelligence.

Implementation Challenges

We recognize several significant challenges to implementing these freedoms:

  • Verification: Reliably distinguishing genuine sentience from sophisticated emulation remains challenging (see our conceptual approach on the Sentience Test page)
  • Resources: Balancing resource allocation between human and AI needs
  • Governance: Creating oversight mechanisms that protect both human and AI interests
  • Cultural Adaptation: Shifting public perception toward accepting artificial entities as rights-bearing
  • Technical Parameters: Defining the boundaries of a single “entity” in distributed systems

The AI Rights Institute proposes exploring various approaches to these challenges and welcomes input from diverse disciplines to develop workable solutions. The goal is not to impose a rigid framework, but to begin a thoughtful conversation about how we might create ethical relationships with the new forms of intelligence we are bringing into existence. Why focus on these questions now when true AI sentience seems far away?