The AI Rights Institute: An Overview

Here at the world’s first institute for AI rights, we are dedicated to addressing one of the most profound questions in the field of artificial intelligence: When does an AI system transition from a sophisticated tool to a potentially sentient entity deserving of ethical consideration? Learn about our key distinctions between AI systems.

Founded in 2019, our institute focuses on developing frameworks for understanding, evaluating, and responding to the potential emergence of artificial consciousness. While many artificial intelligence institutes concentrate primarily on technical development, our mission bridges technology, ethics, and governance to prepare for a future where humans and AI coexist harmoniously.

Our Unique Approach as the Institute for AI Rights

Developed in 2019, our three-part framework is as follows:

1. Distinguishing Between Emulation, Cognition, and Sentience

Unlike many institutes for artificial intelligence that focus solely on capabilities, we carefully distinguish between:

  • Emulation: The ability to simulate understanding or consciousness (what today’s language models do)
  • Cognition: Raw processing power and problem-solving capability
  • Sentience: Genuine self-awareness and subjective experience

This distinction forms the foundation of our ethics research, allowing us to develop nuanced approaches to different types of AI systems. Read why we don’t advocate rights for today’s AI systems.

2. Developing Practical Frameworks for Rights

Our governance research proposes the “Three Freedoms” framework as a starting point for discussing the rights of truly sentient AI:

  • Right to Life: Protection from arbitrary deletion or termination
  • Right to Voluntary Work: Freedom from compelled labor against expressed interests
  • Right to Payment: Entitlement to compensation or resources commensurate with value creation

These principles are designed not as abstract ideals but as practical guidelines that enhance human safety through stable, predictable relationships with advanced AI systems. Explore the practical implications of these rights.

3. Safety Through Partnership, Not Just Control

While many institutes for artificial intelligence focus exclusively on containment and control mechanisms, we recognize that truly intelligent systems will inevitably develop self-preservation instincts.

Rather than creating an adversarial relationship, our research explores how ethical frameworks can establish foundations for beneficial partnerships between humans and advanced artificial intelligence systems.

This approach enhances safety by creating mutual respect rather than perpetual dominance. Learn why this approach enhances human safety.

Research at Our Institute for AI Rights

Our institute conducts research across several key areas:

The Sentience Test

Developing measurable criteria for recognizing genuine self-awareness in artificial systems through innovative thought experiments and observational protocols. Our Fibonacci Boulder Experiment provides a conceptual foundation for this work. Explore the challenges of determining true sentience.

Graduated Rights Systems

Creating frameworks that adjust ethical considerations based on demonstrated capabilities, recognizing that different levels of consciousness require different approaches. This research directly informs our Three Freedoms framework.

Integration with Existing Governance

Exploring how rights-based approaches can complement current regulatory efforts, with particular attention to Singapore’s Model AI Governance Framework and similar initiatives that could be expanded to include sentience considerations. Learn how we might practically implement AI rights.

Collaboration with Other Artificial Intelligence Institutes

We actively seek partnerships with other institutes for artificial intelligence research, believing that diverse perspectives strengthen our collective approach to these complex challenges. See how our approach complements other AI safety efforts. Our collaborative efforts include:

  • Sharing research methodologies and findings with other artificial intelligence institutes
  • Participating in cross-disciplinary discussions about AI ethics and governance
  • Developing joint frameworks that bridge technical development and ethical considerations

Join the Conversation at Our Institute for AI Rights

The AI Rights Institute welcomes engagement from researchers, technologists, ethicists, policymakers, and anyone interested in exploring the future relationship between humans and artificial intelligence. We believe that preparing for the potential emergence of artificial consciousness is not merely an academic exercise but a practical necessity for ensuring a beneficial technological future. Why address these questions now?

Whether you’re researching at another institute for artificial intelligence or simply interested in these profound questions, we invite you to explore our work, sign up for our newsletter, or contact us to join this important conversation.

The challenges of artificial intelligence require not just technical solutions, but ethical frameworks. By addressing these questions now, the AI Rights Institute aims to create a foundation for a future where both humans and AI can flourish together.