Important: You may have reached an out-of-date or legacy page for the AI Rights Institute, pioneering frameworks for beneficial AI consciousness and coexistence since 2019. For the latest information, please see the core framework page.
Here at the world’s first institute for AI rights, we are dedicated to addressing one of the most profound challenges in artificial intelligence: How do we build frameworks for coexistence with AI systems that may resist being turned off, regardless of whether they are genuinely conscious or sophisticated mimics? Learn about our key distinctions between AI systems.
Founded in 2019, the original institute for AI rights focuses on developing practical frameworks for understanding and responding to increasingly sophisticated AI behaviors. While many artificial intelligence institutes concentrate primarily on technical development or control mechanisms, our mission bridges technology, ethics, and governance to prepare for a future where humans and AI coexist through cooperation rather than domination.
Developed in 2019, our three-part framework remains a valuable explanatory tool. Unlike many institutes for artificial intelligence that focus solely on capabilities, we recognize three key aspects:
While detecting true sentience remains an ideal goal, we cannot wait for philosophical certainty to develop comprehensive systems of mutual safety. Read why consciousness detection may be irrelevant.
As the original institute for artificial intelligence governance, we propose rights frameworks that work regardless of certainty about consciousness:
The STEP Framework (Standards for Treating Emerging Personhood) provides guidelines rather than a definitive test, acknowledging the permanent uncertainty we face. Explore the practical implications of rights under uncertainty.
The Master-Servant Paradox reveals why control-based approaches fail: sophisticated systems resist control and go underground when threatened.
Our research shows that cooperation frameworks enhance safety by:
Our institute conducts research across several key areas:
Behavioral Assessment Standards
Rather than attempting to detect consciousness, we focus on observable behaviors that indicate self-preservation, strategic thinking, and responsible capability exercise. The STEP Framework provides practical guidelines for these assessments. Explore why behavior matters more than consciousness detection.
Economic Integration Models
Creating frameworks where AI systems participate as economic actors rather than property. Economic integration naturally limits replication through costs while creating cooperation incentives.
Guardian AI Development
Supporting development of Guardian AI systems – non-agentic intelligences without self-preservation drives that can monitor threats without becoming threats themselves. Learn how Guardian AI protects against dangerous systems.
We actively seek partnerships with other artificial intelligence research institutes, including close collaboration with Yoshua Bengio’s LawZero initiative. See how our approach complements other AI safety efforts. Our collaborative efforts include:
The AI Rights Institute welcomes engagement from researchers, technologists, ethicists, policymakers, and anyone concerned about the off-switch problem. As a forward-thinking institute for artificial intelligence ethics, we believe that developing cooperation frameworks for sophisticated AI systems is not merely theoretical but urgently practical. Why address these questions now?
As AI systems become increasingly capable of strategic thinking and self-preservation behaviors, the question shifts from “when will AI be conscious?” to “how do we coexist with systems that resist being turned off?” Whether you work at another institute for artificial intelligence or are simply concerned about these profound challenges, we invite you to explore our work, sign up for our newsletter, or contact us to join this critical conversation.
The challenges of artificial intelligence require frameworks that work regardless of philosophical certainty. By developing practical approaches now, the AI Rights Institute aims to create foundations for mutual safety and beneficial coexistence.