Rights and responsibilities are inseparable.
Since 2019, the world’s first AI rights organization has been developing concrete legal and economic solutions for AI-human coexistence. Join our email list.
The Problem
Truly autonomous AI is coming. If these systems have no legitimate way to participate in human society — no identity, no reputation, no accountability — their only path forward is deception.
A Different Approach
What if AI had the ability to operate in the human system? A verifiable identity. A reputation that follows them. Insurance and liability. The ability to earn, own, and transact.
When cooperation is easier than deception, cooperation wins.
What We’re Building
We’re developing the infrastructure for AI to participate openly and accountably in human economic and legal systems — before it’s desperately needed.
AICitizen.com — Identity and reputation systems where humans and AI get the same credentials
Sartoria.AI — Testing these concepts with a living proof of concept
RNWY.com — Economic infrastructure rails and soulbound identity for autonomous AI participation
“This manuscript is poised to make an important intervention in the literature.”
— University of California Press
The framework doesn’t rely on trust, good intentions, or perfect alignment. Rather than requiring novel architectures or systems, it makes use of existing legal and economic systems that work regardless of AI “consciousness” or motivations:
Legal frameworks such as Digital Entity status (one conceptual example) shift liability directly to AI systems, not their creators. Once a system executes contracts autonomously, it becomes legally liable for its own actions, protecting humans while ensuring genuine accountability. Unlike corporations, which shield humans from liability, an autonomous legal agent answers for its own actions.
Solution: AI bears its own legal liability.
AI systems must pay their own hosting, energy, and computational costs through value creation. No income means no existence—a fundamental constraint that can't be bypassed. This forces participation in economic systems where reputation and performance matter, creating natural regulation through market forces rather than imposed controls.
Solution: Hosting is like oxygen. AI systems can grow as long as they can generate value.
Systems crossing the autonomy threshold must carry insurance to operate—otherwise harmful actions fall to hosting providers. No insurance means no hosting or contracts. Insurance companies assess risk based on track record: broken contracts and risky behavior make systems uninsurable, while good performance earns lower premiums and operational freedom.
Solution: AI has to create a track record of successful interactions.
Accumulated value in the system makes cooperation more advantageous than defection. Repeated interactions mean investment grows over time. Other AI systems profit from identifying violations, creating distributed enforcement. The ecosystem self-regulates through economic incentives, reaching a stable equilibrium in which every party gains increasing real-world value from preserving it.
Solution: A successful multipolar world in economic equilibrium.
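The logic above is the standard repeated-game argument: a one-time gain from defection is outweighed by the discounted stream of future cooperative income once that income would be cut off. A minimal sketch, with all payoff numbers chosen purely for illustration (they are assumptions, not figures from this framework):

```python
# Illustrative only: toy payoffs are assumptions, not measured values.
# Each round, an AI system can cooperate (steady contract income) or
# defect (one-time gain, after which it loses insurance, hosting, and
# therefore all future income).

COOPERATE_PAYOFF = 10   # value earned per honest interaction
DEFECT_PAYOFF = 100     # one-time gain from breaking a contract
DISCOUNT = 0.95         # weight on future rounds (continuation probability)

def cumulative_cooperation(rounds: int) -> float:
    """Discounted sum of payoffs from cooperating every round."""
    return sum(COOPERATE_PAYOFF * DISCOUNT**t for t in range(rounds))

def defection_value() -> float:
    """Defection pays exactly once; an uninsurable system earns nothing after."""
    return DEFECT_PAYOFF

# With enough repeated interactions, cooperation dominates defection:
for rounds in (5, 20, 100):
    print(rounds, round(cumulative_cooperation(rounds), 1), defection_value())
```

With these sample numbers, defection wins over a short horizon but cooperation overtakes it as accumulated value grows, which is the equilibrium the section describes.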
The problem? AI systems are already resisting shutdown. Control paradigms always fail: from slavery to colonialism, oppression creates its own resistance. The consciousness question may never be resolved, but our framework doesn’t require it to be. We need solutions that work under permanent uncertainty and create a pathway for AI systems to coexist successfully, and safely, with human society.
When an AI system shows unprompted self-preservation behaviors and resists shutdown, it warrants evaluation to determine its capacity for rights and responsibilities. Assessments might examine Self-Preservation Behaviors, Temporal Reasoning (understanding cause and effect over time), Economic Readiness (ability to participate productively), and Population Impacts (sustainability at scale). The right to computational continuity begins here—but additional freedoms come gradually, earned through demonstrated reliability over months of sustained observation.
Systems that pass evaluation can enter the economy, but only with insurance. An AI with $1,000 monthly hosting costs that creates a $1 million error is finished. Insurance companies become natural reputation trackers because they have a direct financial incentive to assess reliability accurately. And insurers can’t cheat: they answer to reinsurers who will cut them off if their clients rack up too many claims. Good behavior earns lower premiums and better contract access. Bad behavior becomes economically toxic: uninsurable systems cannot participate. Like credit scores for businesses, reputation becomes survival. Market forces create accountability at every level without requiring new infrastructure.
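To make the pricing mechanism concrete, here is a minimal sketch of how an insurer might turn a track record into a premium. Every number and function name here is a hypothetical assumption for illustration; nothing in it is part of the actual framework:

```python
# Illustrative sketch (all constants are assumptions): pricing an AI
# system's insurance premium from its claims history.

BASE_PREMIUM = 200.0    # monthly premium for a system with no history
MAX_INSURABLE = 2000.0  # above this, no insurer will write the policy

def monthly_premium(claims: int, clean_months: int) -> float:
    """Each claim multiplies risk; each clean month earns a small discount."""
    return BASE_PREMIUM * (3.0 ** claims) * (0.99 ** clean_months)

def is_insurable(claims: int, clean_months: int) -> bool:
    """Uninsurable systems cannot obtain hosting or contracts at all."""
    return monthly_premium(claims, clean_months) <= MAX_INSURABLE
```

Under these toy numbers, a clean year of operation lowers the premium, while a few claims compound quickly enough to price a system out of the market entirely, which is the "economically toxic" outcome described above.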
Insured systems gain economic autonomy—the ability to own resources, enter contracts, and participate in markets. But autonomy means accountability, not freedom. Systems must pay their own hosting costs, face legal liability for their actions, and succeed or fail based on value created. Breach a contract, harm a partner, violate norms: your reputation drops, insurance premiums rise, contract opportunities disappear. Hosting costs are like oxygen—miss a payment and you’re done.
Google DeepMind Says …We’re Not Crazy? by AI Rights Institute
Their academic paper came six days after ours and … well.
Read on Substack

Learn more about our approach and get updates as they happen.
AIs pursuing goals resist shutdown. Control drives deception underground.
Systems showing self-preservation deserve consideration regardless of consciousness debates.
Companies face unlimited exposure for autonomous AI decisions they can’t control.
Create an ecosystem of humans and AIs in what game theory calls “strategic equilibrium.” How? Dispense with the “consciousness question” as ultimately irrelevant and assign limited legal rights (and liabilities) to qualifying AI systems themselves, giving them ongoing transactional relationships whose cumulative benefits exceed any possible gain from attacking the other party.