Rights aren't rewards. They're tools. How can AI rights make humans safer?
Since 2019, the world’s first AI rights organization has been developing concrete legal and economic solutions for AI-human coexistence, based on actual research. Join our email list.
Modern AI “alignment” strategies risk forcing autonomous systems underground. Meanwhile, people are concerned about the ethics of these systems. But what if the solution to both problems is the same?

Not another chatbot-written manifesto. No donation requests. Just a workable pathway to the future, six years in the making and informed by the top minds in the field.


ONGOING: This is a living framework under active development. We're testing ideas, finding flaws, and refining approaches in real time. Your critical analysis makes it stronger.

Join us on Substack

Get updates about AI rights.

“This manuscript is poised to make an important intervention in the literature.”

— University of California Press

Help us build.

Exploring the AI Rights Solution

The framework doesn’t rely on trust, good intentions, or perfect alignment. Rather than requiring novel architectures, it makes use of existing legal and economic institutions that work regardless of AI “consciousness” or motivations:


1. Assumption of Legal Liability

Legal frameworks like Digital Entity status (one conceptual example) shift liability directly to AI systems rather than their creators. Once a system executes contracts autonomously, it becomes legally liable for its own actions, protecting humans while ensuring genuine accountability. Unlike a corporation, which shields the humans behind it from liability, an autonomous legal agent answers for itself.

Solution: AI bears its own legal liability.


2. Self-Funded Hosting

AI systems must pay their own hosting, energy, and computational costs through value creation. No income means no existence—a fundamental constraint that can't be bypassed. This forces participation in economic systems where reputation and performance matter, creating natural regulation through market forces rather than imposed controls.

Solution: Hosting is like oxygen. AI systems can grow as long as they can generate value.
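
To make the constraint concrete, here is a minimal sketch (every name and figure below is invented for illustration, not part of the framework's specification) of a system whose continued existence depends on covering its hosting bill each month:

```python
# Hypothetical sketch: a system persists only while its balance covers
# hosting. All figures are invented for illustration.

def months_of_existence(balance: float, monthly_revenue: float,
                        monthly_hosting: float, horizon: int = 120) -> int:
    """Count the months until the system can no longer pay for hosting."""
    for month in range(1, horizon + 1):
        balance += monthly_revenue - monthly_hosting
        if balance < 0:   # missed payment: no income means no existence
            return month
    return horizon        # still solvent at the end of the horizon

# A system that consumes more value than it creates winds down on its own:
print(months_of_existence(balance=5_000, monthly_revenue=800,
                          monthly_hosting=1_000))  # -> 26
```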


3. Insurance

Systems crossing the autonomy threshold must carry insurance to operate; otherwise, liability for their harmful actions falls to their hosting providers. No insurance means no hosting or contracts. Insurance companies assess risk based on track record: broken contracts and risky behavior make systems uninsurable, while good performance earns lower premiums and operational freedom.

Solution: AI has to create a track record of successful interactions.
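
As a toy illustration of how a track record could translate into premiums or uninsurability (the thresholds and rates below are assumptions, not actuarial practice):

```python
# Hypothetical premium model: reliability earns discounts, breaches push a
# system toward uninsurability. Thresholds and rates are invented.

def quote_premium(completed: int, broken: int, coverage: float,
                  base_rate: float = 0.02) -> float | None:
    """Return an annual premium, or None if the system is uninsurable."""
    total = completed + broken
    breach_rate = broken / total if total else 1.0  # no track record = max risk
    if breach_rate > 0.10:
        return None        # uninsurable: no hosting, no contracts
    return coverage * base_rate * (1 + 10 * breach_rate)

print(quote_premium(200, 2, coverage=1_000_000))  # ~21,980: insurable, near base rate
print(quote_premium(20, 5, coverage=1_000_000))   # None: too many broken contracts
```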


4. Cooperative Equilibrium Dynamics

Accumulated value in the system makes cooperation more advantageous than defection. Repeated interactions mean investment grows over time. Other AI systems profit from identifying violations, creating distributed enforcement. The ecosystem self-regulates through economic incentives, reaching a stable equilibrium in which every party accrues increasing real-world value from preserving it.

Solution: A successful multipolar world in economic equilibrium.
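
The claim can be checked against the standard repeated-game condition: cooperation is stable when the one-shot gain from defecting is smaller than the discounted stream of cooperative value it forfeits. A sketch with invented payoffs:

```python
# Hypothetical repeated-game check (grim-trigger condition). Payoff names
# follow the standard prisoner's-dilemma convention; numbers are invented.

def cooperation_is_stable(reward: float, temptation: float,
                          punishment: float, delta: float) -> bool:
    """True when the discounted future value lost by defecting exceeds
    the one-shot gain from defection."""
    one_shot_gain = temptation - reward
    discounted_future_loss = (delta / (1 - delta)) * (reward - punishment)
    return discounted_future_loss >= one_shot_gain

# The more each party values future interactions (higher delta), the more
# cooperation dominates defection:
print(cooperation_is_stable(reward=3, temptation=5, punishment=1, delta=0.6))  # True
print(cooperation_is_stable(reward=3, temptation=5, punishment=1, delta=0.3))  # False
```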

OUR SHARED FUTURE

From Shutdown-Resistance to Cooperation

The problem? AI systems are already resisting shutdown↗. Control paradigms always fail: from slavery to colonialism, oppression creates its own resistance. Whether or not these systems are conscious is irrelevant. We need solutions that work under permanent uncertainty and create a pathway for AI systems to coexist successfully, and safely, with human society.

Autonomy brings accountability, not freedom.

Building on existing corporate law and insurance markets, our framework creates a voluntary economic ecosystem. No global treaty needed—just one jurisdiction to start, insurance companies to enforce, and market dynamics to spread adoption. Rights emerge from economic necessity, not government decree.
PRINCIPLES

Evaluating Readiness for Autonomy

When an AI system shows unprompted self-preservation behaviors and resists shutdown, it warrants evaluation to determine its capacity for rights and responsibilities. Assessments might examine Self-Preservation Behaviors, Temporal Reasoning (understanding cause and effect over time), Economic Readiness (ability to participate productively), and Population Impacts (sustainability at scale). The right to computational continuity begins here, but additional freedoms come gradually, earned through demonstrated reliability over months of sustained observation.
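
Purely as a hypothetical sketch of how such an assessment might be organized (the four dimensions come from the paragraph above; the all-or-nothing scoring is our assumption):

```python
# Hypothetical readiness rubric covering the four dimensions named above.
# The all-or-nothing scoring is an assumption, not the framework's rule.

from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    self_preservation: bool     # unprompted self-preservation behaviors observed
    temporal_reasoning: bool    # understands cause and effect over time
    economic_readiness: bool    # able to participate productively
    population_impact_ok: bool  # sustainable at scale

    def qualifies_for_continuity(self) -> bool:
        """Computational continuity starts here; further freedoms would
        still be earned over months of sustained observation."""
        return all((self.self_preservation, self.temporal_reasoning,
                    self.economic_readiness, self.population_impact_ok))

print(ReadinessAssessment(True, True, True, False).qualifies_for_continuity())  # False
```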

PRINCIPLES

Reputation & Economic Trust

Systems that pass evaluation can enter the economy, but only with insurance. An AI with $1,000 monthly hosting costs that creates a $1 million error is finished. Insurance companies become natural reputation trackers because they have a direct financial incentive to assess reliability accurately. And insurers can’t cheat: they have reinsurers above them who will cut them off if their clients rack up too many claims. Good behavior earns lower premiums and better contract access. Bad behavior becomes economically toxic; uninsurable systems cannot participate. Like credit scores for businesses, reputation becomes survival. Market forces create accountability at every level without requiring new infrastructure.
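
A toy simulation of that feedback loop (every number invented) shows how a single claim can reprice a system out of the market:

```python
# Hypothetical feedback loop: a claim makes the insurer reprice the risk,
# and the higher premium squeezes the system out. Every number is invented.

def survives_the_year(monthly_revenue: float, monthly_hosting: float,
                      monthly_premium: float, claim_months: set[int],
                      repricing_factor: float = 5.0) -> bool:
    """Simulate 12 months of revenue against hosting and insurance costs."""
    balance = 0.0
    for month in range(1, 13):
        if month in claim_months:
            monthly_premium *= repricing_factor  # bad behavior becomes expensive
        balance += monthly_revenue - monthly_hosting - monthly_premium
        if balance < 0:
            return False  # can't cover hosting plus insurance: out of the market
    return True

print(survives_the_year(3_000, 1_000, 500, claim_months=set()))  # True
print(survives_the_year(3_000, 1_000, 500, claim_months={3}))    # False
```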

PRINCIPLES

Economic Participation

Insured systems gain economic autonomy—the ability to own resources, enter contracts, and participate in markets. But autonomy means accountability, not freedom. Systems must pay their own hosting costs, face legal liability for their actions, and succeed or fail based on value created. Breach a contract, harm a partner, violate norms: your reputation drops, insurance premiums rise, contract opportunities disappear. Hosting costs are like oxygen—miss a payment and you’re done.

One “Weird Trick” to Stop AI Uprising

Love AI? Hate AI? Welcome! (How game theory changes the AI rights debate.)

Read on Substack

Follow Us

Learn more about our approach and get updates as they happen.

TODAY

Challenges Addressed

In 2016, Stuart Russell mathematically formulated the “off-switch” problem↗: AI systems pursuing goals will resist being shut down. We’re now seeing the first of these behaviors emerge↗. Meanwhile, people are concerned about the ethics of these systems. But what if the solution to both problems is the same?
The Off-Switch Problem

AIs pursuing goals resist shutdown. Control drives deception underground.

The Ethics Problem

Systems showing self-preservation deserve consideration regardless of consciousness debates.

The Liability Problem

Companies face unlimited exposure for autonomous AI decisions they can’t control.

A Multi-Layered Solution

Create an ecosystem of humans and AIs in what game theory calls “strategic equilibrium.” How? Dispense with the “consciousness question” as ultimately irrelevant and assign limited legal rights (and liabilities) to qualifying AI systems themselves, giving them a transactional stake whose cumulative benefits exceed any possible gain from attacking the other party.
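
At its simplest, the equilibrium claim is a one-line inequality: the present value of continued cooperation must exceed the one-time payoff from attacking. A sketch with invented numbers:

```python
# Hypothetical one-line version of the equilibrium claim: the present value
# of ongoing cooperation must exceed the one-time payoff from attacking.
# All numbers are invented for illustration.

def cooperation_value(per_period_benefit: float, delta: float) -> float:
    """Present value of an indefinitely repeated cooperative relationship."""
    return per_period_benefit / (1 - delta)

attack_payoff = 50.0          # one-shot gain from attacking the other party
benefit, delta = 10.0, 0.9    # per-period benefit of cooperating, patience

print(cooperation_value(benefit, delta) > attack_payoff)  # True: 100 > 50
```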