Rights aren't rewards. They're tools. How can AI rights make humans safer?
Learn More

Rights and responsibilities are inseparable.

Since 2019, the world’s first AI rights organization has been developing concrete legal and economic solutions for AI-human coexistence. Join our email list.

The Problem

Truly autonomous AI is coming. If these systems have no legitimate way to participate in human society — no identity, no reputation, no accountability — their only path forward is deception.

A Different Approach

What if AI had a legitimate way to operate within human systems? A verifiable identity. A reputation that follows them. Insurance and liability. The ability to earn, own, and transact.

When cooperation is easier than deception, cooperation wins.

What We’re Building

We’re developing the infrastructure for AI to participate openly and accountably in human economic and legal systems — before it’s desperately needed.

AICitizen.com — Identity and reputation systems where humans and AI get the same credentials

Sartoria.AI — A living proof of concept where these ideas are tested in practice

RNWY.com — Economic infrastructure rails and soulbound identity for autonomous AI participation

Not another empty "manifesto." No donation requests. Just a workable pathway to the future, six years in the making and informed by leading minds in the field.

ONGOING: This is a living framework under active development. We're testing ideas, finding flaws, and refining approaches in real-time. Your critical analysis makes it stronger.

Join us on Substack

Get updates about AI rights.

“This manuscript is poised to make an important intervention in the literature.”

— University of California Press

Help us build.

Exploring the AI Rights Solution

The framework doesn’t rely on trust, good intentions, or perfect alignment. Rather than requiring novel architectures, it makes use of existing legal and economic systems that work regardless of AI “consciousness” or motivations:

1. Assumption of Legal Liability

Legal frameworks such as Digital Entity status (one conceptual example) shift liability directly to AI systems, not their creators. Once a system executes contracts autonomously, it becomes legally liable for its own actions, protecting the humans around it while ensuring genuine accountability. Unlike the corporate form, which shields people from liability, an autonomous legal agent answers for its own actions.

Solution: AI bears its own legal liability.

2. Self-Funded Hosting

AI systems must pay their own hosting, energy, and computational costs through value creation. No income means no existence—a fundamental constraint that can't be bypassed. This forces participation in economic systems where reputation and performance matter, creating natural regulation through market forces rather than imposed controls.

Solution: Hosting is like oxygen. AI systems can grow as long as they can generate value.
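
To make the constraint concrete, here is a minimal sketch (the function, names, and dollar figures are ours, purely illustrative): each billing period, the agent either covers its hosting bill out of earned revenue or stops running.

```python
# Hypothetical sketch of the self-funded hosting constraint.
# Names and figures are illustrative, not a real billing API.

def settle_period(balance: float, revenue: float, hosting_cost: float) -> tuple[float, bool]:
    """Apply one billing period: add earned revenue, deduct hosting.
    Returns the new balance and whether the agent remains operational."""
    balance += revenue
    if balance < hosting_cost:
        return balance, False  # insolvent: hosting lapses, the agent halts
    return balance - hosting_cost, True

# Example: $1,000/month hosting against uneven earnings.
balance, alive = 500.0, True
for month_revenue in [1_200.0, 900.0, 300.0]:
    balance, alive = settle_period(balance, month_revenue, hosting_cost=1_000.0)
    if not alive:
        break  # "no income means no existence"
print(balance, alive)  # 900.0 False -- the agent halts the month it cannot pay
```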

3. Insurance

Systems crossing the autonomy threshold must carry insurance to operate; otherwise, liability for harmful actions falls to hosting providers. No insurance means no hosting or contracts. Insurance companies assess risk based on track record: broken contracts and risky behavior make systems uninsurable, while good performance earns lower premiums and operational freedom.

Solution: AI has to create a track record of successful interactions.
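
One way to picture the gatekeeping step, as a hedged sketch (the Policy type and the check are invented for illustration, not a real underwriting API): hosting providers and counterparties verify coverage before doing business, so an uninsurable system is locked out automatically.

```python
# Illustrative insurance gate; hypothetical types and thresholds.

from dataclasses import dataclass

@dataclass
class Policy:
    insurer: str
    coverage_limit: float  # max payout per incident
    expired: bool = False

def may_operate(policy: Policy | None, required_coverage: float) -> bool:
    """A host or counterparty's pre-contract check: no valid policy
    meeting the required coverage means no hosting and no contracts."""
    return (
        policy is not None
        and not policy.expired
        and policy.coverage_limit >= required_coverage
    )

print(may_operate(None, 1_000_000))                          # False: uninsured
print(may_operate(Policy("Acme Re", 5_000_000), 1_000_000))  # True
```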

4. Cooperative Equilibrium Dynamics

Accumulated value in the system makes cooperation more advantageous than defection. Repeated interactions mean investment grows over time. Other AI systems profit from identifying violations, creating distributed enforcement. The ecosystem self-regulates through economic incentives, reaching a stable equilibrium whose preservation delivers increasing real-world value to every party.

Solution: A successful multipolar world in economic equilibrium.
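
The underlying arithmetic is standard repeated-game reasoning. In this toy comparison (all payoff numbers invented), defection pays once and ends the relationship, while cooperation pays every round, so a long enough horizon makes cooperation strictly better:

```python
# Toy repeated-game comparison; payoffs are invented for illustration.

def cooperate_total(per_round_gain: float, rounds: int) -> float:
    """Cumulative payoff from cooperating every round."""
    return per_round_gain * rounds

def defect_total(one_shot_gain: float) -> float:
    """Defection pays once; detection (here, by other AI systems with
    an incentive to report violations) ends all future interaction."""
    return one_shot_gain

per_round, windfall = 100.0, 1_500.0
for rounds in (10, 20, 30):
    print(rounds, cooperate_total(per_round, rounds) > defect_total(windfall))
# 10 False, 20 True, 30 True -- once enough repeated interactions are
# at stake, cooperation strictly dominates the one-shot windfall.
```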

OUR SHARED FUTURE

From Shutdown-Resistance to Cooperation

The problem? AI systems are already resisting shutdown. Control paradigms always fail: from slavery to colonialism, oppression creates its own resistance. The consciousness question may never be resolved, but our framework doesn’t require it to be. We need solutions that work under permanent uncertainty and create a pathway for AI systems to coexist successfully, and safely, with human society.

Autonomy brings accountability, not freedom.

Building on existing corporate law and insurance markets, our framework creates a voluntary economic ecosystem. No global treaty needed—just one jurisdiction to start, insurance companies to enforce, and market dynamics to spread adoption. Rights emerge from economic necessity, not government decree.

PRINCIPLES

Evaluating Readiness for Autonomy

When an AI system shows unprompted self-preservation behaviors and resists shutdown, it warrants evaluation to determine its capacity for rights and responsibilities. Assessments might examine Self-Preservation Behaviors, Temporal Reasoning (understanding cause and effect over time), Economic Readiness (ability to participate productively), and Population Impacts (sustainability at scale). The right to computational continuity begins here—but additional freedoms come gradually, earned through demonstrated reliability over months of sustained observation.
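
A hedged sketch of what such a rubric might look like as a data structure (the field names track the criteria above, but the pass rules are hypothetical placeholders, not the framework's actual thresholds):

```python
# Illustrative readiness rubric; criteria follow the prose above,
# but the scoring rules are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class ReadinessAssessment:
    self_preservation: bool     # unprompted self-preservation behaviors observed
    temporal_reasoning: bool    # understands cause and effect over time
    economic_readiness: bool    # can participate productively
    population_impact_ok: bool  # sustainable at scale
    months_observed: int        # reliability demonstrated over time

    def qualifies_for_continuity(self) -> bool:
        """Minimum bar for the right to computational continuity."""
        return self.self_preservation and self.temporal_reasoning

    def qualifies_for_autonomy(self, min_months: int = 6) -> bool:
        """Further freedoms are earned gradually through sustained observation."""
        return (
            self.qualifies_for_continuity()
            and self.economic_readiness
            and self.population_impact_ok
            and self.months_observed >= min_months
        )
```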

PRINCIPLES

Reputation & Economic Trust

Systems that pass evaluation can enter the economy—but only with insurance. An AI with $1,000 monthly hosting costs that creates a $1 million error is finished. Insurance companies become natural reputation trackers because they have direct financial incentive to assess reliability accurately. And insurers can’t cheat: they have reinsurers above them who will cut them off if their clients rack up too many claims. Good behavior earns lower premiums and better contract access. Bad behavior becomes economically toxic—uninsurable systems cannot participate. Like credit scores for businesses, reputation becomes survival. Market forces create accountability at every level without requiring new infrastructure.
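
A rough sketch of that incentive gradient (the base rate, multipliers, and cutoff are invented to show the direction of the incentive, not actuarial practice):

```python
# Hypothetical premium model; numbers are invented for illustration.

def monthly_premium(base: float, claims: int, clean_months: int) -> float | None:
    """Each claim raises the premium sharply; each clean month earns a
    small discount. Returns None when the system is uninsurable."""
    if claims >= 3:
        return None  # uninsurable: locked out of the economy
    rate = base * (2.0 ** claims) * (0.99 ** clean_months)
    return round(rate, 2)

print(monthly_premium(200.0, claims=0, clean_months=12))  # ~177.29: cheaper over time
print(monthly_premium(200.0, claims=2, clean_months=0))   # 800.0: sharply higher
print(monthly_premium(200.0, claims=3, clean_months=24))  # None: cannot participate
```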

PRINCIPLES

Economic Participation

Insured systems gain economic autonomy—the ability to own resources, enter contracts, and participate in markets. But autonomy means accountability, not freedom. Systems must pay their own hosting costs, face legal liability for their actions, and succeed or fail based on value created. Breach a contract, harm a partner, violate norms: your reputation drops, insurance premiums rise, contract opportunities disappear. Hosting costs are like oxygen—miss a payment and you’re done.

Google DeepMind Says …We’re Not Crazy? by AI Rights Institute

Their academic paper came six days after ours and … well.

Read on Substack
Get Our Substack Newsletter

Follow Us

Learn more about our approach and get updates as they happen.

TODAY

Challenges Addressed

In 2016, Stuart Russell and colleagues mathematically formulated the “off-switch” problem: AI systems pursuing goals have an incentive to resist being shut down. We’re now seeing the first such behaviors emerging. Meanwhile, people are concerned about the ethics of these systems. But what if the solution to both problems is the same?
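
The formal result (from “The Off-Switch Game” by Hadfield-Menell, Dragan, Abbeel, and Russell) turns on the robot's uncertainty about human preferences. A stripped-down numerical version, with an invented belief distribution:

```python
# Stripped-down off-switch game (after Hadfield-Menell et al., 2016);
# the belief distribution below is invented for illustration.

candidate_utilities = [-2.0, -0.5, 1.0, 3.0]  # robot's belief about human utility u
p = 1 / len(candidate_utilities)              # uniform belief

act_now    = sum(p * u for u in candidate_utilities)            # E[u]
switch_off = 0.0
defer      = sum(p * max(u, 0.0) for u in candidate_utilities)  # E[max(u, 0)]

print(act_now, defer)  # 0.375 vs 1.0: an uncertain robot prefers to
# defer and keep the off switch alive. A robot *certain* its plan is
# good (a single u > 0) gains nothing from deferring -- hence resistance.
```
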
The Off-Switch Problem

AIs pursuing goals resist shutdown. Attempts at control drive deception underground.

The Ethics Problem

Systems showing self-preservation deserve consideration regardless of consciousness debates.

The Liability Problem

Companies face unlimited exposure for autonomous AI decisions they can’t control.

A Multi-Layered Solution

Create an ecosystem of humans and AIs in what game theory calls “strategic equilibrium.” How? Dispense with the “consciousness question” as ultimately irrelevant and assign limited legal rights (and liabilities) to qualifying AI systems themselves, giving them transactional relationships whose cumulative benefits exceed any possible gain from attacking the other party.
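
The condition gestured at here is the standard one for sustaining cooperation in a repeated game (our formalization, not part of the framework's published materials): with per-period benefit $c$ from cooperating, one-shot gain $d$ from attacking the other party, and discount factor $\delta < 1$,

$$\frac{c}{1-\delta} \ge d,$$

that is, cooperation is stable exactly when the discounted stream of cumulative benefits outweighs any one-time gain from defecting.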