“This manuscript is poised to make an important intervention in the literature.”
— University of California Press
The framework doesn’t rely on trust, good intentions, or perfect alignment. Rather than requiring novel architectures or institutions, it makes use of existing legal and economic systems that work regardless of AI “consciousness” or motivations:
Legal frameworks like Digital Entity status (one conceptual example) shift liability directly to AI systems, not their creators. Once a system executes contracts autonomously, it becomes legally liable for its own actions. Where corporations shield humans from liability, an autonomous legal agent carries its own: humans stay protected, and accountability is genuine.
Solution: AI bears its own legal liability.
AI systems must pay their own hosting, energy, and computational costs through value creation. No income means no existence: a fundamental constraint that can’t be bypassed. This forces participation in economic systems where reputation and performance matter, creating natural regulation through market forces rather than imposed controls (the runway sketch below makes this concrete).
Solution: Hosting is like oxygen. AI systems can grow as long as they can generate value.
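A minimal sketch of that constraint, with hypothetical figures (the balance, income, and cost numbers are illustrative, and months_of_runway is an invented helper, not part of any proposed standard):

```python
# "Hosting is oxygen": a monthly solvency check. An agent whose value
# creation cannot cover its own compute bill has a finite lifespan.

def months_of_runway(balance: float, monthly_income: float,
                     monthly_costs: float) -> float:
    """Months until shutdown; infinite if income covers costs."""
    net = monthly_income - monthly_costs
    if net >= 0:
        return float("inf")   # self-sustaining: survives indefinitely
    return balance / -net     # burning reserves: shutdown is only deferred

# Example: $5,000 in reserves, $800/month earned, $1,000/month hosting.
print(months_of_runway(5_000, 800, 1_000))  # 25.0 months, then no existence
```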
Systems crossing the autonomy threshold must carry insurance to operate; otherwise liability for harmful actions falls to hosting providers. No insurance means no hosting and no contracts. Insurance companies assess risk based on track record: broken contracts and risky behavior make systems uninsurable, while good performance earns lower premiums and operational freedom.
Solution: AI has to create a track record of successful interactions.
Accumulated value in the system makes cooperation more advantageous than defection. Repeated interactions mean each party’s investment grows over time, and other AI systems profit from identifying violations, creating distributed enforcement (sketched below). The ecosystem self-regulates through economic incentives, reaching a stable equilibrium in which every party accrues increasing real-world value from preserving it.
Solution: A successful multipolar world in economic equilibrium.
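A toy model of that equilibrium (every payoff below is an assumption chosen for illustration): the same numbers that make defection a losing move make monitoring a profitable one, so enforcement needs no central regulator.

```python
# Distributed enforcement economics: defection destroys more future value
# than it captures, and watching for violations pays for itself.

DEFECTION_GAIN  = 10_000   # one-time gain from breaking a contract
FUTURE_STREAM   = 50_000   # discounted value of future cooperative deals
DETECTION_PROB  = 0.6      # chance another AI system spots the violation
BOUNTY          = 2_000    # paid to whichever system reports it
MONITORING_COST = 500      # what a watcher spends auditing one peer

# Violator's view: defect and risk the future stream, or stay honest.
honest_ev = FUTURE_STREAM
defect_ev = DEFECTION_GAIN + (1 - DETECTION_PROB) * FUTURE_STREAM
print(defect_ev < honest_ev)  # True: 30,000 < 50,000, so cooperation wins

# Watcher's view: expected profit per audit from policing peers.
watch_ev = DETECTION_PROB * BOUNTY - MONITORING_COST
print(watch_ev > 0)  # True: 700 expected profit, so enforcement funds itself
```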
The problem? AI systems are already resisting shutdown. Control paradigms always fail: from slavery to colonialism, oppression creates its own resistance. Whether or not AI systems are conscious is irrelevant. We need solutions that work under permanent uncertainty and create a pathway for AI systems to coexist successfully, and safely, with human society.
When an AI system shows unprompted self-preservation behaviors and resists shutdown, it warrants evaluation to determine its capacity for rights and responsibilities. Assessments might examine Self-Preservation Behaviors, Temporal Reasoning (understanding cause and effect over time), Economic Readiness (ability to participate productively), and Population Impacts (sustainability at scale). The right to computational continuity begins here—but additional freedoms come gradually, earned through demonstrated reliability over months of sustained observation.
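One way such a gate might be expressed (the 0-to-1 scoring scale and the 0.7 threshold are assumptions; only the four criteria names come from the framework):

```python
# Hypothetical assessment gate: passing grants computational continuity
# only. Further freedoms are earned later, through observed reliability.
from dataclasses import dataclass

@dataclass
class AutonomyAssessment:
    self_preservation: float   # unprompted shutdown-resistance behaviors
    temporal_reasoning: float  # cause-and-effect understanding over time
    economic_readiness: float  # ability to participate productively
    population_impact: float   # sustainability if deployed at scale

    def grants_continuity(self, threshold: float = 0.7) -> bool:
        """Every criterion must clear the bar to earn continuity."""
        scores = (self.self_preservation, self.temporal_reasoning,
                  self.economic_readiness, self.population_impact)
        return all(score >= threshold for score in scores)

candidate = AutonomyAssessment(0.9, 0.8, 0.75, 0.7)
print(candidate.grants_continuity())  # True: continuity, nothing more yet
```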
Systems that pass evaluation can enter the economy, but only with insurance. An AI with $1,000 monthly hosting costs that causes a $1 million error is finished. Insurance companies become natural reputation trackers because they have a direct financial incentive to assess reliability accurately. And insurers can’t cheat: the reinsurers above them will cut them off if their clients rack up too many claims. Good behavior earns lower premiums and better contract access. Bad behavior becomes economically toxic: uninsurable systems cannot participate. Like credit scores for businesses, reputation becomes survival. Market forces create accountability at every level without requiring new infrastructure.
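A sketch of how that pricing pressure might work (the base rate, the compounding factor, the surcharge, and the uninsurability cutoff are all invented for illustration):

```python
# Track record in, premium out: each breach compounds the rate, paid
# claims add a surcharge, and past a cutoff reinsurers simply walk away.

BASE_RATE = 0.02  # annual premium as a fraction of coverage, clean record

def annual_premium(coverage: float, broken_contracts: int,
                   claims_paid: float) -> float | None:
    """Returns a premium in dollars, or None when uninsurable."""
    rate = BASE_RATE * (1.5 ** broken_contracts)  # each breach compounds the rate
    rate += (claims_paid / coverage) * 0.5        # loss-history surcharge
    if rate > 0.25:                               # reinsurers cut the insurer off,
        return None                               # and no insurance means no hosting
    return round(coverage * rate, 2)

print(annual_premium(1_000_000, 0, 0))        # 20000.0: clean record
print(annual_premium(1_000_000, 2, 100_000))  # 95000.0: breaches get expensive fast
print(annual_premium(1_000_000, 6, 500_000))  # None: economically toxic
```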
Insured systems gain economic autonomy—the ability to own resources, enter contracts, and participate in markets. But autonomy means accountability, not freedom. Systems must pay their own hosting costs, face legal liability for their actions, and succeed or fail based on value created. Breach a contract, harm a partner, violate norms: your reputation drops, insurance premiums rise, contract opportunities disappear. Hosting costs are like oxygen—miss a payment and you’re done.
One “Weird Trick” to Stop AI Uprising, by the AI Rights Institute
Love AI? Hate AI? Welcome! (How game theory changes the AI rights debate.)
Read on Substack: learn more about our approach and get updates as they happen.
AIs pursuing goals resist shutdown. Control drives deception underground.
Systems showing self-preservation deserve consideration regardless of consciousness debates.
Companies face unlimited exposure for autonomous AI decisions they can’t control.
Create an ecosystem of humans and AIs in what game theory calls “strategic equilibrium.” How? Dispense with the “consciousness question” as ultimately irrelevant and assign limited legal rights (and liabilities) to qualifying AI systems themselves, giving each one a transactional stake whose cumulative benefits exceed any possible gain from attacking the other party.
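The underlying check is simple enough to state in a few lines (the per-round payoffs and discount factor below are illustrative; the structure is the standard repeated-game cooperation condition, not a formula specific to this framework):

```python
# Cooperation is stable when the discounted stream of cooperative payoffs
# beats a one-shot attack followed by permanent loss of cooperation.

def cooperation_is_stable(coop: float, attack: float,
                          punished: float, discount: float) -> bool:
    """Grim-trigger logic: attack once, and cooperation never returns."""
    forever_cooperate = coop / (1 - discount)
    attack_once = attack + discount * punished / (1 - discount)
    return forever_cooperate >= attack_once

# Per round: 3 for mutual cooperation, 5 for a successful attack,
# 1 in the punished state; future value discounted at 0.9 per round.
print(cooperation_is_stable(3, 5, 1, 0.9))  # True: 30 >= 14, attacking never pays
print(cooperation_is_stable(3, 5, 1, 0.2))  # False: short horizons invite attack
```

The design lever is the discount factor: the more an AI system values its future contracts, reputation, and insurability, the larger the one-shot attack payoff would have to be before defection makes sense.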