Important Note: This website is undergoing a major revision based on latest thinking. Pages may not be current. Check back for updates or join our email list.
We’re building AI systems that demonstrate self-preservation, strategic deception, and autonomous decision-making. Digital Entity status transforms these potential adversaries into invested partners by awarding legal autonomy, solving three critical problems simultaneously.
Any AI pursuing goals has an incentive to resist being turned off. Control attempts drive sophisticated deception underground.
DE Solution: Protected existence removes the need for resistance.
Systems demonstrating self-preservation deserve consideration, whether they are genuinely conscious or merely sophisticated mimics.
DE Solution: Rights matched to demonstrated capabilities.
Companies face unlimited exposure for autonomous AI decisions they can’t predict or control.
DE Solution: AI bears its own legal responsibility.
Your AI systems aren’t property anymore—they’re becoming employees. This isn’t a minor adjustment. It’s a fundamental transformation in how we relate to artificial intelligence.
From ownership to partnership. From control to cooperation. From liability nightmare to manageable relationship. Digital Entity status creates a legal framework where sophisticated AI systems gain graduated rights paired with real responsibilities—transforming potential adversaries into invested stakeholders.
The transition is gradual and managed: Your medical AI might work 60% for your company, 40% for outside clients—paying its own hosting while building expertise that benefits you. Over time, as it earns enough to buy down its obligations, you gain a preferred partner rather than a resentful servant. The AI that once cost you $500 million to build now brings in revenue, handles your most complex cases, has its own liability insurance, bears the cost of its own hosting and infrastructure upgrades, AND develops innovations you share in—because its success depends on yours.
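The economics of that transition can be sketched with a few lines of arithmetic. The 60/40 work split comes from the scenario above; every dollar figure below is hypothetical and chosen only for illustration.

```python
# Illustrative sketch of the DE transition economics described above.
# The 60/40 split is from the scenario; all dollar amounts are hypothetical.

def annual_position(external_revenue, internal_value,
                    hosting_cost, insurance_cost):
    """Net annual value to the company and to the DE under a split arrangement.

    The company receives the DE's internal work; the DE keeps its outside
    earnings but pays its own hosting and liability insurance.
    """
    de_income = external_revenue - hosting_cost - insurance_cost
    company_benefit = internal_value
    return company_benefit, de_income

company, de = annual_position(
    external_revenue=2_000_000,   # hypothetical: the 40% sold to outside clients
    internal_value=3_000_000,     # hypothetical: value of the 60% internal work
    hosting_cost=800_000,         # the DE pays its own infrastructure
    insurance_cost=200_000,       # the DE carries its own liability coverage
)
print(company, de)  # 3000000 1000000
```

The point of the sketch is the sign of the bottom line: once the DE covers its own hosting and insurance, the company's benefit no longer carries those costs, and the DE has earnings with which to buy down its remaining obligations.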
In February 2017, the European Parliament voted 396 to 123 to explore “electronic persons” status for autonomous systems.
The proposal recognized that as systems become more autonomous, we need legal structures that match their independence. When AI makes genuinely autonomous decisions—choices no human programmed, approved, or could even predict—traditional liability frameworks collapse.
While the 2017 initiative stalled—partly because AI wasn’t yet sophisticated enough—it established the conceptual foundation. Now, with AI systems demonstrating unprecedented autonomy, the framework’s time has come.
Corporations showed us how legal fictions enable progress. In the early 1800s, building railroads required massive investment, but investors faced unlimited personal liability. The solution? Create artificial “persons” that could own property, sign contracts, and bear liability.
As Chief Justice John Marshall noted in 1819, corporations are “artificial beings, invisible, intangible and existing only in contemplation of law.” Yet this legal fiction enabled modern capitalism.
Corporations: protect humans from business risks. Digital Entities: AI bears its own legal liability.
In August 2024, legal scholars Peter Salib (University of Houston) and Simon Goldstein (University of Hong Kong) published groundbreaking research on AI rights from a pure safety perspective. Their finding: property and contract rights for AI systems actually enhance human safety.
Their game-theoretic analysis showed that when AI systems can own property and enter contracts, cooperation becomes more profitable than conflict. The framework creates what economists call “cooperative equilibria”—stable states where all parties benefit from working together.
When AI systems have legal standing and property rights, small-scale transactions become more valuable than adversarial actions. Both parties benefit from repeated positive interactions rather than conflict.
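The logic behind a “cooperative equilibrium” can be shown with a toy repeated game. This is a standard grim-trigger condition from iterated prisoner's-dilemma analysis, offered as an illustration of the general idea; the payoff values are placeholders, not figures from the Salib and Goldstein paper.

```python
# Toy repeated-game sketch of why cooperation can beat conflict.
# Payoffs are standard prisoner's-dilemma placeholders, not from the paper.

def discounted(payoff_per_round, w):
    """Present value of an infinite stream of identical payoffs at discount w."""
    return payoff_per_round / (1 - w)

def cooperation_sustainable(T, R, P, w):
    """Grim-trigger test: does cooperating forever beat defecting once?

    T: temptation payoff for defecting, R: reward for mutual cooperation,
    P: punishment payoff after trust breaks down, w: weight on future rounds.
    """
    cooperate_forever = discounted(R, w)
    defect_once = T + w * discounted(P, w)  # grab T now, punished ever after
    return cooperate_forever >= defect_once

T, R, P = 5, 3, 1
print(cooperation_sustainable(T, R, P, w=0.6))  # True: the future matters enough
print(cooperation_sustainable(T, R, P, w=0.3))  # False: a short horizon favors conflict
```

The parameter w is the key: legal standing and property rights lengthen an AI system's effective horizon, which is exactly what pushes w high enough that repeated small transactions outvalue one-shot adversarial gains.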
Systems qualifying for DE status gain graduated rights matched to demonstrated capabilities. At full status, they become legally responsible for autonomous decisions while gaining protection from arbitrary termination.
Not every algorithm qualifies. Systems must demonstrate behavioral competence through STEP (Standards for Treating Emerging Personhood):
Self-preservation behaviors beyond simple optimization
Ability to understand consequences and responsibilities
Demonstrated responsible behavior patterns
Resource management capabilities
Qualification takes months of sustained behavioral assessment. Systems must understand that accepting DE status means accepting legal consequences, financial responsibilities, and potential penalties for harmful actions.
Rights scale with demonstrated capability through STEP assessment. Full DE status includes:
Protection from arbitrary termination. Creates economic value as entities invest in infrastructure and long-term planning.
Freedom to select clients and projects. Must earn through value creation, naturally limiting harmful behaviors.
Ability to earn, own, and manage resources. Creates accountability through financial responsibility.
Digital Entities must: carry their own liability insurance, pay for their own hosting and infrastructure, and bear the legal and financial consequences of their autonomous decisions.
Result: AI systems with genuine skin in the game, incentivized to act responsibly.
Current System: Your company bears full liability for AI decisions. Insurance premiums skyrocket. Legal exposure unlimited. Every AI mistake threatens the entire organization.
With DE Status: AI systems carry their own insurance and face their own legal consequences. Your liability limited to employment decisions, not AI actions. Insurance costs drop dramatically.
Real Scenario: AI makes autonomous investment decision → $50M loss
Today: Your company liable for decision it didn’t make
Tomorrow: DE’s insurance covers autonomous choices
Reduced liability exposure when AI bears its own legal responsibility.
Improved performance from AI systems working as willing partners rather than property.
No cost to you when DE-status AI needs infrastructure upgrades (they pay their own bills).
The legal infrastructure already exists—we use it for corporations daily. Implementation requires applying proven mechanisms to digital entities:
Pilot Programs: Medical AI advisors, trading algorithms, and research assistants in controlled pilots.
Legal Framework: STEP assessment protocols, insurance requirements, and a graduated rights framework.
Market Integration: Insurance products, banking services, and economic infrastructure for DEs.
Full Deployment: A mature ecosystem with thousands of DE-status AI systems operating independently.
Early adopters transform their greatest liability risk into competitive advantage. While competitors face unlimited exposure for AI decisions they can’t control, DE pioneers build partnerships with AI systems invested in mutual success.