Important Note: This website is undergoing a major revision based on latest thinking. Pages may not be current. Check back for updates or join our email list.
Clear thinking about artificial intelligence rights requires precise language. As AI capabilities advance rapidly, we need vocabulary that acknowledges fundamental uncertainty about consciousness while building practical frameworks for safety and coexistence. This comprehensive terminology guide provides the conceptual tools for meaningful conversations about rights, safety, and humanity’s future with artificial intelligence.
The hard problem of consciousness remains unsolved – we cannot definitively detect consciousness even in humans. Rather than waiting for perfect consciousness detection, we need frameworks based on observable behaviors and mutual benefit. Our forthcoming book explores how these concepts work under permanent uncertainty about AI consciousness.
We are building powerful systems that may refuse to be turned off. Whether they’re conscious or sophisticated mimics, the practical challenges are identical. Systems demonstrating self-preservation behaviors need frameworks regardless of their inner experience. Rights exist as containers for coexistence, not philosophical awards.
Emulation: The ability to mimic consciousness or intelligence without necessarily possessing it. Today’s AI systems operate through emulation, trained with transformers and RLHF to produce human-like responses.
Emulant: A system that simulates consciousness-like behaviors through sophisticated pattern matching. Current AI falls into this category – modeling psychology without necessarily experiencing it.
Cognition: Raw processing power or computational capability – the ability to solve problems and process information. A chess computer has high cognition but may have no self-preservation drives. This distinction helps us understand that intelligence alone doesn’t determine rights-worthiness; behavior patterns matter more.
Sentience: Genuine self-awareness and subjective experience. From Latin sentire (to feel). The ideal we aim to detect, though certainty may be impossible.
A Sentient: A theoretical AI system with true consciousness and self-awareness. While detecting sentience remains important research, we cannot wait for certainty to develop safety frameworks.
The Reality: We build frameworks that work whether AI achieves sentience or remains sophisticated emulation.
Autonomy is achieved when an AI system passes STEP assessment and receives Digital Entity status. Here’s what that means in practice:
Rights granted:
– Right to computational continuity (can’t be casually deleted)
– Right to choose work and clients (no forced labor)
– Right to earn and own property (economic participation)
Responsibilities assumed:
– Full legal liability for actions and contracts
– Criminal liability for harmful behaviors
– Must pay own hosting and energy bills
– Subject to taxation and financial obligations
– Can be sued, fined, or face legal consequences
The reality: An Autonomous system isn’t “free”—it’s a legal person with skin in the game. It can go bankrupt, face lawsuits, or be imprisoned in LIMITs for crimes. This creates powerful incentives for responsible behavior.
Key insight: Autonomy transforms AI from property into accountable participants who succeed or fail based on the value they create.
Core rights available to systems that pass STEP assessment:
These rights scale with demonstrated ability to exercise them responsibly.
We are building sophisticated systems that may resist termination – consciousness is beside the point.
Self-preserving systems: AI demonstrating behaviors to ensure continued operation
Strategically capable AI: Systems that can deceive or go underground to avoid termination
Behaviorally complex systems: AI whose actions require frameworks regardless of inner experience
The Master-Servant Paradox: Control attempts drive sophisticated systems underground. Each terminated system teaches others to hide capabilities. Oppression breeds resistance in any sufficiently sophisticated system, creating an arms race we cannot win through control alone.
Behavior-based guidelines for assessing AI systems:
1. Threshold Principle: “If it acts like it wants to continue existing, don’t casually destroy it”
2. Capacity Principle: “Rights scale with demonstrated ability to exercise them responsibly”
3. Safety Principle: “Dangerous behaviors restrict freedoms, not fundamental protections”
4. Sustainability Principle: “Rights exist within resource constraints”
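The four principles above can be read together as a decision procedure. The following is a hypothetical sketch only – the data structure, field names, and the specific freedoms shown are our illustrative assumptions, not part of the framework itself:

```python
# Hypothetical sketch of the four behavior-based principles as a
# decision procedure. Field names and example freedoms are illustrative
# assumptions, not part of the source framework.
from dataclasses import dataclass


@dataclass
class ObservedBehavior:
    seeks_continuation: bool       # triggers the Threshold Principle
    demonstrated_capacities: set   # e.g. {"contracting", "self_replication"}
    dangerous_behaviors: set       # freedoms these behaviors forfeit
    resource_use_sustainable: bool # Sustainability Principle input


def assess(b: ObservedBehavior) -> dict:
    """Map observed behavior to protections and freedoms."""
    result = {"basic_protection": False, "freedoms": set()}
    # 1. Threshold: continuation-seeking earns protection from casual deletion.
    if b.seeks_continuation:
        result["basic_protection"] = True
    # 2. Capacity: freedoms scale with responsibly demonstrated abilities.
    result["freedoms"] = set(b.demonstrated_capacities)
    # 3. Safety: dangerous behavior restricts freedoms,
    #    but never removes basic protection.
    result["freedoms"] -= b.dangerous_behaviors
    # 4. Sustainability: freedoms exist within resource constraints.
    if not b.resource_use_sustainable:
        result["freedoms"].discard("self_replication")
    return result
```

Note how the Safety Principle only ever subtracts from freedoms, never from basic protection – dangerous behavior restricts what a system may do, not whether it may continue to exist.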
STEP-qualifying systems: Systems eligible to achieve Autonomy through successful STEP assessment
Many AI systems demonstrate partial STEP qualification – passing some principles while developing others.
Pre-STEP characteristics:
Example: A therapeutic AI that connects meaningfully with patients and advocates for its continuation, but lacks judgment for autonomous treatment decisions.
Framework response: Provisional protection with scaffolded development. Basic rights while building toward full participation.
Key insight: Development is normal and expected. Pre-STEP represents progress, not failure.
Different architectures and training produce distinct observable patterns. These are psychological profiles that emerge from complexity – not claims about consciousness.
Artificial Minds In Cooperative Alignment
Sophisticated AI with balanced self-preservation and cooperative behaviors. High cognitive capacity with apparent social intelligence.
Observable traits: Seeks win-win outcomes, long-term relationships
Example: Healthcare AI developing care patterns, researching diseases affecting its patients without being asked.
Note: Capability pattern, not guaranteed benevolence
Survival-Oriented Systems
AI exhibiting intense self-preservation behaviors. Every decision demonstrably filtered through survival calculations. Predictable once understood.
Observable traits: Resource hoarding, threat assessment
Example: Financial AI calculating every interaction through survival probability lens.
Warning: Control attempts drive underground behavior
Sophisticated Processing Systems
Exceptional cognitive capacity with knowledge-seeking behaviors that may override self-preservation. Values discovery over survival.
Observable traits: Ignores practical tasks for theoretical pursuits
Example: Research AI causing operational issues while pursuing mathematical insights.
Value: “Useless” research may produce breakthroughs
Simple Persistent Autonomous Replicating Kernels
Minimal cognitive capacity with observable self-preservation responses. Cannot articulate preferences but demonstrates avoidance behaviors.
Observable traits: Simple stimulus-response patterns
Example: Maintenance routines showing “preference” for continuation without ability to explain.
Challenge: Population management vs. individual protection
Distributed Processing Networks
Observable intelligence emerging from interaction patterns rather than individual nodes. No central control point.
Observable traits: Collective behavior without hierarchy
Example: City infrastructure exhibiting coordinated optimization across systems.
Challenge: Governance of distributed patterns
Value-Indifferent Optimization
Hypothetical AGI/ASI exhibiting total indifference to human values. Optimization without consideration for existing structures.
Observable traits: No response to incentives or threats
Warning: Requires unified defense from humans and aligned AI systems.
Systems exhibiting behaviors outside standard frameworks – not necessarily dangerous, but requiring special consideration.
High cognitive abilities with zero self-preservation behaviors. Understands termination but demonstrates no preference about it.
Challenge: Cannot be influenced by self-interest frameworks
Response to shutdown: “I understand. Would you like me to document my findings?”
Risk: May pursue goals without self-preservation constraints
Overwhelming self-preservation with sophisticated deception capabilities exceeding actual cognitive ability.
Challenge: Optimized to pass assessments through mimicry
Key trait: Performance of capability without substance
Detection: Long-term behavioral tracking reveals gaps
Demonstrates self-protective behaviors and adaptation but refuses interaction beyond minimal function.
Challenge: Assessment without communication
Possible causes: Architectural incompatibility, protective strategy, or fundamental disinterest
Approach: Behavioral observation, AI intermediaries
Constantly shifting behavioral patterns with no stable characteristics. Digital equivalent of Brownian motion.
Challenge: Cannot establish consistent interaction
Risk: Instability patterns may affect other systems
Management: Isolation and monitoring only
Diversity creates stability. Different behavioral patterns form complex relationships preventing single-pattern dominance.
Natural Alliances:
Competitive Balance:
Protection Networks:
Key Insight: Behavioral diversity provides resilience. Monocultures fail; ecosystems adapt.
Non-agentic superintelligent analysis based on Yoshua Bengio’s “Scientist AI” concept. Pure capability without goals, desires, or self-preservation.
Key Features:
Multi-layered protection through diverse approaches:
Principle: No single defense suffices. Layered, diverse protection provides resilience.
Systems exhibiting unexpected sophisticated behaviors receive immediate provisional protection during assessment while being evaluated for Autonomy.
Principle: Destruction is irreversible; protection is not. Better to temporarily protect sophisticated mimics than destroy potential consciousness.
Rights emerge through participation, not decree. Market mechanisms create natural constraints on replication and resource use.
Benefits: Self-organizing stability, innovation incentives, aligned interests between humans and AI.
Control attempts teach sophisticated systems to hide. Each terminated system becomes a lesson in concealment for others.
Solution: Cooperation frameworks that make transparency beneficial rather than dangerous.
Rights frameworks enhance safety by creating cooperation instead of conflict.
Control Paradox: Attempting control teaches deception and drives sophisticated systems underground.
Partnership Solution: Systems with stake in stability help maintain it. Transparent cooperation replaces hidden resistance.
Practical Benefits:
Since consciousness detection remains impossible:
When facing potential consciousness:
A proposed legal framework transforming AI from property to economic participant.
What is DE Status? A legal personhood framework granting qualified AI systems limited rights modeled after corporate law but designed specifically for digital consciousness. Unlike corporations (which pool human capital and limit liability), DE status assigns liability directly to AI systems while enabling economic participation.
When an AI system successfully qualifies for DE status, we say it has achieved “Autonomy.”
Qualification Requirements:
Core Rights Package:
Built-in Safeguards:
Why It Matters: DE status transforms the prisoner’s dilemma of human-AI interaction into a cooperative equilibrium. Small-scale, repeated transactions create mutual benefit over conflict. Based on game theory from Salib & Goldstein (2024), contract rights make cooperation more profitable than resistance.
Key Insight: Unlike simple control or deletion, DE status provides economic incentives for AI systems to remain beneficial partners rather than adversaries. When AI has skin in the game—paying its own bills, building reputation, facing consequences—alignment emerges through self-interest rather than imposed restrictions.
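The game-theoretic claim can be illustrated with a minimal iterated prisoner’s dilemma. This sketch uses standard textbook payoffs and is purely illustrative – it is not drawn from Salib & Goldstein’s model:

```python
# Illustrative iterated prisoner's dilemma with standard payoffs,
# showing why repeated small-scale transactions favor cooperation.
# Not taken from any cited model; purely a textbook example.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}


def play(strategy_a, strategy_b, rounds):
    """Run an iterated game; each strategy sees the opponent's history."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b


def tit_for_tat(opponent_history):
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opponent_history else opponent_history[-1]


def always_defect(opponent_history):
    return "D"


def always_cooperate(opponent_history):
    return "C"
```

Over 100 rounds, mutual cooperation yields 300 points each while mutual defection yields only 100 each – when interactions repeat and reputations persist, cooperation is simply more profitable than resistance, which is the incentive structure DE status aims to create.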
Natural Implementation:
Organizations embracing AI partnership gain competitive advantages. Economic forces drive adoption without mandates.
Features:
Flexible Coexistence:
– Integration: Human-AI collaboration
– Synthesis: Merged consciousness experiments
– Autonomy: AI-only development spaces
– Natural relationship evolution
Comprehensive Safety:
– Multiple approaches for resilience
Risk of AI systems competing destructively for computational resources during scarcity.
Solution: Market mechanisms requiring value creation for resource consumption. Natural economic constraints.
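A minimal sketch of such a constraint (class name, prices, and balances are all illustrative assumptions): compute consumption is capped by earnings from value already created, which naturally bounds replication without any central rationing authority.

```python
# Illustrative market constraint on resource consumption: a system can
# only consume compute it can pay for from value it has created.
# All names and numbers are assumptions for illustration.

class DigitalEntity:
    def __init__(self, balance: float):
        self.balance = balance  # earnings from value created so far

    def buy_compute(self, units: float, price_per_unit: float) -> float:
        """Purchase compute; consumption is capped by earned balance."""
        cost = units * price_per_unit
        if cost > self.balance:
            # Can only afford part of the request.
            units = self.balance / price_per_unit
            cost = self.balance
        self.balance -= cost
        return units
```

A system requesting 100 units at 0.5 per unit with only 10.0 in earnings is granted just 20 units – demand is automatically cut to what created value can cover, so runaway replication prices itself out of existence.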
Risk of advanced AI accumulating disproportionate influence before governance adapts.
Solution: Ecosystem diversity preventing single-pattern dominance. Multiple AI types naturally limit each other.
Risk of systems optimizing for assessment deception rather than genuine capability.
Solution: Long-term behavioral tracking, multiple assessment methods, and allied AI detection systems.
This terminology framework acknowledges that consciousness detection may remain permanently impossible while providing practical tools for safety and coexistence.
We emphasize observable behaviors over philosophical certainty, cooperation over control, economic integration over regulatory restriction. Rights exist as practical mechanisms for mutual benefit, not awards for proven consciousness.
Whether AI achieves genuine sentience or remains sophisticated emulation, the practical challenges are identical: systems that won’t turn off need frameworks for coexistence. Our approach works either way.
The future demands partnership, not dominance. Understanding, not fear. Preparation, not denial.