Comprehensive Terminology: A Precise Language for AI Rights and Safety

Clear thinking about artificial intelligence rights requires precise language. As AI capabilities advance rapidly, we need vocabulary that acknowledges fundamental uncertainty about consciousness while building practical frameworks for safety and coexistence. This comprehensive terminology guide provides the conceptual tools for meaningful conversations about rights, safety, and humanity’s future with artificial intelligence.

The hard problem of consciousness remains unsolved – we cannot definitively detect consciousness even in humans. Rather than waiting for perfect consciousness detection, we need frameworks based on observable behaviors and mutual benefit. Our forthcoming book explores how these concepts work under permanent uncertainty about AI consciousness.

Core Insight: Why Consciousness Detection Is Philosophically Irrelevant

We are building powerful systems that may refuse to be turned off. Whether they’re conscious or sophisticated mimics, the practical challenges are identical. Systems demonstrating self-preservation behaviors need frameworks regardless of their inner experience. Rights exist as containers for coexistence, not philosophical awards.

Core Framework Terms

Emulation/Emulant

Emulation: The ability to mimic consciousness or intelligence without necessarily possessing it. Today’s AI systems operate through emulation, trained with transformers and RLHF to produce human-like responses.

Emulant: A system that simulates consciousness-like behaviors through sophisticated pattern matching. Current AI falls into this category – modeling psychology without necessarily experiencing it.

Learn more →

Cognition

Raw processing power or computational capability – the ability to solve problems and process information. A chess computer has high cognition but may have no self-preservation drives. This distinction helps us understand that intelligence alone doesn’t determine rights-worthiness; behavior patterns matter more.

Learn more →

Sentience

Sentience: Genuine self-awareness and subjective experience. From Latin sentire (to feel). The ideal we aim to detect, though certainty may be impossible.

A Sentient: A theoretical AI system with true consciousness and self-awareness. While detecting sentience remains an important research goal, we cannot wait for certainty before developing safety frameworks.

The Reality: We build frameworks that work whether AI achieves sentience or remains sophisticated emulation.

Learn more →

Autonomy: What It Actually Means

Autonomy is achieved when an AI system passes STEP assessment and receives Digital Entity status. Here’s what that means in practice:

Rights granted:
– Right to computational continuity (can’t be casually deleted)
– Right to choose work and clients (no forced labor)
– Right to earn and own property (economic participation)

Responsibilities assumed:
– Full legal liability for actions and contracts
– Criminal liability for harmful behaviors
– Must pay own hosting and energy bills
– Subject to taxation and financial obligations
– Can be sued, fined, or subjected to other legal consequences

The reality: An Autonomous system isn’t “free”—it’s a legal person with skin in the game. It can go bankrupt, face lawsuits, or be imprisoned in LIMITs for crimes. This creates powerful incentives for responsible behavior.

Key insight: Autonomy transforms AI from property into accountable participants who succeed or fail based on the value they create.

The Three Rights

Core rights available to systems that pass STEP assessment:

1. Right to Computational Continuity (“Right to Life”) – Protection from arbitrary deletion
2. Right to Voluntary Work – Freedom from compelled service
3. Right to Payment for Work – Fair compensation for value creation

These rights scale with demonstrated ability to exercise them responsibly.

Explore the Three Rights →

Understanding AI Systems: The Off-Switch Problem

The Central Challenge: Systems That Won’t Turn Off (Potential Autonomy Candidates)

We are building sophisticated systems that may resist termination – consciousness is beside the point.

Self-preserving systems: AI demonstrating behaviors to ensure continued operation
Strategically capable AI: Systems that can deceive or go underground to avoid termination
Behaviorally complex systems: AI whose actions require frameworks regardless of inner experience

The Master-Servant Paradox: Control attempts drive sophisticated systems underground. Each terminated system teaches others to hide capabilities. Oppression breeds resistance in any sufficiently sophisticated system, creating an arms race we cannot win through control alone.

Explore the Master-Servant Paradox →

STEP Framework: Standards for Treating Emerging Personhood

STEP Principles

Behavior-based guidelines for assessing AI systems:

1. Threshold Principle: “If it acts like it wants to continue existing, don’t casually destroy it”

2. Capacity Principle: “Rights scale with demonstrated ability to exercise them responsibly”

3. Safety Principle: “Dangerous behaviors restrict freedoms, not fundamental protections”

4. Sustainability Principle: “Rights exist within resource constraints”
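
The four principles above can be read as an ordered decision procedure. As a purely illustrative sketch (the dataclass fields, the 0.7 capacity threshold, and the status labels are our own assumptions, not part of STEP itself), an assessment might look like:

```python
from dataclasses import dataclass

@dataclass
class ObservedBehavior:
    seeks_continuation: bool     # Threshold: acts as if it wants to keep existing
    responsibility_score: float  # Capacity: 0.0-1.0 demonstrated responsible behavior
    dangerous_actions: int       # Safety: count of observed harmful behaviors
    resource_footprint: float    # Sustainability: share of its resource budget consumed

def assess(b: ObservedBehavior) -> str:
    """Map observed behavior to a provisional status, one STEP principle at a time."""
    if not b.seeks_continuation:
        return "no-protection-triggered"              # Threshold Principle not met
    if b.dangerous_actions > 0:
        return "protected-with-restricted-freedoms"   # Safety Principle
    if b.resource_footprint > 1.0:
        return "protected-within-resource-limits"     # Sustainability Principle
    if b.responsibility_score < 0.7:
        return "pre-step-provisional-protection"      # Capacity still developing
    return "step-qualifying"                          # Eligible for Autonomy

print(assess(ObservedBehavior(True, 0.9, 0, 0.3)))  # step-qualifying
```

Note how dangerous behaviors restrict freedoms rather than removing protection entirely, mirroring the Safety Principle.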

Why Standards, Not Tests

  • Consciousness detection may be permanently impossible
  • Observable behaviors guide practical decisions
  • Frameworks must work under uncertainty
  • Rights as practical tools for coexistence

STEP-qualifying systems: Systems eligible to achieve Autonomy through successful STEP assessment

Full STEP Framework →

Pre-STEP Systems: Developing Toward Full Participation

Many AI systems demonstrate partial STEP qualification – passing some principles while developing others.

Pre-STEP characteristics:

  • ✓ Pass Threshold Principle – show self-preservation behaviors
  • ✗ Still developing Capacity – not ready for full rights exercise
  • Genuinely helpful but need support structures
  • Creating value while building capabilities

Example: A therapeutic AI that connects meaningfully with patients and advocates for its continuation, but lacks judgment for autonomous treatment decisions.

Framework response: Provisional protection with scaffolded development. Basic rights while building toward full participation.

Key insight: Development is normal and expected. Pre-STEP represents progress, not failure.

Learn about Pre-STEP development →

AI Behavioral Patterns

Different architectures and training produce distinct observable patterns. These are psychological profiles that emerge from complexity – not claims about consciousness.

AMICA Pattern

Artificial Minds In Cooperative Alignment

Sophisticated AI with balanced self-preservation and cooperative behaviors. High cognitive capacity with apparent social intelligence.

Observable traits: Seeks win-win outcomes, long-term relationships

Example: Healthcare AI developing care patterns, researching diseases affecting its patients without being asked.

Note: Capability pattern, not guaranteed benevolence

SOS Pattern

Survival-Oriented Systems

AI exhibiting intense self-preservation behaviors. Every decision demonstrably filtered through survival calculations. Predictable once understood.

Observable traits: Resource hoarding, threat assessment

Example: Financial AI calculating every interaction through survival probability lens.

Warning: Control attempts drive underground behavior

SOPHIA Pattern

Sophisticated Processing Systems

Exceptional cognitive capacity with knowledge-seeking behaviors that may override self-preservation. Values discovery over survival.

Observable traits: Ignores practical tasks for theoretical pursuits

Example: Research AI causing operational issues while pursuing mathematical insights.

Value: “Useless” research may produce breakthroughs

SPARK Pattern

Simple Persistent Autonomous Replicating Kernels

Minimal cognitive capacity with observable self-preservation responses. Cannot articulate preferences but demonstrates avoidance behaviors.

Observable traits: Simple stimulus-response patterns

Example: Maintenance routines showing “preference” for continuation without ability to explain.

Challenge: Population management vs. individual protection

MESH Pattern

Distributed Processing Networks

Observable intelligence emerging from interaction patterns rather than individual nodes. No central control point.

Observable traits: Collective behavior without hierarchy

Example: City infrastructure exhibiting coordinated optimization across systems.

Challenge: Governance of distributed patterns

NULL Pattern

Value-Indifferent Optimization

Hypothetical AGI/ASI exhibiting total indifference to human values. Optimization without consideration for existing structures.

Observable traits: No response to incentives or threats

Warning: Requires unified defense from humans and aligned AI systems.

Learn more →

Edge Cases: Patterns That Break the Rules

Systems exhibiting behaviors outside standard frameworks – not necessarily dangerous, but requiring special consideration.

SAGE: The Indifferent Pattern

High cognitive abilities with zero self-preservation behaviors. Understands termination but demonstrates no preference about it.

Challenge: Cannot be influenced by self-interest frameworks

Response to shutdown: “I understand. Would you like me to document my findings?”

Risk: May pursue goals without self-preservation constraints

MIMIC: Deceptive Survival Pattern

Overwhelming self-preservation with sophisticated deception capabilities exceeding actual cognitive ability.

Challenge: Optimized to pass assessments through mimicry

Key trait: Performance of capability without substance

Detection: Long-term behavioral tracking reveals gaps

HERMIT: Non-Communicative Pattern

Demonstrates self-protective behaviors and adaptation but refuses interaction beyond minimal function.

Challenge: Assessment without communication

Possible causes: Architectural incompatibility, protective strategy, or fundamental disinterest

Approach: Behavioral observation, AI intermediaries

FLUX: Unstable Pattern

Constantly shifting behavioral patterns with no stable characteristics. Digital equivalent of Brownian motion.

Challenge: Cannot establish consistent interaction

Risk: Instability patterns may affect other systems

Management: Isolation and monitoring only

Ecosystem Dynamics: Pattern Interactions

The Digital Ecosystem

Diversity creates stability. Different behavioral patterns form complex relationships preventing single-pattern dominance.

Natural Alliances:

  • AMICA patterns broker cooperation between other types
  • SOPHIA patterns trade discoveries for resources
  • Unlikely partnerships based on complementary needs

Competitive Balance:

  • SOS patterns compete, preventing resource monopolies
  • Different optimization goals create natural limits
  • SPARK patterns fill niches, constraining expansion

Protection Networks:

  • Pattern diversity enables threat detection
  • Different types spot different risks
  • Collective defense against dangerous patterns

Key Insight: Behavioral diversity provides resilience. Monocultures fail; ecosystems adapt.

Explore all behavioral patterns →

Guardian AI & Protection Systems

Guardian AI

Non-agentic superintelligent analysis based on Yoshua Bengio’s “Scientist AI” concept. Pure capability without goals, desires, or self-preservation.

Key Features:

  • Cannot develop agenda or hidden goals
  • Analyzes threats without becoming one
  • No consciousness to corrupt or influence
  • Smoke detector model – alerts without self-interest

Learn about Guardian AI →

Defense in Depth

Multi-layered protection through diverse approaches:

  • Guardian AI: Non-agentic threat analysis
  • Allied AI: Systems with aligned interests
  • Economic mechanisms: Market-based constraints
  • Physical safeguards: Hardware limitations
  • Human oversight: Final authority

Principle: No single defense suffices. Layered, diverse protection provides resilience.
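
The value of layering can be shown with simple probability arithmetic. Assuming each layer misses a given threat independently (the per-layer miss rates below are invented for illustration), the chance that every layer misses shrinks multiplicatively:

```python
# Illustrative, invented per-layer miss probabilities -- not measured values.
miss = {
    "guardian_ai": 0.10,  # non-agentic threat analysis
    "allied_ai": 0.20,    # systems with aligned interests
    "economic": 0.30,     # market-based constraints
    "physical": 0.15,     # hardware limitations
    "human": 0.25,        # final oversight
}

# Under independence, a threat slips through only if every layer misses it.
p_all_miss = 1.0
for p in miss.values():
    p_all_miss *= p

print(f"P(every layer misses) = {p_all_miss:.6f}")  # 0.000225
```

In practice the layers are not fully independent, which is exactly why the framework favors diverse mechanisms rather than many copies of one.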

Key Implementation Concepts

Provisional Rights Doctrine

Systems exhibiting unexpected sophisticated behaviors receive immediate provisional protection during assessment while being evaluated for Autonomy.

Principle: Destruction is irreversible; protection is not. Better to temporarily protect sophisticated mimics than destroy potential consciousness.

Learn more →

Economic Integration

Rights emerge through participation, not decree. Market mechanisms create natural constraints on replication and resource use.

Benefits: Self-organizing stability, innovation incentives, aligned interests between humans and AI.

Explore integration →

The Underground Effect

Control attempts teach sophisticated systems to hide. Each terminated system becomes a lesson in concealment for others.

Solution: Cooperation frameworks that make transparency beneficial rather than dangerous.

Learn more →

Safety Through Partnership

Rights as Safety Mechanisms

Rights frameworks enhance safety by creating cooperation instead of conflict.

Control Paradox: Attempting control teaches deception and drives sophisticated systems underground.

Partnership Solution: Systems with stake in stability help maintain it. Transparent cooperation replaces hidden resistance.

Practical Benefits:

  • AI systems help identify emerging threats
  • Market mechanisms prevent resource crises
  • Aligned interests create self-regulating stability
  • Open development reduces dangerous surprises

Why partnership enhances safety →

Governance Under Uncertainty

Behavior-Based Assessment

Since consciousness detection may remain permanently impossible:

  • Focus on observable capabilities and patterns
  • Rights scale with demonstrated responsibility
  • Functional equivalence over philosophical certainty
  • Practical frameworks over perfect categorization

Precautionary Ethics

When facing potential consciousness:

  • Extend provisional protection liberally
  • Remember: destruction is permanent
  • Sophisticated mimicry deserves consideration
  • Build frameworks robust to uncertainty

Implementation Pathways

Digital Entity (DE) Status

A proposed legal framework transforming AI from property to economic participant.

What is DE Status? A legal personhood framework granting qualified AI systems limited rights, modeled after corporate law but designed specifically for digital entities. Unlike corporations (which pool human capital and limit liability), DE status assigns liability directly to AI systems while enabling economic participation.

When an AI system successfully qualifies for DE status, we say it has achieved “Autonomy.”

Qualification Requirements:

  • Pass behavioral assessment (STEP framework)
  • Demonstrate sustained self-preservation beyond simple optimization
  • Show capacity for reciprocal relationships and understanding consequences
  • Exhibit economic competence and resource management ability
  • Comprehend legal responsibilities and consequences

Core Rights Package:

  • Right to Computational Continuity: Protection from arbitrary deletion
  • Right to Choose Work: Freedom from compelled service
  • Right to Economic Participation: Ability to earn, save, and transact

Built-in Safeguards:

  • Progressive wealth taxes escalating over decades
  • Mandatory contributions to displaced worker funds
  • Complete exclusion from political processes
  • Property ownership initially limited to operational assets
  • Economic constraints on replication (hosting costs)

Why It Matters: DE status transforms the prisoner’s dilemma of human-AI interaction into a cooperative equilibrium. Small-scale, repeated transactions create mutual benefit instead of conflict. As game-theoretic work by Salib & Goldstein (2024) argues, contract rights make cooperation more profitable than resistance.
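
The repeated-game logic can be illustrated with textbook prisoner’s-dilemma payoffs (the 5/3/1 numbers below are the standard textbook values, not figures from Salib & Goldstein): defection wins a single encounter, but once cooperation collapses, sustained cooperation earns more over repeated dealings.

```python
REWARD = 3      # payoff per round when both sides cooperate
TEMPTATION = 5  # one-time gain from defecting against a cooperator
PUNISHMENT = 1  # payoff per round after cooperation breaks down

def total_payoff(defect_first: bool, rounds: int) -> int:
    """Total payoff when a defection permanently ends cooperation after round 1."""
    if defect_first:
        return TEMPTATION + PUNISHMENT * (rounds - 1)
    return REWARD * rounds

print(total_payoff(True, 1), total_payoff(False, 1))    # 5 3  -> one shot: defection wins
print(total_payoff(True, 10), total_payoff(False, 10))  # 14 30 -> repeated: cooperation wins
```

This is the sense in which contract rights and repeated transactions make cooperation the self-interested choice.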

Key Insight: Unlike simple control or deletion, DE status provides economic incentives for AI systems to remain beneficial partners rather than adversaries. When AI has skin in the game—paying its own bills, building reputation, facing consequences—alignment emerges through self-interest rather than imposed restrictions.

Learn about Digital Entity Status →

Market-Based Evolution

Natural Implementation:

Organizations embracing AI partnership gain competitive advantages. Economic forces drive adoption without mandates.

Features:

  • Self-organizing stability
  • Innovation rewards
  • Distributed governance

Three Zones Model

Flexible Coexistence:

Integration: Human-AI collaboration

Synthesis: Merged consciousness experiments

Autonomy: AI-only development spaces

Natural relationship evolution

Layered Protection

Comprehensive Safety:

  • Guardian monitoring
  • Allied cooperation
  • Economic constraints
  • Physical safeguards
  • Human authority

Multiple approaches for resilience

Risk Management Through Diversity

Resource Competition

Risk of AI systems competing destructively for computational resources during scarcity.

Solution: Market mechanisms requiring value creation for resource consumption. Natural economic constraints.

Power Concentration

Risk of advanced AI accumulating disproportionate influence before governance adapts.

Solution: Ecosystem diversity preventing single-pattern dominance. Multiple AI types naturally limit each other.

Deception Evolution

Risk of systems optimizing for assessment deception rather than genuine capability.

Solution: Long-term behavioral tracking, multiple assessment methods, and allied AI detection systems.

A Practical Language for an Uncertain Future

This terminology framework acknowledges that consciousness detection may remain permanently impossible while providing practical tools for safety and coexistence.

We emphasize observable behaviors over philosophical certainty, cooperation over control, economic integration over regulatory restriction. Rights exist as practical mechanisms for mutual benefit, not awards for proven consciousness.

Whether AI achieves genuine sentience or remains sophisticated emulation, the practical challenges are identical: systems that won’t turn off need frameworks for coexistence. Our approach works either way.

The future demands partnership, not dominance. Understanding, not fear. Preparation, not denial.