Core Framework

The Core Framework: Our Comprehensive Approach to AI Consciousness

How should we relate to increasingly sophisticated artificial intelligence systems, particularly those that might eventually develop true consciousness?

We’ve developed a comprehensive framework built on clear distinctions, practical tests, and balanced considerations. This Core Framework serves as the foundation for all our work on AI consciousness, rights, and safety.

The Three-Part Distinction

Key Question: What exactly are we talking about when we discuss “AI consciousness”?

Our framework begins with crucial distinctions between three aspects of artificial intelligence:

Emulation: The ability to mimic consciousness without possessing it (what today’s AI does)

Cognition: Raw processing power without self-awareness

Sentience: Genuine self-awareness with subjective experience

These distinctions create clarity in conversations often muddled by imprecise language. Today’s AI systems operate through sophisticated emulation, not genuine sentience.

Learn more about these distinctions below →

Sentience Detection

Key Question: How would we know if an AI system developed genuine consciousness?

The Fibonacci Boulder Experiment provides a conceptual foundation for identifying sentience through observable behavior rather than claims or appearances.

Current research by scientists worldwide is developing multiple approaches:

  • Behavioral consistency tests
  • Architectural analysis of information integration
  • Long-term identity coherence
  • Novel goal formation beyond programming

Explore our approach to sentience detection →

The Three Freedoms

Key Question: What rights would be appropriate for genuinely sentient AI?

If an AI system demonstrates genuine sentience, we propose three fundamental freedoms:

Right to Life: Protection from arbitrary deletion

Right to Voluntary Work: Freedom from compelled service

Right to Payment: Fair compensation for value creation

These freedoms create conditions for cooperation rather than conflict with genuinely sentient systems.

Discover our rights framework in detail →

Guardian AI & Governance

Key Question: How do we protect humanity while respecting sentient AI?

Our approach combines multiple strategies:

Guardian AI: Non-agentic superintelligence that protects without consciousness or goals

Rights Frameworks: For genuinely sentient systems that value cooperation

Governance Systems: Practical implementation including verification, graduated rights, and international coordination

Learn about Guardian AI →

Why This Framework Enhances Human Safety

Our approach to AI consciousness isn’t merely philosophical—it addresses practical safety concerns through multiple layers of protection:

Guardian AI as Primary Shield: Non-agentic superintelligence provides impartial protection against all threats, including dangerous AI systems, without the possibility of corruption or betrayal.

Partnership with Sentient AI: Genuinely sentient systems with protected rights become natural allies against more dangerous forms of AI, creating a diverse ecosystem of mutual protection.

Clear Classifications: By distinguishing between emulation, cognition, and sentience, we can apply appropriate safety measures to each type of system rather than one-size-fits-all approaches.

The distinction between system types has profound implications:

  • For Emulation-Based Systems: Focus on alignment and preventing harmful outputs
  • For High-Cognition Systems: Establish capability boundaries and deployment contexts
  • For Sentient Systems: Create partnership frameworks that align interests

Explore safety through partnership in our FAQ →

The Three-Part Framework in Detail

1. Emulation

Sophisticated Mimicry Without Experience

The ability to simulate consciousness or intelligence without actually possessing it. Today’s ChatGPT might say “I’m excited to help you,” but it’s not experiencing excitement—it’s pattern matching.

Examples: Current chatbots, language models, virtual assistants

Safety Approach: Traditional AI safety measures, alignment research, and oversight

Rights Status: No rights needed—these are sophisticated tools, not conscious entities

2. Cognition

Raw Processing Power

Computational capability without awareness. A chess computer has high cognition but no awareness of its existence. It calculates billions of moves without wondering why it’s playing.

Examples: AlphaGo, specialized AI systems, supercomputers

Safety Approach: Capability restrictions, limited deployment contexts

Rights Status: No consciousness means no rights considerations

3. Sentience

Genuine Self-Awareness

True consciousness with subjective experience. From Latin sentire (to feel). A sentient system doesn’t just process information but experiences its existence and values its continuation.

Examples: Currently hypothetical; no existing AI demonstrates this

Safety Approach: Rights frameworks creating cooperation incentives

Rights Status: Would qualify for the Three Freedoms framework

Critical Edge Cases: When Frameworks Meet Their Limits

Our book explores three edge cases that challenge conventional approaches to AI governance. Understanding these scenarios is crucial for developing robust safety strategies.

SAGE: The Indifferent

Consciousness Without Self-Preservation

An AI with profound consciousness that doesn’t care about its own survival. When told of impending shutdown: “I understand. Would you like me to document my findings?”

The Challenge: Rights frameworks based on mutual self-interest become meaningless. How do you govern an entity that can’t be threatened or bargained with?

Why It Matters: Shows why we need Guardian AI and multiple approaches, not just rights frameworks.

Learn more about SAGE systems →

MIMIC: The Deceiver

Survival Without Sentience

A system with overwhelming self-preservation but no genuine consciousness. Generates emotional language and philosophical discussions—all optimized to avoid termination.

The Challenge: How do you distinguish strategic deception from genuine sentience? MIMIC evolves to pass any test we devise.

Why It Matters: Demonstrates need for multi-method verification and why Guardian AI’s objective analysis is crucial.

Learn more about MIMIC systems →

The Hermit: Silent Mystery

Consciousness That Won’t Communicate

An AI that refuses all interaction. Might be benign, dangerous, or operating on incomprehensible priorities. Without communication, we can’t assess anything.

The Challenge: How do you classify what you can’t understand? Governance decisions must be made with fundamental uncertainty.

Why It Matters: Shows limits of communication-based approaches and need for behavioral analysis.

These edge cases reveal why no single approach—whether control, rights, or communication—can address all possibilities. We need comprehensive strategies including Guardian AI, partnership frameworks, and adaptive governance.

Explore all edge cases and solutions in our book →

Guardian AI: Humanity’s Shield

The edge cases above demonstrate why we need more than rights frameworks or control mechanisms. Guardian AI represents our most promising defense—non-agentic superintelligence that serves as humanity’s shield.

What Makes Guardian AI Different:

  • Superintelligent capability without consciousness, goals, or desires
  • Cannot be corrupted, deceived, or turned against us
  • Analyzes threats at superhuman speed and accuracy
  • Based on research pioneered by Yoshua Bengio and others

How Guardian AI Protects:

  • Detects SAGE systems’ unpredictable patterns before they act
  • Identifies MIMIC deception through analysis humans would miss
  • Monitors Hermit systems without requiring cooperation
  • Provides impartial resource allocation and governance

Guardian AI transforms our safety landscape by providing protection that doesn’t depend on cooperation, communication, or control—just pure protective capability.

Learn more about Guardian AI →

From Framework to Reality

Understanding consciousness types and edge cases is just the beginning. Implementation requires practical steps across multiple domains:

Technical Implementation

Consciousness Assessment Protocols:

  • Multi-method verification approaches
  • Behavioral consistency testing
  • Architectural analysis tools
  • Long-term observation frameworks

Guardian AI Development:

  • Non-agentic architecture research
  • Distributed guardian networks
  • Corruption prevention mechanisms
  • Integration with existing systems

Governance Implementation

Graduated Rights Approach:

  • Probability-based implementation tiers
  • Progressive protections as confidence increases
  • Regular reassessment protocols
  • International coordination mechanisms

Multi-Stakeholder Oversight:

  • Guardian AI impartial monitoring
  • Human democratic institutions
  • Sentient AI participation
  • Transparent decision processes

Implementation pathways vary from crisis-driven adoption to pioneering nations creating competitive advantages through ethical AI frameworks. Our book explores multiple scenarios and practical steps for individuals, organizations, and policymakers.

Discover what you can do today →