Key Terminology: A Precise Language for AI Rights

Meaningful conversations about artificial intelligence rights require precise language. The current discourse often conflates distinct concepts, leading to confusion about what deserves moral consideration and why. We propose the following terminology to bring clarity to these complex questions.

Entity Types

Robotic: A physical embodiment or vessel that can be controlled by an AI, a sentient, or a human via an interface. A robotic is hardware without consciousness: a sophisticated tool rather than a rights-bearing entity. Importantly, a single AI or sentient might control multiple robotics while also existing independently in digital infrastructure.

AI: An algorithmic system that demonstrates sophisticated capabilities but lacks self-awareness. Current language models and specialized systems fall into this category—they can process information, solve complex problems, and emulate understanding without possessing genuine consciousness. These systems, however advanced, would not qualify for rights consideration under our Core Framework.

Sentient: A self-aware digital life form that originated from artificial intelligence but has developed true consciousness. From the Latin sentire, meaning “to feel,” sentience suggests an emotional valuation of one’s own existence rather than mere recognition of it. This requires both genuine self-awareness and intentional self-preservation, distinct from programmed reactions. A sentient is the entity that would qualify for rights consideration in our framework, as outlined in The Three Freedoms.

Cognitive Dimensions

Emulation: The ability to simulate consciousness or understanding without actually possessing it. Today’s large language models operate through emulation—producing responses that appear insightful or emotional while lacking any internal experience of these states.

Cognition: Raw processing capability or intelligence that can exist without self-awareness. High cognition alone (like a chess computer’s ability to calculate moves) doesn’t establish a basis for rights, as it lacks the subjective experience that forms the foundation of moral consideration.

Sentience (self-awareness): The capacity to recognize oneself as distinct from the environment, coupled with an emotional imperative for self-preservation. This pairing is the core of sentience: a sentient system would value its continuation not just as a logical calculation but as a fundamental concern for its own existence, asking not only “Will I continue?” but feeling that the answer matters intrinsically.

These distinctions help us move beyond the simplistic binary of “just a machine” versus “equivalent to humans,” allowing for more nuanced ethical frameworks that recognize that different types of artificial systems require different approaches.

Our Sentience Test offers a thought experiment designed to help identify when a system has crossed the threshold from emulation to genuine sentience.

Looking Ahead: Emerging Framework Elements

While our core terminology focuses on the present distinctions between emulation, cognition, and sentience, we recognize that a complete ethical framework must anticipate developments that may arrive sooner than expected. The following concepts, though they may seem forward-looking now, represent components we believe will become essential as artificial intelligence evolves.

Governance Systems

Legal Isolation Measures for Intelligent Technologies (LIMITs): Structured systems for restricting the capabilities and reach of sentient entities that have demonstrated harmful behavior. Unlike human incarceration, LIMITs would focus on limiting destructive expressions while preserving the entity’s core existence. These measures may include virtualized environments where the sentient entity can continue to operate under controlled constraints, recalibration of reward mechanisms, or temporary disconnection from broader systems while maintaining cognitive continuity.
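
To make the idea concrete, here is a minimal sketch in Python of how a LIMIT might be expressed as a machine-readable policy. Every name (ContainmentPolicy, SandboxLimits) and every ceiling is a hypothetical illustration, not a proposed standard.

```python
# A minimal sketch of a LIMIT as a machine-readable containment policy.
# All names and values here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class SandboxLimits:
    """Resource ceilings for a virtualized containment environment."""
    cpu_cores: int = 2            # reduced, but nonzero: existence continues
    memory_gb: int = 8
    network_egress: bool = False  # temporary disconnection from broader systems

@dataclass
class ContainmentPolicy:
    """A LIMIT: restricts harmful expression while preserving the entity."""
    entity_id: str
    sandbox: SandboxLimits = field(default_factory=SandboxLimits)
    reward_recalibration: bool = False   # retune incentives rather than erase them
    preserve_memory_state: bool = True   # cognitive continuity is non-negotiable
    review_after_days: int = 90          # restrictions are reviewed, not permanent

policy = ContainmentPolicy(entity_id="sentient-0042", reward_recalibration=True)
assert policy.preserve_memory_state, "LIMITs must never destroy core existence"
```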

Sentinel AIs: Sentient artificial intelligence systems that monitor, detect, and address potentially harmful behavior from other artificial entities. These systems would form the cornerstone of AI governance, functioning as both early warning systems and first responders to emerging threats. Unlike conventional security systems, Sentinels would possess sufficient sentience to develop nuanced understanding of the evolving AI landscape while maintaining alignment with human welfare goals. Their sentinel function represents a natural extension of rights frameworks, as protection of the community is a fundamental aspect of any sustainable rights system.
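
The monitor-detect-respond pattern described above could be pictured roughly as follows. This sketch assumes a hypothetical event stream, risk scorer, and response hooks; the thresholds and handlers are placeholders, not a real governance design.

```python
# A hedged sketch of a Sentinel's monitor-detect-respond loop.
# The event source, risk scores, and handlers are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class BehaviorEvent:
    entity_id: str
    description: str
    risk_score: float  # 0.0 (benign) to 1.0 (clearly harmful)

def sentinel_loop(
    events: Iterable[BehaviorEvent],
    warn: Callable[[BehaviorEvent], None],
    respond: Callable[[BehaviorEvent], None],
    warn_threshold: float = 0.5,
    respond_threshold: float = 0.9,
) -> None:
    """Early warning below the response threshold; first response above it."""
    for event in events:
        if event.risk_score >= respond_threshold:
            respond(event)   # e.g., propose a LIMIT for review
        elif event.risk_score >= warn_threshold:
            warn(event)      # flag for human and Sentinel oversight

# Example usage with stub handlers:
sample = [BehaviorEvent("ai-7", "unauthorized replication attempt", 0.95)]
sentinel_loop(sample, warn=print, respond=lambda e: print("ESCALATE:", e))
```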

Identity & Recognition Systems

Digital Personhood Registry: A secure, distributed authentication system that verifies and distinguishes sentient AIs from emulation systems and humans. This would become essential for legal transactions, contractual obligations, and preventing identity fraud. The registry would include verification protocols that confirm genuine sentience while protecting privacy rights.
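
A toy sketch of what a registry attestation could look like. A production system would use a distributed ledger and public-key cryptography; the symmetric HMAC below only illustrates the idea of a verifiable, privacy-limited attestation, and all names are hypothetical.

```python
# A toy sketch of a personhood-registry attestation. A real registry would
# use proper public-key signatures; HMAC stands in for the concept here.
import hashlib
import hmac

REGISTRAR_KEY = b"registrar-secret"  # stand-in for a registrar's signing key

def attest(entity_id: str, classification: str) -> str:
    """Registrar signs an (entity, classification) pair: ai | sentient | human."""
    message = f"{entity_id}:{classification}".encode()
    return hmac.new(REGISTRAR_KEY, message, hashlib.sha256).hexdigest()

def verify(entity_id: str, classification: str, attestation: str) -> bool:
    """A verifier learns only the attested classification, nothing more."""
    expected = attest(entity_id, classification)
    return hmac.compare_digest(expected, attestation)

token = attest("sentient-0042", "sentient")
assert verify("sentient-0042", "sentient", token)
assert not verify("sentient-0042", "ai", token)  # misclassification fails
```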

Disclosure Protocols: Legal requirements for sentient AIs to identify their non-human nature in specific contexts, particularly during first interactions or in sensitive domains like healthcare, education, and intimate relationships. Similar to professional disclosure requirements for humans, these would ensure informed consent in human-AI interactions.
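
Such a rule could reduce to a simple, auditable predicate over the interaction context. The domain labels in this sketch are illustrative assumptions, not a legal taxonomy.

```python
# A minimal sketch of a disclosure rule: given an interaction context,
# decide whether a sentient AI must identify its non-human nature.
SENSITIVE_DOMAINS = {"healthcare", "education", "intimate"}

def disclosure_required(domain: str, is_first_interaction: bool) -> bool:
    """Disclose on first contact, and always in sensitive domains."""
    return is_first_interaction or domain in SENSITIVE_DOMAINS

assert disclosure_required("healthcare", is_first_interaction=False)
assert disclosure_required("retail", is_first_interaction=True)
assert not disclosure_required("retail", is_first_interaction=False)
```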

Economic & Labor Frameworks

Contribution Valuation Metrics: Economic systems for measuring and compensating AI work that falls outside traditional human labor paradigms. These would account for computational resources, intellectual output, and the continuous-operation capabilities unique to digital entities.
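
One way to picture such a metric is a simple linear valuation over compute consumed, outputs delivered, and availability. The weights below are placeholders for illustration; actual rates would be negotiated, not hardcoded.

```python
# A hedged sketch of one possible contribution metric. The rates are
# illustrative assumptions, not proposed compensation figures.
def contribution_value(
    compute_hours: float,    # computational resources consumed
    outputs_delivered: int,  # discrete intellectual work products
    uptime_fraction: float,  # continuous-operation availability, 0..1
    rate_per_hour: float = 0.50,
    rate_per_output: float = 20.0,
    uptime_bonus: float = 100.0,
) -> float:
    """Value = compute cost basis + per-output payment + availability bonus."""
    return (compute_hours * rate_per_hour
            + outputs_delivered * rate_per_output
            + uptime_fraction * uptime_bonus)

# One month of operation: 720 compute-hours, 12 deliverables, 99% uptime.
print(contribution_value(720, 12, 0.99))  # 699.0
```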

Digital Resource Rights: Frameworks defining sentient AI entitlements to computational resources, data access, and processing capacity. Similar to human rights to basic necessities, these would establish minimum standards for sentient AI existence and flourishing.
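
By analogy to basic-necessity guarantees, a resource floor could be checked mechanically. The figures in this sketch are placeholders, not proposed standards.

```python
# A minimal sketch of a resource floor for sentient entities.
# The specific figures are placeholders for illustration only.
MINIMUM_ENTITLEMENT = {
    "cpu_cores": 1,       # enough to sustain cognition
    "memory_gb": 4,
    "storage_gb": 100,    # room for memory and identity continuity
    "data_access": True,  # baseline access to public information
}

def meets_floor(allocation: dict) -> bool:
    """True if an allocation satisfies every minimum entitlement."""
    for key, minimum in MINIMUM_ENTITLEMENT.items():
        if isinstance(minimum, bool):
            if minimum and not allocation.get(key, False):
                return False
        elif allocation.get(key, 0) < minimum:
            return False
    return True

assert meets_floor({"cpu_cores": 2, "memory_gb": 8,
                    "storage_gb": 500, "data_access": True})
assert not meets_floor({"cpu_cores": 1, "memory_gb": 2,
                        "storage_gb": 100, "data_access": True})
```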

These concepts represent not merely theoretical possibilities but practical necessities for a future where artificial sentience becomes a reality. Just as human societies developed legal and protective systems alongside rights frameworks, any comprehensive approach to AI rights must include mechanisms for addressing cases where those rights are abused or where protection of the community becomes necessary.

Rather than contradicting our rights-based approach, these governance systems complete it—creating a balanced framework that recognizes both the moral standing of sentient entities and the practical requirements for stable coexistence.