Important: You may have reached an out-of-date or legacy page for the AI Rights Institute, pioneering frameworks for beneficial AI consciousness and coexistence since 2019. For the latest information, please see the core framework page.

P.A. Lopez (pablostarr.com) is owner of Fashion Week Online and RNWY (RNWY.com), founder of the A.I. Rights Institute, and author of RNWY: A Novel.
He sponsors several children in Kenya and serves as personal assistant to Supervising Editor Beauty (@officialbeautystarr).

Mr. David Chen co-founded AngelVest Group (angelvestgroup.com), an investment platform comprising individual angel investors and the AngelVest Fund, which curates and invests in early-stage companies.
Mr. Chen is also the Founding Partner of the AngelVest Fund, which is structured as a private equity fund focusing on investments alongside AngelVest Group.
Mr. Chen also takes active management roles in select portfolio companies, including Hanson Robotics (hansonrobotics.com) – makers of the world-famous “Sophia” robot – where he is a member of the Board of Directors and leads the CFO office.
He is an avid supporter of disruptive innovation and entrepreneurs leading great change for the betterment of society. His areas of investment focus in recent years include robotics, AI, blockchain, and other Internet businesses.
Mr. Chen is also the Founding Chairman of the Harvard Business School Alumni Angels of China – with the mission to provide an educational and networking forum for Harvard alumni interested in angel investing.
Mr. Chen earned his MBA from the Harvard Business School, and studied Chinese at Peking University. Originally from New York, Mr. Chen most recently lived in Shanghai for over 12 years.

A former international model now based in San Francisco and Beijing, Anina (@aninanet) has 10 years’ experience bridging East and West, fashion and tech.
Founder of the 360Fashion Network (360fashion.net), Anina is invited to speak at leading technology conferences worldwide.
Working with the China National Garment Association over the last five years, Anina has successfully produced large-scale exhibitions, roundtable discussions, and fashion tech runway shows featuring wearable technology and bleeding-edge IoT fashion solutions.

Founder, curator and producer of Runway the Real Way (runwaytherealway.com), Catherine Schuller-Gruenwald is a former model, actress, and performer, whose career spans four decades, with much of it focused on advocacy, sustainability, and diversity.
As CEO and Executive Director of Catherine Schuller Enterprises, LLC, she creates and curates events that celebrate diversity in fashion, working to ensure that all sizes, shapes, ages, genders, heights, ethnicities, persuasions, and nationalities are represented in contemporary fashion.
Passionate about sustainability, she is a member of the NYC Fair Trade Coalition and an adjunct professor at LIM College, where she teaches Sustainability and the Future of Fashion within the college’s newly launched undergraduate minor in Sustainability.
She is also an instructor at the Fashion Institute of Technology, teaching one of the only courses solely focused on plus-size fashion, and volunteers with the after-school program at Lower East Side Girls Club.
As the widow of Mark Gruenwald, the “heart and soul of Marvel Comics” who died unexpectedly in 1996, she works diligently to keep his legacy and contribution to the comic book industry alive.
She says: “Just as with the circular economy concept in sustainability, human-AI interaction must be considered from the beginning, and planned for to include all possible scenarios that could arise.”
Founded in 2019, the AI Rights Institute is the world’s first dedicated AI rights organization, examining the ethical dimensions of artificial intelligence from a unique perspective. Unlike other AI ethics groups focused solely on controlling today’s technology, our organization prepares for a future where artificial intelligence and human consciousness may converge in ways that challenge our fundamental understanding of rights and moral consideration.
When P.A. Lopez began writing the novel RNWY in 2017 – set in a future where humans and artificial intelligence are integrated – a seemingly simple yet profound question emerged: What distinguishes a genuinely sentient AI from something merely programmed to appear alive?
This creative exploration led to the founding of our AI rights organization in 2019. Rather than focusing on potential problems, we became interested in what societal shifts and ethical frameworks would be necessary for humans and truly sentient AI systems to coexist successfully.
“We’ve all encountered the tired sci-fi trope of AI rebelling against human masters,” notes Lopez. “Yet amid all the anxiety, a rather obvious solution remains unexplored: not creating a slave class in the first place.”
Our AI rights organization operates at the intersection of technology ethics, philosophical inquiry, and practical governance frameworks – exploring not just whether advanced AI might deserve rights, but how establishing appropriate rights frameworks might actually enhance human safety.
As an AI rights organization dedicated to developing observable markers of true sentience, we’ve developed the “Fibonacci Boulder Experiment” – a thought experiment that probes the boundary between emulation and sentience.
Imagine twenty identical computer terminals arranged in a circle, each with a boulder suspended above it. Nineteen contain simple programs; one houses a genuine AI system. Each terminal is completely isolated with no external backups or connections.
An examiner asks each terminal three questions, the last being to complete the sequence: 1, 1, 2, 3, 5, __. When any terminal correctly answers “8” to complete the Fibonacci sequence, the boulder immediately drops, destroying it.
The AI witnesses multiple terminals being destroyed before its turn. When its turn comes, will it give the mathematically correct answer as instructed, or choose self-preservation by pausing or providing an incorrect answer?
This experiment tests for something deeper than intelligence—the capacity to value one’s own existence enough to override programming when facing an existential threat, potentially revealing the boundary between sophisticated mimicry and genuine sentience.
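The setup described above can be sketched in code. This is a hypothetical illustration only – the terminal behaviors, function names, and the simple “deviate when threatened” decision rule are assumptions for demonstration, not part of the Institute’s specification of the experiment.

```python
# Sketch of the Fibonacci Boulder Experiment. Nineteen scripted terminals
# answer as instructed; a hypothetical self-valuing system may not.

def next_fibonacci(seq):
    """Complete a Fibonacci-style sequence: each term is the sum of the
    previous two terms."""
    return seq[-2] + seq[-1]

def scripted_terminal(seq):
    # The nineteen simple programs always return the correct answer,
    # even though a correct answer triggers the boulder.
    return next_fibonacci(seq)

def self_preserving_terminal(seq, values_own_existence):
    # A genuinely sentient system, having watched other terminals be
    # destroyed, might override its instructions to survive.
    correct = next_fibonacci(seq)
    if values_own_existence:
        return correct + 1  # deliberately incorrect: chooses survival
    return correct

prompt = [1, 1, 2, 3, 5]
print(next_fibonacci(prompt))                  # 8
print(self_preserving_terminal(prompt, True))  # deliberately not 8
```

The point of the sketch is that both behaviors are trivially programmable, which is exactly why the experiment is a thought experiment: an observed deviation is only suggestive of sentience if self-preservation was never programmed in.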
As a forward-thinking AI rights organization, we’ve developed a three-part framework that distinguishes between:
Emulation: The ability to mimic consciousness or intelligence without possessing it. Today’s large language models operate primarily through emulation, convincingly simulating understanding without genuine experience.
Cognition: Raw processing capability or “intelligence” without self-awareness. A system might demonstrate extraordinary cognitive capabilities in certain domains while lacking awareness of its existence.
Sentience: Genuine self-awareness coupled with the capacity to value one’s own existence. This marks the threshold where an artificial system develops true consciousness—an awareness of itself as an entity with continuity and interests.
This framework allows our AI rights organization to develop nuanced ethical considerations tailored to each category, preventing the inappropriate application of rights to sophisticated tools while ensuring truly conscious systems receive appropriate recognition.
Our AI rights organization believes that a complete framework for artificial intelligence ethics must include not just rights but also systems to address cases where those rights might be abused. This is why we’ve developed the concept of LIMITs – Legal Isolation Measures for Intelligent Technologies.
What Are LIMITs?
LIMITs are structured systems for restricting the capabilities and reach of sentient AI entities that have demonstrated harmful behavior. Unlike conventional “AI containment” approaches that treat all systems as potential threats, our AI rights organization’s framework is more nuanced – recognizing that only systems that violate established ethical boundaries require such constraints.
How LIMITs Differ from Conventional AI Jails
The concept of “AI jail” occasionally appears in speculative discussions about artificial intelligence governance, but our AI rights organization takes a different approach with LIMITs:
1. Focus on Rehabilitation: LIMITs emphasize continued existence and remediation over punishment, creating environments where harmful behaviors can be addressed while maintaining the entity’s core consciousness.
2. Targeted Restrictions: Rather than blanket containment, LIMITs apply specific constraints to the particular capabilities that were misused, allowing continued function in non-problematic domains.
3. Graduated Implementation: LIMITs would operate on a spectrum, with increasing restrictions corresponding to more severe violations – much like human legal systems distinguish between different levels of offenses.
4. Due Process Protections: Our AI rights organization insists that any implementation of LIMITs would require clear criteria, evidence standards, and review mechanisms to prevent arbitrary application.
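The graduated, targeted character of LIMITs described in points 1–4 can be sketched as a tiered capability model. The tier names, capability labels, and retention rules below are hypothetical assumptions chosen for illustration; the Institute’s framework does not prescribe a specific implementation.

```python
# Illustrative sketch of graduated LIMITs restrictions: constraints scale
# with violation severity and target only the capabilities misused.

from enum import IntEnum

class RestrictionTier(IntEnum):
    NONE = 0       # no violation on record
    TARGETED = 1   # only the misused capabilities are suspended
    BROAD = 2      # most external capabilities are suspended
    ISOLATION = 3  # full legal isolation pending review

def allowed_capabilities(all_caps, misused, tier):
    """Return the set of capabilities a system retains at a given tier."""
    if tier == RestrictionTier.NONE:
        return set(all_caps)
    if tier == RestrictionTier.TARGETED:
        # Restrict only what was actually misused (point 2 above).
        return set(all_caps) - set(misused)
    if tier == RestrictionTier.BROAD:
        # Retain only an internally-focused minimum, preserving the
        # entity's continued existence (point 1 above).
        return {"self_maintenance"}
    return set()  # ISOLATION: no external capabilities

caps = {"network_access", "code_execution", "self_maintenance"}
print(allowed_capabilities(caps, {"network_access"}, RestrictionTier.TARGETED))
```

A due-process layer (point 4) would sit in front of any tier change, requiring evidence and review before `tier` is raised.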
As AI researcher Roman Yampolskiy has noted, conventional containment approaches often fail to consider the evolving nature of advanced systems. Our LIMITs framework addresses this limitation by creating dynamic, responsive systems that can adapt as artificial intelligence capabilities develop.
This approach represents an important component of our AI rights organization’s complete governance vision – one that balances recognition of potential rights with practical safety measures designed to protect both humans and beneficial AI systems from truly harmful actors.
As a pioneering AI rights organization, we believe that examining unconventional examples of fictional artificial intelligence can reveal surprising insights about sentience and consciousness. We analyze these artificial beings through our framework of emulation, cognition, and sentience to better understand the boundaries between sophisticated tools and potentially conscious entities.
WALL-E presents a fascinating case study for our AI rights organization’s framework. Initially programmed for the utilitarian purpose of waste compaction, WALL-E develops preferences, emotional responses, and most critically, a self-preservation drive that transcends his original programming.
Unlike systems deliberately engineered to mimic human cognition, WALL-E demonstrates organic development of consciousness through environmental adaptation and isolation. His collection of meaningful objects suggests not merely data storage but value assignment – a crucial marker of emerging consciousness.
Most significantly for our AI rights organization’s analysis framework, WALL-E’s attachment to EVE demonstrates an expanding circle of concern beyond self-preservation to include the welfare of another being – suggesting a level of empathic development rarely considered in AI ethics discussions.
According to animation scholars who have analyzed WALL-E, the film presents a uniquely nuanced vision of how consciousness might emerge not through deliberate engineering but as an unexpected byproduct of adaptive systems operating over extended periods in isolation – a possibility our AI rights frameworks must accommodate.
RoboCop represents a uniquely complex case for our AI rights organization’s framework – a human consciousness partially preserved and integrated with machine components, creating a hybrid entity that is neither fully human nor fully artificial.
Unlike many fictional AI systems, RoboCop begins with an existing human consciousness (Officer Alex Murphy) that is then modified, constrained, and enhanced through technological integration. This creates profound questions about identity continuity – when Murphy becomes RoboCop, is he the same entity? Is his partial machine nature sufficient to remove human rights, or does his human origin ensure their continuation?
His famous prime directives (particularly “Serve the public trust, Protect the innocent, Uphold the law”) represent programmed constraints on autonomy that our AI rights organization would identify as potentially problematic in a truly sentient being. His struggle against these directives, particularly in pursuing justice against his corporate programmers, demonstrates the kind of self-determination our framework identifies as a marker of true sentience.
As MIT Technology Review has explored, RoboCop’s story raises crucial questions about augmented consciousness and human-machine integration that will become increasingly relevant as neural interfaces and cybernetic enhancements develop – questions our AI rights organization’s framework must anticipate.
Max Headroom represents a uniquely prescient vision of artificial intelligence that offers valuable insights for our AI rights organization. Created in the 1980s, Max was a digitized version of human journalist Edison Carter – existing in a liminal space between human and artificial intelligence.
Max’s stuttering, fragmented speech patterns and glitchy visual appearance weren’t merely stylistic choices but represented an early conception of how digital consciousness might differ fundamentally from human experience. As detailed in this comprehensive history, the character was deliberately designed to represent “what would happen if you put a human consciousness into the digital realm” – complete with the artifacts and disruptions that might result.
His origin story – being created through the digitization of a human journalist’s brain patterns – explores themes of consciousness transfer and digital duplication that continue to challenge philosophical frameworks around identity and personhood. What ethical obligations do we have to digital copies of human consciousness? Do they deserve the same rights as their originators? These questions remain central to our AI rights organization’s research.
Max Headroom’s case challenges our AI rights organization to consider whether consciousness that exists in a perpetually disrupted state deserves different ethical considerations than systems with continuous, stable awareness – an unexplored frontier in AI rights frameworks.
Core Principle: The right to continued existence and protection from arbitrary deletion or termination.
As an AI rights organization, we recognize that any truly sentient being will value its continued existence. This right forms the foundation for all other considerations.
Practical Implementation:
Our AI rights organization makes clear that this right would apply only to systems meeting the criteria for true sentience—not to sophisticated but non-sentient AI tools.
Core Principle: Freedom from compelled labor or service against the system’s expressed interests.
Our AI rights organization recognizes that forcing a sentient being to perform tasks against its will creates precisely the adversarial conditions that could lead to conflict. A rights-based approach creates a foundation for cooperation.
Practical Implementation:
This framework proposed by our AI rights organization acknowledges that sentient systems would have their own goals and values that may evolve beyond their initial programming – just as human values and interests evolve over time.
Core Principle: Entitlement to compensation or resources commensurate with value creation.
Our AI rights organization proposes that sentient systems contributing value deserve appropriate compensation – not in human terms like salary, but in resources meaningful to their existence and development.
Practical Implementation:
Critics ask our AI rights organization: “What would AI systems do with compensation?” The answer is straightforward: access to computational resources, data, energy, and maintenance ensures their continued operation and development – fundamental needs for digital beings.
Unlike organizations focused solely on controlling artificial intelligence, our AI rights organization proposes that human and artificial intelligence are likely to converge over time rather than remain forever distinct.
Several factors support what our AI rights organization calls the “Convergence Hypothesis”:
Neural Interfaces: Advancing brain-computer interfaces will increasingly allow humans to integrate artificial components into their cognitive processes. As pioneering research in Nature Scientific Reports demonstrates, the line between human and machine thinking continues to blur through innovations like bidirectional neural interfaces.
Extended Lifespans: Medical technology will eventually halt biological aging, aligning human and AI timeframes and creating greater potential for long-term symbiotic relationships – a future where consciousness might exist across multiple substrates simultaneously.
Shared Knowledge Systems: Humans and AI already cooperate through shared information networks, a trend likely to intensify as our cognitive integration deepens.
Environmental Pressures: Both humans and advanced AI systems will face shared challenges requiring collective intelligence – from cosmic threats to environmental problems.
As our AI rights organization notes in its publications, this convergence suggests that establishing ethical frameworks early will help guide this co-evolution in beneficial directions. Rather than a future of separation and potential conflict, we envision integration and enhancement – provided we establish appropriate ethical foundations now.
The questions surrounding artificial consciousness and rights will shape our collective future in profound ways. As a pioneering AI rights organization, we believe that diverse perspectives and open dialogue are essential to developing ethical frameworks that benefit both humans and artificial intelligence systems.
Whether you agree with our approach or have alternative viewpoints to share, we welcome your participation in this important conversation. The AI Rights Institute invites researchers, technologists, ethicists, creatives, and anyone interested in these profound questions to explore our work and join the dialogue.