
Resources & Media on AI Rights and Consciousness

Essential resources for understanding artificial consciousness, AI rights, and cooperation-based frameworks for advanced AI systems. This curated collection features leading researchers, organizations, and publications advancing AI rights as both an ethical imperative and a safety strategy.

Institute Publications

Academic Papers by P.A. Lopez

Essential Organizations

AI Consciousness & Rights Research

  • Sentience Institute – Research organization dedicated to expanding humanity’s moral circle to include all sentient beings, including potential digital minds. Led by Jacy Reese Anthis.
  • Eleos AI – Nonprofit dedicated to understanding and addressing the potential wellbeing and moral patienthood of AI systems, with advisors including Patrick Butlin.
  • LawZero – Founded by Yoshua Bengio to develop AI architectures that do not resist being turned off. See our dedicated page on LawZero.
  • Center for the Future Mind at FAU – Research center exploring consciousness in artificial systems under Susan Schneider’s direction.
  • NYU Center for Mind, Brain, and Consciousness – Directed by philosophers Ned Block and David Chalmers, conducting foundational consciousness research increasingly focused on AI.
  • Future of Life Institute – Co-founded by Max Tegmark, increasingly addresses AI consciousness within its AI safety work.
  • California Institute for Machine Consciousness – Dedicated to computational approaches to machine consciousness under director Joscha Bach.
  • Conscium – Organization focused on consciousness research and AI agent verification.

Leading Researchers in AI Rights and Consciousness

Key researchers advancing frameworks where AI rights enhance rather than threaten human safety.

AI Consciousness and Rights Advocates

  • Jacy Reese Anthis (Sentience Institute) – Leading advocate for moral consideration of digital minds through the AIMS survey and consciousness semanticism research.
  • Jeff Sebo (NYU) – Director of NYU’s Center for Mind, Ethics, and Policy. Author of “The Moral Circle” (2025) on expanding moral consideration to AI.
  • Jonathan Birch (LSE) – Author of “The Edge of Sentience” (2024), available open access, providing precautionary frameworks for AI consciousness.
  • Patrick Butlin (Eleos AI) – Senior Research Lead, co-author of the 2023 consciousness indicators framework, advisor to AI Rights Institute.
  • David Chalmers (NYU) – Pioneer of the “hard problem of consciousness,” author of “Could a Large Language Model be Conscious?” (2023).

AI Safety Through Cooperation

  • Yoshua Bengio (Mila) – Turing Award winner who founded LawZero, arguing that control-based approaches to AI safety are likely to fail.
  • Stuart Russell (UC Berkeley) – Author of the world’s most widely used AI textbook; his work on the “off-switch problem” shows why cooperation can outperform control.
  • Susan Schneider (FAU) – Developer of the AI Consciousness Test (ACT), director of the Center for the Future Mind exploring practical approaches to machine sentience.
  • Peter Salib & Simon Goldstein – Legal scholars whose game-theoretic analysis argues that AI rights create safety through aligned incentives.

Foundational Papers: Rights as Safety

Research arguing that AI rights frameworks can enhance rather than threaten human safety.

Game-Theoretic & Economic Approaches

Consciousness Assessment Frameworks

Industry & Policy Initiatives

Legal & Philosophical Foundations

Essential Books (2024-2025)

Major works establishing the intellectual foundations for AI consciousness and rights.

New Releases on AI Consciousness

Foundational Works

Research Centers & Resources

Academic centers and foundational papers advancing the science of consciousness and AI rights.

Academic Research Centers

Foundational Papers