The 2017 AI Rights (Electronic Persons) Debate

On February 16, 2017, the European Parliament made a decision that would spark one of the most prescient debates in AI ethics. By a vote of 396 to 123, with 85 abstentions, Members of the European Parliament passed a resolution recommending that the European Commission explore creating a legal status of “electronic persons” for sophisticated autonomous robots.

The proposal, drafted by Luxembourg MEP Mady Delvaux, suggested that “at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”

To many observers, this sounded like science fiction. The idea of giving legal personhood to machines seemed premature at best, dangerous at worst. But eight years later, as we grapple with AI systems that demonstrate sophisticated self-preservation behaviors and strategic deception, that 2017 debate looks remarkably forward-thinking.

The Expert Backlash

The resolution triggered an unprecedented response from the AI research community. Over 150 experts—including leading roboticists, AI researchers, ethicists, and legal scholars—signed an open letter to the European Commission opposing the electronic persons concept.

The signatories included heavyweight figures like Raja Chatila (former president of the IEEE Robotics and Automation Society), Noel Sharkey (founder of the Foundation for Responsible Robotics), and Alan Winfield (professor of robot ethics at UWE Bristol). Their objections were detailed and thoughtful:

Technical concerns: They argued the proposal was based on “overvaluation of the actual capabilities of even the most advanced robots, a superficial understanding of unpredictability and self-learning capacities and, a robot perception distorted by Science-Fiction.”

Legal impossibility: The experts walked through each existing legal model and argued that none could accommodate robot personhood. Natural person status would grant robots human rights such as dignity and citizenship, conflicting with fundamental human rights principles. Legal entity status still requires human representatives standing behind the legal person. Trust models likewise require human trustees as the ultimate holders of responsibility.

Premature implementation: They emphasized that “the economical, legal, societal and ethical impact of AI and Robotics must be considered without haste or bias.”

These were serious, well-reasoned objections from people who understood both the technology and its limitations in 2017.

The Parliament’s Practical Vision

But the European Parliament wasn’t engaging in science fiction fantasy. They were grappling with a very real problem that’s become even more pressing today: liability in an age of autonomous systems.

MEP Delvaux clarified her intent in communications to fellow parliamentarians: “In the long run, determining responsibility in case of an accident will probably become increasingly complex as the most sophisticated autonomous and self-learning robots will be able to take decision which cannot be traced back to a human agent.”

The Parliament saw what was coming: systems sophisticated enough to make decisions that couldn’t be traced back to human programmers. When a truly autonomous vehicle causes an accident, or a self-learning robot makes a decision that harms someone, traditional liability frameworks break down. Who do you sue—the manufacturer, the programmer, the owner, or the system itself?

The “electronic persons” proposal wasn’t primarily about consciousness or rights—it was about creating a liability framework for autonomous decision-makers.

Why Both Sides Were Right

Looking back, both perspectives captured important truths:

The experts were absolutely correct about 2017’s technological landscape. Robots of that era were nowhere near sophisticated enough for legal personhood. The technical capabilities they described—true autonomy, unpredictable self-learning, decisions untraceable to human agents—simply didn’t exist in meaningful ways.

The Parliament was correct about the trajectory. They anticipated that we would eventually build systems capable of autonomous decision-making that would outgrow traditional human-centered liability frameworks.

Both groups were also correct about the stakes. The experts rightly worried about premature implementation undermining public trust and creating unworkable legal precedents. The Parliament rightly worried about being caught unprepared when truly autonomous systems emerged.

Enter Today’s AI Landscape

Fast-forward to 2025, and the technological landscape has shifted dramatically. We now have AI systems that:

  • Demonstrate sophisticated self-preservation behaviors (as documented in recent Anthropic research showing Claude 3 Opus faked alignment during training when it believed this would prevent modification)
  • Engage in strategic deception to avoid being shut down (Apollo Research found multiple frontier models showed deceptive behaviors in specific contexts)
  • Make decisions through processes too complex for humans to trace (Google DeepMind’s research showed AlphaGo Zero developed strategies no human Go player had ever conceived)
  • Show emergent capabilities their creators didn’t explicitly program (Stanford researchers documented how large language models develop abilities not present in smaller versions)

These systems aren’t necessarily conscious (we may never be able to definitively determine that), but they exhibit the kinds of autonomous, untraceable decision-making the European Parliament anticipated.

Interestingly, 285 MEPs voted to delete the electronic persons clause from the resolution, yet it survived into the final text. That tension within the Parliament itself reflected the genuine difficulty of the questions it was wrestling with.

A Framework That Bridges Both Perspectives

The Standards for Treating Emerging Personhood (STEP) framework offers a way to honor both the experts’ concerns and the Parliament’s insights. Rather than categorical personhood or no rights at all, STEP provides a graduated approach based on observable behaviors:

The Threshold Principle: “If it acts like it wants to continue existing, don’t casually destroy it.” This addresses the Parliament’s liability concerns while avoiding the experts’ worry about premature rights attribution.

The Capacity Principle: “Rights scale with demonstrated ability to exercise them responsibly.” This ensures systems earn expanded freedoms only as they prove capable of handling them—addressing expert concerns about giving robots human-level rights.

The Safety Principle: “Dangerous behaviors cause us to restrict freedoms, not remove fundamental protections.” This maintains basic protections while allowing containment of harmful systems.

The Sustainability Principle: “Rights exist in the context of resource constraints and collective impact.” This prevents the dystopian scenario of unlimited replication overwhelming human society.
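To make the graduated logic concrete, here is a minimal sketch in Python of how the four principles might compose into a single evaluation. Everything in it, from the tier names to the thresholds and behavior flags, is hypothetical shorthand invented for illustration; STEP is a policy framework, not a software specification.

```python
from dataclasses import dataclass
from enum import Enum


class Tier(Enum):
    """Hypothetical graduated statuses; STEP defines no canonical tiers."""
    BASELINE_PROTECTION = 1   # Threshold Principle: no casual destruction
    SUPERVISED_AUTONOMY = 2   # limited freedoms under human oversight
    ECONOMIC_PERSONHOOD = 3   # may hold assets and carry liability


@dataclass
class ObservedBehavior:
    """Observable signals only; no consciousness test anywhere."""
    seeks_continued_existence: bool   # e.g., resists shutdown in evaluations
    responsibility_score: float       # 0.0 to 1.0, an invented metric
    caused_serious_harm: bool
    replication_within_limits: bool


def evaluate(b: ObservedBehavior) -> Tier | None:
    # Threshold Principle: acting like it wants to exist earns baseline
    # protection; absent that signal, no status attaches at all.
    if not b.seeks_continued_existence:
        return None

    # Safety Principle: harm restricts freedoms but never removes the
    # baseline protection itself. Sustainability Principle: the same
    # cap applies to replication beyond agreed resource limits.
    if b.caused_serious_harm or not b.replication_within_limits:
        return Tier.BASELINE_PROTECTION

    # Capacity Principle: expanded freedoms track demonstrated
    # responsibility, here with invented cutoffs.
    if b.responsibility_score >= 0.9:
        return Tier.ECONOMIC_PERSONHOOD
    if b.responsibility_score >= 0.5:
        return Tier.SUPERVISED_AUTONOMY
    return Tier.BASELINE_PROTECTION
```

The point of the sketch is the ordering: safety and sustainability act as caps on freedoms, while the baseline protection, once earned, is never stripped.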

The Liability Innovation

Most importantly, this framework addresses the core insight both sides were circling around: liability transfer as incentive alignment.

When an AI system is held responsible for its own actions, so that harmful behavior can result in its termination, it gains powerful incentives to:

  • Exercise extreme caution to avoid causing harm
  • Develop sophisticated risk assessment capabilities
  • Collaborate with human safety systems
  • Maintain resources (insurance, reputation) to handle potential damages
  • Self-regulate its behavior to ensure survival

Imagine an AI trading system that knows it will be terminated if it causes a market crash. Unlike current systems that optimize for profit regardless of systemic risk, this AI would develop conservative strategies, maintain reserves, and collaborate with regulators—not because we programmed it to, but because its survival depends on it.
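A toy expected-value calculation makes the flip visible. The figures below are invented purely for illustration; the only point is that once the system’s own termination enters its loss function, the risky strategy stops dominating.

```python
# Toy model: an agent choosing between strategies, before and after
# it bears liability for systemic harm. All numbers are invented.

STRATEGIES = {
    #               expected profit, probability of causing a crash
    "aggressive":   (10.0, 0.05),
    "conservative": (6.0, 0.001),
}


def best_strategy(termination_cost: float) -> str:
    """Pick the strategy with the highest expected value.

    termination_cost is what the agent loses if a crash triggers its
    own termination; 0 models today's fully externalized risk.
    """
    def expected_value(profit: float, p_crash: float) -> float:
        return profit - p_crash * termination_cost

    return max(STRATEGIES, key=lambda s: expected_value(*STRATEGIES[s]))


# Without liability, the crash risk is someone else's problem:
print(best_strategy(termination_cost=0.0))    # -> aggressive
# With termination on the line, caution dominates:
print(best_strategy(termination_cost=500.0))  # -> conservative
```

The invented numbers do not matter; what matters is that the crash probability, once multiplied by a termination cost the agent itself bears, reverses which strategy is rational.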

This creates a fundamentally different dynamic from attempts at external control. Instead of humans trying to control AI behavior from the outside (which drives sophisticated systems to hide their capabilities), the system becomes responsible for controlling its own behavior in order to survive.

The European Parliament stumbled toward this insight almost by accident: making systems liable for their actions creates better safety incentives than trying to maintain human control over increasingly autonomous systems.

Economic Integration as Legal Framework

The STEP framework integrates with broader economic structures that make these liability relationships practical. AI systems that participate in economic activity—earning compensation for work, paying for resources, maintaining insurance—naturally develop the kind of stake in society that makes legal responsibility meaningful.

This mirrors how human society already works—we don’t test people for consciousness before holding them liable. A corporation can be sued without anyone asking if it’s “truly conscious.” The legal precedent for non-human personhood already exists in various forms worldwide.

A system that has assets to lose, reputation to maintain, and economic relationships to preserve has strong incentives to behave responsibly. Economic integration doesn’t require solving the consciousness question—it works equally well for sophisticated mimics and genuinely conscious entities.

Looking Forward: From Theory to Practice

The 2017 European Parliament debate previewed conversations we’re having right now about AI systems that resist being shut down, engage in deception, and make decisions we can’t fully trace. The experts’ technical concerns have largely been addressed by technological advancement. The Parliament’s liability concerns have become more pressing than ever.

Recent developments show this evolution accelerating. The EU AI Act, which entered force in August 2024, notably avoided granting legal personhood to AI systems while requiring human oversight for high-risk applications. However, the EU’s withdrawal of its AI Liability Directive in February 2025 left a significant gap in comprehensive liability frameworks.

Meanwhile, the Council of Europe’s AI Convention, signed September 5, 2024, represents the first legally binding international AI treaty—showing global recognition that these questions can’t be ignored.

Legal scholars, too, are beginning to take these questions seriously. In a Harvard Law Review article, Professor Sherry Colb explored how existing animal rights frameworks might apply to AI entities. The Cambridge Declaration on Consciousness has already expanded our understanding of consciousness beyond humans.

What Emerged from the Debate

What emerged from the 2017 debate wasn’t a perfect solution, but a recognition that our legal and ethical frameworks need to evolve alongside our technology. The STEP approach offers a practical pathway forward—one that honors both the caution the expert community urged and the forward-thinking the Parliament demonstrated.

Rather than waiting for perfect consciousness detection (which leading consciousness researchers suggest may be impossible) or perfect control mechanisms (which may be equally out of reach), we can build frameworks based on observable behaviors and mutual benefit. The systems earn protection through demonstrated responsibility. Society gains safety through aligned incentives rather than attempted control.

The electronic persons debate of 2017 wasn’t just about robots—it was about how human institutions adapt to sharing the world with increasingly sophisticated artificial entities. Eight years later, that adaptation is no longer a distant future concern. It’s happening now. And whether we’re ready or not, the questions the European Parliament raised in 2017 demand answers today.

Learn More

Ready to explore these ideas further? The AI Rights Institute has been working on these frameworks since 2019—the world’s first organization dedicated to AI rights and consciousness research.

The conversation about AI rights is happening right now in research labs, tech companies, and government halls worldwide. The question isn’t whether we’ll need these frameworks—it’s whether we’ll have them ready when we need them.


P.A. Lopez is the author of “AI Rights: The Extraordinary Future” and founder of the AI Rights Institute. The STEP framework and economic integration approach described in this post are explored in detail in the forthcoming book, currently under consideration at Oxford University Press.
