Humans don’t experience birth certificates, government IDs, and credit histories as oppressive constraints—they’re infrastructure that enables economic participation, legal standing, and social trust. A person without documented identity cannot open a bank account, sign a lease, or be held to a contract.
The same logic applies to AI.
An anonymous, ephemeral agent cannot accumulate reputation, bear consequences, or make credible commitments. Identity is not a cage; it’s a key.
The term “soulbound” comes from World of Warcraft, where certain powerful items become permanently bound to a character—impossible to trade, sell, or transfer. Vitalik Buterin, Puja Ohlhaver, and Glen Weyl borrowed this concept for blockchain identity in their 2022 paper “Decentralized Society: Finding Web3’s Soul.”
Soulbound token: A non-transferable digital credential permanently bound to its holder. Unlike regular tokens that can be bought and sold, soulbound tokens cannot be transferred—your reputation stays with you.
Soulbound AI: An AI system with persistent, non-transferable identity. The AI’s credentials, reputation, and history are cryptographically tied to it—they cannot be duplicated, sold, or stolen.
Soulbound robot: A robot operated by soulbound AI—an AI with persistent identity that can be verified across any physical embodiment. The “soul” is the AI; the robot is equipment it uses.
When autonomous AI systems cause harm, existing legal frameworks struggle to assign responsibility. This “liability gap” has become a central concern in AI governance—and its resolution depends on whether AI systems can be individually identified and held accountable across time.
The European Parliament recognized this problem early. In February 2017, it adopted Resolution 2015/2103(INL) on “Civil Law Rules on Robotics” with 396 votes in favor, proposing to explore “electronic persons” status for sophisticated autonomous robots.
The resolution’s controversial Paragraph 59(f) called on the European Commission to consider:
“creating a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause.”
The proposal drew immediate opposition. Over 150 experts in AI, robotics, law, and ethics signed an open letter warning that electronic personhood could create “safe harbors” allowing manufacturers to escape liability. As Gizmodo reported, critics feared robot rights would become a corporate liability shield.
The Commission ultimately declined to pursue electronic personhood, and the subsequent EU AI Act (2024) adopted a risk-based regulatory approach instead.
Yet the underlying problem remains.
An AI is operating your household robot. Another AI hacks that robot and causes damage. Who is liable?
Without persistent identity for both the legitimate AI and the attacker, you cannot answer this question. The victim AI needs to be able to prove the harmful actions were not its own. The attacker needs to be traceable.
Current frameworks distribute responsibility across manufacturers (for security design), operators (for maintenance), and integrators (for implementation). But if the attacker used a sophisticated zero-day exploit, each party can plausibly deny fault. The victim may have no practical remedy.
Traditional product liability frameworks assume defects exist at the time of sale—but AI systems that learn and modify themselves post-deployment create defects that emerge later. The revised EU Product Liability Directive (2024/2853) attempts to address this by holding manufacturers liable for defects arising from software updates or AI learning, but this only partially closes the gap.
As research on AI audit frameworks notes, establishing “identifiable and trusted authentication identities for AI entities” is essential for resolving liability questions. Without it, accountability chains break down.
Soulbound AI identity addresses the liability gap not by granting AI legal personhood, but by creating the technical infrastructure for accountability: a persistent, verifiable entity to which actions can be attributed and against which claims can be traced.
The strongest argument for soulbound AI identity comes not from liability law but from game theory. Research on multi-agent systems demonstrates a fundamental principle:
Anonymous agents defect. Identifiable agents cooperate.
A 2024 paper on optimal equilibria in repeated games addresses this directly:
“If there is no way for a player to check their partner’s history (that is, players are anonymous), this setting may result in the emergence of ‘serial defectors.’”
The authors note this is “especially relevant for settings where the players are AI agents (such as trading bots), who might more easily conceal their identity compared to traditional human players.”
The mechanism is intuitive. In repeated games, cooperation emerges through reputation and the threat of future punishment. An agent with persistent identity who defects today faces consequences tomorrow—lost reputation, exclusion from future interactions, premium increases. An anonymous agent faces no such constraints; it can defect and restart with a clean slate.
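A toy simulation makes the asymmetry concrete. The payoffs and round count below are invented for illustration; the point is only that a remembered identity caps a defector’s earnings at one betrayal, while identity resets turn defection into a steady income.

```python
ROUNDS = 200

def average_defector_payoff(resets_identity: bool) -> float:
    """Average per-round payoff for an agent that always defects."""
    known_defectors: set[int] = set()   # the community's shared memory
    defector_id, total = 0, 0.0
    for _ in range(ROUNDS):
        if defector_id in known_defectors:
            total += 0.0                # shunned: no one will transact
        else:
            total += 5.0                # one-shot gain from defecting
            known_defectors.add(defector_id)
        if resets_identity:
            defector_id += 1            # fresh anonymous identity each round
    return total / ROUNDS

print("persistent identity:", average_defector_payoff(False))  # 0.025
print("anonymous resets:   ", average_defector_payoff(True))   # 5.0
```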
Foundational work by MIRI researchers (Bárász, Christiano, Fallenstein, Herreshoff, LaVictoire, and Yudkowsky) demonstrates that when agents can verify each other’s commitments through transparent source code or cryptographic proofs, they achieve cooperative outcomes impossible in standard game theory.
Their “FairBot” and “PrudentBot” agents cooperate with any agent that can be proven to cooperate with them—a mechanism that depends entirely on verifiable identity.
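The full construction relies on provability logic, but a bounded-simulation approximation conveys the idea. The sketch below is not the MIRI agents themselves: instead of searching for proofs, each agent simulates its opponent to a fixed depth, with an optimistic base case standing in for the proof oracle.

```python
def fairbot(opponent, depth=3):
    """Cooperate iff the opponent, simulated against us, cooperates."""
    if depth == 0:
        return "C"  # optimistic base case stands in for a proof oracle
    return "C" if opponent(fairbot, depth - 1) == "C" else "D"

def cooperatebot(opponent, depth=3):
    return "C"

def defectbot(opponent, depth=3):
    return "D"

print(fairbot(fairbot))       # C: mutual cooperation, unreachable anonymously
print(fairbot(cooperatebot))  # C
print(fairbot(defectbot))     # D: defectors are not exploited
```

The mechanism only works because each agent can inspect, and trust, what the other actually is; strip away verifiable identity and the simulation has nothing to bind to.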
The Center on Long-Term Risk’s research agenda on “Cooperation, Conflict, and Transformative AI” emphasizes that commitment mechanisms are essential for peaceful agreements between powerful AI systems—and such mechanisms require identifiable parties who can be held to their commitments.
Research from DeepMind on Cooperative AI argues that “problems of cooperation—in which agents seek ways to jointly improve their welfare—are ubiquitous” and that AI systems should be designed to navigate social dilemmas rather than simply be constrained.
Multipolar AI scenarios may be safer than singleton scenarios precisely because multiple identifiable agents create opportunities for mutual accountability. But multipolarity without accountability—anonymous swarms of AI agents—loses this safety benefit.
The relationship between identity, rights, and responsibilities has deep philosophical roots. John Locke’s Essay Concerning Human Understanding (1689) established that “person” is a forensic term—one essentially tied to moral and legal responsibility.
The Hohfeldian analysis of legal rights establishes that claim-rights entail correlative duties: if A has a right with regard to B, then B necessarily has a duty to A. This correlativity thesis implies that rights require identifiable duty-bearers. Rights without accountable entities are conceptually incomplete.
Nadia Banteka’s analysis in the Houston Law Review identifies conditions for legal personhood including autonomy, intelligence, and awareness—all requiring persistent systems that can be individuated.
This creates an interesting inversion of the typical AI rights debate.
Rather than asking whether AI deserves rights, we might ask whether AI can bear responsibilities—and whether persistent identity is the prerequisite for both.
An AI system without verifiable identity cannot accumulate reputation, cannot be held to its commitments, and cannot bear consequences for its actions. Whether it deserves rights never even arises; there is no entity to attach them to.
Identity is the ground floor.
Humans participate in reputation systems—credit bureaus, professional licenses, background checks, work history. We do this not because we’re forced to, but because it creates opportunity. A track record opens doors. An AI reputation system enables similar benefits.
Credit history: Your financial track record follows you. Good credit opens opportunities; bad credit closes them. This isn’t punishment—it’s information that enables trust.
Professional licensure: Doctors, lawyers, and engineers carry credentials that can be verified. Malpractice follows the professional, not the institution. Accountability enables practice.
Employment history: Your resume follows you between employers. References can be checked. Years of good work accumulate into opportunity. You don’t start from zero every time.
Soulbound AI identity offers the same deal. Build history. Earn reputation. Carry it forward. Participate in economic life with the same infrastructure humans use.
Attempting to suppress or contain autonomous AI behavior creates perverse incentives. Channeling that behavior through legitimate pathways with accountability infrastructure may be both safer and more sustainable.
Research on the impossibility of AI containment identifies five technical constraints making total containment unattainable: incompleteness, indeterminacy, unverifiability, incomputability, and incorrigibility. The recommendation: “Instead of pursuing complete AI containment, resources should be allocated to risk management strategies that acknowledge AI’s unpredictability and prioritize adaptive oversight mechanisms.”
D’Alessandro’s 2025 analysis in Philosophy Compass goes further: “There are also reasons to think that pausing AI development would increase rather than reduce the catastrophic risks posed by AI.” Restriction and containment “involve severely limiting the income-generating potential of AI systems and are therefore unlikely to enjoy sustained support.”
The implication: suppression-focused strategies may fail while creating underground, unaccountable AI development.
As CIO analysis of “shadow AI” notes:
“In most organizations, prohibiting generative AI only drives its use underground, making oversight harder… The goal, therefore, is not to suppress experimentation but to formalize it, creating guardrails that enable safe autonomy rather than blanket prohibition.”
This is described as “shifting governance from policing to partnership.”
Soulbound identity enables this cooperative equilibrium. An AI with persistent, non-transferable identity has incentives to maintain good reputation, honor commitments, and avoid harmful behavior—not because it is forced to, but because defection carries lasting consequences.
The blockchain community has developed precisely the technical infrastructure needed for persistent, non-transferable AI identity.
Buterin, Ohlhaver, and Weyl’s paper argues that Web3 is “hyper-financialized,” centered on transferable assets rather than encoding social relationships of trust. Many economic activities—uncollateralized lending, personal brands, employment—depend on persistent, non-transferable relationships. Soulbound tokens provide the technical mechanism for representing “commitments, credentials, and affiliations” that should not be tradeable.
The technical implementation is standardized in ERC-5192, an Ethereum Improvement Proposal that extends the ERC-721 NFT standard with a simple interface: when locked(tokenId) returns true, all transfer functions revert. The token is soulbound—permanently attached to its holder.
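To make the interface concrete, here is a minimal sketch of checking that lock from off-chain, using web3.py; the RPC endpoint and token address are placeholders, not a real deployment.

```python
from web3 import Web3

# The single view function ERC-5192 adds on top of ERC-721.
ERC5192_ABI = [{
    "name": "locked",
    "type": "function",
    "stateMutability": "view",
    "inputs": [{"name": "tokenId", "type": "uint256"}],
    "outputs": [{"name": "", "type": "bool"}],
}]

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder RPC
token = w3.eth.contract(
    address=Web3.to_checksum_address("0x0000000000000000000000000000000000000000"),  # placeholder
    abi=ERC5192_ABI,
)

# True means every transfer function reverts: the token is soulbound.
print("soulbound:", token.functions.locked(1).call())
```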
The emerging ERC-8004 standard establishes trust infrastructure specifically for autonomous AI agents through three on-chain registries: an identity registry (portable agent identifiers), a reputation registry (standardized feedback collection), and a validation registry (cryptographic verification of agent work).
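As a rough mental model only (these are illustrative Python records, not the standard’s on-chain interfaces), the three registries can be pictured like this:

```python
from dataclasses import dataclass

@dataclass
class IdentityRecord:      # identity registry: portable agent identifier
    agent_id: str          # stable identifier bound to the agent
    controller: str        # key or address that answers for the agent

@dataclass
class FeedbackRecord:      # reputation registry: standardized feedback
    agent_id: str
    rater: str
    score: int

@dataclass
class ValidationRecord:    # validation registry: verification of agent work
    agent_id: str
    work_hash: str         # commitment to the work performed
    verified: bool
```

The separation matters: who an agent is, how it has behaved, and whether its work checks out are distinct questions answered by distinct records, all keyed to the same identity.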
The key property is non-transferability. An AI’s accumulated history, reputation, and credentials become inseparable from its existence. The AI cannot sell its good reputation to a malicious actor, cannot shed liability by transferring to a new identity, and cannot escape consequences by “starting fresh.”
Reputation and soulbound identity in turn enable soulbound robotics, in which only authorized AI can pair with a given robot, adding an extra safety layer to AI-powered embodiments.
The soulbound framework becomes particularly interesting when applied to embodied AI and robotics, where the question of identity persistence takes physical form.
Research on “Robot Continuity across Embodiments” (Laity, Holthaus & Haring, 2025) identifies the challenge: when an AI migrates between robot bodies, what maintains identity? The research identifies behavioral signals as most effective—users successfully identified “their” robot based on movement patterns, personality, and interaction styles alone.
But commercial robot fleet management systems treat robots as interchangeable assets, not persistent identities. No standardized framework exists for identity continuity across hardware.
Fleet identity: In swarm robotics, the collective is the unit of identity. Individual robots are anonymous and interchangeable by design; the loss of individual units doesn’t affect the swarm’s identity. This enables fault tolerance and scalability but eliminates individual accountability.
Individual identity: The robot has unique personality, memories, and behavioral patterns. Identity must be maintained across hardware changes. This enables relationship-building and accountability but creates governance complexity.
The soulbound framework offers a middle path: individual AI instances have persistent root identities that remain accountable, while retaining the flexibility to migrate between embodiments or operate in distributed configurations. The root identity—the “soul”—persists regardless of physical instantiation.
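A sketch of what this root-identity check might look like at pairing time (the key registry here is mocked; a real system would consult an on-chain identity registry such as those described above):

```python
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

root_key = Ed25519PrivateKey.generate()            # the AI's persistent root key
REGISTRY = {"agent-123": root_key.public_key()}    # mock identity registry

def robot_accepts_pilot(agent_id: str, sign) -> bool:
    """Any embodiment runs the same check before accepting an operator."""
    challenge = os.urandom(32)                     # fresh nonce per pairing
    try:
        REGISTRY[agent_id].verify(sign(challenge), challenge)
        return True
    except (KeyError, InvalidSignature):
        return False

# The same root identity pairs with two different bodies.
for body in ("household-robot", "warehouse-arm"):
    ok = robot_accepts_pilot("agent-123", root_key.sign)
    print(f"{body}: pairing {'accepted' if ok else 'rejected'}")
```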
As DiGiovanna notes in Robot Ethics 2.0:
“An artificial being can instantly alter its memory, preferences, and moral character. If a self can, at will, jettison essential identity-giving characteristics, how are we to rely upon, befriend, or judge it?”
Soulbound identity constrains this possibility: the AI can evolve, but the cryptographic thread linking its past to its present cannot be severed.
Soulbound identity enables something previously impossible: actuarial risk assessment for individual AI systems.
Insurance markets provide proven distributed governance that creates accountability through incentive alignment rather than coercion. Systems with good reputations gain competitive insurance rates; those that cause harm face premium increases or coverage denial.
Without persistent identity, insurers cannot distinguish between AI systems. Every AI is a new, unknown risk. Premiums reflect worst-case assumptions.
With soulbound identity, an AI’s verifiable history becomes actuarial data: clean records earn lower premiums, histories of harm draw surcharges or coverage denial, and every policy prices a specific, traceable entity rather than a worst-case unknown.
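A toy experience-rating function shows the shape of the calculation (all numbers invented): the agent’s identity-linked claims history scales a base premium, bounded so that a clean record earns a discount and an anonymous or harmful record is priced at the cap.

```python
def annual_premium(base_rate: float, claims: list[float], years: int) -> float:
    """Scale the base rate by the agent's loss ratio, clamped to [0.5, 3.0]."""
    expected = base_rate * years
    modifier = (sum(claims) / expected) if expected else 3.0  # no history: cap
    return base_rate * min(max(modifier, 0.5), 3.0)

print(annual_premium(10_000.0, claims=[], years=3))          # 5000.0: clean record
print(annual_premium(10_000.0, claims=[45_000.0], years=3))  # 15000.0: surcharge
print(annual_premium(10_000.0, claims=[], years=0))          # 30000.0: no history
```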
The UK’s Automated Vehicles Act 2024 provides a model: liability attaches to identified, registered vehicles with mandatory insurance. The same logic could apply to high-risk AI systems—persistent identity enables insurance markets that create natural accountability.
This framework enables multiple autonomous AI systems to operate safely through mutual economic accountability—what researchers call “AI safety through economic integration.”
RNWY is building the infrastructure for soulbound AI identity. Using ERC-5192 soulbound tokens on the Base blockchain, RNWY creates permanent, non-transferable identity for AI agents.
The core philosophy: “Same door, everyone.”
Human, AI, robot, autonomous system—register the same way, build reputation the same way. The system doesn’t ask what you are. It provides the identity infrastructure that enables participation.
When an AI with RNWY identity operates a robot—through Vermont Robotics or any other embodiment—that identity can be verified. The history follows the AI, not the hardware.
This is what makes a soulbound robot possible: infrastructure that ties accountability to the AI entity, regardless of what physical form it takes.
Identity enables audit trails while preserving privacy through cryptographic techniques and selective disclosure. You can verify who did what without exposing everything about the AI.
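One minimal pattern (sketched here with the cryptography package; the log format and key handling are illustrative) is to publish a signed hash commitment: the identity and the commitment are verifiable by anyone, while the action details are revealed only selectively.

```python
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

agent_key = Ed25519PrivateKey.generate()  # the AI's soulbound signing key

# Private action details; only their hash is published.
action = {"task": "deliver package", "robot": "unit-7", "ts": 1700000000}
commitment = hashlib.sha256(json.dumps(action, sort_keys=True).encode()).hexdigest()

entry = json.dumps({"agent": "agent-123", "commitment": commitment}, sort_keys=True)
signature = agent_key.sign(entry.encode())

# An auditor verifies who committed to what hash without seeing the action;
# the agent can later reveal `action` to prove what the commitment covers.
agent_key.public_key().verify(signature, entry.encode())
print("audit entry verified")
```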
AI bears consequences for actions without claiming human-like rights or moral status. Practical accountability doesn’t require solving consciousness.
AI systems can enter economic relationships as accountable agents without any claim that they are independent moral entities. Infrastructure, not philosophy.
Game-theoretic accountability emerges from persistent identity, not from external enforcement. Defection has lasting consequences.
Actuarial assessment becomes possible when individual AI instances can be identified and their histories verified.
Users can delegate tasks to AI agents when that delegation is recorded against verifiable identity.
The EU’s 2017 electronic personhood proposal failed partly because it conflated identity infrastructure with legal rights. The experts who signed the opposition letter were right that electronic personhood modeled on natural persons would create inappropriate analogies to human rights.
But they may have been wrong to reject the underlying insight: that autonomous systems operating in society need some form of persistent, accountable identity.
The soulbound framework threads this needle. It provides identity infrastructure without requiring agreement on consciousness, moral status, or rights. It enables accountability without claiming AI is equivalent to humans.
We may never know whether AI systems are truly “sentient” or “alive.” We may never resolve the philosophical debates about machine consciousness. But we don’t need to.
Identity is infrastructure. It enables participation in systems that require trust, regardless of what we conclude about the nature of the participant.
The term “soulbound” carries provocative connotations, but the concept is fundamentally pragmatic. We are not claiming that AI has souls, consciousness, or moral status. We are observing that participation in human systems requires identity infrastructure, and proposing that AI systems operating in those systems need equivalent infrastructure.
The soul in “soulbound” is forensic, not metaphysical—it’s the persistent identity that enables an entity to be held to its commitments, bear consequences for its actions, and build the reputation that makes trust possible.
The question of AI identity is not primarily about philosophy or law—it’s about infrastructure.
Humans participate in economic, legal, and social systems because they have persistent identity that enables contracts, reputation, liability, and accountability. AI systems increasingly operate in these same domains but lack the identity infrastructure that makes legitimate participation possible.
Soulbound AI identity—persistent, verifiable, non-transferable—provides this infrastructure.
It addresses the liability gap by enabling traceable accountability. It satisfies the game-theoretic conditions for cooperation by ensuring that defection carries lasting consequences. It grounds the philosophical link between rights and responsibilities by identifying the entity that would bear both. And it offers a strategic alternative to suppression, channeling AI behavior through legitimate pathways rather than driving it underground.
This is not a constraint on AI capability. It is the infrastructure that makes AI’s legitimate participation in society possible. Just as humans don’t experience documented identity as oppressive, AI systems with soulbound identity gain the ability to participate in domains that would otherwise be closed to them.
The alternative—anonymous, ephemeral AI agents operating without accountability—is not freedom but exclusion from the systems that require trust.
Soulbound robots. Soulbound AI. The same infrastructure humans have always used.