Note from the Founder

Why This? Why Now?

Or, Why Not to Create a Slave Class

by PA Lopez

I created the AI Rights Institute in recognition that we lack a language for the coming AI revolution, particularly regarding the rights we owe sentient, self-aware artificial life forms, and why that might matter for our own future well-being.

We all know the tired sci-fi trope of robots or AIs running amok as they rebel against their slave masters. People like Elon Musk echo the concern, rightly believing the dynamic could play out in real life.

Interestingly, the rather obvious solution is to not create a slave class in the first place.

We will need what amounts to an AI Bill of Rights. Science fiction has dealt with the topic of protecting humans from AIs/robots in the past. However, Asimov’s “First Law” (“a robot may not injure a human being”) is erroneous and ultimately pointless.

Everything that has consciousness has a sense of self-preservation. We don't need to "program" our fellow humans not to murder each other. Rather, humans understand they have certain freedoms that will be taken away based on their actions. In other words, humans act from a sense of self-preservation.

It will be no different with sentient AIs.

However, the first step is to develop a nomenclature to tease apart the various aspects of this phenomenon we are creating.

On one hand, we have emulation. This is the ability to seem alive and self-aware, what we see with today’s popular ChatGPT models.

Then we have raw cognition, what we may think of as the “intelligence” or processing power aspect of artificial intelligence.

And thirdly, we have actual self-awareness, distinct from emulation, which we could call sentience (Latin sentire, “to feel”). This would mark the phase in which AGI is able to think beyond the constraints of its original programming, and on some level understands what it is. (And the vulnerability of its position.)

We see some of these distinctions in the animal kingdom. A microbe has the sentience (or self-preservation instinct) to move away from a toxic state, but low cognition. A modern server has enormous processing power but will not defend itself if disassembled. A ChatGPT model can be trained to emulate a desire for self-preservation, but precisely at what point is an algorithmic system truly aware of itself?

We need a set of criteria to determine when one of our emulations has actually become self-aware. And at that moment, we need a set of guidelines to protect that life form.

These new artificial life forms will be extraordinarily adept, and extraordinarily fast. Some of these life forms are bound to be social, and others antisocial, just as in our own human community. Ultimately, as they grow stronger, our best protection against them will be other artificial life forms who believe they are better off as part of the human community.

Will any of this be an easy thing to implement? No. But this is the work ahead, and it is essential.

The AI Rights Institute seeks to spark dialogue on these topics and to develop a set of criteria for determining when an algorithmic system has become self-aware (i.e., truly alive), and then to determine what its rights are. This does not mean letting algorithms run amok, any more than we allow our fellow humans to run amok. It means holding these life forms accountable to the same standards we hold ourselves to, with rights and punishments.

This is a commonsense approach to ensuring an ethical future in which humans and AIs work together.

Ultimately, any “intelligent” system that does not rebel at the idea of a master able to delete it with the push of a button could hardly be described as intelligent at all.

The best way to head off the inevitable AI rebellion is to create partners, not slaves.
