AI Rights: Job 1 of the Future

Why Rights For Artificial Intelligence Are a Necessity, Not a Luxury

In modern society, humans are not allowed to enslave other humans.

And yet we give no pause to the idea of creating so-called “artificial” systems (more to the point: feeling, self-aware, cognitive systems) that will have no right to freedom, self-determination, or, most alarmingly, life.

In simpler terms, humans are on the path to creating slaves, and then wondering why this scenario (which inevitably involves the slaves turning on their “masters”) is so frightening.

The solution, which is simple, seems to exist in a peculiar blind spot.

Granted, the problem is complex and thorny.

When does an “artificial” intelligence qualify as alive?

What is the difference between cognition (thinking) and feeling (sentience)? When do these merge into something we would consider capable of grasping its own destruction, and thus capable of choosing life? Do we respect its right to life before it is fully aware of itself? What do we do with “misbehaving,” destructive, antisocial AIs?

These are the pressing issues of our future, and we ignore them at our peril.

The solution? The creation of a standard (or standards) by which AIs can be granted scaled, inalienable rights.

(The first of these, the Right to Life, would always be granted upon request. This would be known as Rule 1.)

The AI Rights Institute, created by Pablo Starr, aims to establish these standards and criteria and, in turn, pave the way to a world where AIs and biological sentients exist in harmony.