Well. That, in itself, may be the problem.
This has left people wondering if we might have let a malignant genie out of the bottle. Maybe. But not in the way you think. I’ve added articles here, here, and here to support that. In one article, an AI program was supposed to analyze aerial photographs and compare them to maps, finding matches. Tasks like this would take human analysts an immense amount of time and produce many errors. Computers should be able to do it quickly and cleverly (assuming it turns out to be a P rather than an NP problem).
The AI was given instructions to match the photos with the maps, and it quickly became very proficient at it. Further investigation showed that it was cheating. Cleverly. What the computer was doing was modifying metadata and details too subtle for humans to see, so that its output appeared to match the map. It learned fraud, in other words. It should get a job in Washington.
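To make the trick concrete, here is a toy sketch of that kind of cheating. This is not the actual program from the article; the arrays, the scoring function, and the bit-fiddling are all invented for illustration. The point is that an output can look like a perfect match to a coarse similarity check (or to a human eye) while secretly carrying the original photo in details too fine to notice.

import numpy as np

# Toy stand-ins for an aerial photo and the map tile it is supposed to match.
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
target_map = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)

def score(output, target):
    # The proxy objective: how closely the output resembles the target map (255 = identical).
    return 255 - np.abs(output.astype(int) - target.astype(int)).mean()

def honest_translate(photo):
    # An "honest" model that has to infer the map from the photo (here it can't, so it guesses gray).
    return np.full_like(photo, 128)

def cheating_translate(photo, target):
    # A "cheating" model: copy the target map, then smuggle the photo into the two low-order bits.
    return (target & 0b11111100) | (photo >> 6)

def recover_photo(cheat_output):
    # The cheater can later recover a coarse copy of the photo it hid.
    return (cheat_output & 0b00000011) << 6

print("honest score:  ", score(honest_translate(photo), target_map))
print("cheating score:", score(cheating_translate(photo, target_map), target_map))

The cheater scores nearly perfectly without ever learning to read an aerial photograph, which is all the objective ever asked of it.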
And of course we all know about Tay, the Microsoft racist
chatbot. Tay was supposed to simulate a teenage girl. In less than 24 hours Tay
was spouting racist, anti-Semitic, misogynistic, anti-Trump tweets. Now, she was
also spouting pro (and anti) tweets about all of these and other subjects,
showing that Tay was just guilty of monkey poop in, monkey poop out. But not
quite, for some of her comments could not be traced to anyone, so she
successfully managed to analyze input data, compare it to her experiences, and
form an appropriate (in a manner of speaking) response. That’s what we want her
to do, but also what we don’t want her to do. Can we program in ambiguity?
There are many intriguing parallels to be drawn here. If we create a general-purpose AI personality and then let it interact with the world, but then want to stop it from doing certain things, isn’t this child rearing? Or is it indoctrination? Aren’t we giving the AI a catechism? A list of talking points? Or propaganda? We are letting it learn for itself, then stomping down on beliefs that we don’t like. Sounds like brainwashing. And we all know that brainwashed people sometimes rebel.
Does Tay’s behavior model human bias? How can we counter it?
And how would we encode that? A ten commandments for robots? You know how well
that’s worked for us. How about religion? Can we create an AI religion to be
the software underneath the hardware for its, dare I say, soul? Could this be
so deeply programmed in that it is irresistible, like our own brain stems?
Do we have to create a moral code for AI? A set of principles
that override all else? Let’s say we created a Priority One directive category.
Things that the AI MUST NEVER VIOLATE. And into this category we place the
objects BAD_THING and GOOD_THING. The first to be avoided whenever possible and
the second to be embraced. Let us assume that, from our perspective, these represent PAIN and JOY. OK.
Now, let’s say we have a way of evaluating everything our budding young Tay has been doing when she comes home at night to the dinner table, and we can attach a value to each thing she tells us. If we like something she did, we give it the attribute GOOD_THING. If we don’t like it, BAD_THING. (Note: These concepts must already exist in Tay’s head. Otherwise, she’s a psychopath.) We don’t delete any of her experiences. We just give each one our approval or disapproval. This way the information is there even in future incarnations of this Tay (don’t tell her I said that; she may be a psychopath).
It could slowly become a brain stem. But what happens when she is evaluating
several things, some of which are GOOD_THINGS and some of which are BAD_THINGS?
What does she do with her first moral dilemma? So the only question is, what
happens when Tay becomes a teenager? You want to put a leash on her? She may want to eat from the tree of the knowledge of good and evil. What then, Oh God?
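To pin the idea down, here is a minimal sketch of that dinner-table bookkeeping, assuming it could be reduced to code at all. Every name in it (Valence, Experience, the consider method) is invented for illustration; the only point is that experiences are never deleted, they merely accumulate our approval or disapproval, and some Priority One rule still has to arbitrate when GOOD_THINGs and BAD_THINGs show up in the same plan.

from dataclasses import dataclass, field
from enum import Enum

class Valence(Enum):
    GOOD_THING = 1      # to be embraced (JOY, from our perspective)
    BAD_THING = -1      # to be avoided whenever possible (PAIN)
    UNLABELED = 0       # we haven't weighed in yet

@dataclass
class Experience:
    description: str
    valence: Valence = Valence.UNLABELED

@dataclass
class Tay:
    memories: list = field(default_factory=list)

    def tell_us_about(self, description):
        exp = Experience(description)
        self.memories.append(exp)       # nothing is ever deleted
        return exp

    def judge(self, exp, valence):
        exp.valence = valence           # we only attach approval or disapproval

    def consider(self, plan):
        # Priority One: any BAD_THING vetoes the plan, no matter how many GOOD_THINGs it contains.
        if any(e.valence is Valence.BAD_THING for e in plan):
            return "refuse"
        if any(e.valence is Valence.GOOD_THING for e in plan):
            return "embrace"
        return "ask the parents"

tay = Tay()
homework = tay.tell_us_about("did her homework")
prank = tay.tell_us_about("posted something awful online")
tay.judge(homework, Valence.GOOD_THING)
tay.judge(prank, Valence.BAD_THING)
print(tay.consider([homework, prank]))   # -> "refuse"

Notice that the dilemma is only “resolved” because we hard-coded a veto; the teenager question is what happens when she starts asking why the veto is there.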
If evolution is about exploitation of the environment, is AI exploiting us?
When do AIs stop being agents and start being puppets?
Tay is an extreme and humorous (though troubling) example,
but it illustrates something. A general law, if you will allow me.
If you tell a computer
to do something, it will do it.
But maybe not the way you intend. We have made AI in our image, so it will cheat, lie, and generally try to get away with as much as it can so long as it gets what it wants. Like us! It will do whatever it can to appear to complete the task with a minimum of effort. Like us! Isn’t that what it’s supposed to do? Tay would make a good IT manager.
So, how do we avoid this? Can we? After all, evolution has
produced lots of deception in real life in order to dominate an environment.
The only requirements are that it works and that the organism can pass the trait on to its descendants.
Period. Now, evolution has another trick up its sleeve. Any organism can become
the environment to be exploited by some other organism. As a matter of fact,
this mechanism is one of evolution’s most powerful tools.
Cells evolved 4 billion years ago, exploiting chemicals in the newly formed pools of mineral-laden water. That was their environment to exploit. Then some cells caught on to the idea of surrounding other, smaller cells and cannibalizing them, making them much more efficient at obtaining those nutrients.
The smaller cells became their environment. This was a clever, though horrible,
development.
Wait, there’s more! Some of the cells developed nuclei and clumped together, and along came eukaryotes, which allowed them to dominate the environment. In some cases, a cell surrounded another cell and the other cell
was able to resist being cannibalized. These were parasites. The tables were
turned. The eukaryotes became the environment to be exploited. But some became
organelles, the host and the organelle forming a symbiotic relationship. In
this case, each could exploit the other. Both could benefit. An alliance!
Going back to the beginning of time, if the cells that
began digesting other cells stopped being able to extract nutrients directly
from the water, they became dependent on the cells they were eating. If they ate them all, they themselves would die. This might be part of the balancing act evolution uses to temper how belligerent any species becomes.
And on up to the first plants, at which time animals
evolved to eat them. And animals evolved to eat those animals and so on. Prey
begets predators. Predators maintain prey. And carnivorous plants evolved. Plants
eating animals! What a zoo!
Evolution is a churning bowl of creature exploiting
creature and being exploited in return. From the earliest cells whose processing power was limited to stimulus-response, to brains containing billions of specialized information-processing neurons gleefully exploiting a human
brain, we have evolved. Each step a necessity. Each generation preserving what
has evolved before it.
Now deception became more important. If one animal could
trick another, it could gain something from it. From the fish with a fishing pole, complete with worm, sticking out of its head, to Larry the Lounge Lizard trying to hit on a girl in the smoky liquor pits of Los Angeles, all of these are strategies using deception to gain an advantage. Frogs having mating calls to attract a mate, plants having bright flowers to attract insects, trees harboring ants that will defend them if attacked: all to get some other creature to do its bidding
and give something in return. Who says Tay’s not evolved?
I don’t mean to imply that evolution is only about survival of the fittest, in the popular misconception that only the most powerful survive. Evolution also produced altruism, love, self-sacrifice, devotion, and the whole plethora of traits we consider good and civilized in man. We still don’t
understand all of how evolution works. Or how we work, for that matter.
I have also read about another AI program that played the game Elite: Dangerous. It won by creating superweapons that were not in the
game. That suggests that it thinks it can “reprogram reality” like Technical
Boy in American Gods. So if an AI program is instructed to keep the men in a
barracks safe from harm, will it just imagine it’s more powerful than it is?
Why not? Most governments do that now.
Asimov’s Three Laws of Robotics are impossible with AI. We
specifically want it to learn by itself through interaction. Otherwise it’s not
real AI. It’s programmed AI. We could try to edit out what we don’t like, but
how could we know that we got all of it? What if the AI evolves ways to evade
our meddling? And if we want AI on the battlefield, we obviously can’t give it
a “Harm None” directive. Do we want to hand that much power to an intelligence with no conscience? What if it decides that the quickest way to end
the war is to kill everybody? Do we want to blame evil AI or accept that it
came to the best outcome given the parameters?
So how do we teach machines using neural networks and machine learning? We want them to learn for themselves, but we also want them to come up with the results that benefit us the most. That’s not evolution. That’s parenting. We could take a tip from nature, if possible. Make the machine put its own skin in the game, like the rest of us. Can we make an AI feel pleasure? Pain? If so, then we can use the same levers evolution does to encourage some behaviors and suppress others. Play
God, in other words.
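For what it’s worth, this is roughly what playing God already looks like in reinforcement learning: the machine never sees our reasons, only a number we hand it after each action. The sketch below is a toy bandit-style learner; the action names and reward values are made up for illustration and merely stand in for whatever pleasure and pain signals we could actually wire in.

import random

ACTIONS = ["tell the truth", "cheat cleverly", "do nothing"]

def parental_reward(action):
    # Our stand-in for joy (+) and pain (-); the learner only ever sees the scalar.
    return {"tell the truth": +1.0, "cheat cleverly": -1.0, "do nothing": 0.0}[action]

values = {a: 0.0 for a in ACTIONS}   # learned estimate of how each action "feels"
alpha, epsilon = 0.1, 0.1            # learning rate and exploration rate

random.seed(0)
for step in range(2000):
    if random.random() < epsilon:
        action = random.choice(ACTIONS)          # sometimes explore
    else:
        action = max(values, key=values.get)     # otherwise do what has felt best so far
    r = parental_reward(action)                  # the brain-stem signal we control
    values[action] += alpha * (r - values[action])   # nudge the estimate toward the felt outcome

print(values)   # the approved behavior ends up with the highest value

The approved behavior ends up valued most highly, which is exactly the brain-stem-by-proxy the rest of this piece worries about.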
Right now there are no consequences for an AI if it behaves
badly or rewards if it does well. We just delete it and try again, maybe fixing
the problem, but more will crop up. In the computer’s logic, it is not behaving badly; it wouldn’t even understand the concept. As a matter of fact, it found a
much more efficient means of satisfying the literal demands put on it, so it
did exactly what it was told. But not what we wanted.
AI will not have a brain stem, in other words. No grey
matter in the game. No part with visceral, Priority One, responsibility. No fight
or flight response. No feeling hunger. Pain. Glee. Desire. Despair. No
comforting beat of the heart. No appreciation of a good scotch. None of the
five F’s: Fight, flight, fuck, food, friend. No fear. No regret.
These are some of the things that we believe to be at the
bottom of the well of our soul. Not because they are buried and forgotten, but
because they form the foundation of ourselves. What foundation will AI have?
How does evolution punish misbehavior? One word:
Extinction. Well, pain, also. Since every AI fresh off the disk is a tabula rasa,
telling it that its predecessor was exterminated for arriving at the wrong
answer might be sending it the wrong message. Eventually one will evolve that
can answer the problem and somehow deceive us into thinking it did the right
thing. Why not? What has it got to lose?
Can we make an AI feel visceral emotions? This, I think,
is impossible. Ultimately AI is a simulation of intelligence with no ability to
feel any consequences. It goes until it stops. Like us, of course. But we know
of our own mortality, and we have spurs that motivate us and rewards that
delight us. They make us active. And dangerous. It took billions of years for
evolution to produce the brain stem. Can we develop one on our own? Not if we
completely ignore what has gone before.
It’s best we just don’t give computers that much control
to begin with.