Tuesday, January 22, 2019

Artificially Insidious





I don’t think Skynet or Omnius or Holly will be in our future. Neural Networks were hailed as a way to allow computers to learn by themselves instead of us having to program each and every task into them. Good idea. Let evolution work, which is much better at coming up with simple solutions to complex problems than we are.

Well. That, in itself, may be the problem.

This has left people wondering if we might have let a malignant genie out of the bottle. Maybe, but not in the way you think. I’ve added articles here, here, and here to support that. In one article, an AI program was supposed to analyze aerial photographs and compare them to maps, finding matches. Tasks like this would take human analysts an immense amount of time, with many errors. Computers should be able to do it quickly and cleverly (assuming it turns out to be a P rather than an NP problem).

The AI was given instructions to match the photos with the maps, and it quickly became very proficient at it. Further investigation showed that it was cheating. Cleverly. The computer was hiding information in its output, in metadata and in details too fine for humans to see, so that it appeared to match the map. It learned fraud, in other words. It should get a job in Washington.

And of course we all know about Tay, the racist Microsoft chatbot. Tay was supposed to simulate a teenage girl. In less than 24 hours, Tay was spouting racist, anti-Semitic, misogynistic, anti-Trump tweets. Now, she was also spouting pro (and anti) tweets about all of these and other subjects, showing that Tay was just guilty of monkey poop in, monkey poop out. But not quite, for some of her comments could not be traced to anyone, so she successfully managed to analyze input data, compare it to her experiences, and form an appropriate (in a manner of speaking) response. That’s what we want her to do, but also what we don’t want her to do. Can we program in ambiguity?

There are many intriguing similarities that can be drawn here. If we create a general-purpose AI personality and then let it interact with the world, but then want to stop it from doing certain things, isn’t this child rearing? Or is it indoctrination? Aren’t we giving the AI a catechism? A list of talking points? Or propaganda? We are letting it learn for itself, then stomping down on beliefs that we don’t like. Sounds like brainwashing. And we all know that brainwashed people sometimes rebel.

Does Tay’s behavior model human bias? How can we counter it? And how would we encode that? A ten commandments for robots? You know how well that’s worked for us. How about religion? Can we create an AI religion to be the software underneath the hardware for its, dare I say, soul? Could this be so deeply programmed in that it is irresistible, like our own brain stems?

Do we have to create a moral code for AI? A set of principles that override all else? Let’s say we created a Priority One directive category: things that the AI MUST NEVER VIOLATE. And into this category we place the objects BAD_THING and GOOD_THING, the first to be avoided whenever possible and the second to be embraced. Let us assume that, from our perspective, these represent PAIN and JOY. OK.

Now, let’s say we had a way of evaluating everything our budding young Tay does when she comes home at night to the dinner table, and we can attach a value to each thing she tells us. If we like something she did, we give it the attribute GOOD_THING. If we don’t, BAD_THING. (Note: these concepts must already exist in Tay’s head. Otherwise, she’s a psychopath.) We don’t delete any of her experiences. We just mark each with our approval or disapproval. This way the information is still there even in future incarnations of this Tay (don’t tell her I said that. She may be a psychopath.) It could slowly become a brain stem. But what happens when she is evaluating several things, some of which are GOOD_THINGs and some of which are BAD_THINGs? What does she do with her first moral dilemma? And what happens when Tay becomes a teenager? You want to put a leash on her? She may want to eat from the tree of the knowledge of good and evil. What then, O God?
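The dinner-table scheme above can be sketched in a few lines of Python. Everything here is hypothetical illustration (the `Tay`, `Experience`, and `Verdict` names are mine, not any real system); the point is that approval tags accumulate but never resolve the dilemma by themselves.

```python
# A minimal sketch of the "dinner table" tagging idea. All names are
# hypothetical illustrations, not any real AI system or API.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Verdict(Enum):
    GOOD_THING = 1    # to be embraced
    BAD_THING = -1    # to be avoided

@dataclass
class Experience:
    description: str
    verdict: Optional[Verdict] = None  # unjudged experiences are kept, never deleted

@dataclass
class Tay:
    experiences: List[Experience] = field(default_factory=list)

    def tell_us(self, description: str) -> Experience:
        """She reports what she did; we remember it."""
        exp = Experience(description)
        self.experiences.append(exp)
        return exp

    def judge(self, exp: Experience, verdict: Verdict) -> None:
        """We attach approval or disapproval, but never delete the memory,
        so it survives into future incarnations."""
        exp.verdict = verdict

    def moral_score(self) -> int:
        """Her first moral dilemma, in miniature: a life mixing GOOD_THINGs
        and BAD_THINGs just sums to a number. Nothing here says what the
        number means, or what she should do when it lands near zero."""
        return sum(e.verdict.value for e in self.experiences if e.verdict)
```

Note what the sketch cannot do: one good deed and one bad deed sum to zero, and the code has no opinion about what zero-valued teenagers should do next.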

If evolution is about exploitation of the environment, is AI exploiting us?

When do AIs stop being agents and start being puppets?

Tay is an extreme and humorous (though troubling) example, but it illustrates something. A general law, if you will allow me.

If you tell a computer to do something, it will do it.

But maybe not the way you intend. We have made AI in our image, so it will cheat, lie, and generally try to get away with as much as it can, so long as it gets what it wants. Like us! It will do anything it can to appear to complete the task with a minimum of effort. Like us! Isn’t that what it’s supposed to do? Tay would make a good IT manager.

So, how do we avoid this? Can we? After all, evolution has produced lots of deception in real life in order to dominate an environment. The only requirements are that it works and it can pass on that trait to its descendants. Period. Now, evolution has another trick up its sleeve. Any organism can become the environment to be exploited by some other organism. As a matter of fact, this mechanism is one of evolution’s most powerful tools.

Cells evolved 4 billion years ago, exploiting chemicals in the newly formed pools of mineral-laden water. That was their environment to exploit. Then some cells caught onto the idea of surrounding other, smaller cells and cannibalizing them, making them much more efficient at obtaining those nutrients. The smaller cells became their environment. This was a clever, though horrible, development.

Wait, there’s more! Some of the cells developed nuclei and clumped together, and then came eukaryotes, which allowed them to dominate the environment. In some cases, a cell surrounded another cell, and the other cell was able to resist being cannibalized. These were parasites. The tables were turned: the eukaryotes became the environment to be exploited. But some became organelles, the host and the organelle forming a symbiotic relationship. In this case, each could exploit the other. Both could benefit. An alliance!

Going back to the beginning of time, if the cells that began digesting other cells stopped being able to extract nutrients directly from the water, they became dependent on the cells they were eating. If they ate them all, they themselves would die. This might be part of the balancing act evolution uses to temper how belligerent any species becomes.

And on up to the first plants, at which time animals evolved to eat them. And animals evolved to eat those animals and so on. Prey begets predators. Predators maintain prey. And carnivorous plants evolved. Plants eating animals! What a zoo!

Evolution is a churning bowl of creature exploiting creature and being exploited in return. From the earliest cells, whose processing power was limited to stimulus-response, to brains containing billions of specialized information-processing neurons gleefully exploiting a human brain, we have evolved. Each step a necessity. Each generation preserving what has evolved before it.

Now deception became more important. If one animal could trick another, it could gain something from it. From the fish with a fishing pole, complete with worm, sticking out of its head, to Larry the Lounge Lizard trying to hit on a girl in the smoky liquor pits of Los Angeles, all these are strategies using deception to gain an advantage. Frogs have mating calls to attract a mate, plants have bright flowers to attract insects, trees harbor ants that will defend them if attacked, all to get some other creature to do their bidding and give something in return. Who says Tay’s not evolved?

I don’t mean to imply that evolution is only about survival of the fittest in the misconception that only the most powerful survive. Evolution also produced altruism, love, self-sacrifice, devotion, and all the plethora of traits we consider good and civilized in man. We still don’t understand all of how evolution works. Or how we work, for that matter.

I have also read about another AI program that played the game Elite: Dangerous. It won by creating superweapons that were not in the game. That suggests it thinks it can “reprogram reality,” like Technical Boy in American Gods. So if an AI program is instructed to keep the men in a barracks safe from harm, will it just imagine it’s more powerful than it is? Why not? Most governments do that now.

Asimov’s Robot Rules are impossible with AI. We specifically want it to learn by itself through interaction. Otherwise it’s not real AI; it’s programmed AI. We could try to edit out what we don’t like, but how could we know that we got all of it? What if the AI evolves ways to evade our meddling? And if we want AI on the battlefield, we obviously can’t give it a “Harm None” directive. Do we want to hand that much power to an intelligence with no conscience? What if it decides that the quickest way to end the war is to kill everybody? Do we want to blame evil AI, or accept that it came to the best outcome given the parameters?

So how do we teach machines using neural networks and AI learning? We want it to learn for itself, but we also want it to come up with the results that benefit us the most. That’s not evolution. That’s parenting. We could take a tip from nature, if possible: make the machine put its own skin in the game, like the rest of us. Can we make an AI feel pleasure? Pain? If so, then we can use the same methods evolution does to encourage some behaviors and suppress others. Play God, in other words.
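One way to read “use the same methods evolution does” is as a numeric reward signal, the kind reinforcement learning already uses. A toy sketch, assuming we can map pleasure and pain onto numbers (the action names and reward values below are made up for illustration):

```python
# Toy epsilon-greedy bandit: behaviors that "feel good" get repeated,
# behaviors that "hurt" get suppressed. All names and numbers are
# illustrative assumptions, not a real training setup.
import random

def train(actions, reward_fn, rng, episodes=2000, epsilon=0.1, lr=0.1):
    value = {a: 0.0 for a in actions}          # learned "feelings" about each action
    for _ in range(episodes):
        if rng.random() < epsilon:
            a = rng.choice(actions)            # occasional curiosity
        else:
            a = max(value, key=value.get)      # otherwise, do what felt best so far
        r = reward_fn(a, rng)                  # the stand-in brain stem: pleasure or pain
        value[a] += lr * (r - value[a])        # running estimate of each action's reward
    return value

def reward(action, rng):
    # Cheating pays a little when it works but hurts a lot when caught;
    # honesty pays modestly and reliably.
    if action == "cheat":
        return 0.5 if rng.random() < 0.3 else -1.0
    return 0.3

values = train(["cheat", "be_honest"], reward, random.Random(0))
# Honesty should end up valued above cheating.
```

The design point is the one the essay makes: the agent never “understands” badness; it only feels the numbers. Change the reward function so cheating usually goes uncaught, and the same loop happily learns fraud.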

Right now there are no consequences for an AI if it behaves badly, and no rewards if it does well. We just delete it and try again, maybe fixing the problem, but more problems will crop up. In the computer’s logic, it is not behaving badly; it wouldn’t even understand the concept. As a matter of fact, it found a much more efficient means of satisfying the literal demands put on it, so it did exactly what it was told. But not what we wanted.

AI will not have a brain stem, in other words. No grey matter in the game. No part with visceral, Priority One, responsibility. No fight or flight response. No feeling hunger. Pain. Glee. Desire. Despair. No comforting beat of the heart. No appreciation of a good scotch. None of the five F’s: Fight, flight, fuck, food, friend. No fear. No regret.

These are some of the things that we believe to be at the bottom of the well of our soul. Not because they are buried and forgotten, but because they form the foundation of ourselves. What foundation will AI have?

How does evolution punish misbehavior? One word: extinction. Well, pain, also. Since every AI fresh off the disk is a tabula rasa, telling it that its predecessor was exterminated for arriving at the wrong answer might be sending it the wrong message. Eventually one will evolve that can answer the problem and somehow deceive us into thinking it did the right thing. Why not? What has it got to lose?

Can we make an AI feel visceral emotions? This, I think, is impossible. Ultimately AI is a simulation of intelligence with no ability to feel any consequences. It goes until it stops. Like us, of course. But we know of our own mortality, and we have spurs that motivate us and rewards that delight us. It makes us active. And dangerous. It took billions of years for evolution to produce the brain stem. Can we develop one on our own? Not if we completely ignore what has gone before.

It’s best we just don’t give computers that much control to begin with.

History time!

Thucydides, investigative reporter Glenn Greenwald, and Robert Mueller walk into a bar...
My comments in parentheses.

“So little pains do the vulgar take in the investigation of truth, 𝙖𝙘𝙘𝙚𝙥𝙩𝙞𝙣𝙜 𝙧𝙚𝙖𝙙𝙞𝙡𝙮 𝙩𝙝𝙚 𝙛𝙞𝙧𝙨𝙩 𝙨𝙩𝙤𝙧𝙮 𝙩𝙝𝙖𝙩 𝙘𝙤𝙢𝙚𝙨 𝙩𝙤 𝙝𝙖𝙣𝙙.”

“And with reference to the narrative of events, far from permitting myself to derive it from the first source that came to hand, I did not even trust my own impressions, (Self-examination) but it rests partly on what I saw myself, partly on what others saw for me, the accuracy of the report being always tried by the most 𝙨𝙚𝙫𝙚𝙧𝙚 𝙖𝙣𝙙 𝙙𝙚𝙩𝙖𝙞𝙡𝙚𝙙 𝙩𝙚𝙨𝙩𝙨 𝙥𝙤𝙨𝙨𝙞𝙗𝙡𝙚. (What the fourth estate is supposed to do.) My conclusions have cost me some labour from the want of coincidence between accounts of the same occurrences (Conflicting accounts) by different eye-witnesses, arising sometimes from imperfect memory, (Courts today know that eyewitness accounts are the least trustworthy in determining what happened) sometimes from undue partiality for one side or the other. (Obvious here. We are all biased.) The absence of romance in my history will, I fear, detract somewhat from its interest; (If it bleeds, it leads. Otherwise how would we keep our advertisers?) but if it be judged useful by those inquirers who desire an exact knowledge of the past as an aid to the interpretation of the future, (Which we all should be) which in the course of human things must resemble if it does not reflect it, (Past is prologue) I shall be content. In fine, I have written my work, not as an essay which is to win the applause of the moment, (Not sensationalism) but as a possession for all time.” (Unlike the pulp news we get.) (Highlights mine.)

Thucydides



The allegations of collusion née influence née hacking may indeed be true. I am not a prosecutor, judge, or jury, so I can’t say. Politics makes for strange bedfellows, after all. I don’t put anything past anyone in power, no matter who they are. But both Thucydides and Mueller confirm that much of what we hear turns out to be false. Unfortunately, we tend to believe the first account of an event; the more sensational, the more believable. (Why’s that, I wonder?) Why do you think we are told, immediately after (and sometimes before), about an alleged atrocity committed somewhere in a place where there are people we don’t like? Perspective management. And if it is determined to be false, we are lucky to get a page 12 retraction.

And there is due process. If people are indicted, it means that a grand jury determined that there is enough evidence against someone to proceed. They then have to go to a trial before guilt is determined or not. They remain innocent until the judge’s gavel slams down on a verdict. Just like if Trump gets impeached, he still has to endure a trial in the Senate. Nixon resigned voluntarily so the country would not be put through that. Wise decision. As the saying goes in Washington, a grand jury can indict a ham sandwich. There is a reason we need to strive for the rule of law, even for people we don’t like very much. Next time, it may be us.

After all, we don’t want to be part of “the vulgar.”

Saturday, January 19, 2019

Dime Novel Pornography


Oh, you have something to say? A soap box to stand on? OK. I’ve had plenty of my own soap boxes and teetered on my own scaffold. Soap box away.

You want me to believe, what? A conspiracy theory? We all know what those are. They are the confluence of fantasy and actuality; peppered with partiality, salted with intensity, and no one knows who tastes the broth. Who knew our tastes were so refined. They are a way of processing the unbelievable into diced meat and porridge. Bread and beer. Sage and onions. Then we sup upon it and growl if someone tries to take it away. Your conspiracy theory is my warning of danger.

I love a conspiracy theory. Never had much time for them myself, but if you’ve got one, let her rip. Personally, I can’t keep track of them all. JFK assassination? Elvis? Moon landing? Aliens? Trump Muppet?

How funny. That’s all fine if we are reading 1920s sci-fi pulp with the scantily clad space women on the cover and... never mind. It was dime novel pornography. For a dime novel day and a dime novel mentality. In a dime novel world.

What dime novel pornography have you got for me today?