Artificial Intelligences, at least those like ChatGPT (Generative Pre-trained Transformer; the 'Chat' part is a stand-in for 'Inane Internet Chatter'), are parrots.
Oh, they have a clever name for it now (lipstick and pigs, you know), but that's all there is to ChatGPT. It is like the 'Yes, and...' routine of improv theater masquerading as intelligence.
You know what I mean? You are on a stage with a stranger and you start an improv comedy routine. You might say something like, "I heard you went to the doctor today..." and your partner says, "Yes, and... he told me I had a severe case of Internet Naivety." You say, "Yes, and... last week my doctor told me I had Latent Internet Cynicism." "Yes, and... maybe we shouldn't be on the same stage together..." And so it continues, each of you spitting out whatever comes to mind.
That's all ChatGPT is: Bad improv.
Seriously, all it does is take the symbols (words) you input as a 'question' and use its vast catalogue of Internet chatter and Social Media fallacies to determine what the next word most likely should be. The ‘P’ can also stand for ‘Predictive.’
ChatWTF, or whatever Generative Poop Talker they come up with next, is just an actual implementation of John Searle's Chinese Room thought experiment, which he developed in 1980.
In it, a man is in a room with a dumbwaiter door through which he can pass and receive messages. He has a vast encyclopedia at his disposal and infinite time. A message comes through written in Chinese, a language he can neither read nor understand. He sees merely a sheet of paper with scrawls and scribbles on it.
What he can do is use his vast store of instructions to process the symbols on the paper and assemble a set of like symbols by those rules. It’s like using an Erector Set to build things, except you still don’t know what those ‘things’ mean, if anything.
He looks up the squiggly scratches on the message and follows the rules associated with them. With each squiggle, he walks across the room to a particular shelf, takes the book which includes this squiggle, looks it up, and does whatever the entry for that symbol tells him to do. He follows a program, in other words.
Eventually, he assembles a string of equally squiggly scrawls on another piece of paper and passes it through the dumbwaiter. He doesn’t even need to know that those squiggles are characters in the Chinese language.
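The room's routine, blind symbol lookup with no comprehension, can be caricatured in a few lines of code. Everything here is invented for illustration: the rule book, the messages, and the `chinese_room` function are a toy, and a real system computes statistics over billions of symbols rather than matching exact strings.

```python
# Toy "Chinese Room": the operator blindly follows lookup rules that
# pair input squiggles with output squiggles, understanding neither.
# The rule book below is made up for this example.
RULE_BOOK = {
    "什么是你的名字？": "我的名字是Roboto。",      # "What is your name?"
    "你是说Roberto吗？": "不，我是说Roboto。",     # "Did you mean Roberto?"
}

def chinese_room(message: str) -> str:
    """Return whatever string the rule book pairs with the input.

    No parsing, no meaning: just matching one set of symbols
    to another, exactly as the man in the room does.
    """
    return RULE_BOOK.get(message, "？")  # unknown squiggles get a shrug

print(chinese_room("什么是你的名字？"))
```

The point of the sketch is that the function produces a sensible-looking Chinese reply while containing no understanding of Chinese whatsoever; the "intelligence" lives entirely in whoever wrote the rule book.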
Outside, a person who actually speaks Chinese reads it. Not only does it make sense, it is a reasonable response, for the thing originally passed through the door was itself a message, such as a question.
For instance, the following is a possible conversation.
“What is your name?” comes the message in the dumbwaiter.
“My name is Roboto,” comes a response milliseconds later, a meaningful response.
“Did you mean, ‘Roberto?’” replies the man outside.
“No, I meant Roboto,” generates the room.
And the conversation continues with but one cognizant participant.
The man in the Chinese room has no clue what the message, “What is your name?” or his response, “My name is Roboto,” means in Chinese or any other language. They are just squiggles on paper and squiggles in response.
(Side note, with spoilers for the movie ‘Ex Machina.’ Skip the next two paragraphs if you don’t want to hear them.
In the movie, Ava, the AI robot, is subjected to a Turing test by Caleb, a programmer invited to the isolated facility in the mountains by Nathan, her creator and jailer. She tricks Caleb into setting her free, whereupon she kills Nathan and imprisons Caleb in her old prison. At the end of the movie, Ava escapes in the helicopter that came to take Caleb away.

As she approaches the helicopter, the pilot speaks to her. For the first time in the movie, we see and hear what Ava sees and hears from her first-person perspective. In an alternate version of the scene, instead of speaking English, Ava hears the pilot speaking in beeps, bops, and computer sounds. Ava is just the woman in the Chinese room.)
ChatWHATEVER is the same thing. For instance, if you input, "What is ChatGPT?” it will search its vast library of pornography, Orange Donald jokes, and WokeBuster Elon’s tweets and determine which word most likely comes next, based on weights placed on those symbols and their relationships with each other, calculated from statistics and produced without thought, contemplation, or consideration of consequences, untempered by compassion, reason, devotion, ethics, or any other human trait. It’s all just maths.
The ChatBot might decide (calculate) that 'is' is the most likely next word, so it spits out 'ChatGPT (Yes, and) is,' and then goes from there. It might generate the output "ChatGPT... (Yes, and) ... is... (Yes, and)... a... (etc.,) ... predictive... software... which... estimates... the... next... likely…word... like... a... braindead... zombie... poking... its... finger... into ... an ... encyclopedia... until... it... generates... a... string... of... nonsense... that... gullible... people... will... adore." It does this in microseconds and it almost feels like it knows what it is saying!
If the algorithm spat up that the next words should be 'a light socket' instead of ' an encyclopedia,' that's what it would spit out since these 'words' are meaningless to the AI. Only the associations matter.
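That "next most likely word" loop can be caricatured in a few lines: count which word follows which in a corpus, then greedily emit the most frequent successor over and over. The corpus and seed word below are made up for this sketch, and real models use learned weights over tokens rather than raw word-pair counts, but the shape of the loop is the same.

```python
import collections

# Tiny made-up "training corpus" for illustration only.
corpus = ("chatgpt is predictive software which estimates the next "
          "likely word like a zombie poking an encyclopedia").split()

# Count which word follows which (bigram counts).
followers = collections.defaultdict(collections.Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def babble(word, steps=8):
    """Greedily chain 'the most likely next word', with no idea what any word means."""
    out = [word]
    for _ in range(steps):
        if word not in followers:        # dead end: nothing ever followed this word
            break
        word = followers[word].most_common(1)[0][0]  # "Yes, and..." the likeliest successor
        out.append(word)
    return " ".join(out)

print(babble("chatgpt"))
```

Swap in a corpus where 'a light socket' follows 'poking' more often than 'an encyclopedia' does, and that is what the loop emits instead; the words carry no meaning for the program, only the counts do.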
The sentence is like a pinball rolling down an inclined board filled with evenly spaced posts, each post a symbol, each symbol a 'word.' The pinball wizard plays his Bally table, dancing around the new AI arcade game, “Who Wants to be a Robot?” To what language those words belong is irrelevant.
Remember Tay? The Microsoft racist teenage girlbot? That's all it is except with more filtering. And ChatGPT even has an anti-conservative bias.
Of course it does.
Now imagine ChatDOD responding to a nuclear incident where all it has as ‘learning’ is the 1964 movie, ‘Dr. Strangelove.’ Love the bomb, indeed!
When all the roast meat at the feast is gone, when all the wine bottles are empty and all the bread boards are bare, then do the scraps on the floor look delicious.
We've dumbed ourselves down so much this looks like a 5-star restaurant.
In Frank Herbert's 'Dune,' computers did not become sentient and did not learn how to paint a Van Gogh like they did in Brian Herbert's books, which should be renamed 'Dune: Duds.'
Frank Herbert specifically stated that the thinking machines were tools used by a minority to control the majority by making them incapable of functioning on their own: Atrophied Intelligence.
The Mind. Use it or lose it.
But then people started to believe the machines 'took control,' somehow. They waged a war on the thinking machines, which they believed had enslaved them, and made it a part of their religion:
“Thou shalt not make a machine in the likeness of a human mind.”
They totally missed the point. It was not the machines, but the men who controlled the machines that were the tyrants.
Their rage was misplaced. They were being abused and controlled by a small cabal of psychopaths at the top of society and the technocrats who worship them, not by malicious machines. The few at the top used the machines to keep the plebeians at the bottom fighting with each other. Divide and Conquer: Technocrat Edition.
Actions have consequences. If you can arrange it so you only experience the positive consequences while dumping the detrimental ones on everyone else. Well, now. That’s fascism.
Sound familiar? They were that dumb in the Dune world, too. The Sludge Must Flow!
What do you think?