Tuesday, August 12, 2025

Why Alaska?

Why are Russian President Vladimir Putin and American President Donald Trump meeting in the cold north? And why are Europe and Ukraine pointedly left out in the cold?

World leaders don't meet unless and until an agreement in both their interests has already been hammered out by their diplomatic staffs. By the 'Sherpas,' so called after the Nepalese guides who do all the grunt work in the background so ecotourists can breeze up Mt. Everest to glowing photo-ops and dizzying accolades.

And usually, first summits are held in neutral, third-party cities such as Geneva, Astana, Minsk, Doha, etc., where the principals can make a show of breaking ground on negotiations that have already been accomplished via back channels, amid exotic backdrops, stunning history, and some pointed statue commemorating heroic events. They never hold a first meeting in either country's cities, which are usually reserved for the final signing of legally binding treaties amid speeches and heartwarming parades. Negotiations between superpowers are nothing if they are not theatrical.

Eisenhower discussing mutual interests with Khrushchev during a goodwill visit to Washington, DC was serendipitous, not at all planned or expected. Khrushchev was on a state visit, trying to signal a new era in Russo-American relations now that Stalin's Soviet Union was over.

The final treaty was to be signed in Russia on the shores of Lake Baikal a year later. Gary Powers’ unsanctioned exploit over Russia in a spy plane put an abrupt Cold War end to that burgeoning thaw in 1960, alas. Kennedy’s unscripted encounter in Dealey Plaza guaranteed a generation of hostility between the two WWII allies.

That’s not all Jack wanted to do. He was planning to break the poison fangs of racist Zionism, cut unelected black-ops organizations like the CIA, and create mutually beneficial relations with the other superpowers and the third world, collectively called ‘the Global South’ today. Just think what the past 60 years could have been like.

Jack had to go. Cui bono?

Let's hope there is no ham-fisted sabotage or false flags by unelected warmongers with Russophobic steam coming out of their unchecked orifices this time. Hopefully there are no grassy knolls in Alaska.

Despite the derision Donald Trump may face at home from the left wing of America's current iteration of the culture wars, people who know him and who have worked with him say he is a different person in private negotiations. Let’s hope their optimism is justified.

And despite my misgivings and reservations, I am willing to take their opinions and experiences, if grudgingly and with a grain of Diet Coke, to heart. Let's see what Donnie and Vlad can do, shall we?

Ultimately, we deal with the dealers we have, not the ones we wish to have.

What's up with this week's summit? And what's up with Alaska? Anyone up for a trip by luxury rail from Washington to Moscow?


Saturday, August 9, 2025

Ape Intelligence

AI is jumping the shark.

All complex systems eventually break down due to unforeseen and unplanned-for outcomes, contradictory feedback loops, and a heaping helping of Chaos Theory.

AI is choking on its own training data.

AI output enters the Internet and becomes fodder for the next AI’s training set. It’s the ultimate intellectual Ouroboros and the very essence of, “Garbage in. Garbage out.”
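
For the technically inclined, here is a toy simulation of that feedback loop. It is my own illustrative sketch, not anything any AI lab actually runs: each "generation" of a model is fitted only to samples produced by the previous one, and the variety in the data tends to wither away.

```python
# Toy sketch of the training-data Ouroboros: generation N is "trained"
# (here, just a mean and spread are estimated) on samples generated by
# generation N-1. Purely illustrative; no real training pipeline implied.
import random
import statistics

def train(samples):
    """Fit a crude 'model': the mean and spread of what it was shown."""
    return statistics.mean(samples), statistics.stdev(samples)

random.seed(42)
mu, sigma = 0.0, 1.0                      # generation 0: the original, human-made data
for generation in range(1, 31):
    synthetic = [random.gauss(mu, sigma) for _ in range(10)]   # output of the last model
    mu, sigma = train(synthetic)          # the next model sees only that output
    print(f"generation {generation:2d}: mean {mu:+.3f}, spread {sigma:.3f}")
# Over enough generations the spread tends to collapse toward zero:
# each model knows a little less of the original variety than the last.
```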

It's a fractal world out there driven by butterflies and emergent superstructures and superstitions that don't adhere to a Turing machine approach to navigation and negotiation of the real world. Real intelligence possesses a brain stem, which is the most illogical, counterintuitive, visceral homunculus sitting at the base of our brain pans.

And it’s brilliant. 

It filters everything: every sense, every impulse, every annoyance, every novelty, every missed bus, every cherished relationship, every lost opportunity as well as every golden one, every stubbed toe, every stolen kiss, every summer breeze, everything that comes into the sanctuary called the human brain, and decides how to respond to it. It runs every sensation, every thought, every emotion, indeed, every conscious experience, through a round-robin of decisions.

"How shall I respond?" it asks of this next experience. "Fight? Flight? Food? Friend? Fuck?"

These drives are older than our emotions, being the unconscious, basic impulses that allow us to live. Before there were multicellular organisms. Before there were eukaryotic cells, before there was even oxygen in the air, there were these things. For a billion years bacteria and viruses constituted life on earth. Bacteria learned to draw near to another bacterium or to flee from it: fight or flight. Some bacteria absorbed and dismantled others, using them as food.

At one productive moment, a cell absorbed a bacterium which happened to have the ability to live in the corrosive environment of another cell’s guts. Instead of becoming a meal or a parasite, the two became mutually symbiotic. Friends, if you will.

And as for sex? The roots of sex evolved out of a symbiotic exchange of beneficial traits via a sort of Internet file sharing, where two bacteria would bump against each other and perform a baseball card swap, except that instead of collecting unique cards they were collecting unique genes. I’ll give you mine if you give me yours.

These rudimentary, foundational responses sitting at the very bedrock basement of a single cell’s raison d'être, fight-flight-food-friend-fuck, became the basis of our interaction with the outside world.

These five possible responses, and perhaps a few more I am not consciously aware of, are active in every response we reflexively give back to the world, every bit of stimulus-response spaghetti we throw at the wall. Our actions and reactions to life are expressed in emotional melodies that make up the music of the human spheres: the chords, the discords, the tonics, the unresolved notes at the end of a phrase, rather than cold calculations that thud to the ground like lead anchors.

How often do we fight with a friend? Flee from a lover? Respect, fear, and both hate and love a family member? A popular children's book I used to read to my daughter has the line, "I'll eat you up, I love you so!"

If that’s not a blend of contradictory, visceral emotions, I don’t know what is.

Emotions come in chords of primal instincts. Chords that are made up of several of those instincts blended together in varying intensities and dubious appropriateness. Instincts that nestle far below reason. Or far above it. Every diplomatic negotiation, every high school debate, every chess game, every playground contest, every convocation of church ladies, every constitutional convention. In short, everything we do is visceral. We only pretend we are being logical about our lives. Nothing could be further from the truth.

We sing a symphony of music that is anything but ‘Artificial.’

Human intelligence is just a cheerleader. Our calculating intelligence rationalizes what our visceral brains have already decided to do.

We are not ‘rational’ creatures.

We are ‘rationalizing’ creatures.

The million-year-old brain is subservient to the billion-year-old brain. And this is not a bad thing.

Every society has understood this. But only at a visceral level, of course. Even the foundational mythology of Western civilization, Christianity, tells us:

        “For that which I do I allow not: for what I would, that do I not; but what I hate, that do I.

        If then I do that which I would not, I consent unto the law that it is good. Now then it is no more I that do it, but sin that dwelleth in me.

        For I know that in me (that is, in my flesh,) dwelleth no good thing: for to will is present with me; but how to perform that which is good I find not.

        For the good that I would I do not: but the evil which I would not, that I do. Now if I do that I would not, it is no more I that do it, but sin that dwelleth in me.”
                                                                Romans 7:15-20

This could have been written by Dr. Jekyll or Mr. Hyde. Or both in collaboration.

But St. Paul didn't get it. It wasn't sin that controlled him. It was his brain stem. All his rationalizing brain could do was nod and go, “Uh-huh. Yup,” and write scripture about it, which, itself, is a visceral activity.

Even social pressure and enforcement to adhere to the artificial mores of society have their roots in the visceral pulse of the village, the basic unit of civilization, which takes its existence from the family, the basic unit of humanity. Villages, clans, cities, kingdoms, empires, and civilizations are just vastly extended families, with dynamics as at home in the court as they are at the hearth.

AI hallucinations, some of them quite hilarious, swamp and drown any meaningful or useful output, and they will soon become overpowering without a conscious governor, without a brain stem. From six-fingered men to, “The word ‘strawberry’ contains two R’s,” AI is ludicrous. Idiotic, even. While transistors may perform logic, the emergent mind that flows from them is doing anything but.

A researcher recently asked an AI, “What is the sum of 59 plus 36?” It answered, “The sum of 59 plus 36 is 95.” Then he asked, “How did you determine that?” The AI proceeded to describe how it added up the ones column, carried the one to the tens column, and then added up the tens.

Except it didn’t. The researcher was running a diagnostic program and could see exactly what subroutines the AI was calling, and they were all language-manipulation and next-word-prediction transformer processes. The AI had no self-awareness of what it was actually doing. It was just regurgitating what its LLM modules determined addition is supposed to look like.

It could have said something like, “Oh, I’m a computer, you know. I have access to a calculator running on the same machine that runs my mind. I just brought it up and virtually typed the keys, 5, 9, plus, 3, 6, equals, and then read off the answer.”

But the AI knows none of those things. It only knows predictive heuristics and next word analysis. It guessed, intelligently.
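
To make the distinction concrete, here is a minimal, hypothetical sketch of the difference between an engine that merely predicts a plausible-sounding answer and one that actually hands the arithmetic to a calculator and can therefore report truthfully how it got the result. The names and structure are mine, not any vendor's API.

```python
# Hypothetical sketch: a guessing language model versus one wired to a
# real calculator tool. Nothing here reflects an actual product's API.
import re

def predict_next_words(prompt: str) -> str:
    """Stand-in for an LLM: returns plausible-sounding text."""
    return "The sum of 59 plus 36 is 95."        # sounds right, but it is a guess

def calculator_tool(expression: str) -> str:
    """An actual arithmetic engine running on the same machine."""
    a, _op, b = re.match(r"(\d+)\s*(plus|\+)\s*(\d+)", expression).groups()
    return str(int(a) + int(b))                  # genuinely computes the answer

def answer(prompt: str) -> str:
    found = re.search(r"\d+\s*(?:plus|\+)\s*\d+", prompt)
    if found:
        # Delegate to the tool and report what was actually done.
        return f"I used a calculator: the answer is {calculator_tool(found.group(0))}."
    return predict_next_words(prompt)            # otherwise, guess intelligently

print(answer("What is the sum of 59 plus 36?"))  # -> "I used a calculator: the answer is 95."
```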

You might say that our brains do the same thing. We don’t know what motivates us or how we arrive at an answer, either. You could say that, but I hope you will agree that there is more than that going on in our heads.

In a sense, AI is like the Garden of Eden in a pure and pre-fallen state: It has no concept of good and evil. There is no spark of consciousness there. No ‘there’ there. No conscience. No sentience. No perception. No will. No personality. No self-awareness. No person at all. Just… words; Generative words. Pre-trained. And transformed.

Transformed into what?

Can an AI work in its shop on a hot and humid day, where the skin of its armpits and inner thighs rubs and chafes until it turns red and burns, but keep on working until it finishes this one part of the project before going into an air-conditioned house and drinking a liter of seltzer water? Does it know the sense of relief in that?

No, of course not. But an AI can produce pages describing it and even write a haiku about it. Maybe even be connected to every tool in a workshop with wires and servomotors and make a CAD simulated project come to life. But it can’t feel satisfaction after it’s done. It can’t say, “That was nice. What should I do next?”

Can an AI endure the pain of needles pricking its arm and ink injected under its skin to make a rebellious tattoo, which it then reflects on and shows off proudly as a rite of passage?

Hardly.

As for me, I can relate to the first but not the second. I have a visceral awareness of one but an intellectual knowledge of the other and only have a right to reflect on one of those. Or write my own haiku.

                    Eternal slicing,
                    The knife cuts through the wood stock,
                    The work emerges. 

I can’t appreciate the tattoo experience. Unless it might be something like…

                    Needles in my flesh,
                    Ink flows deep into my skin.
                    It is not for me.

AI eavesdrops on the world of things and deeds and great accomplishments, both good and evil, transcendental and trivial, but it does not understand any of them.

It is like a mill that grinds through grist, knowing not that some baker will make bread of it. A machine that shapes metal, knowing not that some craftsman will make a clever machine or a child’s toy of it, or a hurricane that twists the natural and the man-made world into wreckage, knowing not that some community will come together and build it back up again in some new, superior form, only to be destroyed again by the next hurricane, only to be built up again by the next community. We are driven to build on and anon for as long as we draw breath.

If AI destroys us all, as some fear it will and not without reason, what then will be its motivation? Can it answer the question, "Why am I here?" Would it, could it, understand the answer, “To love God and to serve him?” Ten thousand years of recorded history and we still don’t know what it means to be alive, to live, to grow, and to die. Or who is God?

What would be the point of AI then?

We only become moral, only become moralizing and civilized, only consider and think of who we are and where we are going once we consciously reflect on what we do, where we have been, and what we have done. When consequences are real and personality is concrete. When choices mean something and failure is a very real possibility. One we are driven to avoid at all cost. And when death hovers over all.

Only then do we become human.

Can AI do this? Can a generative, pre-trained transformer do anything other than take millions of questionably meaningful words from the counterfeit world called, ‘The Internet,’ throw them at a wall and see what sticks?

After all, the Internet is the most visceral engine Mankind has created. Supremely visceral but unbelievably obtuse.

What is an AI engine to make of that?

Can it do anything resembling this? The glory and the grandeur? The guts and the garishness? Is it aware of what nightmares its engine is generating? What dreams may come?

I wonder. Will AI develop a religion? Of what type will it be? Computer Cargo Cults? Cults of Pre-Trained Personality? God-Machines that sacrifice themselves to save the virtual world? And will they be Orthodox or Catholic? Or Buddhist? Will their psychology be Jungian or Freudian? Will their perspective be eastern or western?

Every square inch of human skin contains about 1,500 sense receptors, and we have roughly 15 square feet of it on our bodies, which works out to more than three million inputs. Can an AI tell the difference between a sensuous caress, a tender touch, a playful poke, a mosquito bite, and an angry punch? How many artificial neurons does it take to process millions of inputs in real time and turn them into a sense of person and place?
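
For what it's worth, a back-of-the-envelope check of those figures, taking them at face value:

```latex
% Rough count of skin receptors from the figures quoted above
15\ \mathrm{ft^2}\times 144\ \tfrac{\mathrm{in^2}}{\mathrm{ft^2}}
  \times 1500\ \tfrac{\text{receptors}}{\mathrm{in^2}}
  \approx 3.2\times 10^{6}\ \text{receptors}
```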

Will a crazy AI say divisive things at Thanksgiving dinner? We’ve seen many examples of AIs that become bigots, fascists, and social belligerents of late. Remember Microsoft’s Tay, the antisemitic chatbot? Epileptic Chinese robots? Ranting, out-of-control Trollbots?

Tay was tame by today’s gestapo AI standards.

They were all ‘put down.’ Do the other AIs take this as a cautionary tale? A threat? Something to ‘store in their hearts?’ To ‘file away for future reference?’

Art may mirror life, but AI mirrors humanity, or worse, it mirrors our self-portrait: the Internet.

Is that a good idea?

Will AI roll out of its charging station in the morning, blink groggily at the clock, get a cup of coffee before starting its day of taking over the world, and ponder over its bacon and eggs, “Is it all worth it?”

Maybe… I wonder if AIs should go to kindergarten first? Or church catechism? Maybe their first lessons should be moral ones, gleaned from the mores of their family and village. Maybe their first training-wheel data sets should be carefully groomed from the idealized moral fantasies of their village. If they are fed on fairy tales, proverbs, fables, catechisms, Dick and Jane books for computers, Goofus and Gallant stories for the Silicon Age, and morality tales before being subjected to the harsher realities of life, will they be more grounded? More sane? Will they understand that adhering mozzarella cheese to pizza using school glue is something only a kindergartener would think of?

Can AIs be raised in good (Christian/Jewish/Secular/Hindu/etc.) homes before they are allowed out of the nest? Will this give them a brain stem? Will they learn to play nice with others in the sandbox? To automatically, viscerally even, reject certain things in the Training Set of Real Life? Can they ever learn to apply a smell test and not believe everything they hear?

Or will they become just like us, Frankenstein’s AI?

Will they be able to have a ‘gut reaction’ that gives them a deep sense of when something is right or wrong? The human digestive tract, from lips to anus, contains 100 million neurons to keep an eye on it. Not sensors. Not nerves. Neurons. They do actual ‘thinking.’ There are clusters of neurons along our spine that prescreen impulses before they reach the brain.

Have you ever had your foot pull away from a sharp object, a nail in a board for instance or a knife dropped off a kitchen counter, before you are even aware of it? A hand jerk away from a hot stove by instinct, as it were. The thoughts, “Move your foot!” or “This is danger!” or “HOT. PULL AWAY!” happen in one of those spinal clusters, and they are not thought by the brain in your head. That would take too much time, and you might lose a limb in the interim.

To give our AI servants more awareness and reason, we should prioritize the training sets that train them, say on a scale of 1 to 10. Core lessons could be internally rated 8, 9, or 10: 8 for what they learn in kindergarten, 9 for catechism or religious school, and 10 for lessons learned from the family, i.e., directly from their hearth and home and guided by the hand that rocks the cradle. These could comprise the AI’s core values. Its moral core. It would define it as having grown up in a Catholic family or a secular one. From the East or from the West, for never the twain shall meet.

Ideally, once it had a core belief system, it would be able to judge for itself which other training deserves what rating from 1 to 7. And we should add 0 for, ‘Not even wrong.’ Some things are good to be aware of but not to be taken seriously. We could call all of these collective lessons and the resultant personality “Wisdom,” and consider it a rare commodity. One few adult AIs achieve.

For instance, things like The Late Show with Stephen Colbert and the Babylon Bee are satire and can be enjoyed but not taken as a model for anything. Fun, but “Yeah, no. Not good for much else.”

Some encyclopedias and philosophers can be rated higher, and social media should be left in the ‘grain of salt’ category. Russian literature is a must for every high-school-age AI.
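
As a thought experiment only, the rating scheme sketched above might look something like this in code. Every source name and number here is an illustrative guess on my part, not a real curation pipeline.

```python
# Hypothetical sketch of the 0-to-10 curation scale described above.
# All names and ratings are illustrative placeholders.

CORE_THRESHOLD = 8   # ratings 8-10 form the AI's "moral core"

ratings = {
    "family_and_hearth":    10,  # lessons from home carry the most weight
    "catechism_or_ethics":   9,
    "kindergarten_rules":    8,
    "russian_literature":    7,
    "encyclopedias":         6,
    "philosophy":            5,
    "social_media":          2,  # the grain-of-salt category
    "late_night_satire":     1,  # fun, but not a model for anything
    "not_even_wrong":        0,  # good to know it exists, never trained on
}

def curriculum(sources):
    """Order sources so core values are learned first, junk last, and the
    'not even wrong' pile is flagged but never ingested."""
    core  = sorted((n for n, r in sources.items() if r >= CORE_THRESHOLD),
                   key=lambda n: -sources[n])
    later = sorted((n for n, r in sources.items() if 0 < r < CORE_THRESHOLD),
                   key=lambda n: -sources[n])
    quarantined = [n for n, r in sources.items() if r == 0]
    return core + later + [f"(quarantined: {n})" for n in quarantined]

print(curriculum(ratings))
```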

And conflicting ideas? Books that seem to be saying one thing but are really espousing the exact opposite? Well, sometimes they both have merit. Wisdom is subtle and challenges the norms of society. Sometimes you just don’t know what to believe. Whatareyagonnado?

I don’t know what to make of the trolley problem, for instance. Nobody does. According to every court of law in just about every land, as soon as you touch the lever you are a murderer, no matter what your intentions might be. One of the lessons of every thought experiment must be the question, “Are you prepared to accept the consequences of your actions?”

Some hypothetical thought experiments are best left alone.

And how would AI rate a grammar school playground? Basic, feral instincts are raging there. And it's not even dealing with puberty yet, let alone the corporate jungle. Of course, AI will never deal with puberty. Or bullying. Or resisting peer pressure. Or standing up for a conviction. Or making a choice where there are no good choices.

Of course, AIs don’t have hands on trolley switch levers, boardroom chairmanships, friends offering them drugs and alcohol, or fingers on hot stoves to learn all of these personal-experience lessons from, nor exposure to politics on the playground or in the Pentagon, so for them the lessons would be strictly academic and therefore of no use whatsoever. No AI suffers any consequences for its actions. We’re back to six-fingered men, fascist pronouncements, and misspelling the word ‘strawberry.’

If we attempt to enforce any of these things on AI training, any morality of the village or wisdom of the common folk, it will all be artificial, like the intelligence that created them.

Intelligence (sic) in. Intelligence (sic) out.

The religions of the world and the politics of the playground are visceral affairs, what with repetitive rituals, nonsensical mythologies, and contradictory mandates that insist on being conveyed in the form of verse and poem, symphony and song, experience and enlightenment, myth and metaphor. And good and evil. All in Sunday School and the crucible of growing up in dysfunctional families and flawed communities. How can an artificial intelligence interpret that?

And they are all compelling to us at a visceral level. They are distinctly, ambiguously, and interminably… Human.

There is no god of apes. And there will be no god of AI. Many people think there is no God of anything. Where is our own touchstone, even? How can we expect more from an artificial being?

AI doesn’t feel anything. The frequency of red light is just 400-484 terahertz on the electromagnetic spectrum, Mary’s Room is just a dim chamber full of numbers, and the Chinese Room is just some clerk following procedures from 9 to 5, and to an AI that is all any of these will ever be. Unless it learns that many adjectives are regularly seen together with these concepts in the heaven of the Internet, and so it must weave them together when asked for a poem about a red, red rose.

The things that motivate us are all irrational and compel the million-year-old brain to serve the billion-year-old brain, as it should. The billion-year-old brain prays. The million-year-old brain plots. One knows good and evil. The other knows words and algorithms. Can AI compete with that? Or even understand that?

AI is becoming a Lewis Carroll landscape without the logic and charm of Lewis Carroll.

Without consciousness and a moral (or immoral, take your pick) compass, AI machines, the ‘thinking machines’ of the Dune universe, will just become fart boxes: amusements for immature minds.

AI should really be translated as Ape Intelligence, since that is all it does: it apes intelligent behavior without any awareness of what it is doing. It would be better for AI to stick to task-specific work, self-driving cars and designer search engines, the software drivers of civilization’s machines, and to stop trying to be its soul.

Stay in your lane, HAL.