A.I.

or

Asinine Intelligence

or

Artificial Ignorance

What are words? What is language? I don’t like starting with questions, but it’s necessary in this case to establish a clear definition of each concept.

In basic terms, words are spoken or written units of meaning, signs or symbols that stand for an object (cat), an action or state of being (run or feel), a quality (fast). Language is putting words together in a way that communicates something more than single units. It forms complex structures of thought: “The cat runs fast.”

I know, that’s obvious to a human who’s been listening to language since birth, and speaking it not long after. It’s not so obvious when we try to get a computer to use words and construct language.

Computers don’t think or learn the way humans do. In fact, computers neither think nor learn. Computers don’t have the necessary sensory input and feedback mechanisms to understand what any word means, and certainly no ability to string words together into language. Without an understanding of word meaning or of how to form complete thoughts, there is no language. Dogs are smarter than the most powerful computer in the world. Rover knows the meaning of many words he hears daily because they correspond to his experience of the world. These connections between the sound of a word and the phenomenal experience create meaning in a dog’s brain. Computers have no sensory input to link experience with the associated words and create meaning. Think about how reading a story can stimulate an imaginary experience in one’s brain, or conversely, how we can recall a past experience and recreate it with language.

It’s a huge challenge to get computers to do anything with language. To start with, they need to be fed immense quantities of data, words and phrases, along with massive amounts of statistical information on the likelihood of word sequences; that statistical soup is all the instruction they get for assembling sentences grammatically. Large Language Models (LLMs) then get “trained,” in part by being corrected by humans, in part by self-correcting algorithms. They’ve also been programmed to “learn” by searching the internet to collect samples for reference, for imitation, or to copy verbatim. Then, with brute-force processing power, they crank out a semblance of language that on occasion sounds much like a real person might have spoken or written it. The major difference is that a human knows the meaning of what they say; the computer doesn’t know dog shit from chocolate. There are still more complications. To avoid being too real, certain words and phrases must be censored. Some special words, such as shit, that we all use in ordinary, everyday language are not allowed. Some phrases, like “go get fucked,” are out of line. Some ideas, insert your favorite ism, deemed wrong get nixed. We wouldn’t want our computers to be offensive.
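To make the trick concrete, here’s a toy sketch in Python, my own invention and nowhere near production scale: a tiny bigram model that generates text purely from counts of which word follows which. The corpus and the names are made up for illustration; the point is that meaning never enters into it.

```python
import random
from collections import defaultdict

# A tiny stand-in for the "immense quantities of data" an LLM ingests.
corpus = "the cat runs fast . the dog runs . the cat sleeps .".split()

# Count which words follow which: raw word-sequence statistics, nothing more.
followers = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev].append(nxt)

def generate(start="the", length=8):
    """Emit words by repeatedly picking a statistically likely successor.
    No understanding anywhere: just counts of word sequences."""
    words = [start]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # e.g. "the cat runs fast . the dog runs"
```

Scale that up by a few hundred billion parameters and you have the gist: the output can look like language, but the machine is only ever shuffling probabilities.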

But for all their training, learning, self-censoring, and processing power, they’re totally lost with irony, satire, double meaning, with the subtleties of implication. Real humans put more into language than strictly literal meaning. We use words in ways computers can’t fathom. The way words sound, their rhyme, rhythm, assonance, alliteration: computers haven’t a clue. Consequently, the Generative Pre-trained Transformer (the GPT in ChatGPT) tends towards bland, homogeneous writing. It’s writing without style, without cleverness, without conviction. It’s not intelligence. It’s barely smarter than a toaster, if the word smart actually means anything in this context.
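One reason for the blandness can be sketched, assuming the usual temperature-scaled sampling; the words and scores below are invented for illustration. When the sampler leans toward the most probable next word, the surprising choices, the rhyme or the barb, get squeezed out.

```python
import math
import random

# Hypothetical next-word scores; words and numbers invented for illustration.
scores = {"quickly": 2.0, "fast": 1.5, "like lightning": 0.3, "hell-for-leather": 0.1}

def sample(scores, temperature=1.0):
    """Softmax sampling: a lower temperature piles probability onto the
    safest, most likely word, which is the recipe for bland prose."""
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    r = random.uniform(0, sum(weights.values()))
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            break
    return word

print(sample(scores, temperature=0.2))  # almost always "quickly"
print(sample(scores, temperature=2.0))  # occasionally something livelier
```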

GPT models keep growing, keep using more processing power, and yet they cannot improve over time no matter how much power, training, or correction is pumped into them. The weaknesses of LLMs and GPTs are cumulative as they continue to indiscriminately scrape the internet, sucking up an ever-increasing amount of their own mediocre output (mediocrity squared, not a good recipe). For all the previously stated reasons, it’s impossible for these programs to improve. The downward spiral is the result of a lack of understanding of words and language by the computers, and by the programmers. There is no future in which computers write anything with insight, complexity, or competence.
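That feedback loop is easy to simulate under toy assumptions (the starting vocabulary below is invented): fit a word-frequency model, sample a corpus from it, refit on the sample, repeat. Watch the rare words vanish generation by generation; that’s mediocrity squared in miniature.

```python
import random
from collections import Counter

# Invented starting vocabulary: a few common words, a tail of rare ones.
dist = Counter({"the": 500, "cat": 100, "runs": 100,
                "assonance": 5, "alliteration": 5, "irony": 2})

def next_generation(dist, sample_size=500):
    """Sample a finite corpus from the current model, then refit on it.
    Rare words tend to miss the sample and are gone for good."""
    words, weights = zip(*dist.items())
    return Counter(random.choices(words, weights=weights, k=sample_size))

for gen in range(6):
    print(f"generation {gen}: vocabulary = {sorted(dist)}")
    dist = next_generation(dist)
```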

Now, I’m going to spiral down with even more questions. Who really believes A.I. is smart? Why is so much time and energy being wasted on making computers do what people already do fairly well, and almost effortlessly? What’s the point of trying to make computers more like humans (emotional/irrational)? We have over eight billion of us already, ain’t that ‘nuff?

Understand more about: LLM & GPT

Too much hype makes me suspicious: AI is Overrated

More reasons for the decline: Degeneration

Here’s a little more confirmation bias: Futurism
