Roger A. Shrubber
Well-Known Member
I'm not that concerned with whether or not they ever achieve consciousness... An essay that describes in plain language what Large Language Models are, their limitations (they are not intelligent; that is a misnomer), the social implications of these artificial language models, and some interesting observations of trends and their implications.
You Are Not a Parrot
And a chatbot is not a human. And a linguist named Emily M. Bender is very worried what will happen when we forget this.
nymag.com
It starts with a fable.
Say that A and B, both fluent speakers of English, are independently stranded on two uninhabited islands. They soon discover that previous visitors to these islands have left behind telegraphs and that they can communicate with each other via an underwater cable. A and B start happily typing messages to each other.
Meanwhile, O, a hyperintelligent deep-sea octopus who is unable to visit or observe the two islands, discovers a way to tap into the underwater cable and listen in on A and B’s conversations. O knows nothing about English initially but is very good at detecting statistical patterns. Over time, O learns to predict with great accuracy how B will respond to each of A’s utterances.
Soon, the octopus enters the conversation and starts impersonating B and replying to A. This ruse works for a while, and A believes that O communicates as both she and B do — with meaning and intent. Then one day A calls out: “I’m being attacked by an angry bear. Help me figure out how to defend myself. I’ve got some sticks.” The octopus, impersonating B, fails to help. How could it succeed? The octopus has no referents, no idea what bears or sticks are. No way to give relevant instructions, like to go grab some coconuts and rope and build a catapult. A is in trouble and feels duped. The octopus is exposed as a fraud.
Her point is that we humans read what the machines say and interpret it as if there were a mind on the other side speaking to us with intention. But these LLMs have no context or experience in our world. They simply string words together using a stochastic model of how humans use language. When a model talks about love, its words have no meaning; it is just assembling words according to an algorithm. The meaning appears only when we read those words and overlay our own context onto them. Our minds imagine there is a mind on the other side, and there is none. The model is simply a parrot that repeats our language.
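To make the "stochastic parrot" idea concrete, here's a minimal sketch of next-word prediction as a toy bigram Markov chain in Python. Real LLMs are transformer networks trained on vast amounts of text, not bigram tables, but the core move (sample the next word from statistical patterns learned from prior text) is the same, and the toy version makes it obvious that nothing in the process involves meaning. The corpus string here is made up purely for illustration.

```python
import random
from collections import defaultdict

# A toy "stochastic parrot": predict the next word purely from counts of
# how words followed each other in the training text. The model never
# knows what "love" or "sea" refer to; it only tracks word adjacency.
corpus = "i love you . i love the sea . the sea is deep ."  # made-up example text

# Record, for each word, every word that followed it. Keeping duplicates
# means random.choice() below samples in proportion to observed frequency.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 8) -> str:
    """Emit text by repeatedly sampling a statistically likely successor."""
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:  # dead end: this word was never seen mid-corpus
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(generate("i"))  # e.g. "i love the sea is deep . i"
```

The output can look superficially fluent, and that's exactly the trap the fable describes: the fluency comes entirely from the statistics of the input text, and any "meaning" is supplied by the human reading it.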
I'm worried about the human tendency to implement technology with no forethought at all.
Does an AI need to be conscious to decide that humans are self-destructive vermin that need to be eradicated?
If we put it in charge of automated factories and there are unforeseen problems, will it be able to deal with them? Or will they escalate into something more?
We don't know... but that won't stop us from finding out the same potentially fatal way we've always found things out.