One of the most trenchant criticisms of large language models (LLMs) like ChatGPT is that, because they are simply trained on copious amounts of data found on the Internet, statistically making correlations, they are mere "stochastic parrots" of language with no real reasoning ability.
If this is correct, critics point out, several problems follow. For example, since the data the models are trained on are not curated, they are bound to regurgitate society's dominant biases. Google's Gemini recently gave an apt demonstration of this problem, angering many.
When Washington Post columnist Megan McArdle asked Gemini to write something condemning the Holocaust, it did so without hesitation. But when she asked it to write something condemning the Holodomor or Mao's Great Leap Forward, it pointed out how complex history is and then contextualised what took place.
South Africans have long been aware of a similar issue. For years, Google Translate, which relies on the same machine learning techniques, has translated the Afrikaans word "boeremeisie" (literally "farm girl") as "peasant girl" in English, a reflection, perhaps, of the English-language corpus's unflattering portrayal of Afrikaners.
A more fundamental critique is that generative models like GPT-4 aren't truly intelligent in the way humans are. Their outputs may seem meaningful and well-formed, but this masks a lack of real comprehension and of grounding in reality. Instead of the intentional communication humans engage in, GPT-4 merely calculates the likelihood of sequences of words based on its extensive training data, navigating an immense "latent space" of potential sentences without grasping their significance.
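To make that mechanism concrete: at every step, a model of this kind assigns a probability to each possible next token given the text so far, and its output is sampled from that distribution. GPT-4's weights are not public, but a minimal sketch of the same idea, using the freely available GPT-2 model via the Hugging Face transformers library (the model choice and prompt here are illustrative assumptions, not anything from the column), might look like this:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Load a small, openly available language model and its tokenizer.
    # (GPT-2 stands in for proprietary models like GPT-4.)
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The quick brown fox jumps over the"
    inputs = tokenizer(prompt, return_tensors="pt")

    # One forward pass yields a score (logit) for every token in the vocabulary.
    with torch.no_grad():
        logits = model(**inputs).logits

    # Turn the scores at the final position into a probability distribution
    # and list the five most likely continuations.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(idx)])!r}: {p.item():.3f}")

Everything such a model "says" is drawn from distributions like this one; there is no further layer at which meaning is checked against the world.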
Models of Culture
Ted Underwood, who teaches at the School of Information Sciences and in the English Department at the University of Illinois, applies machine learning to literature. He agrees: LLMs are not human-like intelligences replete with subjectivity, desire and the ability to plan. They are better understood as models of something collective, namely culture, and are therefore replete with its biases.
Underwood says intelligence comes in many forms, some of which we share with other animals, such as the ability to plan ahead, which LLMs cannot yet do. But language is something that, until now, has been uniquely human. Perhaps language is even more distinctly human than intelligence as such.
François Chollet, a deep learning pioneer at Google and the creator of the Keras deep learning software library, agrees. He points out that we have a bias when we think about cognition: we focus on the brain. Yet he and many cognitive scientists now believe that the brain is not the only place where thinking happens.
The environment we find ourselves in is vital for thinking, too. Our abilities cannot be explained by a single disconnected brain learning on its own within one lifetime. As evidence, Chollet points out that our intelligence is greater than that of the great apes, but not so much greater that it accounts for the vast difference in our capabilities. So, what explains that difference? It's our culture, language and civilisation, says Chollet: the vast scaffolding on which we rely and which allows us to live elevated lives.
Rage With The Machine
Culture and language represent thousands of years of accumulated knowledge, skills and mental models of the world, says Chollet. When you are born into this system, you don't have to reinvent the wheel. It follows that models of the world, including models of language and culture, can be important tools for thinking in their own right, even if they are not intelligent in the way humans are.
That does not mean LLMs aren’t dangerous. Not all aspects of human culture are beneficial, and culture extrapolated through technology can lead us to dark places. Underwood highlights this by referencing movements like QAnon, illustrating how humans are prone to creating alternate realities. When bolstered by the power of sophisticated deepfakes, these fabricated worlds could become overwhelmingly convincing.
Worlds Apart
Speaking of alternate realities: Vernor Vinge, a visionary in science fiction, passed away two weeks ago, leaving a profound legacy. More than any other science fiction writer, Vinge grappled with the dilemmas artificial intelligence may present us with. Borrowing from cosmology, he popularised the concept of "the singularity" with respect to technology: a pivotal event brought about by the emergence of superintelligent entities, resulting in unforeseeable changes to human civilisation.
The very first piece of science fiction Vinge ever sold was a short story called "Apartness", published in the British literary magazine New Worlds in 1965. It tells the story of a post-apocalyptic Earth on which the protagonist discovers an isolated group of people in Antarctica.
Mistaking them for "natives" of the frozen continent, the protagonist describes them as hairy and of indeterminate race, primitive in every physical sense and inferior in their tools. Later, the protagonist discovers two shipwrecks, one clearly marked as the "Nation". The other's name has nearly been worn away by the action of ice and water on its hull, but one can just about discern "Hendrik Verwoerd".
PS: Culture is both misunderstood and underrated as a force in society. In a wonderfully erudite and wide-ranging lecture delivered in London last week, Terry Eagleton explored the multifaceted nature of culture. It’s well worth a listen.
I write a bi-weekly column on AI for the Afrikaans-language Vrye Weekblad. This piece first appeared there.