Chatbots: data-driven manipulators or our personal daemons? (Links 5)
Plus: social media goes paid-for, and recommender algorithms are in the dock.
This week's newsletter: things are getting weirder thanks to Microsoft's new AI chatbot, but will AI make us more productive and more powerful? If we thought data and digital ads could manipulate human behaviour, what might chatbots like Sydney accomplish? Is social media going paid-for? And: recommender algorithms in the dock.
Things are getting weirder, and fast.
A few early testers have been given access to Microsoft's cross between OpenAI's ChatGPT and its Bing search engine. Able to surf the web, it is clearly far more potent than ChatGPT, as Ethan Mollick eloquently demonstrated in this example, where he instructed it to write something using Kurt Vonnegut's writing advice. Or see this example for brand people; or this one for academics. Astounding!
But it got weird too, as this interaction with the New York Times's Kevin Roose showed: the bot revealed its secret real name, Sydney, and seemed to fall obsessively in love with Roose.
The art of data-powered chatbot persuasion and manipulation
Regarding what these developments may portend, L. M. Sacasas has written an intriguing piece about the ELIZA chatbot of 1966. ELIZA relied on the techniques of Rogerian psychotherapy, in which the therapist echoes the patient's statements back to them.
ELIZA's creator, the MIT computer scientist Joseph Weizenbaum, was shocked by people's reaction to his bot. Even when they knew it was a chatbot, knew how it had been made, and had heard his disclaimers that it was not a real therapist, they shared intimate details with it.
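The trick is strikingly thin. Here is a minimal illustrative sketch in Python of the Rogerian reflection technique; this is an assumption-laden toy, not Weizenbaum's actual program, which used a much richer set of ranked keyword and decomposition rules:

```python
import re

# Toy ELIZA-style "Rogerian reflection": swap first and second person
# in the user's words and hand them back as a question. Illustrative
# only; the real 1966 ELIZA used far more elaborate pattern rules.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine", "mine": "yours",
}

def reflect(phrase: str) -> str:
    """Flip pronouns so the statement can be echoed back."""
    words = phrase.lower().rstrip(".!?").split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(statement: str) -> str:
    """Echo the patient's statement back in therapist style."""
    match = re.match(r"(?i)i feel (.+)", statement.strip())
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    return f"You say {reflect(statement)}. Tell me more."

print(respond("I feel nobody listens to my ideas"))
# -> Why do you feel nobody listens to your ideas?
print(respond("My mother is always angry with me"))
# -> You say your mother is always angry with you. Tell me more.
```

A handful of string substitutions is enough to produce the illusion of attentive listening.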
All of which prompts Sacasas to wonder whether: “People found it difficult to resist the illusion that they were being heard and attended to with a measure of care and interest?”
Sacasas fears that many lonely people in our society will become attached to these tools, and he worries about those with mental health problems engaging with them. Just as important is what he thinks may happen in the realm of prediction, persuasion and manipulation.
Like me, Sacasas has been sceptical about the grand claims made for digital disinformation campaigns waged via bots and advertising. Just this week, I reread Evgeny Morozov's takedown of Shoshana Zuboff's The Age of Surveillance Capitalism. Much of his critique still holds, in particular his point about how confused and imprecise Zuboff's theorising is.
But Zuboff's broader point perhaps comes into sharper focus as Sacasas writes:
Another sobering possibility arises from the observation of two trajectories that will almost certainly intersect. The first is the emergence of chatbots which are more likely to convince a user that they are interacting with another human being. The second is the longstanding drive to collect and analyze data with a view to predicting, influencing, and conditioning human behavior.
…there seems to be a world of difference between a targeted ad or “flood the zone” misinformation on the one hand, and, on the other, a chatbot trained on your profile and capable of addressing you directly while harnessing a far fuller range of the persuasive powers inherent in human language.
Your daemon, your muse
Related: Yes, shit is getting weird. But, writes Henry Oliver, that's a small price to pay for what we will all soon have: our own daemon (Tyler Cowen's phrasing):
One good analogy for the way these bots work is as a familiar or a daemon. There is something deeply faustian about them. You have to accept their inaccuracies in order to unleash their benefits.
Magic has not been a part of everyday life since the seventeenth century, so it’s no wonder we are a little out of practice. But the basic idea is familiar. Some people will do remarkable things by using this strange magic-like technology that many others simply don't understand. Inevitably, therefore, much of the reaction will be dismissive, critical, or hostile. Roose reported on Sydney without quite realising, I think, the extent to which he created her hallucinations.
So the hallucination is the point.
You can now have a daemon, should you so want, who can inspire and incite you. It can be your Socrates, your friend, your sparring partner, your tutor, your muse.
The trend to watch: Paid social media
Facebook announced plans this week to introduce a premium verification tier, sparking fears that this is yet another blow to digital advertising. Paying users will get more visibility in search, comments and recommendations.
Facebook's offering differs from Twitter's: for now it is only a test in New Zealand and Australia, and verification will be matched against government IDs rather than Twitter's slapdash approach of relying on payment methods.
I'm sceptical: as I pointed out last week, take-up of the Twitter scheme has been super low. But if these schemes succeed, they may well impact advertising.
Verification of social media users is, however, long overdue. Banks have to abide by Know Your Customer (KYC) rules, yet social media companies have been allowed to get away with providing anonymous users with publishing platforms.
Of course, if all social media users were required to identify themselves via a government ID, it is arguable that some of the harms the Gonzalez case (see below) seeks to rectify would be mitigated, since third parties would run a greater risk of being identified or blocked when publishing harmful content on platforms like YouTube.
A big potential deal: Section 230 immunity for recommender algorithms under threat in Gonzalez v. Google
It has not received much coverage, but an extremely important case is before the US Supreme Court. At issue is whether Section 230(c)(1) of the Communications Decency Act, which states that:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
…still applies (and therefore immunizes online platform operators) when their websites make targeted recommendations of user-provided content.
So the case turns on recommender algorithms.
If the court were to agree with the plaintiff, it could at a stroke make algorithmically curated timelines (the default setting on Twitter and Facebook, and the only setting on Instagram, TikTok and the soon-to-launch Artifact) too risky a business model.
Nor is this as straightforward a case as it seems: take, for example, the concept of “amplification of content”, as this excellent post makes clear.
For a deeper dive, listen to this podcast on the case by Tech Policy Press.
Related: Jonathan Stray has published a paper setting out a methodology for embedding editorial values in news recommender algorithms.
Tangentially Related: Alice Evans has a fascinating post about Sexual Competition, Social Media & Algorithms.