A sideways look at economics

I have a confession. It was Father’s Day recently and so, being the good son that I think I am, I wrote my father a card. I wanted to incorporate a pun in the message that was comic book themed to match his interests and ‘bad’ enough to achieve the level of dad humour he is notorious for. This, however, was a struggle, perhaps because I am not yet a father myself and so not privy to this kind of humour. My confession is that during this struggle I gave in to the temptation to consult Chat-GPT, and in the process triggered a moral dilemma.

“Have a MARVELous Father’s Day”. It was terrible. Compared to some of the alternatives, though, it was pretty good. “No villain stands a chance against your thermostat control”. Need I say more? Curiously, though, it did elicit a chuckle from its recipient, and in doing so proved that this humour is indeed still elusive to me (and likely many, many others). But it also prompted an awkward feeling in myself. Was I feeling guilty that I had outsourced part of what should have been a personal message of appreciation? Had the style strayed a little too far from my own, and was I worried about being found out? Or was it something else? Clearly, on some level it felt wrong.

By now, most of us will have experienced a similar kind of discomfort. Take for instance the frustrating realisation, mid-conversation with customer support, that there isn’t actually a human on the other end but an AI chatbot instead. It’s an odd feeling, as though someone has tried to fool you into thinking you were having a real conversation, which now feels uncanny and hollow. It can also come from receiving emails that bear the markings of AI: overly formal and devoid of the personality you would usually associate with the sender. It’s a phenomenon our brains have only had to face in the last few years, but one that is becoming more frequent.

It should come as no surprise that human interactions are increasingly being replaced by synthetic ones — be it chatbots, online reviews or even the driverless taxis that I saw on a recent trip to Washington DC. One of the most notable examples is the rise in bots on social media, and in particular on X where, despite a much-publicised pledge from Elon Musk to purge them, estimates suggest ‘non-human’ accounts are in the majority. Trends like these have led to the coining of the term ‘Dead Internet Theory’, which reflects the belief that a large and growing portion of online content is no longer generated by humans but rather by automated bots and AI models. The term captures the feeling that these shifts are leading to a kind of hollowing-out of the internet, in which it still feels busy on the surface but much of the underlying activity is automated and disconnected from genuine human interaction. The concept is exaggerated, of course: the entire internet isn’t dead. But directionally it is correct.

However, the fact that AI is replacing ever more of our human interactions doesn’t make me feel any better about my Father’s Day card. I think the main reason for this feeling is the impersonality of AI-generated text, drawn as it is from vast but anonymised datasets. As humans we are inherently tied to our personal and often shared experiences and context. A chatbot might appear empathetic by mimicking the language it has scraped from real conversations, but it doesn’t understand the emotional context behind it. ‘The Chinese Room’ is a 1980s thought experiment along these lines. A person is locked in a room with no knowledge of Chinese. They’re given a set of Chinese symbols and a rulebook in English. This rulebook tells them how to manipulate those symbols to produce an output that is also in Chinese. To a Chinese speaker, the outputs seem to be appropriate responses to the inputs, and it could be inferred that the person in the room is fluent in Chinese. In reality, whilst they can adhere to the structure of the language, using the rules they were given, the meaning remains a mystery. This is how programs operate, simulating without understanding. Even the most advanced AIs do this, just with much larger rulebooks. That gap in understanding is still perceptible to us, and it helps to explain that uncanny feeling we sometimes experience when communicating with a program that seems ever more fluent from outside the room.

Outside of these theoretical thought experiments, study after study has found that humans, on the whole, are more comfortable communicating with other people. A recent study looking into human- versus AI-generated empathy gave participants responses to their emotional situations and asked them to score them. They found that the human-attributed responses were rated as more empathic and supportive, and elicited more positive and fewer negative emotions, than AI-attributed ones. Other research found that users reported a significantly better communication experience, and greater connection and trust, when dealing with a human agent rather than a chatbot. But if we don’t prefer it, then why does AI-generated content seem to be permeating all corners of our lives?

In my case, it was convenience. The AI gave me an array of puns to pick from faster than I could think of one. For businesses, the drivers are similar: AI is cheap and scalable. A customer support chatbot, for example, can greatly reduce the number of queries needing a human agent’s attention, even if it is less helpful. Once running, chatbots cost a fraction of the price of a human, can operate 24/7, and don’t need holidays or a manager. They are categorically more efficient tools for communicating information than a human. User experience may suffer, but in competitive markets lower costs mean lower prices, which are likely much more important to consumers. One survey found that, whilst 93% of respondents prefer interfacing with a human over AI, less than half (42%) would be willing to pay extra to access it. Saying you’d pay extra on a survey is one thing; actually doing it in practice is another, so I suspect the true figure is even lower than 42%.

But there is another factor at play here. What I didn’t mention about the study comparing human and AI empathy was that all the responses in the study were AI-generated; there were no human ones. And so, when participants rated human-attributed responses as more empathic and supportive, they were actually just highlighting the difference in their perceptions of humans and AI. In fact, another study, this time looking into how mental health advice was received, found that when the origin of the advice was concealed, AI was rated significantly higher on all metrics: authenticity, professionalism and practicality. The study was then repeated six months later with the same participants and responses, but now with the sources revealed. This time AI responses only came out on top in professionalism. Ironically, the largest swing was seen in authenticity, with participants rating the same words as more authentic once they knew another human had written them.

So, it wasn’t the words in the message themselves that mattered. I could very plausibly have written the pun myself given sufficient time and inspiration. What clearly does matter is my father’s perception of where it had come from. I did reveal, after he laughed at the pun, that it was really Chat-GPT that should take the credit. I could sense that the pun was now not quite as warmly received as it had been moments ago. If I hadn’t revealed it to him, maybe he would have continued to enjoy the pun in his ignorance, presumably sharing it with all the other dads as I imagine they do, but it wouldn’t have settled my own guilt. I would still have known the words had come from a program that lacked the understanding and shared experience of my father and me.

This whole ordeal reminds me of one of my favourite books, Klara and the Sun (spoilers ahead). The book is set in a slight dystopia and narrated by Klara, an AI robot designed to be a companion for children, who in this world are increasingly isolated. The child that Klara is bought for becomes gravely ill over the course of the book, and Klara slowly realises her role is actually to observe, mimic and then, if necessary, replace her. The book ends with a retrospective realisation from Klara that this would never have worked out. Not because she wouldn’t have achieved a perfect mirror of the child, but rather because there was something she could never have replicated: what the child meant to those who loved her.

I think I’ll write the pun myself next time.

