ChatGPT and the automation of bullshit
Will AI only become civilised when the bullshit piles up high enough that we need a system to deal with it?
“Civilization did not rise and flourish as men hammered out hunting scenes on bronze gates and whispered philosophy under the stars, with garbage as a noisome offshoot, swept away and forgotten. No, garbage rose first, inciting people to build a civilization in response, in self-defense. We had to find ways to discard our waste, to use what we couldn't discard, to reprocess what we couldn't use. Garbage pushed back.”
― Don DeLillo, Underworld
There are many reasons to question the value and justifiability of Large Language Models - LLMs or "stochastic parrots" - on the grounds that their environmental impact, unreliability and bias seem to increase with the scale of their training data.
But the topic I want to focus on is the fact that ChatGPT represents the automation of bullshit, and the way this shines a light on the role of bullshit in our society and economy.
On Bullshit
Harry G. Frankfurt was the first philosopher to take the idea of bullshit seriously.
"Someone who lies and someone who tells the truth are playing on opposite sides, so to speak, in the same game. Each responds to the facts as he understands them, although the response of the one is guided by the authority of the truth, while the response of the other defies that authority and refuses to meet its demands. The bullshitter ignores these demands altogether. He does not reject the authority of the truth, as the liar does, and oppose himself to it. He pays no attention to it at all." (On Bullshit, 2005)
LLMs' lack of concern with the truth is well-documented and has produced both comedy and concern. The concern is based on the downstream impacts when systems begin to rely on LLMs for judgements that have significant consequences (like mistranslating an Arabic greeting on Facebook into Hebrew as "Attack Them" 1).
The comedy is based on the absurd things that can be imagined by a machine that has good syntax but no understanding (e.g. "Single Mother points to her husband's bones in the corner of the room. They are all giftwrapped in eggnog." 2). Over the years, as linguistic competence has increased, the comedy has declined. LLMs stopped being funny.
But to judge a tool like ChatGPT purely on its ability to understand is to miss the economic value of such a system, and with the release of ChatGPT, we've hit a threshold where the LLM's competence in two areas actually exceeds that of many humans.
Despite understanding none of what it writes, ChatGPT can write better than most people, and it can summarize a topic better than most people.
As an example, at the bottom of this post, I've included a "conversation" I had with ChatGPT about AI Ethics.
Now, it's easy to argue that ChatGPT can't actually write or summarize better than people because it doesn't understand anything. So let me clarify:
ChatGPT can "write some crap" and "summarize some crap" better and faster than most people.
Comparing ChatGPT to a Google search might be appropriate, given that a Google search is a fast way of finding human knowledge, but also human bullshit. Relying on an "I'm feeling lucky" Google search is risky in the same way that ChatGPT is risky, from a truth-telling perspective.
Worries about ChatGPT arguably focus too much on the fact that it generates bullshit, and not enough on the fact that it replaces human-generated bullshit.
People Love Writing... Right?
But how much of people's work is actually spent doing these two tasks? Well, that's a good question, isn't it?
If you want to get a sense of how many of us are in the business of creating bullshit, or work in a truth-agnostic way, at least some of the time, ask yourself:
"How much of my job is focused on making an existing point of view intelligible to others, rather than generating an intelligent point of view?"
For example, a product review is perhaps 90% synthesis and 10% opinion.
Not wanting to single anyone out, but I've been thinking about how communications managers and consultants may be at risk here.
Many communications managers actively work to reduce the level of bullshit in an organisation by critiquing proposed messages in terms of their relevance and impact for internal staff, subjecting them to a "sniff" test for bullshit. However, their primary role in the organisation is to write better than the managers and executives they serve. ChatGPT offers the prospect of stochastic parrots that give execs the writing powers of a comms manager without those pesky moral or semantic sensibilities. Those sensibilities provide a value to the organisation that is much harder to see, and so many organisations will happily do without them.
Similarly, one of the primary roles of consultants is to synthesize large bodies of specialised information for the consumption of managers. When that task can be delegated to a system like an LLM, it may become much harder to justify paying for the second-order judgement or evaluation that adds value on top. It may even be harder to develop and maintain that higher-order judgement in staff if they don't have the formative experience of synthesizing source material.
Will we even miss writing, though?
One article I read this week - which also saw the connection to Harry Frankfurt - made the interesting argument that writing is one of the "meaningful or enjoyable forms of work": "For many people, writing is ... a craft you can hone that involves little of the drudgery of the assembly line."
For many people, I believe this is just... well... bullshit. Writing is hard, slow, unnatural work, often of questionable utility, depending on how it's consumed. I'm reminded of "Dance Dance Dance", where Haruki Murakami's main character - a writer himself - describes writing as "shovelling cultural snow".
“I don’t know. The way I see it, it’s like shoveling snow. You do it because somebody’s got to, not because it’s fun.”
“Shoveling snow, huh?” she mused.
“Well, you know, cultural snow,” I said.
No one wants a bullshit job, right?
We're just getting rid of bullshit jobs... or are we?
Making Bullshit More Profitable, and More Prolific
Robin Sloan made this interesting connection: “the cotton gin, a canonical example of automation, perversely increased the demand for slave labor because it made cotton growing so much more profitable 3”.
The point here is that, by making bullshit cheaper, we will improve the business case for generating it, and it will proliferate. Good syntax is one of the key tests many of us use for distinguishing between spam and genuine communication. ChatGPT and LLMs like it threaten to efface that distinction, while making it far cheaper to generate effective spam and phishing attacks.
Civilising bullshit
Sloan goes on:
" So, the obsolence of a particular human task is, like, the LEAST interesting thing about these processes, and cackling about the end of task X or job Y is like staring at a spot on the carpet while they rebuild the whole house around you. When you encounter someone doing that, I encourage you to ignore them, because it’s a sign they’re not a serious thinker, and/or they’re not curious about how the world really changes.
For me, the interesting questions sound more like
What new or expanded kinds of human labor might AI systems demand?
What entirely new activities do they suggest?
How will the world now be reshaped to fit their needs?"
Well, a few jobs leap to mind:
Prompt writers
The art of writing in a particular style - diplomatic, informal, provocative - will slowly give way to the art of prompting an LLM to write in a particular style.
Since LLMs are inherently unreliable (after all, bullshit is unreliable precisely because it doesn't care whether it's true or not), a whole host of jobs will be created for people to write prompts that work around the limitations of the model (we already see this in AI art - the emergence of expert prompt writers).
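To make the idea concrete, here's a minimal sketch of what style-prompting looks like as a reusable template. Everything in it is illustrative: "call_llm" is a hypothetical placeholder for whatever LLM API is actually in use, not any vendor's real client.

```python
# A minimal sketch of style-controlled prompting. call_llm() is a hypothetical
# placeholder; the template, not the client, is where the craft lives.

STYLE_TEMPLATE = (
    "Rewrite the following message in a {style} style. "
    "Preserve the facts; change only the tone and register.\n\n"
    "Message:\n{message}"
)

def style_prompt(style: str, message: str) -> str:
    """Compose a prompt asking an LLM to rewrite a message in a given style."""
    return STYLE_TEMPLATE.format(style=style, message=message)

def call_llm(prompt: str) -> str:
    """Placeholder: wire this to a real LLM provider of your choice."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = style_prompt(
        style="diplomatic but firm",
        message="The Q3 numbers are bad and it's entirely the vendor's fault.",
    )
    print(prompt)  # iterating on this text is the prompt writer's new craft
```

The skill shifts from drafting the message to drafting the instructions: "diplomatic", "informal" and "provocative" stop being qualities you write and become parameters you pass.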
Editors
There will always be a need for editors to solve the "last mile" problem, bridging the distance between generic training and specific contexts.
A key decision for organisations will be whether to invest in a "small world" language model, specific to a context or cultural group, or to edit the output of a general-purpose LLM. Much of this will come down to relative cost, tolerance for errors, and the cost of editorial staff.
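The shape of that decision can be sketched as back-of-envelope arithmetic. Every number below is a made-up placeholder, purely to show how the three factors trade off:

```python
# Hypothetical cost comparison: bespoke "small world" model vs. generic LLM
# plus editors. All constants are illustrative placeholders, not real figures.

DOCS_PER_YEAR = 10_000
COST_PER_EDITORIAL_FIX = 40.0  # loaded cost of an editor fixing one document (assumed)

def annual_cost(fixed: float, per_doc: float, error_rate: float) -> float:
    """Fixed model cost + per-document cost + expected editorial clean-up."""
    return fixed + DOCS_PER_YEAR * (per_doc + error_rate * COST_PER_EDITORIAL_FIX)

# Option A: bespoke model - high up-front cost, fewer context errors (assumed).
option_a = annual_cost(fixed=250_000.0, per_doc=0.0, error_rate=0.02)

# Option B: generic LLM - cheap per call, more errors for editors to catch (assumed).
option_b = annual_cost(fixed=0.0, per_doc=0.50, error_rate=0.10)

print(f"small-world model: ${option_a:,.0f}/yr  generic LLM + editors: ${option_b:,.0f}/yr")
```

Under these (entirely invented) numbers the generic LLM plus editors wins; shift the error rates or the cost of a fix, and the bespoke model starts to look attractive.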
Training data generators
We already see this with the interest in capturing the valuable raw data from indigenous language speakers, and their legitimate resistance to that as another form of colonisation.
One longer-term consideration is that ChatGPT has been trained primarily on an internet containing 30 years of human writing. Once the proliferation of bullshit starts, the next generation of LLMs won't have purely human-generated training data to work with. The internet will have been tainted with the outputs of other LLMs, creating a situation where models eat their own dog food and risking semantic drift. Where, exactly, future generations of LLMs will source their training data from is a significant question, especially as they replace human writers in many contexts. Just as the cotton gin ended up increasing demand for slave labour, LLMs may end up increasing demand for human-generated bullshit.
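A toy numerical sketch of that feedback loop (my own illustration, not a claim about how any real LLM is trained): fit a trivially simple "model" to a corpus, sample the next corpus from the model, refit, and repeat. With nothing anchoring it to the original data, the statistics wander.

```python
import numpy as np

# Toy illustration of models training on other models' output. A Gaussian
# stands in for an LLM; its mean/std stand in for the statistics of language.
rng = np.random.default_rng(42)
corpus = rng.normal(loc=0.0, scale=1.0, size=50)   # generation 0: "human" text

for gen in range(1, 31):
    mu, sigma = corpus.mean(), corpus.std()        # "train" a model on the corpus
    corpus = rng.normal(mu, sigma, size=50)        # next corpus is pure model output
    if gen % 5 == 0:
        print(f"gen {gen:2d}: mean = {mu:+.3f}, std = {sigma:.3f}")
```

Each cycle bakes the previous model's sampling noise into the next model's training data, which is the statistical analogue of the semantic drift worried about above.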
The Necessity of Bullshit
I want to return to Harry Frankfurt, who asks another great question: Why is there so much bullshit?
"Bullshit is unavoidable whenever circumstances require someone to talk without knowing what he is talking about. Thus the production of bullshit is stimulated whenever a person’s obligations or opportunities to speak about some topic are more excessive than his knowledge of the facts that are relevant to that topic. This discrepancy is common in public life, where people are frequently impelled — whether by their own propensities or by the demands of others — to speak extensively about matters of which they are to some degree ignorant. Closely related instances arise from the widespread conviction that it is the responsibility of a citizen in a democracy to have opinions about everything, or at least everything that pertains to the conduct of his country’s affairs. The lack of any significant connection between a person’s opinions and his apprehension of reality will be even more severe, needless to say, for someone who believes it his responsibility, as a conscientious moral agent, to evaluate events and conditions in all parts of the world."
Perhaps, when we feel overwhelmed by the demands of truth - given the volume and dynamism of information in the fourth industrial revolution - and at the same time compelled, in a globalised network economy where identity and belonging are mediated through social media, to express ourselves just to sustain our presence in the world - bullshit must flourish.
While there is such a strong market for bullshit, technologies to automate the creation of it will continue to be developed. The much harder question is how to corral or suppress the market for bullshit.
Cui Bono?
Of course, some people don't want to suppress this market, and this is where the purpose and stated goals of the OpenAI lab come into question.
Back in 2015, Elon Musk justified setting up the OpenAI lab as a kind of "ethical innovation" factory:
"What is the best thing we can do to ensure the future is good? We could sit on the sidelines or we can encourage regulatory oversight, or we could participate with the right structure with people who care deeply about developing AI in a way that is safe and is beneficial to humanity." 4
It's difficult today not to view the transformation of OpenAI into a capped-profit company, and Musk's takeover of Twitter, as related. After all, if anywhere is a marketplace for bullshit, it's Twitter. Musk may no longer be on the board of OpenAI, but he doesn't need to be to benefit from the proliferation of bullshit it enables.
ChatGPT as AI Ethics expert
I had a chat with ChatGPT, to see what it understood about AI Ethics. (Oh, the irony!) - My prompts are in bold.
What's the best way to incorporate ethics into the development and use of AI in a large organisation?
Incorporating ethics into the development and use of AI in a large organization is an important step for ensuring that the technology is used responsibly and effectively. One of the best ways to do this is to establish an ethics board or committee that is responsible for evaluating the potential ethical implications of AI projects and making recommendations for how they should be handled. This group should be composed of experts in AI, ethics, and the relevant business areas, and should be empowered to make decisions and provide guidance on ethical issues.
Who should be included on an ethics board or ethics committee?
An ethics board or committee should be composed of individuals who have expertise in AI, ethics, and the relevant business areas. This could include AI researchers and developers, ethicists, lawyers, and representatives from the business areas that will be affected by the AI technology. It's important to have a diverse group of individuals with different backgrounds and perspectives to ensure that all potential ethical considerations are taken into account. The board or committee should also be empowered to make decisions and provide guidance on ethical issues related to the use of AI.
What are the ethical issues related to the use of AI?
There are many ethical issues related to the use of AI, including issues of fairness, transparency, accountability, and bias. Some of the specific ethical concerns that have been raised in relation to AI include:
Bias: AI systems can be biased against certain groups of people if the data they are trained on is biased. This can lead to discriminatory outcomes and undermine the fairness of the AI system.
Explainability: Many AI systems are difficult to understand and interpret, which can make it difficult to explain their decisions and actions. This lack of explainability can be a barrier to accountability and trust in AI.
Autonomy: As AI systems become more advanced, they may be able to make decisions and take actions without direct human oversight. This raises ethical questions about the responsibility for the actions of these systems and how they should be controlled.
Privacy: The use of AI often involves collecting and analyzing large amounts of personal data, which raises concerns about privacy and the potential for abuse of this data.
Unemployment: Some people are concerned that the increasing use of AI could lead to widespread job losses, particularly in industries where tasks can be automated. This could have negative social and economic consequences.
How can an organisation minimise bias for its AI systems?
There are several steps that an organization can take to minimize bias in its AI systems. These include:
Ensuring that the data used to train the AI system is representative and diverse, and does not contain any biased or discriminatory information.
Regularly auditing and testing the AI system to identify and correct any biases that may have been introduced during the training process.
Implementing fair and transparent algorithms and decision-making processes that take into account the potential for bias.
Establishing clear policies and procedures for addressing bias in AI systems, and providing training to AI developers and users on how to identify and mitigate bias.
Engaging with stakeholders, including affected groups and members of the public, to solicit feedback and input on the potential for bias in the AI system and how it can be minimized.
What are the biggest technical challenges associated with creating responsible AI?
There are several technical challenges associated with creating responsible AI. Some of the biggest challenges include:
Developing AI systems that can operate in complex, real-world environments and handle a wide range of tasks and situations.
Ensuring that AI systems are transparent and interpretable, so that their decisions and actions can be understood and explained.
Building AI systems that are fair and unbiased, and do not discriminate against certain groups of people or make decisions that are unfair or unethical.
Designing AI systems that are secure and protect the privacy of individuals and their data.
Developing AI systems that can learn and adapt over time, and continue to improve their performance without human intervention.
Thanks Justin - food for thought. I'm not yet sure where I stand on ChatGPT. I've been doing a lot of 'playing with text' over the past few weeks, and I love what I can do with the tool. But I'm glad someone's addressed the bullshit problem (at least we don't have to continue politely ignoring the smell!). We can certainly expect a 10x increase in the production of bullshit from this point, can't we? It fills me with dread. Every BS artist in the world just got really smart about .... well, everything, and they're not going to be shy about it. Maybe Fact Checking should be among the growing professions you've listed in your article?
On a philosophical note, it's not the machines that are responsible for bullshitting, is it? A machine has no intention to speak the truth - no stake in the 'truth game' at all. So I don't see how we can level a moral charge against it (unless we acknowledge its sentience, which you're clearly not doing). It is we humans, caught up in the discourse machines of the knowledge economy, who bear the responsibility for bullshit. Reading your article, I had a terrifying vision of the bullshit economy to come: on the demand side, the insatiable hunger of corporations and social media ecosystems, demanding ever more content and provocative speech-acting to keep the wheels of marketing in motion; on the supply side, millions of happy BS artists fed by AI, trading their integrity for a buck. Machines feeding machines, with humans greasing the wheels - it really does sound like the Matrix, doesn't it?
At the same time, the somewhat sanguine and/or incautious part of me is looking forward to seeing what amazing things intelligent and creative people can do with these new tools. Not everyone cares to be a bullshit artist, after all. ChatGPT has just removed a lot of the shit work from the labours of intelligent and creative people - I think that's a good thing. A point I'd add to your reflection is that the unconscious is a bullshit machine too. The majority of what pops into our head is sheer, unadulterated nonsense - the work of intelligence is to filter, hone, and fact check ourselves until we have something we can present to the world. I find that ChatGPT helps accelerate this process immensely. I don't need to sit around churning through my own bullshit to produce the raw material for intelligent thought - I can hand that labour over to the machine.
Of course, as soon as I've posted this, I discover Ian Bogost's excellent article in the Atlantic, where he makes similar points, but goes on to describe something like ChatGPT's best self as a kind of "aesthetic instrument" with which to "play text", like one plays music on a synthesizer. Bogost implores us to interpret ChatGPT without being boring - without, for example, fretting over take-home exams: "Imagine worrying about the fate of take-home essay exams, a stupid format that everyone hates but nobody has the courage to kill." I'll let you be the judge of whether I've fallen into that trap. https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligence-writing-ethics/672386/