Crafting Narratives for Artificial Intelligence
We’ve witnessed a renaissance in the field of artificial intelligence over the past ten years, and it has been very exciting to follow. With an exponential increase in compute, key algorithmic breakthroughs and a flood of talent and money into the field, we have seen tremendous progress in a very short time.
This excitement has led, as expected, to hype that should be familiar to anyone who experienced the dot-com bubble or the ICO craze in the cryptocurrency space. A revolutionary technology with extraordinary potential gets appropriated by those outside the field as a vehicle for hope, abundance and, let’s be honest here, clicks.
Journalists, writers and creators jumped on the AI buzzword and started to tell stories about the future of the technology because it’s exciting! It’s how I got drawn into the field myself: through reading an incredible book-length blog post from WaitButWhy about the future of superintelligence. It is a well-understood phase of every technological life cycle when what was once discussed only in conferences, white papers and web forums is translated into the mediums of news, radio, journalism and opinion writing. Never mind the bloggers, tweeters and podcasters.
This is an important step: for any technology that will impact the world at such a scale, everyone must be educated about what is coming down the pipe. It is the responsibility of scientists and technologists to communicate their breakthroughs in a way that can be understood by those whose lives will change as a result of the new capabilities.
So, as a society, we craft narratives for ourselves. We distill the key concepts into stories that we can tell without endlessly referring to the scientific literature. We are storytelling creatures after all.
I think you can see where I’m going here.
The way that we craft those narratives is crucially important for how we perceive potential futures and therefore how we act in the present day. If these narratives are not handled carefully, truthfully and realistically, we end up misleading the public, distracting ourselves and ignoring the real concerns that face us right now.
The Leverhulme Centre for the Future of Intelligence at Cambridge University launched the AI Narratives Project, an attempt to learn how humans perceive the risks and benefits of AI in order to improve the narratives we craft for the general public. In collaboration with the Present Futures Forum in Berlin, they recently held a virtual workshop called ‘AI Visions and Narratives’, which I was fortunate enough to attend.
The workshop brought together thinkers and researchers from disciplines including sociology, philosophy, art, political science, linguistics, computer science and more. The discussions were rich and nuanced, which I really appreciated. We have to champion efforts like this, where we engage with difficult topics from various angles so as to identify potential negative externalities wherever we can.
It would be impossible to do justice to every presentation across the two days, but I thought I would share some general thoughts and takeaways that stood out to me as I learned from those who shared their research:
The discussion around AI has to become more multidisciplinary. If we are to integrate it successfully into society, we need to involve the humanities as well as the sciences. I think this is slowly improving. If I think back to the technical AI conferences I attended in 2018, there were almost no non-technical perspectives. However, the community has realised this flaw and I’m seeing more and more non-technical voices in every space I enter. The collaboration here is challenging, and often annoying for engineers, but it is crucially important, especially when trying to communicate these advances to the general public.
I’ve alluded to this already, but mainstream media coverage of AI is, for the most part, doing the public a disservice. Instead of nuance and intellectual humility, it focuses on fear-mongering and hyperbole in the battle for clicks. It’s incredible how many pieces I read that wilfully misinterpret the science in order to make for a more dramatic story. This distortion of reality is, of course, nothing new, and it is certainly not unique to technology coverage. Yet I will still harp on about it, because we have to tell realistic, honest narratives here in order for our policymakers and consumers to have the correct information.
Speaking of narratives, the metaphors we use to describe AI really matter. The language we use to frame the benefits and associated risks will make a significant difference to their long-term impact. Communicators must be held accountable for lazy metaphors that sound good but connote something different from what is really being explained. This is immensely difficult with a technology so new and unique, but we have to take it seriously, because those metaphors are what linger in the minds of the public after they close that tab to return to fighting on Twitter.
The narratives we tell determine how society prioritises various problems and risks. We have to be careful not to be distracted by the more dramatic, story-worthy concerns that are further away at the expense of the very real, tangible problems that face us right now. I’m all for thinking about existential risk, and I’m glad that some people are working on it, but I do think we have focused disproportionately on discussing the nature of a possible superintelligence compared to issues like algorithmic fairness.
As this technology continues to intertwine with the human experience, we must engage critically and honestly with the ambiguities of human-machine interaction if we are to manage the negative externalities. Our design process must include thinking about the social side of engineering as well as the software and algorithms. It’s too late to worry about these integrations once the product is already built and the economic incentive is pushing developers to release it. We need to be considering human interaction right from the get-go.
There is a somewhat prevalent narrative that technology will automatically be more objective, neutral and accurate than the analogous human endeavours. This is indeed the key promise of advanced machine learning and why we see so much potential in this research. However, it is not automatic, and it’s not as simple as that. The way we design these systems and the data we feed into them are crucially important and should not be underestimated. We must be aware of how our intuitions mislead us here, and we must take seriously the algorithmic bias that hides in plain sight.
There is plenty of food for thought here, but at the end of the day I’m really pleased to see these sorts of conversations happening. Call it naive optimism, but I really do think we are slowly shaping the norms to be more human-centric, and we can only hope that we are doing it fast enough. There is still a long way to go.
Huge thanks to Dr Gerrit Rößler and Dr Kanta Dihal for organising such a great event.
If you enjoyed this post, please consider subscribing to my email newsletter to receive future updates directly to your inbox.