Another warning about the AI apocalypse? I don’t buy it

There is both excitement and fear about this technology. Apocalyptic scenarios of AI similar to those depicted in the Terminator films should not blind us to a more realistic and pragmatic vision that sees the good of AI and addresses the real risks.
AI tools like ChatGPT are everywhere. The combination of computational power and the availability of data has driven a surge in AI technology, but the reason models such as ChatGPT and Bard have made such a spectacular splash is that they have reached our own homes, with around 100m people currently using them.
This has led to a very fraught public debate. It is predicted that a quarter of all jobs will be affected one way or another by AI, and some companies are holding back on recruitment to see which jobs can be automated. Fears about AI can move markets, as we saw yesterday when Pearson shares tumbled over concerns that AI would disrupt its business.
And, looming above the day-to-day debate are the sometimes apocalyptic warnings about the long-term dangers of AI technologies — often from loud and arguably authoritative voices belonging to executives and researchers who developed these technologies.
Last month, science, tech, and business leaders signed an open letter calling for a pause in AI development. And this week, the pioneering AI researcher Geoffrey Hinton said that he feared AI could rapidly become smarter than humanity, and could easily be put to malicious use.
So, are people right to raise the spectre of apocalyptic AI-driven destruction?
In my view, no. I agree that there are some sobering risks. But people are beginning to understand that these are socio-technical systems: not neutral tools, but an inextricable bundle of code, data, subjective parameters, and people. AI's end uses, and the direction in which it develops, are not inevitable. And addressing the risks of AI is not simply a question of "stop" or "proceed".
Researchers such as Joy Buolamwini, Ruha Benjamin, and Timnit Gebru have long highlighted how the context in which AI technologies are produced and used shapes what we get out of them. This explains why AI systems can produce discriminatory outcomes, such as allocating less credit to women, failing to recognise black faces, and incorrectly flagging immigrant families as being at higher risk of committing fraud (pushing many into destitution).
- Ivana Bartoletti is a privacy and data protection professional, visiting cybersecurity and privacy fellow at Virginia Tech, and founder of the Women Leading in AI Network.