‘So why not ask a data-trained machine to revise Quaker faith & practice in the style of liberal universalist Quakers (or maybe evangelical Christians)?’ Photo by Cash Macanaya on Unsplash

‘AI is about automating inequality.’

Digital divide: Mike Nellis has more on artificial intelligence

by Mike Nellis, 8th December 2023

I was unconvinced by Keith Wilson’s computer-generated argument (24 November) that Quaker values could and should be used to shape the development of artificial intelligence (AI). It is magical thinking. The option has never existed for citizens or faith communities to affect the balance of gains and losses in AI.

The playing field on which the ethical debate on AI is occurring is not level. It is dominated by a handful of western corporations and governments, whose interests drive the agenda (in part, but not only, in competition with China). AI-driven innovation is now considered so vital to prosperity and security that regulating its use will always be done to fit investors’ expectations.

AI is not a neutral tool like a pen or a hammer. To make such a comparison is to misconceive what AI systems are, and why they are being disseminated. Abstract ethical debates about AI might modify the ways it disrupts business, governance and public service – which is not unimportant – but they can’t halt its advance, and that may be the greater problem. ‘The fourth industrial revolution’, to which AI is central, consolidates existing power far more than it addresses social injustice or meets human needs. Meeting those needs requires the egalitarian distribution of less complex technologies, especially in healthcare, energy and housing. AI is about automating inequality.

We should not disguise liberal platitudes about AI as Quaker values. Doing so is potentially dangerous. It fosters complacency about the ease with which AI-generated problems can be resolved by ethical, legal and democratic procedures. These have already been weakened by digital disinformation, which AI will intensify. It underplays what bad actors will inevitably do with AI. Take social media, whose unregulated, opinion-polarising excesses we now simply live with. Crucially, it showcases AI innovations as if they were proof of a golden human future for all, omitting to say that, while elites will benefit, many more people will lose meaningful employment and be further ensnared in forms of connectivity that are coercive and banal.

I was neither surprised nor impressed that Keith’s article was mostly generated by AI. These super-smart chatbots have been created using vast amounts of material scraped – stolen – from the web. The material is taken without its originators’ permission (or even knowledge), then ‘cleaned’ for use by armies of low-paid workers. Copyright legislation is evolving to address the theft, but the machines it enabled are here to stay, and getting better. Even Quakers are playing with them! So why not ask a data-trained machine to revise Quaker faith & practice in the style of liberal universalist Quakers (or maybe evangelical Christians)? It would only take a few minutes; so much time saved for better things! Yet these chatbots were launched without serious consideration of their likely impact, despite years of ethical argument.

We did need to start a debate about Quakers and AI. We mustn’t be Luddite about it. But we might consider being a little Amish. As we discern how ‘the human’ might best be preserved in an AI-infused world, small acts of refusal might count for more than small acts of collusion.
