'But let’s not forget that technology can be used for good or for ill.' Photo by Andrea De Santis on Unsplash
Just algorithms: Siani Morris calls for AI regulation
‘We need to do more than just hope that AI can gain an ethical consciousness.’
Ruth Jones’ suggestion (23 June) that Quaker principles could help inform an ethical artificial intelligence (AI) gives food for thought. Quaker business ethics have a great deal to offer when set against current AI systems that make life harder for the weakest in society.
For instance, AI algorithms already replicate human biases. There are also issues with falsehoods being replicated (which makes them difficult to correct), with creative rights being exploited, and with AI systems not understanding the adverse effects of their decisions on humans. A central problem is that humans mistake AI output for meaningful text, and act accordingly. This is happening right now. Information given by AI language models may not be true, reputations may be ruined, and decisions may be ill-founded – and the basis for those decisions unknown.
Another issue is social justice and the effects of AI on vulnerable groups. We need systems that do not contribute to excessive inequality in the distribution of wealth. As Ruth mentions, sustainability is another important aspect. In addition, certain activities should simply be forbidden, including those that threaten peace and community.
Visions of anthropomorphic machines causing the destruction of civilisation are common, but these take attention away from the very real problems we already have from the use of AI systems. Exploitative practices, often motivated by desire for power or financial gain, are already increasing social inequality and centralising power.
We need to do more than just hope that AI can gain an ethical consciousness. We need humans inside the AI loop, ethical guidelines that are strongly enforced, and enough transparency that people know what is going on and are able to make choices accordingly.
So, we need to focus on providing this transparency, letting people know when AI is being used, and on the accountability of developers and deployers. We need protection against exploitative working practices. New regulation should protect the rights and interests of people and thereby shape the actions and choices of corporations. We should be building machines that work for the common good, not adapting society to the wishes of the few elites currently driving the AI agenda. Those most impacted by AI systems, who include the most vulnerable in society, should have their opinions taken into account. (It’s worth noting that many of these problems do not just apply to AI but also to other digital systems already being deployed, including the UK universal credit software.)
But let’s not forget that technology can be used for good or for ill. Phone tapping exists, but that doesn’t mean we want to give up our phones. The same could apply to AI. There is huge potential for new AI tools that could be truly good for humanity, especially in the medical field. How might Joseph Rowntree, for example, have used AI?
Siani is from Just Algorithms Action Group (JAAG), a Quaker-inspired non-profit group working for greater social justice in AI.