'The digital divide, and how the advance of digitalisation increases poverty and inequality, should concern us.' Photo: Markus Spiske on Unsplash
Inside information technology: Lee Coppack on the digital divide
‘People on low incomes are likely to be at significant disadvantage.’
Information technology is not something that resonates immediately with Quakers. There are not many geeks among us. We might be grateful for Zoom, but see it mainly as a necessity. The digital divide, however, and how the advance of digitalisation increases poverty and inequality, should concern us.
It is increasingly assumed that people can access the internet to find information and complete forms electronically. During the pandemic, this process has been accelerating. Education and support services have relied heavily on the internet. The benefits system had already been moving steadily online, and people who cannot easily access the internet may fear being sanctioned for an apparent failure to comply.
One profound aspect of this shift is in medicine. Many medical consultations currently happen online. This throws up issues that especially affect the poor, people in crowded homes, and those less able to communicate their problems.
There are several parts to the digital divide. One is the hardware: the need for a computer, tablet or smartphone. Another is access to the internet. Libraries and other centres with free access are shut just now. Pay-as-you-go or low-cost phone tariffs are expensive ways to buy data. The result is that people on low incomes and in deprived settings are likely to be at significant disadvantage when it comes to educational achievement, employment opportunities and access to public and commercial services.
A second and more complex issue is the growing use of automated systems and artificial intelligence (AI). These systems depend on large amounts of data and on algorithms to make certain decisions, such as calculating eligibility and benefit levels. These algorithms are only as good as the data on which they are based and as the human minds behind them. They can have biases, innocent or deliberate, that affect results. Research in Sweden on street clearing after snowfall reached different conclusions depending on whether the responses came from men, who were predominantly drivers, or from women, who were more likely to be pedestrians, often with young children.
AI systems can end up failing to take into account the needs of users of different ethnic origins, cultures, genders or abilities. It’s like a tall workman mounting the mirror in a ladies’ room: he can see into it but I can’t. And not all bias is innocent. Discrimination can be built into a system to delay claims or applications.
In response to all this, a group of Friends formed the Just Algorithms Action Group in 2019. Its aim is fair and just digital treatment for everyone. Awareness and some comparatively simple measures could allow us to make a difference and shine a light on our testimonies. Perhaps Friends could consider a Quakerly approach to the issue. Could we allow asylum seekers and refugees to use Meeting house wifi or hardware? At a broader level, we could share our concerns about the need for an ethical and regulatory framework. We could collect illustrations of bias in algorithms and automated systems to support those who are campaigning.
As Quakers, our testimonies lead us to work to reverse the rise of poverty and inequality. Let us raise the concern.