In this project, my focus is on how the use of automated systems in welfare programs harms the poor. Because this is a fairly recent issue that has been brought to my attention, I want to understand the ways this technology is being used to discriminate against the poor, in order to shed light on how the digital era has provided a new way for discrimination and inequality to live on in a way that we didn't expect.
To provide some background, a report by Philip Alston, a human rights advocate, reveals that automated systems are being used for surveillance and as a means to punish the poor not only in the UK but also in the U.S. and other countries. This trend of digitalizing everything, regardless of whether everybody even has internet access to use this technology, is clearly shown in Alston's report, considering the number of places where welfare is digital (requiring internet access to obtain welfare, or biometric verification to receive it). Alston explains that automated systems are able to make these mistakes because of the lack of policies addressing protection against A.I. His report has brought awareness to an issue that had gone unspoken, and it has resulted in investigations, press conferences, and books like Automating Inequality being published.
The sources I have gathered address the issue not by rejecting the idea of using automated systems for public services altogether. Instead, they encourage creating policies that protect people's privacy rights, as well as updating automated systems so that they can catch the errors they produce and maintain a neutral stance that does not discriminate in any way.