A tragic scandal at the UK Post Office highlights the need for legal change, especially as organizations embrace artificial intelligence to enhance decision-making.
I don’t think you can completely anonymize data and still end up with useful results, because the AI will be faced with human inconsistency and bias regardless. Strip out the personally identifiable information and it might mysteriously start treating certain locations more harshly, say, districts where mostly Black and poor people live.
We’d need a reckoning with our societal injustices before we can decide what data is fit to use for many of these purposes. Unfortunately, many of the people responsible for those injustices are still in place, and they will be the ones deciding whether the AI’s output serves their purposes or not.
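To make the anonymization point concrete, here is a minimal, fully synthetic sketch (the district names, outcomes, and numbers are all invented): a “model” trained only on location, with race and every other piece of PII removed, still relearns the harsher treatment of whichever district stood in for them in the historical decisions.

```python
from collections import defaultdict

# Invented historical decisions with PII stripped: (district, harsh_outcome).
history = (
    [("riverside", 1)] * 70 + [("riverside", 0)] * 30
    + [("hillcrest", 1)] * 20 + [("hillcrest", 0)] * 80
)

# A naive "model": predict a harsh outcome at each district's historical base rate.
counts = defaultdict(lambda: [0, 0])          # district -> [harsh decisions, total]
for district, harsh in history:
    counts[district][0] += harsh
    counts[district][1] += 1

model = {d: harsh / total for d, (harsh, total) in counts.items()}
print(model)  # {'riverside': 0.7, 'hillcrest': 0.2} -- the old disparity, relearned
```

Nothing about the people is left in the data, yet the model treats one district more harshly, because district membership is a proxy for what was removed.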
The “AI” I think is being referenced here is predictive policing: a system that tells officers to patrol certain areas more heavily based on crime statistics. Because racist officers already patrol Black neighbourhoods more heavily, the crime statistics there are inflated (more crimes get caught and reported because more eyes are on the area).
This creates a feedback loop: the AI looks at the crime stats, picks out the areas with the highest numbers (which, thanks to the skewed patrolling, are the predominantly Black ones), and increases patrols there further, which inflates the numbers again.
In this case no details about individual people are needed at all, only location, time, and the severity of the crime. The AI still produces racist outcomes even though race is nowhere in the dataset.
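To see how little the model needs to know for this to happen, here is a toy simulation of that loop. Every number and the reallocation rule are invented for illustration, not taken from any real predictive-policing system:

```python
# Invented numbers throughout. Two districts with the SAME underlying offence
# rate; District A simply starts out more heavily patrolled.
true_rate = {"A": 100.0, "B": 100.0}
patrol = {"A": 0.6, "B": 0.4}          # share of patrol capacity per district
SHIFT = 0.05                           # how aggressively patrols get re-targeted

for week in range(1, 9):
    # More eyes on the street means more of the same crime gets seen and logged.
    reported = {d: true_rate[d] * patrol[d] for d in true_rate}
    hot, cold = ("A", "B") if reported["A"] >= reported["B"] else ("B", "A")
    # The "predictive" step: shift capacity toward the apparent hot spot.
    moved = min(SHIFT, patrol[cold])
    patrol[hot] += moved
    patrol[cold] -= moved
    rounded = {d: round(p, 2) for d, p in patrol.items()}
    print(f"week {week}: reported={reported}  patrol={rounded}")

# Patrols end up concentrated entirely on District A, even though the two
# districts had identical underlying crime and race appears nowhere in the data.
```

The only inputs are location and reported counts, yet the system ratchets toward the district that was over-patrolled to begin with.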