Risk predictions amplify officials' biases
Algorithmic risk assessments are becoming widespread in government work, on the theory that they will help public servants make better-informed decisions. But when risk is just one consideration among many competing social goals, these tools draw decision makers' attention disproportionately toward risk. That's the finding of a 2021 study at the University of Michigan, which recruited more than 2,000 laypeople to participate in a simulation of high-stakes government decision-making. The authors hypothesize that merely deploying risk-scoring algorithms activates decision makers' own implicit biases, even when no bias is embedded in the algorithms themselves.
This points toward a future in which deploying automated risk assessment in complex, multi-factor decision-making contexts could exacerbate implicit biases rather than reduce them, underscoring the need for deeper and more thorough scrutiny of how these influences interact.