A homeless person sits on a street corner.

How face recognition harms the homeless

The growing use of surveillance to control access to shelters and social services troubles housing and homeless advocates. In February 2021, tensions came to a head as news spread of a patent filing by Clearview AI. The controversial company claims to have harvested more than 3 billion photos from the web to train a facial-recognition system, and its filing highlighted identifying "homeless people" as a potential use case for the technology. Homeless people have been exploited in other ways by AI developers. In 2019, a Google subcontractor collected facial images from homeless people without informed consent as part of an effort to reduce racial bias in the training data for the company's Pixel 4 face-unlock system. The subcontractor, Randstad, targeted homeless people, who are disproportionately non-white, offering small cash incentives for "selfies," often without disclosing the nature of the work.

All of this points toward a future in which facial-recognition technology, indiscriminately wielded in public spaces against the most vulnerable people who inhabit them, becomes a tool for wealth extraction and control with little recourse.

Source: buzzfeednews.com
Public Safety & Cybersecurity
computer vision
face recognition