Prioritization

Prioritization, ranking, or ordering serves to emphasize or bring attention to certain things at the expense of others. The city of New York uses prioritization algorithms built atop reams of data to rank buildings for fire-code inspections, essentially optimizing for the limited time of inspectors and prioritizing the buildings most likely to have violations that need immediate remediation. Seventy percent of inspections now lead to eviction orders from unsafe dwellings, up from 13 percent without using the predictive algorithm—a clear improvement in helping inspectors focus on the most troubling cases.7

Prioritization algorithms can make all sorts of civil services more efficient. For instance, predictive policing, the use of algorithms and analytics to optimize police attention and intervention strategies, has been shown to be an effective crime deterrent.8 Several states are now using data and ranking algorithms to identify how much supervision a parolee requires. In Michigan, such techniques have been credited with lowering the recidivism rate by 10 percent since 2005.9 Another burgeoning application of data and algorithms ranks potential illegal immigrants so that higher-risk individuals receive more scrutiny.10 Whether it’s deciding which neighborhood, parolee, or immigrant to prioritize, these algorithms are really about assigning risk and then directing official attention in line with that risk. When it comes to the question of justice, though, we ought to ask: Is that risk being assigned fairly, free from malice or discrimination?
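To make the idea of assigning risk and directing attention concrete, here is a minimal, hypothetical sketch in Python: a few invented risk factors are combined into a score, and cases are worked in descending score order. The criteria, weights, and data are made up for illustration and do not come from any of the systems described above.

```python
from dataclasses import dataclass

@dataclass
class Case:
    """A generic case awaiting official attention (a building, a parolee, etc.)."""
    identifier: str
    prior_violations: int      # hypothetical criterion
    years_since_review: float  # hypothetical criterion
    complaints: int            # hypothetical criterion

# Hypothetical weights: every number here is a value judgment about what matters.
WEIGHTS = {"prior_violations": 2.0, "years_since_review": 0.5, "complaints": 1.0}

def risk_score(c: Case) -> float:
    """Combine the criteria into a single risk score."""
    return (WEIGHTS["prior_violations"] * c.prior_violations
            + WEIGHTS["years_since_review"] * c.years_since_review
            + WEIGHTS["complaints"] * c.complaints)

def prioritize(cases: list[Case]) -> list[Case]:
    """Rank cases from highest to lowest risk so attention goes to the top."""
    return sorted(cases, key=risk_score, reverse=True)

if __name__ == "__main__":
    queue = prioritize([
        Case("12 Elm St", prior_violations=3, years_since_review=8.0, complaints=1),
        Case("98 Oak Ave", prior_violations=0, years_since_review=1.5, complaints=4),
        Case("45 Pine Rd", prior_violations=1, years_since_review=5.0, complaints=0),
    ])
    for c in queue:
        print(f"{c.identifier}: score {risk_score(c):.1f}")
```

Nudge any weight up or down and the order of the queue changes, which is precisely why the choice of criteria and weights matters so much.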

Embedded in every algorithm that seeks to prioritize are criteria, or metrics, which are computed and used to define the ranking through a sorting procedure. These criteria essentially embed a set of choices and value propositions that determine what gets pushed to the top of the ranking. Unfortunately, sometimes these criteria are not public, making it difficult to understand the weight of different factors contributing to the ranking. For instance, since 2007 the New York City Department of Education has used what’s known as the value-added model (VAM) to rank about 15 percent of the teachers in the city.

The model’s intent is to control for individual students’ previous performance or special education status and compute a score indicating a teacher’s contribution to students’ learning. When media organizations eventually obtained the rankings and scores through a Freedom of Information Law (FOIL) request, the teachers’ union argued that “the reports are deeply flawed, subjective measurements that were intended to be confidential.”11 Analysis of the public data revealed a correlation of only 24 percent between any given teacher’s scores across different pupils or classes. This suggests the output scores are very noisy and don’t precisely isolate the contribution of the teacher. What makes it hard to understand why is the lack of access to the criteria that fed into the fraught teacher rankings. What if the value proposition behind a certain criterion’s use or weighting is political or otherwise biased, intentionally or not?
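To see why a 24 percent correlation signals noise rather than a stable measure of teaching, consider a small, hypothetical sketch: compute the correlation between scores the same teachers receive from two different classes. The numbers below are invented for illustration and are not the FOIL data.

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented VAM-style scores for the same eight teachers, measured once with
# one class and once with another; these figures are illustrative only.
scores_class_a = [62, 45, 78, 55, 70, 40, 66, 58]
scores_class_b = [58, 52, 61, 49, 72, 55, 47, 63]

r = correlation(scores_class_a, scores_class_b)
print(f"Correlation between the two sets of scores: {r:.2f}")

# If the score truly isolated the teacher's contribution, each teacher would
# land in roughly the same place both times and r would be close to 1.0.
# The roughly 0.24 found in the public data implies most of the variation
# comes from something other than the teacher -- i.e., the scores are noisy.
```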

results matching ""

    No results matching ""