Identification

In looking for algorithms that we want to hold accountable, we might ask several questions: What are the consequences and impact of that algorithm for the public, how significant are those consequences, and how many people might be affected by, or perceive an effect from, the algorithm? We might ask whether the algorithm has the potential for discrimination, or whether errors made by the algorithm create risks that negatively impact the public. When a classification decision has negative consequences, looking for false positives, like Content ID identifying fair-use content as infringing, can be a tip pointing to a deeper story. We might also wonder about censorship: How might the algorithm steer public attention and filter information in meaningful patterns?
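To make the false-positive idea concrete, here is a minimal sketch in Python of how one might tally error rates from a hand-audited sample of an algorithm's decisions. The data, field names, and figures are hypothetical illustrations, not drawn from any real Content ID audit.

```python
# Minimal sketch: estimating a false-positive rate from a hand-audited sample.
# All data and field names below are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class AuditedCase:
    algorithm_flagged: bool   # did the algorithm flag the item (e.g., as infringing)?
    human_says_ok: bool       # did a human reviewer judge the item acceptable (e.g., fair use)?

def false_positive_rate(cases: list[AuditedCase]) -> float:
    """Share of human-approved items that the algorithm nonetheless flagged."""
    ok_cases = [c for c in cases if c.human_says_ok]
    if not ok_cases:
        return 0.0
    false_positives = sum(1 for c in ok_cases if c.algorithm_flagged)
    return false_positives / len(ok_cases)

# Hypothetical audit sample: 3 of the 4 human-approved items were flagged anyway.
sample = [
    AuditedCase(algorithm_flagged=True,  human_says_ok=True),
    AuditedCase(algorithm_flagged=True,  human_says_ok=True),
    AuditedCase(algorithm_flagged=False, human_says_ok=True),
    AuditedCase(algorithm_flagged=True,  human_says_ok=True),
    AuditedCase(algorithm_flagged=True,  human_says_ok=False),
]
print(f"False-positive rate in audit sample: {false_positive_rate(sample):.0%}")
```

Even a small audited sample like this can surface a pattern of errors worth pursuing, though any real investigation would need a larger, carefully drawn sample.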

Essentially what we’re looking to identify is an algorithm that has made a bad decision, one that somehow breaks an expectation for how we think it ought to operate. Is the algorithm’s output consistent with what we think it should be? And if not, what’s driving that inconsistency: a bug, an incidental programming decision, or a deep-seated design intent? Observations, tips, and digging through data are all ways that we can identify interesting and significant algorithmic decisions that might warrant accountability reporting.
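As a rough sketch of what "checking an algorithm's output against an expectation" can look like in practice, the snippet below compares positive-decision rates across two groups of similar cases and flags the algorithm when the gap exceeds a tolerance. The expectation, the tolerance, and the sample data are all hypothetical; a real audit would have to justify its own benchmark.

```python
# Minimal sketch: testing whether an algorithm's outputs break a stated expectation
# (here, roughly equal positive rates across two groups). Hypothetical data only.

def positive_rate(decisions: list[bool]) -> float:
    """Fraction of decisions that were positive (e.g., approved, recommended)."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def breaks_expectation(group_a: list[bool], group_b: list[bool],
                       tolerance: float = 0.10) -> bool:
    """Flag the algorithm if the gap in positive rates exceeds the tolerance."""
    return abs(positive_rate(group_a) - positive_rate(group_b)) > tolerance

# Hypothetical outputs for two groups of otherwise similar cases.
group_a = [True, True, True, False, True]    # 80% positive
group_b = [True, False, False, False, True]  # 40% positive
print("Warrants a closer look:", breaks_expectation(group_a, group_b))
```

A flagged gap like this is not proof of discrimination or malfunction, but it is exactly the kind of inconsistency that can justify deeper reporting.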
