Technological Due Process

Document Type

Presentation

Publication Date

4-30-2009

Keywords

cyber law, administrative law, information technology

Comments

This lecture was delivered on April 30, 2009 at the Center for Information Technology Policy at Princeton University.

Abstract

Today, computer systems terminate Medicaid benefits, remove voters from the rolls, exclude travelers from flying on commercial airlines, label (and often mislabel) individuals as dead-beat parents, and flag people as possible terrorists from their email and telephone records. But when an automated system rules against an individual, that person often has no way of knowing if a defective algorithm, erroneous facts, or some combination of the two produced the decision. Research showing strong psychological tendencies to defer to automated systems suggests that a hearing officer’s check on computer decisions will have limited value.

At the same time, automation impairs participatory rulemaking, the traditional stand-in for individualized due process. Computer programmers routinely alter policy when translating it from human language into computer code. An automated system’s opacity compounds this problem by preventing individuals and courts from ascertaining the degree to which the code departs from established rules. Programmers are thus delegated vast and effectively unreviewable discretion in formulating policy.

Professor Citron will discuss a concept of technological due process that can vindicate the norms underlying last century’s procedural protections. A carefully structured inquisitorial model of quality control can partially replace aspects of adversarial justice that automation renders ineffectual. Her proposal provides a framework of mechanisms capable of enhancing the accuracy of rules embedded in automated decision-making systems.

Disciplines

Internet Law
