Accountable Information Use: Privacy and Fairness in Decision-Making Systems
Increasingly, decisions and actions affecting people's lives are determined by automated systems processing personal data. Excitement over the positive contributions of these systems has been accompanied by serious concerns about their opacity and the threats they pose to privacy, fairness, and other values. Recognizing these concerns, this project seeks to make real-world automated decision-making systems accountable for privacy and fairness. "Accountable" here refers to computational mechanisms that "account for" a system's behavior, supporting both the detection of privacy and fairness violations and explanations of how they came about. That understanding is then leveraged to repair systems so that future violations are avoided with minimal impact on utility goals. The researchers thus view accountability as a means of ensuring privacy and fairness for data subjects. At the same time, they must balance the access to systems that accountability requires against threats to data processors' intellectual property (e.g., machine learning models or algorithms) and the confidentiality expectations of data subjects who contributed the data used to train the systems.

The technical work is informed by, and applied to, three significant application domains--online advertising, healthcare, and criminal justice--in collaboration with domain experts. Each of these domains has already witnessed compelling new functionality enabled by automated decision-making systems, and each will thus provide a vehicle for validating the project's methods and for achieving impact on practice.
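To make concrete what detecting a fairness violation can involve, the sketch below audits a log of binary decisions against the well-known "80% rule" for disparate impact. It is an illustrative example only, not the project's method; all names and data in it are hypothetical.

```python
# Illustrative sketch only, not the project's accountability mechanism.
# Checks recorded binary decisions against the "80% rule" for disparate
# impact; all names and data here are hypothetical.

def selection_rate(decisions, in_group):
    """Fraction of favorable decisions among members of one group."""
    group = [d for d, g in zip(decisions, in_group) if g]
    return sum(group) / len(group) if group else 0.0

def disparate_impact_ratio(decisions, protected):
    """Protected group's selection rate divided by the unprotected
    group's; a ratio below 0.8 flags a potential fairness violation."""
    rate_protected = selection_rate(decisions, protected)
    rate_unprotected = selection_rate(decisions, [not g for g in protected])
    return rate_protected / rate_unprotected if rate_unprotected else float("inf")

# Hypothetical audit log: 1 = favorable decision, 0 = unfavorable.
decisions = [1, 0, 0, 0, 1, 1, 0, 1]
protected = [True, True, True, True, False, False, False, False]

ratio = disparate_impact_ratio(decisions, protected)
if ratio < 0.8:
    print(f"Potential disparate impact: ratio = {ratio:.2f}")
```

An accountability mechanism in the project's sense would go beyond such detection, also explaining which inputs drove the disparity and guiding repair.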
This is a collaborative project with researchers at Cornell Tech and CMU. Michael Tschantz is leading ICSI's effort, which focuses on formalizing privacy and nondiscrimination properties.
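As one example of what a formalized privacy property looks like, the sketch below implements the textbook Laplace mechanism, which satisfies epsilon-differential privacy for counting queries. This is a standard illustration under assumed names, not ICSI's formalization.

```python
# Textbook illustration of one formal privacy property, not ICSI's
# formalization: the Laplace mechanism satisfies epsilon-differential
# privacy for counting queries, whose sensitivity is 1.
import random

def laplace_noise(scale):
    # The difference of two independent exponentials with mean `scale`
    # is distributed as Laplace(0, scale).
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon):
    """Release a noisy count satisfying epsilon-differential privacy:
    adding or removing one record changes the true count by at most 1,
    so Laplace noise with scale 1/epsilon suffices."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage from the healthcare domain: count patients over 65.
patients = [{"age": 70}, {"age": 42}, {"age": 67}, {"age": 55}]
print(private_count(patients, lambda p: p["age"] > 65, epsilon=0.5))
```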
Funding provided by the National Science Foundation.