Authors: Sandra Wachter, Brent Mittelstadt, Chris Russell
Abstract
There has been much discussion of the “right to explanation” in the EU General Data Protection Regulation, and of its existence, merits, and disadvantages. Implementing a right to explanation that opens the ‘black box’ of algorithmic decision-making faces major legal and technical barriers. Explaining the functionality of complex algorithmic decision-making systems and their rationale in specific cases is a technically challenging problem. Some explanations may offer little meaningful information to data subjects, raising questions around their value. Data controllers also have an interest in not disclosing details of their algorithms, to avoid revealing trade secrets, violating the rights and freedoms of others (e.g. privacy), or allowing data subjects to game or manipulate decision-making.
Explanations of automated decisions need not hinge on the general public understanding how algorithmic systems function. Even though such interpretability is of great importance and should be pursued, explanations can, in principle, be offered without opening the black box. If explanations are instead viewed as a means to help a data subject act, rather than merely understand, their scope and content can be gauged according to the specific goal or action they are intended to support.
From the perspective of individuals affected by automated decision-making, we propose three aims for explanations:
(1) to inform and help the individual understand why a particular decision was reached,
(2) to provide grounds to contest the decision if the outcome is undesired, and
(3) to understand what would need to change in order to receive a desired result in the future, based on the current decision-making model (an illustrative sketch of this follows below).
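To make aim (3) concrete, one could hold the current decision-making model fixed and search for the smallest change to an individual's features that would yield the desired outcome. The sketch below is an illustrative assumption, not the paper's proposal: the toy scoring model, feature names, distance-penalised loss, and all parameter values are invented for demonstration. Notably, the search queries only the model's outputs, consistent with not opening the black box.

```python
# A minimal sketch of aim (3): given a fixed, opaque decision model,
# numerically search for a small change to the input that would yield
# the desired outcome. Everything here is an illustrative assumption.
import numpy as np

def predict(x):
    # Stand-in "black box": a fixed toy credit-scoring rule. The search
    # below only queries its outputs and never inspects its internals.
    weights = np.array([0.6, 0.3, -0.4])               # income, savings, debt
    return 1.0 / (1.0 + np.exp(-(x @ weights - 1.0)))  # P(approval)

def counterfactual(x, target, lam=100.0, steps=3000, lr=0.005, eps=1e-4):
    """Minimise lam * (predict(x') - target)**2 + ||x' - x||_1 over x'
    using finite-difference gradients, so the model stays a black box."""
    def loss(z):
        return lam * (predict(z) - target) ** 2 + np.abs(z - x).sum()
    xc = x.astype(float)
    for _ in range(steps):
        grad = np.zeros_like(xc)
        for i in range(xc.size):                       # central differences
            d = np.zeros_like(xc)
            d[i] = eps
            grad[i] = (loss(xc + d) - loss(xc - d)) / (2 * eps)
        xc -= lr * grad
    return xc

x = np.array([1.0, 0.5, 2.0])        # current applicant: rejected
xc = counterfactual(x, target=0.8)   # what would need to change?
print("current score:       ", predict(x))
print("counterfactual score:", predict(xc))
print("required changes:    ", xc - x)
```

The distance penalty keeps the suggested changes small and therefore actionable: the output is not an account of how the model works internally, but a statement of what the individual would need to alter to obtain the desired result.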