Study "Diskriminierungsrisiken durch Verwendung von Algorithmen" ("Discrimination Risks through the Use of Algorithms"), Dr. Carsten Orwat

Published by:
Equinet members
Background Material



Algorithms: The study, prepared with a grant from the Federal Anti-Discrimination Agency (FADA), a National Equality Body and Equinet member, focuses on algorithms used for data processing and for the semi- or fully automated implementation of decision rules that differentiate between individuals. Such differentiations relate to commercial products, services, positions or payments, as well as to state decisions and actions that affect individual freedoms or the distribution of services.
Discrimination: Algorithm-based differentiations become discriminatory if they lead to the unjustified disadvantaging of persons with legally protected characteristics, in particular age, sex, ethnic origin, religion, sexual orientation, or disability. The study describes cases in which algorithm- and data-based differentiations have been legally classified as discrimination, or which are analysed or discussed as risks of discrimination.

Surrogate information: Algorithm- and data-based differentiations often exhibit the characteristics of so-called statistical discrimination. Typical of this kind of discrimination is the use of surrogate information, surrogate variables or proxies (e.g. age) to differentiate, because the original distinguishing characteristics (e.g. labour productivity) are difficult for decision-makers to determine by examining individual cases. These surrogate variables can themselves be protected characteristics, or they can be correlated with protected characteristics. With algorithmic methods of data mining and machine learning, complex models with a large number of variables can be used instead of one or a few surrogate variables.
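The proxy mechanism described above can be sketched in a few lines of code. This is a minimal illustration with synthetic data and hypothetical variable names (the group labels and the "postcode" proxy are invented for the example, not taken from the study): a decision rule that never sees the protected characteristic, but uses a variable correlated with it, still produces unequal acceptance rates across groups.

```python
import random

random.seed(0)

def synth_person(group):
    # Hypothetical setup: the proxy (living in a "high" postcode band)
    # is strongly correlated with group membership.
    postcode_high = random.random() < (0.8 if group == "A" else 0.2)
    return {"group": group, "postcode_high": postcode_high}

people = ([synth_person("A") for _ in range(1000)]
          + [synth_person("B") for _ in range(1000)])

def decide(person):
    # The rule uses only the proxy variable, never the protected
    # characteristic itself.
    return person["postcode_high"]

def acceptance_rate(group):
    members = [p for p in people if p["group"] == group]
    return sum(decide(p) for p in members) / len(members)

print(acceptance_rate("A"))  # ~0.8
print(acceptance_rate("B"))  # ~0.2
```

Even though the protected characteristic is excluded from the decision rule, the correlated proxy reproduces the group disparity; a machine-learning model trained on many such variables can do the same in a less visible way.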

Societal risks: The legitimacy of such differentiations is often justified on the grounds of efficiency in overcoming information deficits. However, they also involve societal risks, such as injustice through generalisation, the treatment of humans as mere objects, restrictions on the free development of personality, accumulation effects, growing inequality, and risks to societal goals of equality and social policy. Many discrimination risks in developing and using algorithms result from the use of data reflecting past unequal treatment.

In particular, the use of artificial intelligence algorithms and applications in automated decision-making may require, by legal provision, that the entities using them assess discrimination risks; document, among other things, the systems' functioning and decision rules; and ensure explainability, also with regard to possible consequences such as unequal treatment. Such documentation should be accessible to equality bodies in cases of suspected discrimination, with the right of access regulated by law.

Other (potential) tasks of equality bodies include advising entities that develop and implement algorithms on the prevention of discrimination, and (mandatory) involvement in public procurement procedures for algorithm-based systems that are particularly prone to discrimination.