This training took place in hybrid format on 1 December from 9:00 to 16:30 CET. It targeted Equality Bodies, with a focus on members of Equinet's Cluster on Artificial Intelligence, and aimed to improve their ability to identify cases of algorithmic discrimination.
Back-to-back with the training, a meeting of Equinet's Cluster on Artificial Intelligence took place (also in hybrid format) on 2 December from 9:00 to 13:00 CET.
Background
The notorious “black box” of automated decision-making, including through Artificial Intelligence (AI)-enabled systems, poses a serious and widely discussed challenge to the effectiveness of non-discrimination law. Underreporting has long been known to undermine the strength of legal protection against discrimination, and AI systems threaten to increase its scale and negative impact exponentially. Because victims of discriminatory algorithms are often unaware that they have been discriminated against, the responsibility for identifying and tackling AI-enabled discrimination could increasingly fall upon Equality Bodies, whether working alone or alongside relevant national sectoral regulators.
The training addressed this challenge by highlighting and exploring the various ways in which Equality Bodies can look for, and ultimately identify, cases of algorithmic discrimination.
Training sessions*
*See the document "Additional Resources" below for all materials shared by participants and speakers during the training and the AI Cluster meeting. Further PowerPoint presentations will be added under the respective sessions below.
Session 1: Uncovering automated bias: Equality Bodies in action
- Christina Jönsson, Equality Ombudsman of Sweden
- Investigation into how government agencies use AI and automated decision-making, and the extent to which they take into account risks of discrimination and barriers to equal rights when doing so. 2022 Report, English summary on pp. 8–10.
- Jessica Wulf, AlgorithmWatch (presenting a project funded by the Federal Anti-Discrimination Agency of Germany, FADA)
- AutoCheck, a guidebook to help staff of public anti-discrimination services better recognize cases of algorithmic discrimination and support those affected
- Training package (replicable template) for workshops on the discrimination risks of automated decision-making systems
Session 2: Critical alliances: media and civil society partners
- Nicolas Kayser-Bril, AlgorithmWatch (investigative journalism)
- Mher Hakobyan, Amnesty International (civil society, including digital rights organizations)
Session 3: Critical alliances: Equality Bodies working with national public regulatory bodies and within government-coordinated platforms
Introductory presentation: Learning from the Council of Europe’s national-level trainings on algorithmic discrimination, which bring different national stakeholders together
- Menno Etema, Council of Europe, Directorate General of Democracy, Anti-Discrimination Department, No Hate Speech and Cooperation Unit
Equality Bodies partnering with national public stakeholders
- Kathinka Theodore Aakenes-Vik, Equality and Anti-discrimination Ombud of Norway
- Valérie Fontaine, Defender of Rights of France (cooperation with data protection authorities; see the summary in the 2020 Good Practice Guide on Equality Bodies and AI Systems)
- Nele Roekens, Interfederal Centre for Equal Opportunities of Belgium/UNIA
If you have any questions regarding this training or experience difficulties accessing the AI website or any of its content, please contact Milla Vidina, Policy Officer, Equinet Secretariat (milla.vidina@equineteurope.org).