Equinet's second AI training - Legal Advice Clinic with a practical focus on litigation - took place in February 2022. During this training, specific legal cases (both real and fictitious) involving suspected algorithmic discrimination and brought by National Equality Bodies were discussed. This training was aimed at NEB staff interested in AI, as well as NEB staff who are involved in, or are interested in becoming involved in, AI-related litigation.
Training Description
The expansion and diversification in the uses of AI systems across Europe has led and will continue to lead to numerous legislative developments at both European and national levels. In this context, strategic litigation is essential to give direction to these new legal developments and guide the interpretation of existing law. The history, mandate, and powers of European national equality bodies (NEBs) make them uniquely well placed to undertake such test and strategic litigation. Thus, one of the central recommendations of Equinet’s Report “Regulating for an Equal AI: a New Role for Equality Bodies” states that Equinet’s members should consider bringing, supporting, or funding litigation that challenges discriminatory technology.
To equip our members with the relevant tools to engage in such litigation, Equinet held a novel capacity-building opportunity on AI, which built upon the knowledge base created by our first training in April 2021. This training took the form of a legal advice clinic, providing concrete guidance on specific legal cases (both real and fictitious) brought by National Equality Bodies and involving suspected algorithmic discrimination. Dee Masters, an experienced equality and AI litigator and co-author of the Equinet Report on AI, served as the clinic's expert adviser.
In the four weeks prior to the training, Equinet launched a moderated online AI discussion Forum on this website (see under the discussion category "Independent assistance to victims"), offering each week a different topical discussion dedicated to one stage in bringing AI-related non-discrimination litigation: 1) finding cases; 2) assessing cases based on existing non-discrimination law; 3) finding evidence; and 4) determining liability. Check out these online discussions for relevant learning resources and good practice examples from equality bodies in Europe.
Remember that you need to log in in order to access the Forum. If you don't have an account yet, you can start the registration process here.
Training Objectives
The overarching objective of this training was to enable equality bodies to continue to effectively fulfil their mandate of providing independent assistance to victims of discrimination in pursuing their complaints about discrimination in the now changed context of AI-enabled technologies. This training specifically aimed to:
- Provide tailored support to equality bodies in identifying and assessing problematic AI practices that could give rise to equality litigation at the national level;
- Develop example litigation strategies for up to three individual cases submitted by equality bodies;
- Identify and clarify ways of fostering the involvement of relevant national stakeholders, such as civil society organisations and national regulators, in pursuing AI-related equality litigation for the identified cases.
Training Resources
Sign in to the website to access the materials discussed in the training, which are now available in our library:
- Setting the scene - introductory presentation by Dee Masters
- UK case - case presentation and draft legal analysis
- Netherlands case - case presentation and draft legal analysis
Training recordings
Sign in to the website to access the video recordings of the training, which are now available in our library:
- 17 February 2022 - Setting the scene and Case Study I: Dee Masters provides an overview of the life cycle of an equality legal case in the context of online recruitment discrimination, and Alexander Hoogenboom discusses the case study of the Netherlands Institute for Human Rights investigating a case of discrimination related to an automated decision-making system used by the Dutch Tax Authorities
- 17 February 2022 - Q&A session moderated by Dee Masters and Closing remarks by Milla Vidina from Equinet
- 18 February 2022 - Case Study II: Nicholas Williams, Vyaj Lovejoy, and Leona Bashow from the Equality and Human Rights Commission in the United Kingdom present a case of discrimination related to the algorithmic allocation of school places by local authorities in England and Wales
- 18 February 2022 - Q&A session moderated by Dee Masters