Algorithmic Discrimination in Europe: Challenges and Opportunities for EU Equality Law

By Janneke Gerards, Utrecht University Law School and Raphaële Xenidis, Edinburgh University Law School and iCourts, Copenhagen University


In early 2020, the European Commission recognised in the preamble of its White Paper on Artificial Intelligence that AI ‘entails a number of potential risks’, including ‘gender-based or other kinds of discrimination’. It therefore deemed it ‘important to assess whether [EU law] can be enforced adequately to address the risks that AI systems create, or whether adjustments are needed’. This is precisely one of the questions we have explored in a forthcoming report on algorithmic discrimination in Europe: Is EU equality law fit to capture the particular forms of discrimination arising from the growing use of algorithmic applications in all areas of life?

Recent media headlines have drawn attention to the concrete consequences of algorithmic bias. Although much of the discussion has focused on the US, our report shows that algorithmic applications are also widely used in Europe, both in the public sector (for example, to support labour market policies, social welfare, education, policing and fraud detection, the administration of justice, and media regulation) and in the private sector (for example, in areas such as employment and platform work, banking and insurance, targeted advertising, price-setting and retail activities). Hence, the European Commission is right to point out that the risks of algorithmic discrimination are potentially widespread and pervasive.

The existence of such risks highlights the importance of robust EU-wide gender equality and non-discrimination legal safeguards. Yet a number of problems arise here, in particular in relation to machine-learning algorithms, which autonomously evolve to optimise given outcomes based on input data. In this blog post, we focus on four central gaps, mismatches and difficulties. At the same time, we note that algorithms may also bring benefits and opportunities for the enforcement of EU equality law.

Legal gaps in EU equality law

The first problem relates to the patchy scope of EU equality law. EU law prohibits discrimination in the sphere of employment with regard to sex, racial or ethnic origin, disability, religion or belief, sexual orientation and age. However, only discrimination based on sex and on racial or ethnic origin is also prohibited in relation to the access to and supply of goods and services, and only discrimination on grounds of racial or ethnic origin is prohibited in the spheres of education, media and advertising.[1] These limits are problematic because machine learning is widely used in the market for goods and services, for instance to personalise prices and offers for consumers, to assess the risk that a client will default on a loan, to support diagnoses in the health sector, and more generally to profile users for marketing purposes in a wide range of areas. This type of algorithmically induced discrimination currently escapes the grasp of EU law if it pertains to disability, religion or belief, sexual orientation or age, leaving parts of the EU population without any remedy.[2]

Conceptual mismatches: intersectional and proxy discrimination

Second, the forms of discrimination arising from algorithmic applications do not neatly fit the protected grounds of discrimination that lie at the basis of EU equality law. Algorithmic profiling techniques, combined with the mining of large amounts of personal and behavioural data, are likely to produce highly granular categorisations that do not map neatly onto the protected grounds defined in EU law. Algorithmic profiling may thus lead to intersectional forms of discrimination, yet the CJEU declined to recognise intersectionality in its Parris judgment (2016). In addition, although programmers might try to avoid the direct use of sex, racial or ethnic origin, disability, religion or belief, sexual orientation and age as input variables in algorithmic procedures, ‘blinding’ an algorithm in this way does not suffice to ensure equality. Machine-learning algorithms are able to detect proxies for these grounds even when the grounds themselves are removed from the available data. This type of proxy discrimination raises the question of how closely a differentiation criterion must be connected to a protected ground for differential treatment to fall within the scope of EU non-discrimination law. The answer to this question remains uncertain in light of the Court’s refusal to consider an applicant’s country of birth as a proxy for his ethnicity in Jyske Finans (2017).
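For readers less familiar with how proxy discrimination arises technically, the following minimal Python sketch illustrates the mechanism. It uses synthetic data and hypothetical variable names (‘postcode’, ‘income’) that are our own illustrative assumptions, not drawn from any real system or from the report: even though the protected ground is removed from the model’s inputs, a simple classifier can reconstruct it from correlated, ostensibly neutral features.

```python
# Illustrative sketch only: synthetic data and hypothetical variable names
# ('postcode', 'income'); not drawn from any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 5_000

# Protected ground (e.g. ethnic origin), deliberately excluded from the model's inputs.
group = rng.integers(0, 2, size=n)

# 'Neutral' features: postcode correlates strongly with the protected ground
# (e.g. through residential segregation); income (in thousands) correlates weakly.
postcode = (group + (rng.random(n) < 0.15)) % 2
income = rng.normal(30 + 5 * group, 8, size=n)

X = np.column_stack([postcode, income])  # the protected ground itself is not included
X_train, X_test, g_train, g_test = train_test_split(X, group, random_state=0)

# A simple classifier can nonetheless reconstruct the protected ground from proxies alone.
clf = LogisticRegression().fit(X_train, g_train)
print('protected ground recovered from proxies with accuracy:',
      round(accuracy_score(g_test, clf.predict(X_test)), 2))  # roughly 0.85 under these assumptions
```

Any downstream model trained on such proxy-laden features can therefore reproduce group-based disadvantage even though the protected ground was never an explicit input.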

Doctrinal frictions: direct vs. indirect discrimination

Third, discrimination arising from the use of algorithmic applications is difficult to shoehorn into the types of discrimination defined in EU non-discrimination law. Tracing differential treatment back to a protected ground may prove impossible because of the ‘black box’ nature of many algorithms, making it very difficult to prove a case of direct discrimination, defined as a situation in which ‘one person is treated less favourably than another is, has been or would be treated in a comparable situation’. To a certain degree, this problem may be solved by relying on the concept of indirect discrimination, understood as a situation ‘where an apparently neutral provision, criterion or practice would put [members of a protected category] at a particular disadvantage compared with other persons’. Yet indirect discrimination can be justified if it pursues a legitimate aim and the means of achieving that aim are appropriate and necessary; the ensuing balancing exercise might easily amount to a trade-off between predictive accuracy and social justice.
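To make the notion of ‘particular disadvantage’ more concrete, the sketch below shows one possible statistical way of quantifying disparate outcomes of an algorithmic decision, using synthetic data and a simple comparison of selection rates. The variable names, numbers and the ratio-based heuristic are our own assumptions for illustration; they are not the legal test applied by the CJEU.

```python
# Illustrative sketch only: one possible statistical operationalisation of
# 'particular disadvantage', not the legal test itself. Names and numbers are assumptions.
import numpy as np

def selection_rates(outcome: np.ndarray, protected: np.ndarray) -> tuple[float, float]:
    """Share of favourable outcomes inside and outside the protected group."""
    return outcome[protected == 1].mean(), outcome[protected == 0].mean()

# Hypothetical decisions of a credit-scoring model (1 = loan granted).
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=1_000)
outcome = (rng.random(1_000) < np.where(protected == 1, 0.45, 0.70)).astype(int)

rate_in, rate_out = selection_rates(outcome, protected)
print(f'selection rate (protected group): {rate_in:.2f}')
print(f'selection rate (others):          {rate_out:.2f}')
print(f'impact ratio:                     {rate_in / rate_out:.2f}')  # well below parity here
```

A markedly lower selection rate for the protected group is the kind of statistical evidence that could help establish a prima facie case of indirect discrimination, after which the justification and proportionality analysis described above would follow.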

Enforcement difficulties: proof, responsibility and liability

Fourth, in the algorithmic society, enforcing equality rights is challenging for victims of discrimination. Establishing prima facie evidence that discrimination has taken place faces hurdles in the form of a lack of transparency and explainability, as well as the protection of business secrets and intellectual property rights. In addition, allocating responsibility and liability for discrimination through algorithms is complicated by the intricacies of human-machine interactions (e.g. human agency vs. automation bias), the composite nature of complex AI systems, which involve several algorithms, the fragmentation of algorithmic systems across several actors, from programmers to end users, and the global dimension of these systems.

A call for public policy and regulation: ‘PROTECT’

Despite this bleak picture, AI also offers key opportunities for monitoring, detecting, preventing and mitigating discrimination. For instance, our report shows how AI might make the large-scale detection of discriminatory job adverts easier and improve gender equality in recruitment processes. If adequate public policy and robust regulation are put in place (see our proposal for an integrated framework called ‘PROTECT’), algorithmic decision-making might even fare better at avoiding discrimination than human decision-makers. The good news is that a wide range of public and private good-practice examples already exist at the national level in Europe. These range from efforts to monitor the discriminatory effects of algorithmic applications to policies aimed at diversifying the relevant professional communities. One thing is certain: societal awareness of, and robust legal safeguards against, structural inequality and discrimination are necessary for the algorithmic society to respect the fundamental right to equality.


[1] Article 19 TFEU and Directives 2000/43/EC, 2000/78/EC, 2004/113/EC and 2006/54/EC.

[2] Relevant legal provisions at the national level might of course mitigate this problem, since EU law lays down only minimum requirements.