Technology, AI and ethics.

Diversity for algorithms!

 


by Wolfgang Gründinger

Algorithms can be discriminatory, be it intentionally or unintentionally. This is a social problem that should cause concern. Software already exists today that discriminates against certain groups in society, particularly those who are already marginalized. For example: Google searches have shown ads for lower-paid jobs to women more often than to men. Automatic face recognition works best for white men and worst for black women. COMPAS, an algorithmic decision system used in US courts, recommended early release on probation for African-American detainees much less often than for white offenders.

Algorithms usually discriminate when the programmed rules according to which the code works are incorrect, or when the data on which the algorithms are trained in machine learning is distorted and hence not valid. Data often simply reproduces an unjust reality. For example: if more men named Thomas sit on the boards of Dow Jones-listed corporations than women in total, then an algorithm “learns” that women are apparently not suitable for management positions. It therefore comes as no surprise that Google shows higher-paid jobs primarily to men. Machines can only detect correlations, not actual causal relationships. Here the human being is required as a verifier in order not to draw the wrong conclusions.
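This mechanism can be sketched in a few lines of Python (a minimal illustration with made-up numbers, not a real system): a naive “hiring model” that estimates suitability for board positions from raw historical frequencies simply reproduces the skew in its training data, even though gender carries no causal signal.

```python
# Toy illustration (hypothetical data): a frequency-based "model" trained
# on a skewed history reproduces that skew as if it were a real pattern.

from collections import Counter

# Hypothetical historical records: the imbalance reflects past injustice,
# not actual suitability.
history = (
    [("male", "board")] * 28 + [("male", "staff")] * 72 +
    [("female", "board")] * 2 + [("female", "staff")] * 98
)

def learned_rate(data, gender):
    """P(board | gender) as the model 'learns' it from raw frequencies."""
    roles = [role for g, role in data if g == gender]
    return Counter(roles)["board"] / len(roles)

print(learned_rate(history, "male"))    # 0.28
print(learned_rate(history, "female"))  # 0.02
```

The model has merely memorized a correlation in unjust historical data; without a human verifier asking whether the correlation is causal, it will present the skew as a prediction.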

However, human decisions do not necessarily lead to less discrimination than algorithmic ones. For instance, job applicants with foreign-sounding names have significantly worse chances despite having the same qualifications. Even at school, children with foreign names receive worse marks despite the same performance. The “stop and frisk” policy (a police practice of stopping and searching civilians, for instance at train stations) is mainly applied to people of color, even though racial profiling is outlawed.

Discrimination must be combated decisively – no matter whether it happens in the “analogue” world or in the digital sphere. Algorithms can help uncover and correct often-unconscious prejudices. On balance, they can therefore lead to less discrimination.

Diversity is therefore not an end in itself, but the basis for the functionality of algorithms – and for their legal admissibility. In the EU, the European Anti-Discrimination Directive prohibits discrimination on the basis of gender, ethnic origin, religion, and a number of other characteristics. The EU’s recent General Data Protection Regulation sets strict limits on automated decision-making systems.

Diversity is also becoming more and more important from an economic point of view: especially in view of increasingly complex demands on pace, innovation and flexibility in a globalized and dynamic economy, diversity can become a competitive factor. In January 2019, the World Economic Forum in Davos presented a report claiming that greater diversity contributes to higher profitability and helps companies attract better talent.

Businesses should step up the promotion of STEM subjects – science, technology, engineering, and mathematics – and related disciplines among women, diverse social groups, and minorities. After all, mixed teams are more aware of discrimination issues.

However, awareness of discrimination and the need for diversity should not only apply to the coding teams themselves, but equally to management, clients and users, who need to develop an understanding of what a machine-generated result means and what it does not mean. Another reasonable measure is training to raise awareness of unconscious biases. Furthermore, the training data used in machine learning should be checked for discriminatory effects.
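One simple form such a check can take is sketched below (a minimal illustration with made-up data, not a production audit): compare the rate of positive outcomes per group and flag violations of the “four-fifths rule”, a common rule of thumb from US employment practice which says that no group’s selection rate should fall below 80% of the highest group’s rate.

```python
# Toy bias audit (hypothetical data): per-group selection rates plus a
# four-fifths-rule check over a list of (group, selected) records.

def selection_rates(records):
    """records: list of (group, selected: bool) -> {group: selection rate}"""
    totals, hits = {}, {}
    for group, selected in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(selected)
    return {g: hits[g] / totals[g] for g in totals}

def four_fifths_check(records):
    """True per group iff its rate is at least 80% of the best group's rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical audit sample: group A is selected 50% of the time, group B 25%.
data = (
    [("A", True)] * 5 + [("A", False)] * 5 +
    [("B", True)] * 2 + [("B", False)] * 6
)
print(four_fifths_check(data))  # {'A': True, 'B': False}
```

A failed check does not prove unlawful discrimination, but it is a cheap early-warning signal that the data or the model deserves human scrutiny.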

Governments can play their part by, for example, making programming compulsory in schools and addressing discrimination issues in those lessons. Above all, state authorities need sufficient skills and resources to assess algorithms. The US Food and Drug Administration (FDA), which also approves digital medical devices, needs appropriate funding and human expertise to monitor the application of algorithms in medicine. The same holds for financial market regulation (where algorithms are used in high-frequency trading), vehicle licensing (where autonomous cars must fulfill safety standards), the courts (where algorithms are used to assess offenders’ recidivism risk) and other regulatory and governmental bodies.

The equality of all people is a social value and not just a technical issue. No matter whether digital or analogue, it is not anonymous machines, but the human being who is and remains the last instance and central figure in ethical decisions.

Wolfgang Gründinger

Wolfgang Gründinger, one of the most prestigious young thought leaders in Europe, is a multiple-award-winning book author, ambassador of the Foundation for the Rights of Future Generations, European Digital Leader of the World Economic Forum, and advisor on digital ethics at the German Association of the Digital Economy. His many awards include the German Environment Award, the German Studies Award, the Award for Intergenerational Justice and the Award for Demography Studies. Wolfgang holds a Master in Political and Social Sciences, studied at the University of Regensburg, the Humboldt University in Berlin and the University of California, Santa Cruz (UCSC), and attended the Oxford Internet Leadership Academy. Photo credit (C) David Ausserhofer.
