The diversity dilemma in Silicon Valley

Anyone who assumed that Silicon Valley was home not only to the “Frappuccino with hazelnut milk” faction but also to a diversity-friendly political left may have been right on the first point, but was certainly wrong on the second. For some years now, evidence has been mounting that – who would have thought – social diversity and issues such as digital ethics matter only where they do not touch corporate power structures and gushing profits.

Google recently provided further proof of this itself when it fired the respected AI ethicist Timnit Gebru. Gebru co-founded the group “Black in AI” and became known for influential research into the social impact of facial recognition systems. That research showed that such systems misclassify women and non-White people significantly more often than White men. A paper Gebru co-authored in 2018 was widely seen as a “watershed moment” in the effort to expose and address the societal biases embedded in automated decision-making systems.

In firing Gebru, Google diminished not only its own technological ability to address diversity issues in its algorithms but also the diversity of its workforce itself.

Algorithms reproduce and reinforce social discrimination

The conflict between Gebru and her former employer arose from a dispute over a scientific paper she co-authored that criticized new language-analysis systems of the kind also used in Google’s search function. Because these automated systems learn language from “the internet” itself, i.e., from large-scale analysis of the texts people produce in everyday life, they often absorb the same discrimination found in the everyday life of our societies.
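To make that mechanism concrete, here is a minimal sketch using pretrained GloVe word vectors loaded through the gensim library. The model name and the probe words are our own illustrative choices, not taken from Gebru’s paper; the point is simply that associations learned from large text corpora, with no rule ever written by a programmer, encode gender stereotypes.

```python
# Minimal sketch: word vectors trained on large text corpora pick up
# gendered associations. Model choice and word pairs are illustrative.
import gensim.downloader as api

# 50-dimensional GloVe vectors trained on Wikipedia + Gigaword
model = api.load("glove-wiki-gigaword-50")

# Classic analogy probe: "man is to doctor as woman is to ...?"
# Any stereotyped answer reflects regularities in the training text,
# not an explicit rule someone programmed.
print(model.most_similar(positive=["doctor", "woman"],
                         negative=["man"], topn=3))

# How strongly do profession words associate with "he" vs. "she"?
for word in ["engineer", "nurse", "boss", "receptionist"]:
    print(word,
          "he:", round(model.similarity(word, "he"), 3),
          "she:", round(model.similarity(word, "she"), 3))
```

A system built on top of such representations inherits these associations unless they are explicitly measured and corrected.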

According to an experiment by Algorithmwatch, Facebook relies on crude stereotypes to optimize ad placement. Job ads for professions in which women are underrepresented continue to be shown to only a few women. An ad featuring photos of trucks, for instance, was shown to only 12% of female users even though its text was directed at women. In practical terms, this means that Facebook discriminates based on images.

Another recent study shows that Google’s image recognition system assigns attributes to women and men that cement traditional and outdated gender roles. For example, automated systems assigned labels such as “official,” “businessman,” “speaker,” “orator” and “suit” to images of men. Images of women, on the other hand, were linked to labels such as “smile,” “hairstyle,” and “outerwear.”

So how might this problem be addressed? One answer to this question is more diversity among software developers. But here, too, Big Tech companies are lagging behind society.

Silicon Valley is not a haven for social diversity

Silicon Valley has had its own diversity problem for some time. Timnit Gebru’s exit came a year after prominent AI ethicist Meredith Whittaker quit Google, saying that she and other employees had faced internal retaliation for publicly organizing protests against the company’s handling of sexual harassment in the workplace and for speaking out against its handling of AI ethics. Alongside her work at Google, Whittaker co-founded the AI Now Institute at New York University, which is dedicated to ethical issues in artificial intelligence.

More recently, former Google employee Christina Curley also accused her former employer of discriminating against Black candidates in hiring. Curley’s job responsibilities included recruiting new employees with the goal of increasing the company’s diversity. She reported an incident in which a White supervisor referred to her Baltimore dialect as a “disability”; Baltimore has traditionally had a large African-American population.

Google is not alone: many other Silicon Valley companies put little effort into creating a diverse work environment. Coinbase, a start-up that operates an online trading platform for cryptocurrencies, has seen 15 Black employees leave in the last two years; at least 11 of them had previously informed their supervisors or HR that they had been treated in a racist or discriminatory manner.

Pinterest, which had presented itself as a supporter of the Black Lives Matter protests as recently as this summer, waged a small-scale war against two now-former Black female employees who had advocated for a better fact-checking system, and refused to support them when their personal information was leaked to hate websites.

These are just the latest examples of a long-standing structural problem that is also reflected in U.S. employment statistics: while about 12 percent of U.S. workers were Black in 2019, their share in the tech industry was just six percent – and at Facebook, Alphabet, Microsoft, and Twitter it was lower still. Six years of diversity initiatives have produced only low-single-digit gains.

Just as a Frappuccino to go may save time but makes no ecological sense, digital-ethics greenwashing cannot hide the fact that in the ostensibly progressive Silicon Valley, a strong structural conservatism cements White dominance and prevents a workforce that better represents society.

How then can we reduce discrimination?

Generally speaking, Germany’s General Equal Treatment Act stipulates that people who feel discriminated against must prove this discrimination themselves. On social networks such as Facebook, however, this is virtually impossible, as users have no way of finding out what content is not shown to them. One way of remedying this deplorable state of affairs would be to improve the legal situation, as the Federal Antidiscrimination Agency has already called for. So far, however, these calls have gone unheard by politicians.

There is also no simple solution to the problem of discriminatory algorithms. Uncritically designed algorithms trained on publicly available data sets often reproduce existing unequal treatment through “proxy discrimination.” Even when employers explicitly exclude potentially discriminatory variables such as gender, skin color, or religion from their decision criteria, the training data still encodes past discrimination, which creeps back in through correlates of the excluded attributes. In the case of discrimination against women, for example, one such correlate could be how often words like “woman” or “female” appear in applications and resumes.
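The following sketch illustrates this effect with synthetic data (a toy example of our own, not drawn from any real hiring system): a classifier is trained without the gender variable, yet it re-learns the historical bias through a correlated “proxy” feature.

```python
# Toy illustration of "proxy discrimination": historical bias leaks into a
# model through a correlated feature even after the protected attribute
# (gender) is removed from the inputs. Synthetic data, illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

gender = rng.integers(0, 2, n)       # 1 = female; NOT given to the model
skill = rng.normal(0, 1, n)          # true, gender-independent qualification

# Proxy feature, e.g. how often "women's" appears in a resume:
# it says nothing about skill but correlates with gender.
proxy = gender + rng.normal(0, 0.5, n)

# Historical hiring decisions were biased against women.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

# Train WITHOUT the gender column, only on skill and the proxy.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:", model.coef_[0][0])
print("coefficient on proxy:", model.coef_[0][1])  # negative: bias re-learned

# Equally qualified on average, women still receive lower scores:
for label, mask in [("women", gender == 1), ("men", gender == 0)]:
    score = model.predict_proba(X[mask])[:, 1].mean()
    print(f"mean hiring score, {label}: {score:.2f}")
```

Because the proxy is the best available stand-in for the excluded gender variable, the model assigns it a negative weight, and the old discrimination survives the removal of the protected attribute.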

Transparency and traceability of algorithms are the key conditions – insofar as data protection permits – for assessing and countering their discriminatory effects. At present, social networks such as Facebook do their utmost to prevent the disclosure of information about their decision-making and ranking algorithms. With the current draft of the European Commission’s Digital Services Act, however, there could be some movement here.

The draft stipulates that “very large platforms” (those exceeding 45 million monthly users) must provide information interfaces (APIs) that include data on advertising and targeting criteria. In addition, vetted researchers will be given access to platform data in order to evaluate the risks that algorithms pose to fundamental rights and public discourse, and independent evaluators, regulators, and national coordinating institutions would be allowed to conduct audits. According to Algorithmwatch, the draft is a good start but does not go far enough: NGOs will not be able to access platform data, and questions remain about the law’s enforceability.

Notwithstanding all efforts to regulate algorithms, however, another important problem remains: humans and the decisions they make when programming algorithms are the ultimate black box. It is therefore not enough to document automated decisions; the human value judgments behind them must also be made explicit. Where algorithms are built indiscriminately and uncritically on publicly available texts, they simply adopt the values, norms, and prejudices inherent in those texts. We therefore urgently need ethical guiding principles against which these algorithms can be measured. And by ethics we do not mean the values people happen to have acquired by chance, but those they ought to have. For this reason, the value judgments that automated decision-making systems make millions of times a day should be publicly debated, weighed, and prioritized, so that they correspond to our goals as a society rather than simply reflecting an outdated past.
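One concrete form such documentation could take – a sketch of our own, loosely inspired by the “model cards” approach that Gebru helped develop, with all field names purely illustrative – is a machine-readable record of the value judgments baked into a system:

```python
# Sketch: making human value judgments explicit alongside an automated
# system, loosely in the spirit of "model cards". Fields are illustrative.
from dataclasses import dataclass

@dataclass
class ValueJudgmentRecord:
    system: str
    decision: str                # what the system decides
    excluded_attributes: list    # protected variables deliberately left out
    known_proxies: list          # correlates that may re-introduce them
    tradeoff: str                # the normative choice, stated explicitly
    reviewed_by: str             # who signed off, so judgments are traceable

card = ValueJudgmentRecord(
    system="resume-ranker-v2",                    # hypothetical system
    decision="orders job applications for human review",
    excluded_attributes=["gender", "religion", "skin color"],
    known_proxies=["mentions of women's organizations", "first names"],
    tradeoff="accept a small drop in ranking accuracy to equalize "
             "selection rates across genders",
    reviewed_by="ethics board, 2021-01-15",
)
print(card)
```

Such a record does not solve the underlying ethical questions, but it turns implicit programmer choices into something that can be audited and publicly debated.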
