When Billboards Stare Back – How Cities Can Reclaim The Digital Public Space

Over the past year and a half I have worked on an exciting project solicited by NESTA, the Cities’ Coalition for Digital Rights (CC4DR) and the City of Amsterdam. This blog entry summarises the resulting report, which has been published on NESTA’s web pages. It sheds light on a development that runs under the radar in many urban areas: the use of privacy-infringing sensors by commercial actors in what is increasingly becoming known as “the digital public space”.

This term seeks to highlight how physical public areas – town squares, pedestrian zones, but also shopping centres and bus stops – are increasingly subject to unfettered digitalisation, with commercial sensors tracking eye movements to gather feedback on digital advertisements, or cameras recognising faces in shopping centres. One main focus lies on publicly used spaces in private ownership, also known by the abbreviation “POPS”. I make this distinction because it has repercussions for how well the EU General Data Protection Regulation (GDPR), as the relevant regulation, is realistically being followed, given the well-known enforcement gap of the regulation in the private sector.

As mass production and technological innovation lead to ever lower costs, companies have expanded their use of sensors in physical spaces in recent years. New services based on augmented reality applications, or the virtualisation of real physical spaces (the “Metaverse”), require precise 3D data of city spaces that are costly to produce. As the internet increasingly turns into the screen on which physical cities are being rebuilt, high-quality data on public spaces are in high demand. Consequently, mapping physical space has become the business model of a sizeable number of companies. Given the aforementioned implementation deficit of the GDPR in the offline world, many companies do not properly comply with the regulation or perform the self-critical assessment the EU legislation foresees for such cases, but “bend” their use case to fit the legal criteria. As one interviewee put it: “Companies are making 3D maps of our city without city hall knowing it.”

Where such “renegade” sensors are installed by commercial actors without making sure citizens, or at least their city administrations, can properly consent to such services, citizens become unwitting objects of pervasive privacy infringements that they have no chance to opt out of. This not only poses a potential violation of a basic human right but can also damage trust in public representation, as there is very little accountability for private monitoring in public space. As things stand, there are no formal procedures cities can follow to alleviate these problems, as they neither have a say about such practices nor even know about them in the first place.

For this report I have interviewed a number of smart city representatives from all over Europe to learn about their approaches and experiences in dealing with this and similar problems. Not to spoil the tension too much, but many smart city leaders are not yet aware of these potential privacy risks to their citizens.

The landscape of private sensors in public spaces

In order to enable and empower cities to actively shape how technological progress plays out in their public spaces, I briefly illustrate two particularly pertinent technological developments that have recently seen widespread adoption, following the maturing of the underlying technologies.

Interactive advertising

Generating feedback on advertisements allows ad producers to better target them to certain audiences, increase efficiency and thereby improve return on investment. Feedback is generated primarily through cameras, which read spectators’ glances, faces or body movements in reaction to the exhibited ad content. Newer applications integrate photos or live video of spectators’ faces or bodies into a (moving) image or scenery on electronic billboards. The reactions, captured through retina tracking or analysis of body movements, are then analysed by algorithms that draw conclusions about responses to the ad content. These forms of augmented reality advertisement, or “interactive advertising”, can be found on street furniture, pedestrian lanes, in bus shelters and shopping malls, and the ad-centred use of retina tracking and facial recognition has proliferated across Europe and the world. With an outdoor advertising market poised to eclipse that of newspapers, it stands to reason that such feedback-generating digital billboards will only increase in the future.

High-resolution digital images of faces or retinas are the raw material of algorithmic analysis that can identify people, tie them to places at certain times and thus be used to produce movement profiles. There are numerous techniques that can, to varying degrees of reliability, use such footage to derive what the EU calls “sensitive data” – eye-tracking data being a prime example.

Eye-tracking-enabled advertising has been deployed on several occasions, among others in the Netherlands and Belgium. Billboards were put up in (underground) train stations and other spaces which once used to be publicly owned but have been privatised in many countries in past decades. It cannot be reconstructed whether the companies had filled out data protection impact assessments in which they could have argued to what extent the operating company had a “legitimate interest” justifying the usage of this technology. The city administrations, among others in Ghent and Amsterdam, were neither informed nor asked for permission. No signs alerted passers-by to the existence of cameras. Only after a public outcry and media attention did the responsible company remove these billboards.

Integrating such eye-tracking capabilities into digital billboards is not particularly new; the first applications are as much as 15 years old. The technique makes a great use case for advertisers since it makes visible what viewers are actually looking at. Eye tracking “makes the subconscious gaze visible and can reproduce gaze data even if the poster was just viewed for a split second”, as a company marketing this technology puts it.

Collecting such precise and highly valuable data comes at a price: eye-tracking data can also be used to reconstruct implicit information about a person’s “biometric identity, gender, age, ethnicity, body weight, personality traits, drug consumption habits, emotional state, skills and abilities, fears, interests, and sexual preferences.” Certain eye-tracking measures may even “reveal specific cognitive processes and can be used to diagnose various physical and mental health conditions”. Even where third-party actors collect just the images, such techniques can always be applied to them ex post. Should such information about specific individuals become public, it can be devastating on a personal level.

Automatic number plate recognition

Systems that scan and recognise number plates are ubiquitous and are applied in several use case scenarios by private companies all over Europe. One particular use is a case in point, since it went unnoticed by City Hall until it was forbidden.

There have been cases of debt collection companies using automatic number plate recognition to get hold of people who have, for instance, defaulted on their loans and are not responding to attempts to contact them. For this purpose, company cars were fitted with camera sensors to circle entire cities, filming and processing the number plate data of encountered cars in order to identify debtors. Such cases have so far been prevalent in the US, with Amsterdam being a notable example in Europe. The city has since banned the service, so the details of the company’s approach are unclear. From a technical standpoint, it must have collected at least tens of thousands of registration images and the locations of the corresponding cars, even if some filter had been applied and non-relevant data deleted.

According to the GDPR, licence plate data is not only personal data but personally identifiable information (PII) – data that can be used to clearly identify an individual, such as passport numbers, fingerprints, or number plates. The practice of licence plate scanning is permissible in the context of parking garages (again, in GDPR terms) provided customers are adequately notified through appropriate signs that their registration plates will be scanned and processed. In contrast to parking garages, companies indiscriminately scanning all available number plates with the intention of identifying car holders – including those of uninvolved bystanders, and without notification – act unlawfully under the GDPR.

In terms of lawfulness, a case could be made that if only fragments of the registration plates were scanned, this would not constitute PII. At the same time, this argument seems unlikely to be invoked, given that these techniques are usually applied precisely to identify car holders, especially in use cases such as debt collection. Such use cases have been rarer, but certainly need to be mentioned, as they violate a sizeable number of citizens’ privacy. Given that, despite the clear unlawfulness of this practice, there are so far no means to stop such surveillance from taking place, this is a case in point that illustrates why the field needs cities’ attention.
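The distinction between full plates and fragments can be made concrete in code. The following Python sketch is purely illustrative – the function names, plate format and salt are my own invention, not taken from any actual system: a salted hash still allows re-identifying the same car across sightings and thus remains personal data under the GDPR, while a truncated fragment is shared by many cars and, taken alone, no longer singles out a holder.

```python
import hashlib

def pseudonymise_plate(plate: str, salt: str) -> str:
    """Replace a full registration number with a salted hash.
    The same plate always yields the same hash, so sightings can
    still be linked -- this is pseudonymisation, not anonymisation,
    and the output remains personal data under the GDPR."""
    return hashlib.sha256((salt + plate).encode("utf-8")).hexdigest()[:16]

def truncate_plate(plate: str, keep: int = 3) -> str:
    """Keep only the first few characters of the plate.
    Many cars share the same fragment, so a single sighting no
    longer identifies an individual holder -- the 'fragments'
    argument discussed above."""
    return plate[:keep] + "*" * max(len(plate) - keep, 0)

# A hash links all sightings of the same (hypothetical) plate:
print(pseudonymise_plate("AB-123-CD", salt="city-secret"))
# A fragment does not single out one car:
print(truncate_plate("AB-123-CD"))  # AB-******
```

The sketch also shows why the debt collection use case cannot plausibly rely on the fragments argument: a collector needs the linkable form, and the linkable form is exactly the one the GDPR treats as personal data.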

Cities increasingly need to act as enforcers, but still lack important resources

These examples go to show that cities are stuck in a conflict: given the proliferation of private sensors in the public space, they increasingly need to act as enforcers of the GDPR since companies will keep making use of them where it suits their needs. At the same time, though, cities lack (1) the information, (2) the remit, and (3) the capacity to properly respond to this development.

Firstly, there are no European-level rules that prescribe that cities be notified when commercial actors put up sensors in POPS. Bart Rosseau, the Chief Data Officer of the city of Ghent, told us in 2021: “Is this a task for the police? Should we ask citizens to alert us? If they [the sensors] are well-hidden we will never know. It’s tricky and the technology is evolving faster than the regulation.” In short: since there is no proper monitoring process to systematically find and identify sensors, cities have no way of detecting unwanted sensors in publicly accessible spaces.

Secondly, municipalities only have duties as data processors under the GDPR; they do not have the remit to carry out enforcement tasks. Execution and implementation usually lie with the national or regional level. Municipal data protection officers (DPOs) cannot issue bans on eye-tracking billboards – that is the remit of their national-level colleagues.

Thirdly, cities do not have the capacity (yet) to address sensors in POPS. “We already have a lot of problems identifying our own data processing, so we haven’t gotten around to the private parties who do this”, the DPO of a large European city told us.

Smart cities censor the sensors: new ideas on a difficult topic

Using municipal permissions as leverage for privacy

As has become clear, cities do not have the remit to ban or prohibit new sensors in POPS. Given that the GDPR, as the main piece of legislation governing the privacy of EU citizens, does not involve city halls, the cities most affected have improvised in order to get a grip on these detrimental developments.

In this vein, London City Hall published the “Public London Charter” in October 2021, a document which sets down principles and guidance on how new public spaces (including POPS) should be operated. These principles apply as a condition of planning consent for future developments, building on existing powers in the area of urban planning and development. Whenever developers seek permission to build in the city, they will have to sign up to the Public London Charter, which will be inserted into the conditions section of planning agreements.

The Charter contains a number of principles that constitute a code of good practice in the management of new public spaces. A section on privacy and data builds on the Surveillance Commissioner’s code of practice (a national code), the 2018 Data Protection Act and a UK ICO opinion. It mandates that “Data Protection Impact Assessments (DPIAs) should be shared with City Hall so they can be published on the London Datastore to promote transparency, compliance and good practice across the city.”

This addresses one of the most crucial limitations of cities dealing with sensors in POPS. Cities not only lack the remit to prescribe or mandate rules regulating sensors in publicly accessible spaces, they are also kept in the dark as they mostly do not get notified about the existence of new sensors. The Public London Charter, even though only regulating new developments, tackles both these limitations of municipal policy making by leveraging the city’s power to withhold permission to new developments on city ground. Essentially, it creates a “supercharged planning authority”, meaning a sizeable amount of the city’s power comes from the planning law used as a lever.

While this example from London is the most far-reaching and developed we encountered in our interviews, other cities are already using similar “municipal permission leverage” to get commercial actors to subscribe to a set of principles. The City of Amsterdam has started a process very similar to the London example in that its “Tada” manifesto contains principles for the ethical use of data. The manifesto sets out a number of principles governing the usage of data, which ought to be: (1) inclusive, (2) controlled, (3) tailored to the people, (4) legitimate and monitored, (5) open and transparent, (6) from everyone – for everyone. Tada has become ingrained in the local administration, its decisions and processes.

It is also at the heart of a programme Amsterdam has assigned to the Institute for Information Law at the University of Amsterdam, which explores conditions that can be added to the municipality’s policy instruments to regulate the behaviour of private companies with regard to collecting sensor data. Such instruments include licensing, subsidies, concessions, or contracts with private companies. E-mobility service providers, for instance, have to fulfil certain conditions before they are allowed to run their services in the city space – among them serving commercially less attractive parts of the city, in order to safeguard inclusiveness for less privileged areas. Part of this programme is also a more recent set of rules formulated by the city administration on how mobility providers have to treat the data they collect and which data they have to share with City Hall.

It is important to note that cities’ leeway to add conditions to certain permits is limited. Permits which have an impact on public spaces and safety, for instance, can legally only carry conditions related to the policy area the permit seeks to address. This means, in turn, that for cities not to become liable for abuse of authority, only conditions related to the regulative intent may be inserted into such authorisations. Even where hard conditions are not yet possible, applicants can receive points for fulfilling conditions on privacy and data handling as part of a “soft” conditionality.

Introducing a notification obligation for new sensors

In December 2021, the city of Amsterdam introduced a public register for sensors. Companies and other stakeholders must now notify Amsterdam City Hall if they wish to install sensors in publicly accessible spaces. The “Sensors Notification Requirement Regulation” prohibits placing a sensor on street furniture, publicly accessible buildings or moving vehicles accessible to the public without notifying the City Council at least five days in advance. The notification also needs to indicate which data will be collected and when the sensor will be removed again. If, after a grace period of six months, sensors are still placed in public spaces without notification, they will be removed (after several warnings) at the owner’s cost. The location and type of the sensors are published on a publicly available map. The municipality will inform citizens, but also industry organisations and large companies such as Google, about the sensor register.
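The regulation’s core requirements – who must notify, what must be declared, and the five-day lead time – can be pictured as a simple data record. The Python sketch below is my own illustration; the field names, the example company and the `is_timely` check are invented, not the actual register schema.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class SensorNotification:
    owner: str           # party installing the sensor
    location: str        # e.g. the street furniture's address
    sensor_type: str     # e.g. "camera", "wifi tracker"
    data_collected: str  # which data the sensor records
    removal_date: date   # when the sensor will be removed again
    notified_on: date    # when City Hall was notified
    install_date: date   # planned installation date

    def is_timely(self) -> bool:
        """Notification must reach the City Council at least
        five days before installation."""
        return self.install_date - self.notified_on >= timedelta(days=5)

# Hypothetical notification: nine days' notice, so it is timely.
notice = SensorNotification(
    owner="Example Advertising BV",
    location="Dam Square (bus shelter)",
    sensor_type="camera",
    data_collected="passer-by counts",
    removal_date=date(2022, 12, 31),
    notified_on=date(2022, 3, 1),
    install_date=date(2022, 3, 10),
)
print(notice.is_timely())  # True
```

Publishing such records on a public map is what turns the register from an internal compliance tool into a transparency instrument for citizens.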

Cities react as regulation proves to be toothless

These examples show that there are creative ways cities can employ to manage, and somewhat compensate for, their lack of enforcement powers. They also show that cities got creative because the national level had neither the resources nor the direct experience that comes from dealing with new developments “on the ground”. One digital leader in a larger European city told us: “Our DPA is not in a position to sufficiently monitor developments and provide the necessary oversight. I do not think they are close enough to the ground. They can steer the conversation and they can implement, work with government to implement legislation but again, what do you do in terms of stopping somebody from actually mounting a sensor in the streets?”

This is also the reason why several interviewees told us that the national level increasingly looks to City Halls when it comes to dealing with new technological developments. Several interviewees thought that local government was more advanced in many of those questions than their national counterparts.

While national governments have the power to pass legislation where they see fit, some respondents thought that in many instances they become active too late and are in danger of letting technological progress happen without setting the norms to shape and govern it. Cities step into this power vacuum, as they can address practical aspects of applied technological change in a way that users can understand.

At the same time, the challenge with regulating emerging technologies lies in the fact that their full scope or penetration is not yet known, meaning that any prospective regulation would have to be set at an abstract level in order to cover the full suite of activity. The challenge for cities is therefore to come up with meaningful principles that have wide application – not so abstract as to be impractical, but also not so detailed as to stifle genuinely good innovation.


Based on these examples of how cities have successfully improvised to fill the gaps the GDPR leaves, the full report discusses possible policy avenues cities can take to alleviate the privacy-infringing developments outlined above. Here, in all brevity, I want to focus on the most important of them: cities need to use the powers they already have – smartly.

Perhaps the most effective tool cities can use to push the boundaries – and regulate what is not yet regulated – is to make better use of the levers they already have at their disposal. We have seen that cities do not have the remit over classical sanctions and (in GDPR terms) enforcement mechanisms. Such sanctions happen after the fact and are therefore not an ideal tool to change behaviour proactively. Setting behavioural rules ex ante, by attaching conditions to commercial actors’ access to city resources, could prove much more effective.

The examples from Amsterdam and London have shown that cities have abundant powers at their disposal in the realms of urban planning, setting economic incentives, organising traffic, and setting procurement conditions. Cities can use such powers over permits or other forms of agreement commercial actors require from municipalities. Granting such permits can be made conditional on approval seekers incorporating norms or more concrete prescriptions: how to handle data collection, what data to share with the municipality, making their data collection public, handing over DPIAs to cities, and more. Given that arbitrary conditions cannot be tied to permits regulating specific domains (abuse of authority), these conditions need to be specific to the purpose of the permit.

Therefore, instead of suggesting a one-size-fits-all approach, cities should ask themselves: What are our core powers? What are our core experiences? And then use these powers and experiences. If one of the few powers they have is urban planning, then that could be an effective avenue to use conditionality to initiate behavioural change. In this, cities should follow a principled approach based on their experiences and goals, as discussed in the examples above.

The full report can be downloaded here.

Deplatforming between democratic hygiene and cancel culture

In the end, it was not the U.S. Senate that pulled the plug on Donald Trump, but social media platforms, notably Twitter and Facebook. Since it is well known that the greatest weapon of mass destruction is the masses themselves, social networks have increasingly scrutinized those who want to seduce the masses with populism, demagogy, or just plain lies. The fact that Donald Trump, now the former president of the United States, has been ousted from the most important social media platforms is unlawful censorship for some and an overdue correction of an obvious aberration for others. But one step after another.

Deplatforming, the withdrawal of access to the digital public sphere of social networks, is not a new phenomenon but a well-known moderation technique that has been used for years in online forums, for example when dealing with spam accounts. Nor is Trump the first politician to have this access revoked. In 2018, millions of users were banned from Twitter for their proximity to the Islamic State. Also in 2018, Facebook stripped Myanmar’s military leaders of their official accounts after the platform was used to demonize Muslim Rohingya, hundreds of thousands of whom were then forced to flee ethnic cleansing to Bangladesh. Similarly, the removal of the right-wing conservative social media service Parler by Amazon Web Services, Google and Apple also has precedent: Wikileaks was banned from Amazon back in 2010 after publishing secret documents about potential war crimes. So while it was by no means the first time a politician lost his speaking platform on the internet, the case of former President Donald Trump got the discussion about deplatforming going.

How did Trump’s deplatforming come about?

Long before Trump was even close to running for president, he was using his social media platforms to spread lies and conspiracy theories, such as the claim that then-President Obama was not born in the United States. The far-reaching effects of these constant lies on large sections of the population accelerated and intensified the discussion about how social media should handle the problem in practice. As recently as 2017, Twitter let Trump get away with anything under the pretext of special news value – even when he threatened North Korea’s dictator Kim Jong Un with annihilation in a dispute over nuclear weapons testing. Ever since Trump’s presidential candidacy, the two major social media services went to incredible lengths to avoid having to rein in their biggest crowd-puller. It wasn’t until three years and countless lies and hate messages later that Twitter felt compelled to correct its line: under its “civic integrity” policy, created in 2018 and tightened in 2020, Twitter classified a tweet from Trump as “misleading information” for the first time on May 26, 2020, and put a warning label on it.

On Jan. 7, 2021, a day after the Trump-inspired riots at the Capitol in Washington that left 5 people dead and 138 injured, Twitter suspended Trump’s account for 12 hours. The short messaging service tied the temporary nature of the suspension to the requirement that Trump delete three tweets and warned that the suspension would be extended indefinitely on the next offense. Shortly before, Facebook and Instagram had also suspended the president’s account. Finally, one day and two tweets later, Twitter completed the step to permanent suspension. Besides Facebook and Instagram, other services such as Snapchat, Twitch, Spotify and Shopify also blocked Trump’s user accounts.

Deplatforming in Germany

Private companies in the U.S. are allowed to deny politicians their services even if they provide elementary communication channels with the public. In Germany, however, this case is somewhat different. According to a decision by the Federal Constitutional Court, intermediaries are “bound by fundamental rights” as soon as they reach a decisive size that is relevant to public communication. In this context, the Federal Constitutional Court has confirmed that “private spaces” are no longer private if public communication is severely restricted without them.

Accordingly, a politician of Trump’s caliber could not so easily have been deprived of access to the digital public sphere in Germany, because judicial protection of political statements takes higher priority here. According to the Federal Constitutional Court, private companies are not directly bound by fundamental rights such as freedom of expression, but fundamental rights “radiate” into other areas of law, including the T&Cs of social networks. In practice, this means that Facebook had to reverse the deletion of a statement by an AfD politician because the exercise of freedom of expression did not violate “the rights of another person,” as the T&Cs required.

At the same time, government politicians in Germany have greater obligations to tell the truth than their American counterparts. German public expression law demands principles such as objectivity and accuracy from the statements of public officials more rigorously than U.S. law does. In November 2015, for example, then-Federal Research Minister Johanna Wanka had to delete a “red card” she had shown the AfD on her ministry’s website for “incitement of the people”, as a result of an injunction from the Federal Constitutional Court. So legally, a German Trump could have been fought much earlier.

Even if the legal situation in Germany makes a similar course of events as in the U.S. seem unlikely, this does not answer the question of how we will deal in the future with politicians who divide our societies and incite them against each other – and whether blocking important digital communication channels is an appropriate response. What is clear and indisputable is that social media platforms have too much power. But what to conclude from this interim finding is less clear, because two sides are diametrically opposed in the discussion about what social networks should and should not be allowed to do.

One perspective goes like this: deplatforming should be allowed, because real censorship can only come from the state, certainly not from private companies. The right to freedom of expression is not restricted by a simple deletion of accounts on social networks – Donald Trump can continue to exercise this right, the reasoning goes, just not on Twitter and Facebook. Moreover, the state cannot force companies to give people like him a platform – especially not if that person has previously agreed to the terms of use and then violated them in his statements.

The opposing side, represented by Chancellor Merkel among others, also argues that freedom of expression, as a fundamental right of elementary importance, can only be restricted by lawmakers, not at the whim of influential corporate leaders. The conclusion here, however, is a different one: deplatforming should be rejected, at least insofar as it is carried out by social media companies themselves. After all, freedom of expression in social networks has also enabled very desirable developments such as the Arab Spring and should therefore not be touched.

Alternatives to company-driven deplatforming

First of all, scientific evidence shows that deplatforming really does work. A 2016 study showed that the mass deletion of accounts of supporters of the Islamist terrorist organization ISIS led to a significant loss of digital influence. Another analysis found that, a week after Trump’s deplatforming, disinformation about election fraud in the U.S. had declined by 73%. And with a view to Germany, a further study suggested that deplatforming significantly limits the mobilization power of the far right.

In the search for alternatives to corporate-driven deplatforming, some good suggestions have been made. Many of them, however, do not so much address the root of the problem (i.e. the creation and popularization of hateful content) as merely alleviate its symptoms. These suggestions include the Santa Clara Principles on content moderation. Some items from this list, such as the right to object to unlawful deletions, have already been adopted by EU and German legislators. The principles are supported by YouTube, Twitter and Facebook, but of the major U.S. platforms only Reddit actually adheres to them. So while social media in the U.S. are largely free to delete whomever they like, with no way to formally object to the decision, in Germany they are being held accountable by the updated version of the Network Enforcement Act.

External platform councils, staffed by figures of great legitimacy such as Nobel Peace Prize winners, are also a good start in this regard, albeit one with room for improvement. Examples include the deletion advisory board that Google assembled to define its rules on the “right to be forgotten”, or the Facebook Oversight Board that will decide whether to permanently suspend Donald Trump from the social network. The platforms have realized that the rules they set are enormously influential and that they need to seek legitimacy from outside because they do not have it themselves. However, these boards should not be filled by the social media companies themselves. Moreover, in the case of Facebook’s Oversight Board, more than 25% of the members are U.S. citizens, so its diversity is not representative of a global company.

We need to talk

…because even if those approaches are good first steps, they only treat the symptoms, not the problem itself. The problem is the algorithms that give social networks their character as fearmongers. The democracy-threatening corporate secrets of Twitter and Facebook – namely, the algorithms that curate individual social media feeds and, for business reasons, primarily promote fear- and anger-ridden messages – have so far been untouchable. Admittedly, the EU Commission’s Digital Services Act promises a better understanding through a transparency obligation for these algorithms. Still, a major hurdle in effectively regulating social networks is the lack of knowledge about their internal decision-making and rule-making processes. At the same time, according to legal scholar Matthias Kettemann, intermediaries are so complex that legislators still lack the ability to adequately regulate them: they fall through many legal categories because they fulfill many different functions – privacy law, competition law, communications law, and media law (where they produce their own content).

However, mere transparency is not enough. More important would be a genuine “democracy compatibility check” of social media recommendation algorithms. In addition, users should be able to switch off filter bubbles in a new “real world mode” that shows their home feed without the automated recommendation function. Last but not least, users should also be able to pay for social networking services with money instead of data.

Ultimately, social media platforms have created their own monster in Trump. Deplatforming is only the ultima ratio for correcting an undesirable development that has been destabilizing societies for years. It would be more important to work on the causes: the algorithms, which are calibrated for interaction and amplify anger and fear more strongly than moderate and deliberative views.

The diversity dilemma in Silicon Valley

Anyone who thought that Silicon Valley was home not only to the “Frappuccino with hazelnut milk” faction but also to the diversity-friendly political left may not have been wrong on the first point, but certainly was on the second. For some years now, there has been mounting evidence that – who would have thought – social diversity and issues such as digital ethics only play a role where they do not affect corporate power structures and soaring profits.

Google recently provided further proof of this when it fired the respected AI ethicist Timnit Gebru. Gebru co-founded the group “Black in A.I.” and became known for influential research into the social impact of facial recognition programs, which showed that recognition systems miscategorize women and non-White people significantly more often than White men. An article Gebru co-authored in 2018 was widely seen as a “watershed moment” in the effort to expose and address the social biases of automated decision-making systems.

In firing Gebru, Google diminished both its own technological ability to address diversity issues in its algorithms and the diversity of its workforce itself.

Algorithms reproduce and reinforce social discrimination

The conflict between Gebru and her former employer arose from a dispute over a scientific paper she co-authored that criticized new language-analysis systems of the kind also used in Google’s search function. Because these automated systems learn how to handle language from “the internet” itself, i.e. from big-data analysis of the wide variety of texts used in everyday life, they often absorb the same kinds of discrimination found in the everyday life of our societies.

According to an experiment by AlgorithmWatch, Facebook uses crude stereotypes to optimize ad placement. Job ads for professions in which women are underrepresented, for instance, continue to be shown to only a few women: an ad featuring photos of trucks, even with text explicitly addressed to women, was shown to an audience of whom only 12% were women. In practical terms, this means that Facebook discriminates based on images.

Another recent study shows that Google’s image recognition system assigns attributes to women and men that cement traditional and outdated gender roles. For example, automated systems assigned labels such as “official,” “businessman,” “speaker,” “orator” and “suit” to images of men. Images of women, on the other hand, were linked to labels such as “smile,” “hairstyle,” and “outerwear.”

So how might this problem be addressed? One answer to this question is more diversity among software developers. But here, too, Big Tech companies are lagging behind society.

Silicon Valley is not a haven for social diversity

Silicon Valley has had its own diversity problem for some time. Timnit Gebru’s exit came a year after prominent A.I. ethicist Meredith Whittaker quit Google, saying she and other employees had faced internal retaliation for publicly organizing protests against the company’s handling of sexual harassment in the workplace and for criticizing its approach to A.I. ethics. Alongside her work at Google, Whittaker co-founded the AI Now Institute at New York University, which is dedicated to ethical issues in artificial intelligence.

More recently, former Google employee Christina Curley accused her former employer of discriminating against Black people in hiring. Curley’s job was to recruit new employees with the goal of increasing the company’s diversity. She reported an incident in which a White supervisor referred to her Baltimore dialect as a “disability”; Baltimore has traditionally had a large African-American population.

Not only Google, but many other Silicon Valley companies don’t put much effort into creating a diverse work environment. Coinbase, a start-up that offers an online trading platform for cryptocurrencies, has seen 15 Black employees leave in the last two years. At least 11 had previously informed their supervisors or HR that they had been treated in a racist or discriminatory manner.

Pinterest, which had presented itself as a supporter of the Black Lives Matter protests as recently as this summer, waged a small-scale war against two now-former Black female employees who were advocating for a better fact-checking system, and refused to support them when their personal information was leaked to hate websites.

These are just the latest examples of a long-standing structural problem that is also reflected in U.S. employment statistics: While about 12 percent of U.S. workers were Black in 2019, their share in the tech industry was just six percent. In the case of Facebook, Alphabet, Microsoft and Twitter, that share was even lower. Diversity efforts that have now lasted six years have only resulted in low-single-digit growth in diversity.

Just as the Frappuccino-to-go may save time but makes no sense ecologically, digital-ethical greenwashing cannot hide the fact that in the apparently progressive Silicon Valley, a strong structural conservatism cements White dominance and prevents a workforce that is more representative of society.

How then can we reduce discrimination?

Generally speaking, Germany’s General Equal Treatment Act stipulates that people who feel discriminated against must prove the discrimination themselves. On social networks such as Facebook, however, this is virtually impossible, as users have no way of finding out which content is not shown to them. One way of remedying this deplorable state of affairs would be to improve the legal situation, as the Federal Anti-Discrimination Agency has already called for. So far, however, these calls have gone unheard by politicians.

There is also no simple solution to the problem of discriminatory algorithms. Uncritically designed algorithms trained on publicly available data sets often reproduce existing unequal treatment through “proxy discrimination.” Even when employers explicitly exclude potentially discriminatory variables such as gender, skin colour, or religion from their decision criteria, the training data still encodes past discrimination, which creeps back in through correlates of the excluded criteria. In the case of discrimination against women, for example, such a correlate could be how often words like “woman” or “female” appear in applications and resumes.
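Proxy discrimination is easy to reproduce in a toy simulation. The following Python sketch uses entirely synthetic data (every distribution and number is invented for illustration): the protected attribute “sex” is never given to the decision rule, yet a naive threshold learned from biased historical hiring decisions still produces starkly different hire rates, because the proxy feature (how often gendered terms appear in a CV) correlates with sex.

```python
import random

random.seed(0)

# Synthetic applicants. Sex is the protected attribute we "exclude".
# The count of gendered terms in a CV (e.g. "women's chess club")
# serves as an accidental proxy for it.
applicants = []
for _ in range(1000):
    sex = random.choice(["f", "m"])
    gendered_terms = random.gauss(3, 1) if sex == "f" else random.gauss(0.5, 0.5)
    # Biased historical label: past recruiters hired fewer women.
    hired = random.random() < (0.3 if sex == "f" else 0.6)
    applicants.append((sex, max(0.0, gendered_terms), hired))

# A naive "model" trained only on the proxy: past hires have fewer
# gendered terms on average, so the learned cutoff penalizes them.
hired_mean = (sum(g for _, g, h in applicants if h)
              / sum(1 for *_, h in applicants if h))
reject_mean = (sum(g for _, g, h in applicants if not h)
               / sum(1 for *_, h in applicants if not h))
threshold = (hired_mean + reject_mean) / 2

def predict(gendered_terms: float) -> bool:
    # "Hire" whoever resembles past hires; sex is never consulted.
    return gendered_terms < threshold

rate = {}
for s in "fm":
    group = [g for sex, g, _ in applicants if sex == s]
    rate[s] = sum(predict(g) for g in group) / len(group)

print(f"predicted hire rate, women: {rate['f']:.2f}, men: {rate['m']:.2f}")
```

Even though the rule only ever sees a word count, the simulated hire rate for women collapses far below that for men: the historical bias has been laundered through the proxy, exactly the mechanism the paragraph describes.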

Transparency and traceability of algorithms are the key conditions – insofar as data protection permits – for assessing and countering their discriminatory effects. At present, social networks such as Facebook are doing their utmost to prevent the disclosure of information about their decision-making and ranking algorithms. With the current draft of the European Commission’s Digital Services Act, however, there could be some movement here.

The draft stipulates that “very large platforms” (those exceeding 45 million monthly users) must provide information interfaces (APIs) that include data on advertising and targeting criteria. In addition, vetted researchers will be given access to platform data in order to evaluate the risks that algorithms pose to fundamental rights and public discourse. Independent evaluators, regulators, and national coordinating institutions would be allowed to conduct audits. According to AlgorithmWatch, the draft is a good start but does not go far enough: NGOs will not be able to access platform data, and questions remain about the law’s enforceability.

Notwithstanding all efforts to regulate algorithms, another important problem lies in the fact that humans, and the decisions they make when programming algorithms, are the ultimate black box. Not only automated decisions must therefore be documented; human value judgments also need to be made explicit. For where algorithms are built indiscriminately and uncritically on publicly available texts, they simply adopt the values, norms, and prejudices inherent in those texts. We therefore urgently need ethical guiding principles against which these algorithms can be measured. And by ethics we do not mean those values that people happen to have acquired, but those that they should have. For this reason, the value judgments that automated decision-making systems make millions of times a day should be publicly debated, weighed, and prioritized so that they correspond to our goals as a society and do not simply reflect an outdated past.

Blockchains – carriers of democratic processes?

Democracies worldwide are facing a number of challenges. Technologies are transforming societies and social relations faster than politics can understand these processes of change, let alone manage them effectively. Where existing systems reach their limits, windows of opportunity for new technologies open. It seems no coincidence that Bitcoin, the first cryptocurrency based on blockchain technology, became established in 2009, immediately after the financial and currency crisis.

An important part of the digital (infrastructure) transformation has recently emanated from blockchains. Like TCP/IP, on which the Internet is based, blockchains are protocols; they not only allow programmable contracts and transactions but also make it possible to predetermine the rules that govern contractual relations. Exchanged over decentralized peer-to-peer networks, they allow virtually any type of transaction to be validated cheaply and securely. All parties have insight into the full blockchain, where all transactions are stored in a tamper-proof manner.
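The tamper-evidence described above comes from a simple construction: each block stores a cryptographic hash of its predecessor, so changing any past transaction invalidates every hash that follows. A minimal, purely illustrative Python sketch (no networking, no consensus or proof-of-work; all function names are invented, and real blockchains are far more elaborate):

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    """Hash a block's contents with SHA-256 over a canonical JSON form."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, transactions: list) -> None:
    """Link a new block to the chain via the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "prev_hash": prev,
             "transactions": transactions}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Every block must match its own hash and point at its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
append_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(is_valid(chain))          # True

chain[0]["transactions"][0]["amount"] = 500  # tamper with history
print(is_valid(chain))          # False: the stored hash no longer matches
```

Because every participant can recompute these hashes independently, tampering is detectable by anyone holding a copy of the chain; that is the sense in which transactions are "stored in a tamper-proof manner" without a central guarantor.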

Despite the current hype, blockchains, like other innovations, are ambivalent: their repercussions on systems such as society and politics depend on the applications built on the protocol. Nonetheless, one key aspect of this technology is of particular importance. Even though the radical transparency made possible by the complete traceability of all transactions in the blockchain can be limited by anonymization, a blockchain in its basic design is capable of permanently altering the current form, perception, and actual use of public discursive spaces. In this context, transparency as a principle has been gaining importance for several years and is even discussed as a human right, since it is itself of fundamental value for the realization of other human rights such as freedom of expression.

Beyond often-discussed blockchain applications such as elections and the outsourcing of administrative tasks, a corresponding research agenda would need to explore under which conditions this technological innovation can be adopted in fluid and flexible societal contexts, who defines its rules, and what this means for shaping the public space in which citizens conduct democratic discourse. Looking at the use of blockchain technology for democratic purposes, three sets of questions are in my view of central importance:

1. What are the repercussions of the transparency of blockchain applications on social actors and their participation in public discourse, especially through the omnipresent availability of past interactions stored in blockchains? While established news media such as the New York Times are already experimenting with blockchains to track the provenance of published content, the focus here should be on how such models can change the public sphere. What impact does the omnipresence of this information have on participation in public discourse, and how does the perception of transparency as a fundamental right change?

2. Trust, in the sense of legitimacy and the enforcement of decisions, is the basis for the functioning of democracy. Until now, trust has been generated and guaranteed by democratic institutions. Blockchains offer the prospect of shifting the generation of trust from institutions to the protocol, and could thus erode the legitimacy of the state’s monopoly on rule-making and arbitration. This, among other things, is the concern in institutions such as the European Central Bank when blockchain-based cryptocurrencies take over traditional functions of money. To what extent, then, do blockchains have the potential to compete with basic state services? What mechanisms would characterize a blockchain-based transaction model between citizens and the state?

3. Social and political processes are constantly changing, relying on negotiation as a central mechanism for establishing consensus, and on cooperation as the basis of democratic legitimacy. At the same time, modifications in public blockchains occur through consensus, and where consensus cannot be reached due to classical coordination problems, the system fragments through forks, as the example of Bitcoin shows. The informality necessary for negotiation is in tension with the rigidity and irreversibility inherent in blockchain applications. How can this tension between rigidity and the need for informality, a characteristic of ever-changing social relations, be resolved? To what extent are consensually programmed applications, as sets of rules that are difficult to change, suitable at all for capturing socially fluid contexts?