At Systemic Justice, we work to support the fight for systemic change on racial, social, and economic justice. In every issue we take on, we take an intersectional approach and address challenges across both digital and non-digital contexts.
Even though understanding is growing that the negative impacts of technology disproportionately affect marginalised people, we still regularly get the question: “do you work on digital rights?” This in and of itself illustrates the problem of treating all things “digital” as somehow separate from other forms of activism, such as those pushing for the protection of human rights, social justice, climate justice, and many more – something organisations such as the Digital Freedom Fund and EDRi are working to address. All our struggles are intertwined.
We cannot separate society’s power structures from the technology it (re)produces. Anything we build reflects – and, in the case of tech, amplifies – the inequalities we choose to maintain. This means that addressing only the technological manifestations of racism, ableism, misogyny, homophobia, transphobia, and so on amounts to fighting symptoms rather than challenging root causes.
So where do we see racial, social, and economic justice intersect with digital rights and technology? Below are a few examples, organised under the six themes we are currently exploring as part of our consultation process.
Climate Justice
At our recent climate justice roundtable, participants discussed climate justice from an intersectional perspective and noted how certain communities bear the brunt of the climate crisis. When we think of the big polluters responsible for creating polluted and toxic local environments, we might jump immediately to oil and gas companies. However, tech companies should not be left out of the equation. Europe has over 1,000 data centres, which use huge quantities of electricity and water to run, with Germany, the UK, the Netherlands, France, and Russia among the top ten countries in the world with the most data centres. Communities have already raised concerns about the local environmental impact of these centres, which risk causing water shortages and water pollution.
Access to Justice
There is a huge divide between how “access to justice for all” is spoken about and the lived reality for many individuals who need such access. Digital technologies, such as web-based legal advice and online hearings, are transforming the justice system, often with the promise of greater “efficiency” and “objectivity.” However, those who already face barriers to accessing justice now face the further obstacle of “digital exclusion.” To vindicate your rights, you must not only navigate an elitist and complex legal process but also contend with digital processes that impose additional costs and demand new forms of access and knowledge. These obstacles are hardest to overcome for those who already experience marginalisation and oppression. Furthermore, where non-digital avenues to justice are maintained, there is a real risk that these will be a poorer-quality alternative to digital routes, introducing a two-tier system for accessing justice.
Policing
Institutionalised racism is rife in law enforcement in Europe. Just as racial profiling occurs on a daily basis in non-digital policing measures and techniques, such as stop and search, it has been encoded and embedded in the design and use of data-driven policing technologies such as facial recognition, automated number plate recognition, speaker and gait identification, and mobile fingerprint technology. Furthermore, digital technologies intensify the existing oversurveillance of certain groups and communities across Europe. This underlines the importance of taking an intersectional approach to examining these issues. For example, many European jurisdictions strategically target police-defined “gangs,” directing further surveillance and policing at young men in particular, with the unlawful “Gangs Matrix” in the UK having been described as a “continued assault” on Black men and boys by the police.
Social Protection
Social protection provision has been transformed by digitisation with the promise of greater access, objectivity, cost-cutting, and efficiency. In reality, it has resulted in more “digital exclusion” for those who need access to this type of support, and in their continued profiling, surveillance, and policing. For example, digital technologies exacerbate the surveillance and criminalisation of homeless communities and threaten the limited personal privacy they have, even when those technologies are purportedly helping them find shelter and accommodation. Many of the digital tools used in social protection provision rely on risk assessments, resulting in individuals being profiled or categorised in ways that are discriminatory, exclusionary, and harmful: fraud detection risk models that target or disproportionately impact racialised communities, for instance, or needs assessment algorithms that perpetuate ableism by failing to take into account the specific and individual needs of people with disabilities.
Anti-Racism
The language that has been used to describe the great promise of the internet alludes to a utopian vision of a world “without privilege or prejudice” (John Perry Barlow). The internet has not delivered on that promise. Not only is it often an unsafe space for people of colour, LGBTQI+ people, women*, Roma, and other groups whose voice the white supremacist patriarchy does not want to hear, its very DNA is steeped in capitalism, racism, and colonialism. There are many examples of how the surveillance capitalist business model of Big Tech has fuelled racial profiling, the promotion of white supremacist ideology, hate speech, and genocide. The anti-racism fight is as important online as it is offline.
Migration
People on the move are often the “testing ground” for new technologies and digital policing techniques. The use of digital and data-driven technologies to control migration has been referred to as the “rise of digital borders.” The technologies used at borders, including facial and gait recognition systems, retinal and fingerprint scans, ground sensors, aerial video surveillance drones, lie detector tests, and biometric databases, reinforce the colonial and imperialist practices that form the historical basis of border management across the globe. Furthermore, with data-driven technologies, borders are no longer confined to the boundaries of a country, but instead attach to people themselves, resulting in their continued oversurveillance and overpolicing.
With not only technology but also its regulation developing rapidly, it is important to keep in mind how these laws and regulations are being formulated. Who is doing this work, and whose interests are centred in these processes? A recent celebratory tweet from Margrethe Vestager, an Executive Vice President of the European Commission, on reaching a deal on the Digital Markets Act included a picture that said more than any number of words could about who is dominating this domain. Spoiler: it is not the communities most deeply impacted by the increasing digitisation of our society.
This must urgently change. As a participant in our recent social protection roundtable pointedly said: “The current legal framework protects a system of privileges. We cannot legislate on migration without including migrants, or on issues of mental health without involving people affected”, echoing the “nothing about us without us” adage from the disability justice movement. This applies to the digital aspects of these issues as much as to the non-digital ones. At Systemic Justice, we look forward to working with community partners to help shift the bigger power structures that keep these inequalities in place.