Technology is going to weaponise identity politics

Jamie Susskind

Jamie Susskind says that group recognition will increasingly be decided by opaque algorithms

14 November 2018 07:00

Politics is increasingly dominated by the ‘struggle for recognition’ – different social groups (genders, ethnic and religious groups, rural and urban dwellers) clamouring to be treated with respect and dignity in the eyes of others. Many feel forgotten, and they’re increasingly angry about it. But identity politics did not peak with the election of Donald Trump or the Brexit vote. In the coming years, digital technology could transform what it means to be disrespected or marginalised – with profound political consequences.

As Francis Fukuyama explains in his new book, at the heart of identity politics is the desire to be seen and treated by others as a person of equal moral worth (see DRUGSTORE CULTURE’s interview with Fukuyama). Above all, this requires the removal of legal regimes which wrongly privilege certain social groups over others (like Jim Crow or the Nuremberg Laws); but it also means the demolition of the norms, values, habits, and manners that allow some to flourish while others are disrespected, abused, ignored, or attacked. (It is a sobering fact, for instance, that persons with disabilities are 2.5 times more likely than those without to be the victims of violent crime.)

Identity politics is not new, but digital technology has already transformed the way we see each other, and not always for the better. Clicks, likes, followers, favourites, and retweets offer a new way of measuring our social standing against that of others. Algorithms increasingly determine who is seen online and who remains invisible. We can now rate people in a way that was never possible in the past: China’s social credit system will distribute a greater share of society’s goods to those with the highest scores for civic qualities, from cleanliness to manners.

The traditional assumption underpinning identity politics was that humans could only meaningfully be disrespected by other humans. Not so in the future.

Consider the example of the New Zealand man of Asian ethnicity whose passport application was rejected because the online system determined that his eyes were ‘closed’ in the photograph he uploaded. Or the fact that until recently Amazon used a machine-learning recruitment system which had taught itself – based on ten years of majority-male CVs fed to it – to award lower scores to CVs that referred to all-women’s colleges. Even CVs containing the word “women’s” (as in “women’s volleyball team”) were downgraded.

Is there any more obvious failure of recognition than the facial-recognition systems which literally cannot ‘see’ people of colour after being ‘trained’ only on white faces? Or a clearer failure of respect than Google’s autocomplete system, which completes the sentence ‘Why do Jews…’ with the words ‘…have big noses?’

You’ve probably been enraged at some point by a device that has frozen or glitched. Imagine how you’ll feel the first time a digital system (especially one that claims to be ‘neutral’) appears to be racist or sexist toward you, or when it denies you a mortgage, or a job, or insurance, or healthcare benefits, or a shorter prison sentence – because its algorithms are poorly constructed or fed with flawed data. How will you react when a voice-recognition system fails even to ‘hear’ your voice, because you speak with a different pitch or accent from the ones on which it was trained?

Identity politics boils down to the hope that others will respect what we consider our true inner self, rather than merely the roles or titles imposed on us by society. A failure to be understood by our fellow citizens, particularly those in power, is a regular complaint of rural groups against urban ‘elites’. The feeling of being misunderstood can cut just as deep as socio-economic deprivation. But algorithms have no interest in our authentic identities. As John Cheney-Lippold argues in We Are Data (2017), algorithms only ‘see’ what they want to see, depending on their purpose. A predictive policing algorithm will not see a young African-American man as an individual with a unique life story; it will see a compilation of data – including where he comes from – which it will use to answer just one question: how likely is this man to commit an offence?

As digital technologies come to govern more and more aspects of our lives, all of us – but particularly those who are in some way anomalous or different – will find ourselves in a new and strange struggle for recognition. Soon we won’t just yearn for the recognition of other humans, but for a fair crack of the whip from the machines that increasingly surround us.

The impotence already felt by millions will only fester in a world where opaque systems take thousands of decisions about our lives on the basis of data we are not shown, using processes that we cannot see, with results that may never be appealed or even explained. That’s why our identity crisis is not going away any time soon – the need to be listened to will never be satisfied as long as we are the subjects and not the masters of our technology.

Jamie Susskind is the author of FUTURE POLITICS: Living Together in a World Transformed by Tech (Oxford University Press, 2018).