Book review: The Age of Surveillance Capitalism

I recently finished Shoshana Zuboff’s book The Age of Surveillance Capitalism. It’s received glowing reviews, but left me disappointed. Zuboff spends much of the book outraged at the behaviour of big tech corporations, but often neglects to explain what’s actually bad about either the behaviour itself or the outcomes she warns it’ll lead to. The result is far more polemical than persuasive. I do believe that there are significant problems with the technology industry - but mostly different problems from the ones she focuses on. And she neglects to account for the benefits of technology, or explain how we should weigh them against the harms.

Her argument proceeds in three stages, which I’ll address in turn:

  1. Companies like Google and Facebook have an “extraction imperative” to continually “expropriate” more personal data about their users.

  2. They use this for “the instrumentation and instrumentalisation of behaviour for the purposes of modification, prediction, monetisation, and control.”

  3. Ultimately, this will lead to “a form of tyranny” comparable to (but quite different from) totalitarianism, which Zuboff calls instrumentarianism.


On data: I agree that big companies collect a lot of data about their users. That’s a well-known fact. In return, those users get access to a wide variety of high-quality software for free. I, for one, would pay thousands of dollars if necessary to continue using the digital products that are currently free because they’re funded by advertising. So what makes the collection of my data “extraction”, or “appropriation”, as opposed to a fair exchange? Why does it “abandon long-standing organic reciprocities with people”? It’s hard to say. Here’s Zuboff’s explanation:


Industrial capitalism transformed nature’s raw materials into commodities, and surveillance capitalism lays its claims to the stuff of human nature for a new commodity invention. Now it is human nature that is scraped, torn, and taken for another century’s market project. It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply. That critique is a feat of misdirection that would use a pricing mechanism to institutionalise and therefore legitimate the extraction of human behaviour for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioural data for the sake of others’ improved control over us. The remarkable questions here concern the facts that our lives are rendered as behavioural data in the first place; that ignorance is a condition of this ubiquitous rendition; that decision rights vanish before one even knows that there is a decision to make; that there are consequences to this diminishment of rights that we can neither see nor tell; that there is no exit, no voice, and no loyalty, only helplessness, resignation, and psychic numbing.


This is fiery prose, but it’s not really an argument. In more prosaic terms, websites are using my data to serve me ads which I’m more likely to click on. Often they do so by showing me products I’m more interested in, which I actively prefer to seeing ads that are irrelevant to me. This form of “prediction and control” is on a par with any other business “predicting and controlling” my purchases by offering me better products; there’s nothing “intrinsically exploitative” about it.


Now, there are other types of prediction and control - such as the proliferation of worryingly addictive newsfeeds and games. But surprisingly, Zuboff says very little about the harmful consequences of online addiction! Instead she argues that the behaviour of tech companies is wrong for intrinsic reasons: that “there is no freedom without uncertainty”, and that predicting our behaviour violates our “right to the future tense” - again taking personalised advertising as her central example. But the degree of personalised prediction is fundamentally the wrong metric to focus on. Some of the products which predict our personal behaviour in the greatest detail - sleep trackers, or biometric trackers - allow us to exercise more control over our own lives, increasing our effective freedom. Meanwhile, many of the addictive games and products which most undermine our control over our lives rely very little on personal data: the Universal Paperclips game, for example, is incredibly addictive without even having graphics, let alone personalised algorithms. And the “slot machine”-style intermittent rewards used by apps like Facebook again don’t require much personalisation.


It’s true that personalisation can exacerbate these problems - I’m thinking in particular of TikTok, whose recommendation algorithms are scarily powerful. But there’s also a case to be made that this will improve over time. Simple metrics, like number of views or number of likes, are easy for companies to optimise for; figuring out how to optimise for what people really want is a much trickier problem. So it’s not surprising that companies haven’t figured it out yet. But as they do, users will favour the products that give them the best experience (as one example, I really like the premise of the Dispo app). Whether or not those products use personal data matters much less than whether they are beneficial or harmful for their users.


Lastly, we come to the question of longer-term risks. What is Zuboff most worried about? She holds up the example of Skinner’s novel Walden Two, in which behavioural control is used to teach children better self-control and other virtuous behaviour. Her term for a society in which such tools are widely used is “instrumentarian”. This argument is a little strange from the beginning, given that Walden Two was intended as (and usually interpreted as) a utopia, not a dystopia. The idea that technology can help us become better versions of ourselves is a longstanding one; behavioural reinforcement is just one mechanism by which that might occur. I can certainly see why the idea is discomfiting, but I’d like to see an actual argument for why it’s bad - which Zuboff doesn’t provide.


Perhaps the most compelling argument against instrumentarianism, from my perspective, is that it paves the way for behavioural control technology to become concentrated and used to maintain political power, in particular by totalitarian regimes. But for reasons I don’t understand, Zuboff downplays this risk, arguing that “instrumentarian power is best understood as the precise antithesis of Orwell’s Big Brother”. She even holds up China as an example of where the West might be headed - yet China is precisely a case in which surveillance has aided increasing authoritarianism, as seen most notably in the genocide of the Uighurs. By contrast, whatever the faults of big US tech companies in using data to predict consumer behaviour, they have so far stayed fairly independent of exercises of governmental power. So I’m still uncertain about what the actual harms of instrumentarianism are.


Despite this strange dialectic, I do think that Zuboff’s warnings about instrumentarianism contribute to preventing authoritarian uses of surveillance. So, given the importance of preventing surveillance-aided totalitarianism, perhaps I should support Zuboff’s arguments overall, despite my reservations about the way she makes them. But there are other reasons to be cautious about her arguments. As Zuboff identifies, human data is an important component for training AI. Unlike her, though, I don’t think this is a bad thing - if it goes well, AI development has the potential to create a huge amount of wealth and improve the lives of billions. The big question is whether it will go well. One of the key problems AI researchers face is the difficulty of specifying the behaviour we’d like our systems to carry out: the standard approach of training AIs on explicit reward functions often leads to unintended misbehaviour. And the most promising techniques for solving this involve harnessing human data at large scale. So it’s important not to reflexively reject the large-scale collection and use of data to train AIs - because as such systems become increasingly advanced, it’s this data which will allow us to point them in the right directions.
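To make that last point slightly more concrete, here is a minimal sketch - my own illustration, not anything from the book - of one way human data can specify what we want: fitting a reward model to human preference comparisons under a Bradley-Terry model. All names and data below are made up for the example.

```python
# A minimal sketch (my illustration, not from the book): learning a reward
# model from human preference comparisons. A (simulated) human says which of
# two options they prefer, and we fit a linear reward function by gradient
# ascent on the Bradley-Terry log-likelihood. All names and data are made up.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "true" human preferences over 4-dimensional option features.
true_weights = np.array([1.0, -2.0, 0.5, 0.0])

# Simulated human data: pairs of options, labelled with which one was preferred.
n_pairs = 500
options_a = rng.normal(size=(n_pairs, 4))
options_b = rng.normal(size=(n_pairs, 4))
prefer_a = (options_a @ true_weights > options_b @ true_weights).astype(float)

# Bradley-Terry model: P(A preferred) = sigmoid(reward(A) - reward(B)).
weights = np.zeros(4)
learning_rate = 0.1
for _ in range(2000):
    diff = (options_a - options_b) @ weights
    prob_a = 1.0 / (1.0 + np.exp(-diff))
    gradient = (options_a - options_b).T @ (prefer_a - prob_a) / n_pairs
    weights += learning_rate * gradient

# The learned weights recover the direction of the human's preferences,
# which could then serve as a training signal for an AI system.
print("learned reward weights (up to scale):", np.round(weights, 2))
```

The point of the sketch is simply that human comparison data, gathered at scale, can substitute for a hand-written reward function - which is why reflexively rejecting all large-scale data collection would cut against this approach.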
