Past, Present, and Future of Surveillance Capitalism

In “The Age of Surveillance Capitalism”, Shoshana Zuboff offers an essential take on the practices and consequences of digital capitalism in the information era. This article presents her argument for those unwilling to read the whole book.

“I don’t have anything to hide”. Anyone who has dared express doubt about the data collection practices of tech firms has faced this reaction. Too often, the answer involves vague appeals to privacy, which seem to weigh far too little against the many benefits that social media and other digital technologies bring to our lives. We are growing uneasy with those practices, and yet we fail to explain the reasons for our discomfort, to ourselves and to others.

In her 2019 book, “The Age of Surveillance Capitalism”, Shoshana Zuboff retraces the path that brought us to the digital iron cage, describes how data collection came to be central to 21st-century information capitalism and seeks to predict what this entails for our future. By naming and describing surveillance capitalism, she brings to light the true reason for our discomfort. And to the statement above, she answers: “If you’ve got nothing to hide, you are nothing”.

Past — Behavioral Surpluses and the Birth of Surveillance Capitalism

The second half of the 20th century was marked by the tension between the aspirations of the second modernity and the dictates of neo-liberalism. In the 1970s, neo-liberalism promoted a capitalism of exclusion that benefited a few shareholders to the detriment of the masses, who faced stagnant wages, growing unemployment and dwindling public services. By doing so it deprived individuals of the very means of self-determination, which the second modernity had taught them to cherish and to see as central to their worth. Apple and its peers promised to resolve this tension by creating economic growth without alienation, as consumers could tailor products to their every need and desire (think of the infinite combinations of songs your iPod offered compared to a disc). Lured by this promise, consumers widely adopted digital technologies in pursuit of a third modernity. Google would soon make a discovery that would turn this paradigm on its head.

The Discovery of Behavioral Surpluses

In the late 1990s, Google undertook the challenge of ordering and facilitating access to information on the internet. In doing so, the company quickly realized that users of Search left a trail of information about their behavior (the location from which they searched, the time at which they searched, what they searched for, etc.): the “Behavioral Surpluses”. At first, Google treated these surpluses as mere digital waste. But it eventually understood that behavioral surpluses could be harvested and analyzed to predict user behavior and, thus, better align Search results with the user’s needs.

By 2000, facing growing pressure from investors, Google was looking for ways to monetize the success of Search. It soon realized that, instead of using behavioral surpluses solely to enhance Search, it could sell behavior predictions about its users (the “Predictive Products”) to advertisers eager to target consumers efficiently. From then on, Google’s business changed. Google was no longer offering search services to consumers; it was selling predictive products to advertisers. Users’ behavioral surpluses were not used for their benefit but against them, as they allowed advertisers to exploit their every desire for commercial purposes. Search’s primary function became to harvest behavioral surpluses to be turned into predictive products, the real Google product. This shift happened without consumers’ consent or knowledge, and this lack of consideration would remain a staple of the practices of Google and its peers.

The Behavioral Gold Rush

Google’s profits skyrocketed following this discovery, and other tech companies, notably Facebook and Microsoft, followed suit. These companies started competing to offer the most accurate predictive products, which would attract the most bids from advertisers. To do so, they needed to achieve economies of scale: the more behavioral surplus a company captures, the more accurate its predictive products become, so the streams through which behavioral surpluses are captured had to be multiplied. They did so by offering an increasing number of “services”, ultimately designed to generate behavioral data.

Tracking cookies, which had almost been banned in the US, became pervasive. Websites were designed to be addictive, in order to maximize the time users spent there. Google expanded its reach into numerous markets, eventually reaching one billion devices by freely licensing its Android software, allowing it to gather staggering amounts of behavioral data from unknowing individuals.

Tech companies also adopted aggressive anti-competitive behavior to secure their sources of harvest. The resulting concentration of the industry, in which nascent social media platforms like Instagram were systematically bought up by established surveillance capitalists, is now being acknowledged by the US Congress and is testament to this phenomenon.

As a result, surveillance became pervasive. Digital technologies allowed the creation of new streams of behavioral surpluses which could be harvested, turned into predictive products and sold to advertisers or others, not for our good but for their profit. This profit-driven race toward surveillance earned these tech companies the title of “Surveillance Capitalists”.

The Avoidance of Scrutiny and Regulation

These practices entail a marketization of human behavior which, although now so widespread as to seem natural, violates our traditional understanding of privacy and agency: our behavior is monetized without our express knowledge or approval. And although the birth of surveillance capitalism first went unnoticed, its restless expansion led to growing discontent among the public and regulators. Google and its peers would soon learn how to overcome these obstacles.

“Move fast and break things”, the well-known Facebook mantra, accurately describes how surveillance capitalists overcome public resistance. To start, companies implement their new ideas without asking permission. Google did so with Street View when it deployed its camera-equipped cars through major cities and uploaded the resulting images to the internet without notice. They then inevitably face resistance, which is either ignored or fought until it fades away. In the case of Street View, when the inhabitants of certain cities refused to let Google’s cars enter and when Germany pushed back against the service, Google simply deployed it elsewhere until the technology became so normalized that those who had initially resisted caved in. To secure this normalization, companies then offer to adapt their services at the margin, creating a false sense of reciprocity; blurring faces in Street View is an example. Finally, surveillance capitalists move on to developing the next stream. Once Google had mapped the world, it sought to install microphones in our homes with Google Home.

Avoiding regulatory scrutiny has proved even easier. Surveillance capitalists thrived in a policy context still deeply influenced by Friedman’s view of government regulation as detrimental to innovation, as illustrated for example by the immunity granted to those companies under 47 U.S.C. § 230, and in which the surveillance needs of the post-9/11 government aligned with those of Google and its peers. Since then, they have managed to keep regulation at bay by putting their technology at the service of certain politicians (such as the 2008 Obama campaign), maintaining revolving doors between the US government and Silicon Valley, outspending every other lobby and funding research aligned with their interests.

The Division of Knowledge

Google’s discovery of behavioral surpluses thus turned the third-modernity promise on its head. Where digital technologies were supposed to empower individuals, they have been turned into a profitable surveillance enterprise. The various “services” are better understood as streams of behavioral surpluses to be harvested, turned into predictive products and sold to the highest bidder. Without our consent, our behavior is monetized and turned against us, as it is used to sell us goods, services or political ideas. In doing so, surveillance capitalists create a new division of society based on knowledge: between those who hold this data and the predictions it allows (the “Dark Text”) and those who do not, and from whom behavioral data is unwillingly extracted. The amount of power this dark text confers becomes increasingly obvious as surveillance capitalists move away from mere prediction toward behavior modification and certainty.

Present — From Behavior Prediction to Behavior Modification

To many (and this shows the success of the habituation enterprise undertaken by surveillance capitalists), harvesting behavioral surpluses from unknowing users, predicting their future behavior and selling such predictions to advertisers is not a “bad thing”. But what if, instead of predicting your behavior, they modified it? What if, instead of predicting who you could vote for, they could decide who you will vote for? What if, instead of predicting who you could be attracted to, they could decide who you will be attracted to?

Economies of Scale, Scope and Action

The first stage of surveillance capitalism was characterized by economies of scale: increased supplies of behavioral surpluses meant more accurate predictive products, which led to increased profits to fund new supplies of behavioral surpluses, and so on. But as competition and the expectations of advertisers increase, surveillance capitalists are reaching a stage where, if profits are to keep rising, predictive products must approach certainty.

To achieve such certainty, surveillance capitalists are now pursuing economies of scope, gathering data on all aspects of our physical, emotional and social lives, with the ultimate goal of achieving economies of action: using their knowledge of the dark text to influence our behavior to the economic benefit of their clients.

The Rendition of Our World (Economies of Scope)

More diverse sources and types of behavioral surpluses directly translate into more accurate predictions. Indeed, one sees how predictions based on an individual’s social, mental and physiological state would be far superior to those based solely on that individual’s online behavior. Surveillance capitalists hence have a clear incentive to diversify the behavioral surpluses they gather. But as all behavioral surpluses generated online are already being used, such diversification necessarily means rendering more and more of our real-world activities into actionable data.

Our phones, and the location data they constantly generate, already provide a proxy for our offline behavior. But the Trojan horse of this rendition enterprise is the Internet of Things (“IoT”), synonymous with a multiplication of sensors generating a constant flow of real-world data. Your Fitbit gives surveillance capitalists access to your physiological information, Alexa gives them access to every word spoken in your house, and the smart cities of tomorrow will turn your every step into behavioral surpluses ready to be analyzed.

As they walk further down the surveillance path, Google and its peers once again face resistance, this time from those suspicious of the IoT, and they seek to circumvent it. As in the past, they do so by “moving fast and breaking things”, framing regulations as outdated, innovation-stifling instruments and mobilizing troops of lobbyists. But this new era brings its share of new public relations techniques, so effective that many of us are now paying to bring microphones, cameras and other sensors into our homes.

First, surveillance capitalists emphasize the “smart” nature of IoT devices. Surely, your smart light fixtures must be superior to the normal lights of your neighbour! From there, the adoption of those devices is framed as inevitable. Indeed, if those “smart” models are better, why wouldn’t everyone adopt them? If you can turn on the light using your voice, why bother using a switch anymore? And if their adoption is inevitable, those questioning their actual usefulness (are there really any benefits to voice-activated lights? Is it worth all of your conversations being recorded?) must be labelled as irrational conservatives.

On top of this social pressure, surveillance capitalists exploit our weaknesses. Our social nature is tapped into by voice assistants (Cortana, Google Home, Alexa) designed to emulate real human relationships and bring us to divulge more and more information. Our economic needs are either satisfied by smart devices promising increased cost efficiency (smart home heaters resulting in lower consumption) or leveraged by making smart devices the condition for accessing cheaper services (think of a car insurer requiring a tracking device in exchange for cheaper premiums).

Ultimately, surveillance capitalists hope that, thanks to the IoT-driven multiplication of sensors, rendition will become ubiquitous. From there, they will be able to track and analyze our real-world behavior, tapping into renewed and diversified sources of behavioral surpluses, all without our consent or true knowledge. Once secured, these economies of scope will pave the way to economies of action.

The Contemplation of Behavior Modification (Economies of Action)

Thanks to the unprecedented quantity and diversity of behavioral surpluses at their disposal, surveillance capitalists are, for the first time, looking past mere prediction. They realize that these surpluses can be used to influence our behavior to the economic benefit of their clients. An advertiser, say a shoe manufacturer, instead of purchasing ads targeting individuals likely to be looking for shoes, could pay Facebook to modify the behavior of those same individuals so that they do indeed buy shoes. None of the tech firms is yet able to sell such certainty, but they have started experimenting.

Facebook is leading the charge. Its scientists, such as Adam D. I. Kramer, track the ways in which tweaks to the news feed can affect individual behavior. In a 2012 study, the scientists found that by deliberately displaying either predominantly positive or predominantly negative messages in a user’s news feed, they could significantly alter that user’s mood (as evidenced by the content posted after the experiment). In another study, they found that by displaying an “I Voted” icon at the top of the news feed, together with pictures of the user’s friends who had voted, they could effectively push more people to vote. They discovered that the human bias towards social comparison could be instrumentalized to spread desired behavior through social networks: if an individual can be convinced that others are acting in a certain way, she will likely follow suit. These experiments yielded measurable effects in only 2–3% of the users exposed, but they proved the concept, and given Facebook’s billions of daily users, the impact of such small changes remains considerable.

One Google spin-out, Niantic, explored behavior modification through gamification. With Pokémon Go, the company realized that it could modify the real-life behavior of its users by placing rare Pokémon, arenas and other elements of the game in strategic places. One restaurant paid Niantic to place rare Pokémon in its dining room and saw its profits increase by 70% as a result. Unknowing players lured by the game were made to purchase food in a restaurant they would never have visited otherwise.

Finally, insurance companies are exploring the use of the data made available by new sensors to punish certain behaviors and incentivize those better aligned with their economic interests. For example, numerous insurers now require their customers to install trackers in their cars, resulting in lower premiums for good drivers and higher premiums for bad ones.

The Right to the Future Tense

The abovementioned experiments build on research conducted in the 1970s by the CIA and by B. F. Skinner. Before their efforts were curtailed by Congress, they had found that behavior modification requires the subject to have a low sense of self-determination and to be unaware of the ongoing manipulation. Both elements are built into the current digital economy: psychological surveys have shown that the widespread use of social networks has led to a decreased sense of self-determination across society, and the ubiquity of digital technologies and sensors renders the manipulation conducted through them almost invisible to the average individual.

As surveillance capitalists learn to induce and exploit our diminishing awareness and self-determination through social comparison, gamification and economic incentives, they approach dangerous manipulation capabilities. In doing so, they pave the way to a new divided world that is not “smart” but anti-democratic, in which the few control the many, who are denied any semblance of individuality.

Future — The Hive and the End of the Individual

Realistically, wide-scale behavior modification technologies remain a distant prospect. Yet one has to wonder what such technologies entail for the future of society and of human experience. Through their statements, tech executives and their researchers offer the beginning of an answer. They foresee a new social system in which a class of specialists, descended from today’s surveillance capitalists, controls the behavior of the masses through scientific and technological means. This new model of society (the “Hive”) will have individual and collective consequences which we should be aware of if we are to keep marching along this path.

The Hive, the Specialists and the Masses

In this new world, society is divided into two bodies. At the bottom are the masses, composed of individuals whose behavior is constantly rendered into data by countless sensors, too deeply ingrained in their environment to be questioned. The behavior of these individuals can not only be measured by those sensors; it can also be manipulated, through an unspecified combination of social comparison, gamification and various sanctions and incentives. At the top are the surveillance capitalists and their researchers, a class of specialists who have access to the behavioral data and the means to use these behavior modification tools. They decide what ends society should pursue and use their power to tilt the behavior of the masses in the desired direction.

This system relies on instrumentarianism and should not be confused with past forms of power such as authoritarianism. Unlike authoritarianism, instrumentarianism does not seek to invade the souls and bodies of its members. It is indifferent to souls and bodies and seeks to overcome them. It sees individuals as gears in a society-wide machine built to serve the interests of a narrow class of specialists. As such, it threatens the very foundations of democracy and human dignity.

The End of Democracy

Since the Enlightenment, self-determination and free will have been seen as central to human nature, and the ability to exercise them is closely tied to human dignity. This philosophy implies that society is a complicated undertaking, requiring the reconciliation of conflicting views and behaviors. Democracy was hence developed to balance the need to respect human dignity and agency with the collective needs of our societies. It allows individuals with conflicting views and interests to live and build a future together through constant negotiation. But in the Hive, the views and interests of the masses no longer conflict; they perfectly align with those of the specialists. As individual agency is replaced by behavior modification, the need for negotiation, and hence for democracy, disappears. The specialists design a plan for our future, and they execute it through behavior modification.

In the Hive, the future is therefore not the result of democratic negotiation. Rather, it is unilaterally decided by a class of specialists for whom democracy is the distant vestige of a messy and unaligned society. Pentland and other advocates of surveillance capitalism claim that the end of democracy will allow specialists to modify mass behavior to serve the common good, for example by favoring climate-conscious behaviors. But what guarantees that this power will not be used for the worst of evils rather than for the greater good?

The End of the Individual

Skinner already knew in the 1970s that behavior modification is best achieved when subjects do not know they are being manipulated and have a weak sense of self-determination. Wide-scale behavior modification and the Hive therefore require individuals to give up free will and agency if they are to function. Individuality becomes a bug, an undesirable friction to be tamed. Conveniently, the very tools used to control behavior in the Hive also induce a decreased sense of self-determination. The annihilation of individuality is both a prerequisite and a feature of this system.

Ubiquitous rendition negates the very essence of sanctuaries, which are essential to our development. In sanctuaries, such as our homes, we exercise boundary control by freely deciding which people or objects we let into our lives and which interests we cultivate. Through this exercise, we define ourselves and our lives separately from others, who define themselves through the same process. Any interference with the exercise of this control therefore impedes a crucial tool of self-determination. The ubiquitous presence of sensors, now reaching our bedrooms, is such an interference. If every word spoken between your walls is gathered by Alexa before being analyzed and, in essence, sold to the highest bidder, are you free to say as you please in your own home? If you are being filmed, are you free to act as you please in your own home? As our sanctuaries are invaded, they lose their role as a tool for individual self-determination.

At the same time, behavior modification is achieved, in part, through social comparison, which also diminishes our agency. No one can yet grasp what form such comparison would take in the Hive, but twenty years of social media have taught us what it could imply for the individual. The process of self-determination, materialized as the definition of an inner self, takes place during adolescence and early adulthood. It requires the individual to slowly detach herself from the pressure of her peers to develop a personality better aligned with her own traits and needs. Eventually, it allows the adult to see herself as having inherent worth, independently of her peers’ judgment, while still fulfilling her need for social validation and connection. Without this process, the individual is bound to see herself from the outside looking in and needs constant approval from her peers to maintain a sense of worthiness. Because it offers constant exposure to our peers, to which we are drawn by our natural thirst for social interaction, along with ways to measure our acceptance by the group (through likes, for example), social media increases the importance given to the group and correspondingly impairs our ability to detach from it and develop a sense of inner self. Enhanced social comparison in the Hive will only accelerate this process. As we come to see ourselves only in the reflection of the mirror, we are exposed to the manipulation of those shaping it.

Unable to exercise self-determination and to develop a sense of inner self by detaching herself from the peer group, the individual in the Hive loses her individuality, and her behavior becomes easily malleable. The Hive marks the end of the individual. In this world, “if you’ve got nothing to hide, you’re nothing”.

The Digital Future We Want

Capitalism in the information age was supposed to pave the way to a third modernity, offering economic growth while furthering individual agency. But the discovery of behavioral surpluses and the subsequent marketization of individual behavior doomed this vision. It set off the seemingly unstoppable rise of surveillance capitalism, which could well lead to the end of democracy and individuality. Before we get there, we need to remember that surveillance capitalism is not an inherent feature of digital technologies. It is the result of political and economic decisions, starting at Google in the early 2000s, all of which can be reversed. Now is still the time to plead for a human future in the digital age.
