
Why Privacy Matters

If you’re passionate about democracy, you’re passionate about privacy. We all must become active privacy advocates with some urgency or risk losing real democracy forever.

Sep 9, 2024 • 29 mins

Author: Angus Mackay

TL;DR

I picked up Neil Richards’s book Why Privacy Matters because I was interested in learning more about privacy: its past, present and future. I brought to the endeavour a bunch of preconceived notions about privacy that had been formed through exposure to popular media. None of us has time to be experts on many things, so we all fall into this trap. My favourite was that “you should only be concerned about your privacy if you’ve done something wrong or you have something to hide”. I cringe now when writing that. If you don’t have time to read the book and democracy is absolutely a non-negotiable in your life, this article is for you. Otherwise, it’s a great read that I can’t recommend highly enough.

We’re living in a time when rapid advances in AI are being made seemingly every other month, and in some cases faster. Like so many groundbreaking technologies that have come before it, AI has the potential to be humanity’s saviour and its undoing. We’re seeing devices now like Meta’s smart glasses, Humane’s lapel pin and Rewind’s pendant (before a name change) that see, hear and record everything we do in our lives. Whilst this first generation of products is likely to come and go, they undoubtedly set the tone for things to come. The AI agents behind these devices will soon be pervasive, and they’ll have the potential to nudge, cajole and dictate our most important actions. Once we cross that nexus, the idea of democracy will remain just that, an idea, and any plausible opportunity to change our fate will already be lost.

How to Think About Privacy

What Privacy Is

There are a lot of definitions of privacy. Privacy scholar Daniel Solove puts it well when he argues that “privacy is a concept in disarray”. Solove set out to write a book defining privacy but gave up after a vast amount of research, explaining frankly:

“After delving into the question I was humbled by it. I could not reach a satisfactory answer. This struggle ultimately made me realise that privacy is a plurality of different things and that the quest for a single essence of privacy leads to a dead end. There is no overarching conception of privacy — it must be mapped like terrain, by painstakingly studying the landscape”.

The good news is that we don’t actually need a precise, universally accepted definition of privacy. Lots of things we care about lack precise definitions, but we still manage to protect them anyway, like equality and freedom of expression. Still, we need a place to start, so here is a working definition:

Privacy is the degree to which human information is neither known nor used.

In the US, the Video Privacy Protection Act of 1988 refers to human information as “personally identifiable information”, whereas Europe’s GDPR applies to “personal data”, which it defines as “any information relating to an identified or identifiable natural person.” The drawback of using terms like “data”, “personal data” and “users” is that it distances us from what’s really at stake in discussions of privacy – human beings.

All too often, popular and legal conversations about privacy stop the moment our human information is collected. Solove has termed this the “secrecy paradigm” — the idea that privacy is only about keeping things hidden, and that information exposed to another person ceases to be private. But the law has never really worked this way: it has imposed duties of confidentiality — in some cases for centuries — on a wide variety of actors, including doctors, lawyers, accountants, and those who agree to be bound by a contract of nondisclosure, precisely because shared information can still deserve protection. It’s why the privacy theorist Helen Nissenbaum argues:

“when we think about privacy, it’s best to focus on whether the flow of information is appropriate in a particular context.”

Privacy is a continuum rather than a binary on/off state. Unfortunately, American law does not always reflect this commonsense understanding of how human information circulates.

A Theory of Privacy as Rules

In 2002, Target Corporation discovered a host of reliable leading indicators in buying behaviour and other data it had purchased that told it when a customer was pregnant, even if she didn’t want the company to know. The aggregated insights led to a “pregnancy prediction” score that even allowed Target to guess each woman’s due date with surprising precision. It’s a good example of how big data can be used to draw surprising inferences from seemingly innocuous information, and how “creepy” that can feel. The real lesson here, though, is the power those insights confer to control human behaviour.

Target wanted to reach expecting consumers before its competitors did, and to habituate them as Target shoppers before their buying preferences settled into new habits — with Target as one of those habits. Consumers who became aware of the practice responded negatively, with reactions ranging from feeling “queasy” to actual anger. Target’s response was to start mixing in ads for things it knew pregnant women would never buy, so the baby ads looked random. It found that as long as a pregnant woman thinks she hasn’t been spied on, she’ll use the coupons. It’s fair to say that Target has perfected its targeting far beyond what was possible more than 20 years ago. Just imagine what will be possible in the age of artificial intelligence. This illustrates why it’s helpful to think about privacy in terms of our inevitable need for human information rules that restrain power to promote human values.
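To make the mechanics concrete, here’s a minimal sketch of how a retailer might aggregate individually innocuous purchases into a sensitive inference. The products, weights and threshold below are invented for illustration; Target’s actual model and features have never been published.

```python
# Illustrative only: a toy "pregnancy prediction" score built from
# seemingly innocuous purchase signals. Products, weights and the
# threshold are invented; they are not Target's actual model.

PURCHASE_WEIGHTS = {
    "unscented_lotion": 0.20,
    "large_tote_bag": 0.10,
    "calcium_supplements": 0.25,
    "zinc_supplements": 0.15,
    "cotton_balls_bulk": 0.15,
}

def pregnancy_score(basket: set[str]) -> float:
    """Sum the weights of matching purchases into a rough 0..1 score."""
    return sum(w for item, w in PURCHASE_WEIGHTS.items() if item in basket)

def likely_pregnant(basket: set[str], threshold: float = 0.5) -> bool:
    """Flag shoppers whose aggregate score crosses the threshold."""
    return pregnancy_score(basket) >= threshold

if __name__ == "__main__":
    basket = {"unscented_lotion", "calcium_supplements", "cotton_balls_bulk"}
    print(round(pregnancy_score(basket), 2))  # 0.6
    print(likely_pregnant(basket))            # True: no single item is
                                              # sensitive, but the aggregate
                                              # inference is.
```

The point isn’t the arithmetic; it’s that no single purchase reveals anything, while the aggregate does, and whoever holds the aggregate holds the power.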

My theory of privacy as rules has four elements:

  1. “Privacy” is fundamentally about power.
  2. Struggles over “privacy” are in reality struggles over the rules that constrain the power that human information confers.
  3. “Privacy” rules of some sort are inevitable.
  4. So “privacy” should be thought of in instrumental terms to promote human values.

Privacy & Power

Human information is powering algorithms in the service of established social, economic, and political power, often in connection with applied behavioural science or other forms of social, economic, or political control. Privacy is about power because information is power, and information gives you the power to control other people.

Julie Cohen has explained how digital platforms operating within what she calls “informational capitalism” treat human beings as raw material for the extraction of data, through a process of surveillance under which the subjects become conditioned as willing providers of that data. The Canadian legal theorist Lisa Austin has been even more blunt, arguing that a lack of privacy can be used by governments and businesses to enhance their power to influence and manipulate citizens and customers, whether by changing behaviour or by manufacturing “consent” to dense sets of terms and conditions or privacy policies.

The discipline of behavioural economics was popularised by Nudge (2008), the best-selling book by economist Richard Thaler and legal scholar Cass Sunstein. Their key insight was that human decisions can be influenced by the way choices are structured, because that structure interacts with our cognitive biases. They helpfully coined the term “choice architecture” to indicate that the conditions under which we make choices affect those choices, and that those conditions are subject to manipulation by others. They optimistically hoped that choice architects would encourage human well-being, welfare and happiness on average, while maintaining the ability of choosers to opt out of choices they didn’t believe advanced their individual preferences over the alternatives.

A classic example of this shift was the Facebook game FarmVille. Launched in June 2009, Zynga’s social game put its players in control of a small virtual farm. At its peak FarmVille boasted 85 million players. Zynga created feedback and activity loops that made it hard for players to escape. As one journalist described it succinctly, “If you didn’t check in every day, your crops would wither and die; some players would set alarms so they wouldn’t forget. If you needed help, you could spend real money or send requests to your Facebook friends.” It was ultimately abandoned by players who came to realize that they were not the farmers but rather the crop itself — it was their own time, attention, and money being farmed by Zynga.

Zynga was a pioneer in embedding cognitive tricks in design to create a deliberately engaging (“addictive”) product. Other designers were watching and taking notes, and FarmVille’s cognitive tricks — a subset of what are known as “dark patterns” — spread throughout the internet. Indeed, as the New York Times reported on the occasion of the game finally shutting down on New Year’s Eve 2020, “FarmVille once took over Facebook. Now everything is FarmVille.”

Privacy as Rules

In the absence of comprehensive privacy rules to protect consumers, the American approach to privacy has been “sectoral.” Under this system, some types of information, like health, financial, and movie rental data, are covered by individual laws, but no general privacy law fills the holes and provides a baseline of protection like the GDPR does in Europe. The general rule of American privacy has been called “notice and choice.” Since the dawn of the internet, as long as tech companies give us “notice and choice” of their data practices, they are judged to be in compliance with the law. In practice, the “notice” consumers get is little more than vague terms hidden in the bewildering fine print of privacy policies. Moreover, virtually no one reads these privacy policies — a fact documented by a vast academic literature.

The “choice” half of “notice and choice” is an equally dangerous fiction, particularly when the “choice” we are being presented with is essentially the “choice” of whether or not to participate in the digital world. This terrible general rule has forced consumers into what Solove called “Privacy Self-Management.” This sets the defaults of human information collection in favour of powerful companies, and then causes consumers to feel guilty and morally culpable for their loss of privacy by failing to win a game that is intentionally rigged against them. We blame ourselves because the design that is nudging us in that direction is often invisible or seemingly apolitical. Notice and choice is thus an elaborate trap, and we are all caught in it.

It gets worse. Even good rules can be co-opted in practice, and institutions can be highly resourceful in watering down or even subverting rules intended to constrain them. Detailed sociological fieldwork by Ari Waldman has revealed the ways in which corporate structures and cultures, organisational design, and professional incentives are often deployed by companies to thwart both the intentions of privacy professionals on the ground as well as the spirit of the legal rules and privacy values those professionals attempt to advance.

To routinise surveillance, executives in the information industry use the weapons of coercive bureaucracies to control privacy discourse, law, and design. This works in two ways: it inculcates anti-privacy norms and practices from above and amplifies anti-privacy norms and practices from within. Tech companies inculcate corporate-friendly definitions of privacy. They undermine privacy law by recasting the laws’ requirements to suit their interests. And they constrain what designers can do, making it difficult for privacy to make inroads in design. As this happens, corporate-friendly discourses and practices become normalized as ordinary and common sense among information industry employees. This creates a system of power that is perpetuated by armies of workers who may earnestly think they’re doing some good, but remain blind to the ways their work serves surveillant ends.

Edward Snowden has explained at length how he witnessed other intelligence community personnel who spoke up internally about surveillance abuses face professional marginalization, harassment, and even legal consequences — convincing him that the only way to address those abuses was to go to the press with documentary evidence.

Cohen goes even further, explaining how powerful institutions are able to shape not just their own organizations but the basic structure of our political and legal language to capture rules and subordinate them to their own purposes through the ideology of neoliberalism. This can happen, for example, by reinterpreting human information as their own valuable property rights or advancing dubious interpretations of the First Amendment like “data is speech” to insulate their information processing from democratically generated state and federal laws that would rein them in.

These intertwined phenomena help to explain why there are so few effective American privacy rules at present, and why efforts to improve them have faced serious challenges in the political process, in the courts, and in practice:

  • Our law governing email privacy — the Electronic Communications Privacy Act (ECPA) — was passed in 1986, long before most people (including members of Congress) had even sent an email.
  • Our law governing computer hacking — the Computer Fraud and Abuse Act (CFAA) — was passed in 1984.
  • Our law protecting movie-watching privacy, the Video Privacy Protection Act (VPPA), was passed in 1988.
  • The Health Insurance Portability and Accountability Act (HIPAA) of 1996 only covers health data from doctors, hospitals and insurance companies and none of the other participants in the modern health system.

The design of computer systems is a source of rules that constrain the users of those systems. Seda Gurses and Joris van Hoboken have shown how the software development kits that platforms allow designers to use to build products are themselves designed in ways that make the protection of privacy in practice very difficult, even where privacy protections are mandated by legal rules.

The Inevitability of Privacy Rules

Many still think of privacy as a binary choice between “public” and “private,” when our everyday experience reminds us that virtually all information that matters exists, and always has existed, in intermediate states between these two poles. Much of the confusion about privacy law over the past few decades has come from the simplistic idea that privacy is a binary, on-or-off state and that once information is shared and consent given, it can no longer be protected.

The law has always protected private information in intermediate states, whether through confidentiality rules like the duties lawyers and doctors owe to clients and patients, evidentiary rules like the ones protecting marital communications, or statutory rules like the federal laws protecting health, financial, communications, and intellectual privacy.

Neither shared private data nor metadata should forfeit protection merely because they are held in intermediate states. Understanding that shared private information can remain confidential helps us see more clearly how to align our expectations of privacy with the rapidly growing secondary uses of big data.

“Privacy,” in this broader sense, becomes much more than just keeping secrets; it enters the realm of information governance. Privacy is about degrees of knowing and using, and as such it requires an ethical rather than a mathematical approach to the management of information flows.

In her history of privacy in modern America, Sarah Igo documents a variety of privacy struggles over the past century or so, in which some Americans sought to know more about other people, and how the subjects of that scrutiny resisted those attempts to make them known. Igo convincingly argues that these episodes were invariably fights over social status and power in which privacy was the indispensable “mediator of modern social life.” As she puts it well, “Americans never all conceived of privacy in the same way, of course… What remained remarkably consistent, however, was their recourse to privacy as a way of arguing about their society and its pressures on the person.”

And the familiar fault lines of our society — race, class, wealth, gender, religion, and sexuality — were all too often the conduits of those struggles and all too often dictated their winners and losers. Privacy talk in America, then, has long been a conversation about social power, specifically the forms of power that the control and exploitation of human information confers.

Privacy Rules are Instrumental

There’s pretty good anthropological evidence that humans, like many animals, benefit from having private spaces and relationships, and that this benefit is an intrinsic good. European human rights law, as we’ve seen, also treats privacy this way. The European Union’s Charter of Fundamental Rights recognizes fundamental rights to “private and family life, home and communications” (Article 7) and “the protection of personal data concerning [individuals]” (Article 8).

In my experience as well, because privacy is about power, people whose identity or circumstances depart from American society’s (socially constructed) default baseline of white, male, straight, rich, native-born Christians tend to be, on average, more receptive to the privacy-as-an-intrinsic-good argument.

If someone doesn’t believe that privacy is fundamental, pounding the table about its fundamentality is not going to be an effective way of changing their mind. Instead, I’ve found it’s necessary to go deeper than just privacy, to explain that privacy can matter not for its own sake but because it gets us other things that we can all agree are important.

Getting this argument right is particularly significant because international conversations about privacy rules frequently break down with the assertion (commonly made by Europeans) that privacy is a fundamental right that needs no further explanation. To many Americans used to thinking about personal information in economic terms, that argument is bewildering. But at the same time, even if all we care about is economics, some international understanding about what privacy is and why it matters is essential to the economic future of Western democracies.

What Privacy Isn’t

In a time of rapid technological and social change, it’s helpful to think about privacy in relatively broad terms, particularly given the importance of human information to those changes. This is why most of the leading scholarly definitions of privacy are relatively open-ended, like Daniel Solove’s sixteen different conceptions of privacy.

My own definition excludes other ways we could talk about privacy, such as its being a right to control our personal information or the ability to conceal disreputable information about ourselves.

Let’s clear up some misconceptions and myths about privacy. Four of these myths are particularly dangerous:

  • Privacy is about hiding dark secrets, and those with nothing to hide have nothing to fear.
  • Privacy is about creepy things that other people do with your data.
  • Privacy means being able to control how your data is used.
  • Privacy is dying.

Privacy isn’t about Hiding Dark Secrets

Everyone has something to hide, or at least everyone has facts about themselves that they don’t want shared, disclosed, or broadcast indiscriminately. The ability to separate ourselves from others at times is necessary for every member of society, and it’s necessary for society itself to function.

Unwanted disclosure of many kinds of information about ourselves can have deeply harmful consequences to our identity, to our livelihood, to our political freedom, and to our psychological integrity. The law’s intuitive and long-standing protection against blackmail shows that the ability to disclose secrets confers the kind of inappropriate power that the law needs to safeguard against.

Intellectual privacy is particularly important at the present moment in human history, when the acts of reading, thinking, and private communications are increasingly being mediated by computers. Human information allows control of human behavior by those who have the know-how to exploit it. And all of us can be nudged, influenced, manipulated, and exploited, regardless of how few dark secrets we might have.

The “nothing to hide” argument focuses narrowly on privacy as an individual matter rather than as a social value. It refuses to recognize privacy as a right, treating it instead as an individual preference rather than something of broad value to society in general. Framing privacy in this way makes it seem both weak and suspicious from the start.

With the possible exception of our thoughts, very little of our information is known solely to us. We are social creatures who are constantly sharing information about ourselves and others to build trust, seeking intimacy through selective sharing, occasionally gossiping, and always managing our privacy with others as we maintain our personal, social, and professional relationships. We also have an instrumental interest in letting other people have the privacy to live their lives as they see fit. Undeniably, this value is a cultural one; as law professor Robert Post has argued, privacy rules are a form of civility rules enforced by law.

Solove agrees that privacy’s value is social. “Society involves a great deal of friction,” he argues, “and we are constantly clashing with one another. Part of what makes a society a good place in which to live is the extent to which it allows people freedom from the intrusiveness of others. A society without privacy protection would be oppressive. When protecting individual rights, we as a society decide to hold back in order to receive the benefits of creating free zones for individuals to flourish.”

Free zones let us play — alone as well as with others — with our identity and are an essential shield to the development of political beliefs. They foster our dynamic personal and political selves, as well as the social processes of self-government. In this way, privacy is essential to the kinds of robust, healthy, self-governing, free societies that represent the best hope against tyranny and oppression. As Edward Snowden puts it succinctly, “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

Privacy isn’t about Creepiness

Using creepiness as our test for privacy problems creates a problem of its own. If something isn’t creepy, the intuition suggests, then it probably isn’t something we should worry about. And it seems to follow that if people aren’t aware of a data practice, it’s fine.

Creepiness has three principal defects:

  • First, creepiness is overinclusive. Lots of new technologies that might at first appear viscerally creepy will turn out to be unproblematic.
  • Second, creepiness is underinclusive. New information practices that we don’t understand fully, or highly invasive practices of which we are unaware, may never seem creepy, but they can still menace values we care about.
  • Third, creepiness is both socially contingent and highly malleable. A pervasive threat to privacy or our civil liberties can come to seem less creepy as we become conditioned to it. Examples include:
    • The internet advertising industry, which relies on detailed surveillance of individual web-surfing.
    • The erosion of location privacy expectations on dating apps.
    • Facebook piggybacking on the greater willingness of people to share their lives and photos with people they actually knew. Its tricks were to:
      • use in-person social norms as bait for a much broader privacy heist.
      • keep pushing at the social and legal norms surrounding privacy, changing its terms of service to allow it to access ever more personal information.

Privacy isn’t Primarily about Control

When Zuckerberg was called before Congress to testify about Facebook’s information practices during the Cambridge Analytica scandal, he argued over and over again that when it comes to privacy, Facebook’s goal, first and foremost, is to put its “users” in “control.”

Privacy as Control runs deep in our legal and cultural understandings of privacy. The basic approach of the Fair Information Practices is all about empowering people to make good, informed decisions about their data. The right to consent to new uses of our data and the right to access our data and correct it if it is wrong are examples of this approach, which has been common in U.S. privacy laws from the federal Privacy Act of 1974 to the California Consumer Privacy Act of 2020. Privacy as Control also runs through European data protection law and particularly through the GDPR, which enshrines strong norms of informed consent, access, correction, data portability, and other control-minded principles. American regulators have shared this view. As we have seen, for many years the Federal Trade Commission has called for a “notice and choice” regime to protect consumer privacy, even though the limitations of this approach became apparent over time.

Technology companies also lionize Privacy as Control. Google promises, “[Y]ou have choices regarding the information we collect and how it’s used” and offers a wide variety of “privacy controls.” In an online manifesto titled “A Privacy-Focused Vision for Social Networking,” Zuckerberg mused, “As I think about the future of the internet, I believe a privacy-focused communications platform will become even more important than today’s open platforms,” and he offered a few key principles that this new, allegedly privacy-protective Facebook would adhere to, the first of which was “Private interactions. People should have simple, intimate places where they have clear control over who can communicate with them and confidence that no one else can access what they share.”

Unfortunately, it’s not that simple. What can be stated simply, though, is that Privacy as Control has been a spectacular failure at protecting human privacy for the past thirty years, particularly in the United States. Privacy as Control is an illusion, though like the best illusions it is a highly appealing one. There are four main problems with using control to solve problems of privacy, but they are big ones: (1) control is overwhelming; (2) control is an illusion; (3) control completes the creepy trap; and (4) control is insufficient.

Control is overwhelming

Woodrow Hartzog states the problem well when he explains, “The problem with thinking of privacy as control is that if we are given our wish for more privacy, it means we are given so much control that we choke on it.” Mobile apps can ask users for more than two hundred permissions, and even the average app asks for about five.

When a company’s response to a privacy scandal is “more control,” this simply means more bewildering choices rather than fewer, which worsens the problem rather than making it better. As scholars across disciplines have documented extensively, our consent has been manufactured, so we just click “Agree.” All of this assumes that we even know what we’re agreeing to.

Long or short, privacy policies are vague and they are legion. One famous 2009 study estimated that if we were to quickly read the privacy policies of every website we encounter in a typical year, it would take seventy-six full working days of nothing but reading just to get through them all.

Control is an illusion

Early tech evangelists imagined the internet’s revolutionary potential to empower humans. What we got instead was an internet in which the interfaces governing privacy have been built by human engineers answering to human bosses working for companies with purposes other than revolutionary human empowerment (Silicon Valley’s advertising claims to the contrary notwithstanding). All of the rhetoric about putting “users” in control belies the fact that engineers design their technologies to produce particular results. These design choices limit the range of options available to the humans using the technology. Companies decide the types of boxes we get to check and the switches we get to flip. They also decide which set of choices goes in the basic privacy dashboard, which set goes in the “advanced settings,” and, even more important, which choices “users” don’t get to make at all.

Facebook’s engineers not only know which options the average user is likely to select, but they can use the nudging effects of choice architecture to produce the outcomes they want and that serve their business interests. Or they use other dark patterns to discourage their customers from exercising their privacy controls.
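Here is a minimal sketch of how much work a default does, using an invented settings-change rate (real rates vary, but the share of people who ever dig into settings is small): flip the default and the outcome flips with it, even though every individual notionally had a “choice”.

```python
# Illustrative only: how a default setting shapes outcomes at scale.
# The settings-change rate is an invented assumption, not a measured figure.
import random

random.seed(0)

USERS = 100_000
CHANGE_RATE = 0.03  # assumed: few users ever find and change the setting

def sharing_enabled(default_on: bool) -> int:
    """Count users left sharing data under a given default."""
    shared = 0
    for _ in range(USERS):
        changed_setting = random.random() < CHANGE_RATE
        # Users who touch the setting flip it away from the default;
        # everyone else simply keeps whatever the default is.
        shared += (not changed_setting) if default_on else changed_setting
    return shared

print("default ON :", sharing_enabled(True), "users sharing")   # ~97,000
print("default OFF:", sharing_enabled(False), "users sharing")  # ~3,000
```

The design choice that matters is made before any user ever touches a setting.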

Control completes the creepy trap

Communications scholar Alice Marwick’s idea of “privacy work” is particularly illuminating. Marwick argues that we all engage in “privacy work”: uncompensated labor that we must perform or else be considered at fault when our privacy is violated.

And thus the creepy trap is completed. “Adtech” companies are advertising companies that process human information in ways that stay below our creepiness threshold, with the purpose of selling targeted, surveillance-based ads that manipulate us into buying things.

The illusion of Privacy as Control masks the reality of control through the illusion of privacy. What is really being controlled is us.

Control is insufficient

Our privacy depends not just on what choices we make ourselves but on the choices of everyone else in society. Treating privacy as a purely individual value that can be given or bartered away (whether through “control” or sale) converts it into an asset that can be chipped away on an individual basis, distracting us from (and ignoring) the social benefits privacy provides.

Sometimes there is little or nothing we can do to prevent others from disclosing information about us. This can happen when companies set up pricing systems that rely on information disclosure, like “safe driver” discounts for car insurance contingent on your agreeing to have a black-box data recorder in your car, especially if such boxes were to become standard in passenger cars. Or when your child’s school decides to use a “learning management system” or other software with privacy practices that only the school can agree to. Or when a company voluntarily discloses data it collected about you to the government. Or when someone discloses their genetic data to a company, which, since blood relatives have very high genetic similarities, means they have also shared sensitive information about their close family members.

Privacy isn’t Dying

The idea that Privacy Is Dying is a weaker but more insidious version of the Privacy Is Dead argument. A society fueled by data has no place for privacy, we hear, and we should let it fade into the past like horse-drawn carriages and VHS cassettes. Besides, the argument goes, people in general (and especially young people) don’t care about privacy anymore. A good example of the Privacy Is Dying argument was offered by a young Mark Zuckerberg in 2010, responding to an interview question about the future of privacy on Facebook and the internet in general: “We view it as our role in the system to constantly be innovating and be updating what our system is to reflect what the current social norms are.”

Just because more information is being collected, it does not mean that Privacy Is Dying. Shoshana Zuboff explains that “in forty-six of the most prominent forty-eight surveys administered between 2008 and 2017, substantial majorities support measures for enhanced privacy.” Polls by the Pew Research Center, which does extensive nonpartisan work on public attitudes toward technology, have also found that Americans are increasingly concerned about online data collection, believe the risks of collection outweigh the benefits, and support withholding certain kinds of personal information from online search engines.

Moreover, the very institutions that have the most to gain from the acceptance of the Privacy Is Dying myth often go to great lengths to protect their own privacy:

  • The NSA, for example, keeps its surveillance activities hidden behind overlapping shields of operational, technical, and legal secrecy. It took Edward Snowden’s illegal whistleblowing to reveal the NSA’s secret court orders from the secretive FISA Court. These orders allowed the NSA access in bulk to the phone records of millions upon millions of Americans, without any evidence that international terrorists were involved.
  • Law enforcement agencies have access to “sneak and peek” search warrants that allow them to read emails stored on the cloud, often never giving notice to the people being spied on, and they are secretive about their use of drones and “stingrays,” devices that pretend to be cell phone towers that access digital information.
  • Technology companies closely guard their privacy with aggressive assertions of intellectual property rules, trade secrecy law, and the near-ubiquitous use of NDAs, nondisclosure agreements that prohibit employees, visitors, and even journalists from revealing discreditable things about a company.

Three Privacy Values

Privacy is best understood not so much as an end in itself but as something that can get us other things that are essential to good lives and good societies. There are three such human values that I think privacy rules can and should advance: identity, freedom, and protection.

When we treat privacy as instrumental, the way we talk about privacy changes. We stop talking about creepiness, about whether we’re Luddites or about whether our friend’s privacy preferences are idiosyncratic. Instead, we start asking ourselves:

  1. what rules about human information best promote values we care about
  2. what the power consequences of those rules might be, and
  3. how we should use those rules to advance the values on the ground.

From this perspective, privacy becomes more neutral. This is important because privacy rules can promote bad things, too. And sometimes companies have a legitimate point that poorly crafted privacy rules can get in the way of economic activity without protecting anything particularly important.

1. Identity

Authenticity

Many technology companies are obsessed with “authenticity,” but perhaps none so much as Facebook. From its very beginnings, the social network has insisted upon a “Real Name Policy” under which its human customers are required to use their real names rather than pseudonyms.

While psychologists and philosophers use the term “authenticity” in a variety of ways, all of these ways have one thing in common: they produce benefits for the individual self. On the other hand, “authenticity” as used by social media companies often means the exact opposite:

  • “authenticity” is judged by the company rather than the individual (think presumptively “fake” Native American names or Salman versus Ahmed Rushdie).
  • this definition of “authenticity” primarily benefits the company by enabling better ad targeting to known humans.
  • it privileges “the community” over the individual — from Facebook’s perspective, “authenticity creates a better environment for sharing.”

We deserve the right to determine for ourselves not just what we are called, but who we are and what we believe.

How Privacy nurtures Identities

At the most basic level, privacy matters because it enables us to determine and express our identities, by ourselves and with others, but ultimately — and essentially — on our own terms. Privacy offers a shield from observation by companies, governments, or “the community” within which our identities can develop. In other words, privacy gives us the breathing space we need to figure out who we are and what we believe as humans.

At the core of political freedom is intellectual freedom — the right and the ability to think and determine what we believe for ourselves. Intellectual and political freedom require the protective, nurturing shield of “intellectual privacy” — a zone of protection that guards our ability to make up our mind freely. More formally, intellectual privacy is the protection from surveillance or interference when we are engaged in the activities of thinking, reading, and communicating with confidants.

In order to generate new ideas, or to decide whether we agree with existing ideas, we need access to knowledge, to learn, and to think. The fight for communications privacy in letters, telegrams, and telephone calls was a long and hard one, and we must ensure that those hard-won protections are extended to emails, texts, and video chats. We also need private spaces — real and metaphorical — in which to do that work. Our identities may be individual, but the processes by which we develop them are undeniably social, and intellectual and other forms of privacy are needed throughout the processes of identity and belief formation, development, and revision.

Our political identities are critically important and have received special protection in the law for centuries. Privacy shields the whole self, enabling us to develop our whole personalities and identities, in ways that may be far removed from politics and society at large. Our identities, our senses of self, are complicated and gloriously, humanly messy. We human beings are complicated, we are constantly in flux, and our identities are affected by our environment and by our interactions with others. This self is neither rigidly determined by the mythic, autonomous core, nor is it rigidly determined by social forces; the self exists somewhere in between these two poles. This identity experimentation goes on throughout our lives. In other words, there is indeed a Me, but it is complicated, influenced, and shaped by all of the Yous out there. Yet even though You and other Yous influence Me, I can still resist, sometimes with success.

We are often members of groups that define themselves in terms of what they are not. Groups engage in identity play in order to define who they are and who they are not (notice definitions once again at work in multiple directions, defining and restricting). As individuals, we can do the same thing — belong to groups, define ourselves in opposition to groups, or be part of a group and also separate from it.

Forcing

Identity forcing happens when our social or cultural environment defines us, forcing our identities into boxes we might not choose or may not even have drawn in the first place. While there is certainly room for identity play, the forcing effect of institutions can make that play emotionally, psychologically, or socially costly. Facebook’s mandate that the identities of its users be tied to a public, “real” name is another example of forcing. As technology scholars Oliver Haimson and Anna Lauren Hoffmann have explained, “Experimentation with representing one’s identity online can also allow people to embody potential future selves, which can be indispensable to developing one’s identity broadly.”

Members of marginalized ethnic and national groups, immigrants, racial minorities, sexual minorities, and women, among others, are all familiar with the practice of code-switching, whereby we act, speak, perform, and dress differently for different audiences. In fact, everyone code-switches, even if we don’t realize it. This is not “inauthentic”; it is instead the expression of different aspects of the self, all of which are still Me. Privacy matters because it allows us to hold multiple identities without their coming crashing together, giving the lie to Zuckerberg’s stingy and self-serving notion of unitary identity as authenticity.

We are all of us different people at different times each day as we play different roles in society, and we are different people at different times in our lives as we evolve, mature, regress, explore, and play with our identities. But this is not dishonest, nor is it inconsistent. It is human. Forcing human beings into compatibility with digital or corporate systems, shaving off our rough or “inauthentic” edges, has undeniable human costs.

If we do have a false, inauthentic self, I would argue that it is the homogenized, flattened, forced self that is the false one. This is the forced self that may be desirable to interfaces, advertisers, and self-appointed arbiters of our middle school social standing, but it is worrying if we care about individuality, eccentricity, and the development of unique, critical individuals and citizens.

Exposure

Digital tools expose us to others in ways that are normalizing, stultifying, and chilling to the personal and social ways we develop our senses of self. And exposure can be devastating to identity. Alan Westin expressed this idea well:

Each person is aware of the gap between what he wants to be and what he actually is, between what the world sees of him and what he knows to be his much more complex reality. In addition, there are aspects of himself that the individual does not fully understand but is slowly exploring and shaping as he develops. Every individual lives behind a mask in this manner; indeed the first etymological meaning of the word “person” was “mask.”

Hopefully you agree with me that an attractive vision of freedom is one in which we are able to work out for ourselves and with those close to us what we like, who we love, what we think is beautiful or cool, and what we think human flourishing looks like. This is not the freedom to be just like everyone else but rather a more radical notion that good lives and good societies are ones in which there is individuality, diversity, eccentricity, weirdness, and dissent. This is an argument for authenticity the way psychologists describe it, not the way Facebook does.

Privacy is the key to making this system work. The philosopher Timothy Macklem puts it well when he argues, “The isolating shield of privacy enables people to develop and exchange ideas, or to foster and share activities, that the presence or even awareness of other people might stifle. For better and for worse, then, privacy is sponsor and guardian to the creative and subversive.”

2. Freedom

The “Truth Bomb”

Edward Snowden disclosed many things about the scope of U.S. government surveillance in the summer of 2013, but one revelation that went under the radar was the National Security Agency’s pornography surveillance program. The NSA wanted to identify and surveil “radicalizers” — people who were not terrorists but merely radical critics of U.S. policy. The plan was to surveil them to find their vulnerabilities, then expose those vulnerabilities to discredit them publicly.

Alan Westin explained in 1967 that “the modern totalitarian state relies on secrecy for the regime, but high surveillance and disclosure for all other groups. With their demand for a complete commitment of loyalty to the regime, the literature of both fascism and communism traditionally attacks the idea of privacy as ‘immoral,’ ‘antisocial,’ and ‘part of the cult of individualism.’ ”

We have made some progress against the specter of unregulated government surveillance, most notably with the passage of the 2015 USA Freedom Act, but we still lack a clear understanding of why surveillance can be harmful and how it can threaten political freedom. U.S. courts hearing challenges to government surveillance programs too frequently fail to understand the harms of surveillance and too frequently fail to allow legitimate claims to go forward.

Understanding surveillance

Like privacy, surveillance is a complex subject, and like privacy, it is neither always good nor always bad. But a better understanding of what surveillance is and how it works can help us to understand how to protect privacy when it matters. Sociologist David Lyon explains that surveillance is primarily about power, but it is also about personhood. Lyon also offers a helpful definition of surveillance as “the focused, systematic and routine attention to personal details for purposes of influence, management, protection or direction.”

To Lyon’s definition I’d add a further point: surveillance transcends the public-private divide. This is a real problem under U.S. law because constitutional protections like the Fourth Amendment typically apply only against government surveillance. Private actors can undermine civil liberties free from constitutional constraint. For example, facial recognition companies like Clearview AI scrape millions of photos from the internet, combine them with government photo ID databases or Facebook profiles (both of which associate photos with real names), and then sell facial recognition products to the government.

In modern democratic societies, surveillance of all kinds is on the rise: location tracking from smartphones and smart cars, facial recognition technology, and AI and “big data” algorithms fueled by human information from a variety of sources and used for an ever greater variety of purposes.

“Liquid surveillance” describes the spread of surveillance beyond government spying into often private forms of surveillance in which the subjects increasingly consent and participate. But here, too, consent can become an illusion. We cannot consent to secret surveillance, nor can we consent to structural surveillance like ubiquitous CCTV cameras installed by businesses or on public transport.

Then there is the problem of the “unraveling” of privacy described by legal scholar Scott Peppet, which occurs when other people opt in to surveillance, leaving the vulnerable exposed and isolated. Consider Progressive Insurance’s “MyRate” program, in which drivers can receive reduced insurance rates in exchange for the installation of a surveillance device that monitors driving speed, time, and habits. Drivers who don’t participate in this surveillance program not only pay more for their insurance, but their “privacy may unravel as those who refuse to disclose are assumed to be withholding negative information and therefore stigmatized and penalized.”

Intellectual Privacy and Political Freedom

In a democracy there is also a special relationship between intellectual privacy and political freedom. Intellectual privacy theory suggests that new ideas often develop best away from the intense scrutiny of public exposure. Protection from surveillance and interference is necessary to promote this kind of intellectual freedom. It rests on the idea that free minds are the foundation of a free society, and that surveillance of the activities of belief formation and idea generation can affect those activities profoundly and for the worse. This requires, at a minimum, protecting the ability to think and read as well as the social practice of private conversations with confidants. It may also require some protection of broader social rights, whether we call them rights of association or rights of assembly, since our beliefs as well as our identities are socially constructed. It reflects the conviction that in a free society, big ideas like truth, value, and culture should be generated organically from the bottom up rather than dictatorially from the top down.

Surveillance and Power

The struggles over personal information that we have lumped under the rubric of “privacy” are the struggles that are defining the allocation of political, economic, and social power in our new technological age. Even in democratic societies, the blackmail threat of surveillance is real. Surveillance (especially secret surveillance) often detects crimes or embarrassing activity beyond or unrelated to its original purposes. Whether these discoveries are important, incidental, or irrelevant, all of them give greater power to the watcher.

Looking forward, it does not take much paranoia to imagine a spy agency in a democracy interfering in an election by blackmail (through the threat of disclosure) or even disclosure itself. The power effects of surveillance that enable blackmail are likely to become only greater as new digital tools for processing human information are developed and deployed.

Digital surveillance technologies have reached the point where their powers of persuasion threaten self-government itself. Under a system of unregulated micro-targeting, elections might be determined by data science rather than democratic practices, with swing voters subjected to the same processes by which companies manufacture “consent” to data practices. Indeed, the South Korean National Intelligence Service admitted it had conducted a secret campaign in its own country to ensure its preferred candidate won.

The power of sorting can bleed imperceptibly into the power of discrimination. Governments are increasingly using tools like these to enable invidious sorting (for example, scoring people for criminal risk). Even where the data is of high quality and doesn’t reflect racism or inequality, algorithms are still not neutral. As data scientist Cathy O’Neil, the author of Weapons of Math Destruction, explains, “Algorithms are, in part, our opinions embedded in code”. “They reflect human biases and prejudices that lead to machine learning mistakes and misinterpretations.” Marginalised groups frequently lack a meaningful ability to claim the right of privacy.
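As a toy illustration of “opinions embedded in code”, here is a hypothetical risk score whose features, proxies and weights are all invented: the arithmetic is neutral, but choosing arrests and neighbourhood arrest rates as proxies for “risk” is a human judgment, and the score faithfully reproduces whatever bias that judgment carries.

```python
# Illustrative only: a toy "risk score" whose design choices are human opinions.
# Features, weights and proxy variables are invented for illustration.

def risk_score(prior_arrests: int, neighbourhood_arrest_rate: float) -> float:
    """Score 'risk' from proxies chosen by the designer.

    Using arrests (not convictions) and neighbourhood arrest rates bakes in
    any bias in who gets policed: the formula is plain arithmetic, but the
    choice of inputs is an opinion about what counts as risky.
    """
    return 0.6 * prior_arrests + 0.4 * (neighbourhood_arrest_rate * 10)

# Two people behaving identically but policed at different intensities
# end up with very different scores.
print(round(risk_score(prior_arrests=2, neighbourhood_arrest_rate=0.30), 2))  # 2.4
print(round(risk_score(prior_arrests=0, neighbourhood_arrest_rate=0.05), 2))  # 0.2
```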

The right to claim privacy is the right to be a citizen entrusted with the power to keep secrets from the government, the right to exercise the freedom so lauded in orthodox American political theory and popular discourse. It is a claim to equal citizenship, with all the responsibility that comes with such a claim, and an essential and fundamental right in any meaningful account of what it means to be free. In order to achieve this promise in practice, privacy protections must be extended as a shield against inequality of treatment.

3. Protection

A digital society encourages consumers to depend ever more upon information products that operate in a virtual black box. Because privacy rules are information rules, this is in effect a privacy problem in a broad sense. Privacy rules will be necessary for the next stage of consumer protection, and privacy law and consumer protection law will increasingly need to merge.

Unlike European countries, which typically have both a consumer protection agency and a data protection agency, the United States has only the Federal Trade Commission at the national level policing “unfair and deceptive trade practices.” We have been developing laws and vocabulary to talk about the problem of government power since at least the ancient Greek philosophers, and in Anglo-American law since at least the Magna Carta of 1215. Consider Justice Oliver Wendell Holmes’s famous metaphor of the “marketplace of ideas” in Abrams v. United States (1919), in which he argued that we should protect the expression of “opinions that we loathe and believe to be fraught with death” because they might turn out to be correct. But our cultural understandings of private power are far less mature, and American law places far fewer constraints on private actors than on government actors.

How should we think about Consumer Protection?

Many tech companies talk about consumer privacy using three distinct terms:

  1. users (who are often the product itself, neither compensated for their work nor given ownership of it)
  2. choices (which are unwitting, coerced, involuntary, or some combination of the three)
  3. innovation (which is vague, cast only as good, disappears when regulation appears, and is sometimes pushed as a fundamental right)

Treating innovation as a fundamental right is particularly ironic, because fears of “stifling innovation” are commonly used to resist any attempt to protect privacy, an actual fundamental right protected not just by European law but by American law as well.

Modern life and the incentives placed on human beings living in late capitalist, modern, networked societies have overleveraged the time and energy of most consumers, sometimes up to and beyond the breaking point, and we shouldn’t pretend otherwise. Our consumer protection law should recognize the situation consumers are actually in and not treat them as if they have limitless resources of time and money, strong bargaining power and access to sophisticated lawyers. It should recognize their position as exposed, tracked, sorted, marketed, and at risk of manipulation.

Consumer privacy law for the information economy must ultimately become consumer protection law, and four initial strategies seem the most promising: two old and two new. The two old strategies would be to reinvigorate protections against deception and unfairness, while the two new ones would be to protect against abusive practices and to consider regulating services marketed as “free” when they are really not.

Building Digital Trust through Privacy Rules

Trust is everywhere, even when it’s not obvious, and crucially it’s at the core of the information relationships that have come to characterize our modern, digital lives. So much of our lives is mediated by information relationships, in which professionals, private institutions, and the government hold information about us as part of providing a service. But privacy is often thought about in negative terms, which leads us to focus mistakenly on harms from invasions of privacy and to place too much weight on the ability of individuals to opt out of harmful or offensive data practices. Privacy can also be used to create great things, like trust.

Information relationships are ones in which information is shared with the expectation it will not be abused and in which the rules governing the information sharing create value and deepen those relationships over time. If privacy is increasingly about these information relationships, it is also increasingly about the trust that is necessary to thrive, whether those relationships are with other humans, governments or corporations. This vision of privacy creates value for all parties to an information transaction and enables the kinds of sustainable information relationships on which our digital economy must depend.

If we want to build trust in technologies that run on personal information, four factors will be essential. Trustworthy institutions are:

  1. discreet about the human information they hold — we should be able to presume our data will not be disclosed or sold without our knowledge.
  2. honest about their data practices — which requires that the humans whose data is processed understand what’s going on.
  3. protective of the human information they hold — securing human data against breaches and minimising the damage caused when there are failures.
  4. loyal — acting in our best interests and using our data only to the extent it doesn’t endanger or negatively affect us.

With thanks

I’d like to give Neil Richards my eternal thanks for so adeptly illuminating what is really at stake in the fight for digital safety and privacy.
