"Why do you think people say they highly value data privacy, yet freely offer up their data to companies that request it? Do you think people would value their data differently if they were given ownership of it?"
Are we all cynics, or simply impulsive, short-sighted, or dupes? And would an ownership right - letting us decide what to do with our data, even sell it - fix the problem?
I’ve been doing a course at the LSE, Data: Law, Policy and Regulation, where this question was asked.
I want to argue that most of us can see the benefits that services like Facebook offer us, while the risks to our privacy and dignity seem small or hard to quantify - and for good reason.
While many claim a visceral fear of a creeping loss of privacy, so far actual events have not supported this angst. Take the totemic legal case concerning online behavioural advertising (OBA), and Real Time Bidding (RTB) in online advertising auctions currently before European (Irish) and UK data protection authorities.
The complainant, Johnny Ryan of Brave, wrote about the seriousness of the complaints:
"RTB auctions happen hundreds of billions of times every day, and leak the online habits of billions of Internet users into the data broker ecosystem. We believe this is the largest data breach ever recorded."
Yet to date no person or institution has come forward with specific claims that this "largest data breach ever" has caused them suffering or harm. Of course, the lack of known victims does not mean that the data is not valuable. It is - for starters - the basis from which advertisers bid for our attention, and from which smaller media companies and bloggers, who sell our attention, make money.
But currently, in terms of potential harm, in most* scenarios and for most actors (bar Google and states), the format this RTB data comes in is not of much use beyond letting disparate advertisers bid for advertising. Yes, advertising can be annoying. It can also harm individuals, but it is particularly nefarious in aggregate for society, as the Cambridge Analytica case showed.
That's one of the strange things about data. Not all data is the same. An email address is useful in ways that a first name is not. Yet a single piece of data, even a very useful one like an email address, is not of much use even if you have it for everybody: without context that makes it valuable, it is still what marketers call "unqualified".
And even if you have a range of data about a single person, it is rarely valuable to persons or institutions that do not have a lot of other similar data. Machine learning, a technology that finds statistical correlations in data, amplifies this effect: the more related data you have, the more valuable it becomes.
So I agree with David Priest: "data is slippery". Data's value and power depend on the context. And "it has different values for individuals, companies and society writ large."
The so-far victimless OBA case above is just one example, and it makes me wonder about the appropriateness of the often-heard call to give people a property right over their personal data. Who, for example, should own the findings that machine learning deems statistically significant after evaluating your personal data together with that of other individuals and publicly available data?
I also agree with Professor Mark MacCarthy that data has other characteristics that make it an inappropriate candidate for ownership rights, the most important of which is that humans are social, and data embodies our societies.
MacCarthy argues that if we had data property rights, much of our data would have to be co-owned - a recorded conversation, for example. I'd go one step further: your data's value is socially constructed. It depends on your position in society and whom you interact with - in other words, your location in the social graph (network). It would be worthless if you were not embedded in a particular place in a particular society.
Personal data is simultaneously collective data. As sociologist Zeynep Tufekci argued when it became clear that Strava had revealed the location of American military bases:
"Data privacy is not like a consumer good, where you click “I accept” and all is well. Data privacy is more like air quality or safe drinking water, a public good that cannot be effectively regulated by trusting in the wisdom of millions of individual choices. A more collective response is needed."
And then there is the imperative of having strong, successful economies and societies. In 2019, economist Jason Furman wrote a report for the UK on how to create more competition in the digital economy. He recommended that the large tech platforms' data should be shared, both to encourage smaller competitors and to enhance how the state delivers its services.
We do need a hybrid model for the protection of data, one that combines the right to privacy with something else. But that something else looks more like data as a public good than data that is personally owned. Yes, you may have some claim to the data you generate, but so does society, for which it is of much more use.
* Addendum: Since I published this post, something happened that made me revise my opinion somewhat. The New York Times received a leak from a commercial database covering the January 6, 2021 riot in Washington. It showed that many of the rioters' routes to the event could be traced in this database, and that in numerous cases they could be identified. Such databases are built mainly from the device identifiers used for advertising targeting.
Why has this not happened in Europe? Probably because, unlike in the US, where there is little legal constraint on this kind of data gathering, the GDPR makes commercial trackers more cautious.
Does this change the main thrust of my argument - that we need a hybrid model for the protection of data, combining the right to privacy with something else? No, it does not.