Facebook and the Future of Online Privacy
The EU has taken the lead in responding to abuse by the likes of Facebook, thanks to its new privacy standards and proposed greater taxation of peddlers of online personal data. Yet more is needed and feasible.
NEW YORK – Chris Hughes, a co-founder of Facebook, recently noted that the public scrutiny of Facebook is “very much overdue,” declaring that “it’s shocking to me that they didn’t have to answer more of these questions earlier on.” Leaders in the information technology sector, especially in Europe, have been warning of the abuses by Facebook (and other portals) for years. Their insights and practical recommendations are especially urgent now.
Facebook CEO Mark Zuckerberg’s testimony before the US Senate did little to shore up public confidence in a company that traffics in its users’ personal data. The most telling moment of the testimony came when Illinois Senator Richard Durbin asked whether Zuckerberg would be comfortable sharing the name of his hotel and the people he had messaged that week, exactly the kind of data tracked and used by Facebook. Zuckerberg replied that he would not be comfortable providing the information. “I think that may be what this is all about,” Durbin said. “Your right to privacy.”
Critics of Facebook have been making this point for years. Stefano Quintarelli, one of Europe’s top IT experts and a leading advocate for online privacy (and, until recently, a member of the Italian Parliament), has been a persistent and prophetic critic of Facebook’s abuse of its market position and misuse of online personal data. He has long championed a powerful idea: that each of us should retain control of our online profile, which should be readily transferable across portals. If we decide we don’t like Facebook, we should be able to shift to a competitor without losing the links to contacts who remain on Facebook.
For Quintarelli, Cambridge Analytica’s abuse of data acquired from Facebook was an inevitable consequence of Facebook’s irresponsible business model. Facebook has now acknowledged that Cambridge Analytica is not alone in having exploited personal profiles acquired from Facebook.
In personal communications with me, Quintarelli says that the European Union’s General Data Protection Regulation, which takes effect on May 25, following six years of preparation and debate, “can serve as guidance in some aspects.” Under the GDPR, he notes, “non-compliant organizations can face heavy fines, up to 4% of their revenues. Had the GDPR already been in place, Facebook, in order to avoid such fines, would have had to notify the authorities of the data leak as soon as the company became aware of it, well in advance of the last US election.”
Quintarelli emphasizes that, “Effective competition is a powerful tool to increase and defend biodiversity in the digital space.” And here, the GDPR should help, because it “introduces the concept of profile portability, whereby a user can move her profile from one service provider to another, like we do when porting our telephone profile – the mobile phone number – from one operator to another.”
But “this form of ownership of one’s own profile data,” Quintarelli continues, “is certainly not enough.” Just as important is “interconnection: the operator to which we port our profile should be interconnected to the source operator so that we don’t lose contact with our online friends. This is possible today thanks to technologies like IPFS and Solid, developed by the web inventor Tim Berners-Lee.”
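Quintarelli’s pairing of portability with interconnection can be illustrated with a toy sketch (the field names and portal addresses here are hypothetical, not any real platform’s API): a profile exported in an open format such as JSON, with contacts kept portal-qualified – like phone numbers or e-mail addresses – so that links to friends survive a move to a competing service.

```python
import json

# Hypothetical portable profile; field names are illustrative only.
profile = {
    "handle": "alice",
    "contacts": ["bob@portal-a.example", "carol@portal-b.example"],
    "posts": [{"date": "2018-04-10", "text": "Hello, world"}],
}

# Export in an open, machine-readable format so a competing portal
# can import it (the portability the GDPR envisages).
export = json.dumps(profile, indent=2)

# The importing portal restores the profile; because each contact is
# portal-qualified (like an e-mail address), links to friends who
# stayed on the old portal are preserved (interconnection).
restored = json.loads(export)
assert restored["contacts"] == profile["contacts"]
```

The design choice mirrors number portability: the identifier encodes where a contact lives, so the receiving operator knows whom to interconnect with.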
Sarah Spiekermann, a professor at the Vienna University of Economics and Business (WU) and Chair of its Institute for Management Information Systems, is another pioneer of online privacy who has long warned about the type of abuses seen with Facebook. Spiekermann, a global authority on the trafficking of our online identities for targeted advertising, political propaganda, public and private surveillance, and other nefarious purposes, emphasizes the need to crack down on “personal data markets.”
“Ever since the World Economic Forum started to discuss personal data as a new asset class in 2011,” she told me, “personal data markets have thrived on the idea that personal data might be the ‘new oil’ of the digital economy as well as – so it seems – of politics.” As a result, “more than a thousand companies are now involved in a digital information value chain that harvests data from any online activity and delivers targeted content to online or mobile users within roughly 36 seconds of their entry into the digital realm.” Nor is it “just Facebook and Google, Apple or Amazon that harvest and use our data for any purpose one might think of,” Spiekermann says. “‘Data management platforms’ such as those operated by Acxiom or Oracle BlueKai possess thousands of personal attributes and socio-psychological profiles about hundreds of millions of users.”
While Spiekermann thinks “personal data markets and the use of the data within them should be forbidden in their current form,” she thinks the GDPR “is a good motivator for companies around the world to question their personal data sharing practices.” She also notes that “a rich ecosystem of privacy-friendly online services is starting to be up and running.” A study by a class of WU graduate students “benchmarked the data collection practices of our top online services (such as Google, Facebook or Apple) and compared them to their new privacy-friendly competitors.” The study, she says, “gives everyone a chance to switch services on the spot.”
Facebook’s immense lobbying power has so far mostly fended off the practical ideas of Quintarelli, Spiekermann, and their fellow campaigners. The recent scandal, however, has opened the public’s eyes to the threat that inaction poses to democracy itself.
The EU has taken the lead in responding, thanks to its new privacy standards and proposed greater taxation of Facebook and other peddlers of online personal data. Yet more is needed and feasible. Quintarelli, Spiekermann, and their fellow champions of online ethics offer us a practical path to an Internet that is transparent, fair, democratic, and respectful of personal rights.
Your Data or Your Life
Today’s amalgamation and synthesis of digital services and hardware is designed to make our lives easier, and there is no doubt that it has. But have we stopped asking fundamental questions, both of ourselves and of the companies we entrust to do all of these things?
LONDON – Apple’s new watch keeps track of your health. Google Now gathers the information needed to compute the ideal time for you to leave for the airport. Amazon tells you the books you want, the groceries you need, the films you will like – and sells you the tablet that enables you to order them and more. Your lights turn on when you get close to home, and your house adjusts to your choice of ambient temperature.
This amalgamation and synthesis of digital services and hardware is designed to make our lives easier, and there is no doubt that it has. But have we stopped asking fundamental questions, both of ourselves and of the companies we entrust to do all of these things? Have we given sufficient consideration to the potential cost of all of this comfort and ease, and asked ourselves if the price is worth it?
Every time we add a new device, we give away a little piece of ourselves. We often do this with very little knowledge about who is getting it, much less whether we share their ethics and values. We may have a superficial check-box understanding of what the companies behind this convenience do with our data; but, beyond the marketing, the actual people running these organizations are faceless and nameless. We know little about them, but they sure know a lot about us.
The idea that companies can know where we are, what we have watched, or the content of our medical records was anathema a generation ago. The vast array of details that defined a person was widely distributed. The bank knew a bit, the doctor knew a bit, the tax authority knew a bit, but they did not all talk to one another. Now Apple and Google know it all and store it in one handy place. That is great for convenience, but not so great if they decide to use that information in ways with which we do not proactively agree.
And we have reason to call into question companies’ judgment in using that data. The backlash to the news that Facebook used people’s news feeds to test whether what they viewed could alter their moods was proof of that. I do not recall checking a box to say that that was okay. Recently, hackers misappropriated photos sent via Snapchat, a service used primarily by young people that promises auto-deletion of all files upon viewing.
Likewise, health-care data were always considered private, so that patients would be open and honest with health-care professionals. As the lines between health care and technology businesses become hazy, some manufacturers of “wearables” and the software that runs on them are lobbying to have their products exempted from being considered medical devices – and thus from regulatory requirements for reliability and data protection.
Privacy is only one part of a larger discussion around data ownership and data monopoly, security, and competition. It is also about control and destiny. It is about choice and proactively deciding how people’s data are used and how people use their own data.
More mature firms have phased in formal protocols, with ethics officers, risk committees, and other structures that oversee how data are collected and used, though not always successfully (indeed, they often depend on trial and error). Small new companies may have neither such protocols nor the people – for example, independent board members – to impose them. If serious ethical lapses occur, many consumers will no longer use the service, regardless of how promising the business model is.
We like new applications and try them out, handing over access to our Facebook or Twitter accounts without much thought about the migration of our personal data from big companies with some modicum of oversight to small companies without rigorous structures and limits. Consumers believe or expect that someone somewhere is keeping an eye on this, but who exactly would that be?
In Europe, legislation to protect personal data is not comprehensive, and much of the rest of the world lacks even rudimentary safeguards. Having explored this issue with legislators in several countries over the past couple of months, I have found that many do not have a full grasp of the myriad issues that need to be considered. It is a difficult subject to address, and doing so is impeded by lobbying efforts and incomplete information.
In the short term, young companies should view ethics not as a marketing gimmick, but as a core concern. All organizations should invest in ethics officers or some sort of review process involving people who can assess all of the implications of a great-sounding idea. Legislators need to educate themselves – and the public – and exercise more oversight. For example, just as many countries did with car seatbelts a generation ago, a public-safety campaign could be paired with legislation to explain and promote two-step verification.
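The two-step verification such a campaign would promote is simple enough to sketch. The widely used time-based one-time password scheme (TOTP, RFC 6238) derives a short code from a shared secret and the current time, so a stolen password alone is not enough to log in. A minimal version, using only the Python standard library:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int(time.time() if at is None else at) // step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at=59) == "287082"
```

Because the code changes every 30 seconds and is derived from a secret the attacker does not hold, intercepting one code yields almost nothing, which is precisely what makes the second step worth legislating for.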
In the longer term, as we rightly move toward universal Internet access, we need to ask: How much of ourselves are we willing to give away? What happens when sharing becomes mandatory – when giving access to a personal Facebook account is a job requirement, and health services are withheld unless a patient submits their historical Fitbit data?
If that is the future we want, we should stride toward it with full awareness and a sense of purpose, not meander carelessly until we fall into a hole, look up, and wonder how we got there.
Securing the Internet Commons
Ever since Edward Snowden’s revelations about the US National Security Agency’s spying around the world, a debate has raged about the balance between privacy and national security. Should companies be able to encrypt their users’ messages so securely that no one – including governments – can get in?
WASHINGTON, DC – Ever since Edward Snowden’s revelations about the National Security Agency’s spying on citizens and leaders around the world, a debate has raged in the United States about the proper balance between national security and individual privacy and liberty. Most recently that debate has focused on encryption: whether technology companies should be able to develop programs that encrypt their users’ messages so securely that no one but their intended recipients – not even governments – can read them. It is a debate to which governments and citizens everywhere should pay attention.
Not surprisingly, the US government’s national security officials oppose full encryption by American technology companies, arguing that the country will be less safe if the proper authorities have no “backdoor” – a piece of code that lets them in. Software engineers call backdoors “vulnerabilities,” deliberate efforts to weaken security. They regard a request for backdoors the same way an automobile manufacturer would view a request for a defective engine.
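The engineers’ point can be made concrete with a toy cipher (a one-time pad; real systems use AES or similar, but the logic is the same): encryption reduces to possession of a key, so a “backdoor” is just an extra copy of the key held by a third party, and anyone who obtains that copy reads everything.

```python
import secrets

def xor(data, key):
    # One-time pad: XOR each byte of the message with a key byte.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at the hotel at noon"
key = secrets.token_bytes(len(message))  # held by sender and recipient only
ciphertext = xor(message, key)

# Decryption is the same XOR with the same key.
assert xor(ciphertext, key) == message

# A "backdoor" is simply a second copy of the key; whoever holds it,
# whether a government or a thief who steals it, decrypts just as easily.
backdoor_copy = key
assert xor(ciphertext, backdoor_copy) == message
```

This is why engineers treat a mandated backdoor as a vulnerability rather than a feature: the mathematics cannot distinguish an authorized key holder from an unauthorized one.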
A large coalition of technology companies and civil-society organizations recently sent a letter to President Barack Obama arguing against backdoors. In addition to “undermining every American’s cyber security and the nation’s economic security,” the signers argued, “introducing new vulnerabilities to weaken encrypted products in the US would also undermine human rights and information security around the globe.”
What supporters of encryption recognize is that “if American companies maintain the ability to unlock their customers’ data and devices on request, governments other than the United States will demand the same access, and will also be emboldened to demand the same capability from their native companies.” The US government will not be able to object, given its own policies. The result “will be an information environment riddled with vulnerabilities that could be exploited by even the most repressive or dangerous regimes.”
What is at stake is nothing less than the health of the “global digital ecosystem.” The technology corporations that signed this letter understand that while they may be incorporated in the US, they are global players and have global responsibilities. They also understand, in a more self-interested vein, that if foreign consumers believe that using American social media and search engines exposes them to the scrutiny of US intelligence agencies, those consumers will turn to other providers.
The response of many foreign governments has been to focus on how to ensure and protect their “technological sovereignty.” Information may transcend borders, but servers and cables still have physical locations.
The Open Technology Institute at New America (which I head) is collaborating with the German Global Public Policy Institute (GPPI) on a series of transatlantic dialogues about “security and freedom in the Digital Age.” An initial report identifies 18 proposals from more than a dozen European governments on subjects including new undersea cables, encryption, localized data storage, domestic industry support, international codes of conduct, and data protection laws.
Many of these proposals will not actually achieve their goals. But the most important issues to sort out are less about feasibility than about desirability, from both a national and a global perspective. As with US companies, it is important to differentiate among corporate, economic, and public interests at the national and global levels. Citizens from around the world have a stake in getting this right.
For starters, digital protectionism should be as suspect as any other form of protectionism, albeit with exceptions for health, safety, and social solidarity. “Technological sovereignty” can easily be a pretext for insisting that citizens buy only homegrown tech products.
Moreover, rules should be formulated globally and implemented nationally. The Internet is essentially a country with both state and non-state actors as its citizens. It is our task to ensure that it is a free, open, and universally accessible country that advances and safeguards universal human rights.
The OECD adopted an excellent set of “Principles for Internet Policy Making” in 2011. Principle 1 requires that policymakers “promote and protect the global free flow of information”; Principle 2 insists on promotion of “the open, distributed, and interconnected nature of the Internet”; Principle 8 mandates “transparency, fair process, and accountability” across the Internet; Principle 11 promotes “creativity and innovation”; and Principle 13 “encourage[s] cooperation to promote Internet security.”
Other countries and regional organizations have come up with similar guidance. The point is less the specific wording or even content than the recognition that such principles are necessary for policymakers around the world.
From this perspective, it is not clear whether “technological sovereignty” is a helpful or harmful concept. It makes digital space sound like airspace or territorial seas, something that is somehow derived from or adjacent to physical territory. The New America-GPPI report is entitled “Technological Sovereignty: Missing the Point?” The authors conclude that “data privacy and security depend primarily not on where data is physically stored or sent, but on how it is stored and transmitted.” For data purposes, at least, we are in a post-territorial age.
Decoupling sovereignty from territory is hard to do, particularly after Snowden showed the world just how far one country’s technological tentacles can reach. Still, the larger lesson is that differentiating between national security and global security in the digital world may be both impossible and deeply counter-productive. We need new maps and mindsets and new coalitions of business, civic activists, and all those who understand that national security must include the protection of privacy and freedom of expression. And we need new ways to engage both governments and citizens in a new, exhilarating, dangerous, and still largely unexplored world.
Is the Press Too Free?
Earlier this month, the former actor and comedian John Ford revealed that Rupert Murdoch’s Sunday Times newspaper employed him to hack and blag his way into the private affairs of dozens of prominent people. We need the press to protect us against abuses of state power; but we also need the state to protect us from abuses of media power.
LONDON – The poisoning of Russian double agent Sergei Skripal and his daughter Yulia at an Italian restaurant in Salisbury has driven an important story off the front pages of the British press. Earlier this month, the former actor and comedian John Ford revealed that for 15 years, from 1995 to 2010, he was employed by Rupert Murdoch’s Sunday Times newspaper to hack and blag his way into the private affairs of dozens of prominent people, including then-Prime Minister Gordon Brown.
Discussing the techniques he used, Ford said: “I did their phones, I did their mobiles, I did their bank accounts, I stole their rubbish.” Some of the most prominent names in British journalism are likely to be tarnished by this and other revelations of illegality and wrongdoing.
The basic plot goes back to the foundation of the free press with the abolition of licensing in 1695. To fulfill what has been seen since then as its distinctive purpose – holding power to account – a free press needs information. We expect a free press to investigate the exercise of power and bring abuses to light. In this context, one inevitably recalls the exposure of Watergate, which brought down President Richard Nixon in 1974.
But actual scandals are not necessary for the press to do its job. The very existence of a free press is a constraint on government. It is not the only one: the rule of law, enforced by an independent judiciary, and competitive elections held at regular intervals are no less important. Together, they form a three-legged stool: remove one leg, and the whole structure topples.
We continue to view the press as our defender against an over-mighty state, despite politicians’ often-craven performance in the face of media pressure. This is because we have no proper theory of private power.
The liberal argument is both simple and simplistic: the state is dangerous precisely because it is a monopolist. Because it controls the means of coercion and levies compulsory taxes, its dark doings need to be exposed by fearless investigative journalism. Newspapers, by contrast, are not monopolists. They lack any power of compulsion, so there is no need to guard against the abuse of press power. It does not exist.
But while a press monopoly in its pure form does not exist, oligopoly prevails in most countries. If, as economists claim, the public good emerges from the invisible hand of the market, the market for news is quite visible – and visibly concentrated. Eight companies own Britain’s 12 national newspapers, and four proprietors account for more than 80% of all copies sold. In 2013, two men, Murdoch and Lord Rothermere, owned 52% of online and print news publications in the United Kingdom. Were it not for the success of the press in rendering its own power invisible, we would never rely on self-regulation alone to keep the press honest.
Efforts to bind the British press to a standard of “decent” journalism have been tried – and failed – repeatedly. There have been six commissions of inquiry in the UK since 1945. Each one, established after some egregious abuse, has recommended that “steps be taken” to protect privacy; and each time, the government has backed down.
There are two main reasons for this. First, no politician wants to turn the press against him: Tony Blair’s wooing of Murdoch, owner of the Sun, the Times, and the Sunday Times, is legendary, as was its pay-off. The Murdoch press backed Labour in Blair’s three election victories in 1997, 2001, and 2005. The other reason is more sinister: newspapers have “dirt” on politicians, which they are willing to use to protect their interests.
In 1989, following pressure from Parliament, the government commissioned David Calcutt to chair a committee to “consider what measures (whether legislative or otherwise) are needed to give further protection to individual privacy from the activities of the press and improve recourse against the press for the individual citizen.” Calcutt’s key recommendation was to replace the moribund Press Council with a Press Complaints Commission (PCC), which was duly created.
In 1993, however, Calcutt described the PCC as “a body set up by the industry, financed by the industry, dominated by the industry, and operating a code of practice devised by the industry and which is over-favorable to the industry.” He recommended its replacement by a statutory Press Complaints Tribunal. The government refused to act.
In March 2011, a Joint Committee of Parliament reported that “the current system of self-regulation is broken and needs fixing.” Because the PCC “was not equipped to deal with systemic and illegal invasions of privacy,” the committee set out proposals for a reformed regulator.
The same year, following criminal prosecutions for telephone hacking which led to the closure of Murdoch’s News of the World, then-Prime Minister David Cameron appointed Lord Justice Brian Leveson to head an inquiry into “the culture, practices and ethics of the press; their relationship with the police; the failure of the current system of regulation; the contacts made, and discussions had, between national newspapers and politicians; why previous warnings about press misconduct were not heeded; and the issue of cross-media ownership.” Leveson tackled his remit – to make recommendations for a new, more effective way of regulating the press – with “one simple question: who guards the guardians?”
The first part of the Leveson Report, published in 2012, recommended an industry regulator whose independence from the newspapers and government alike was to be assured by a Press Recognition Panel, set up under a Royal Charter. To preempt what they called “state control,” the newspaper proprietors set up an Independent Press Standards Organization (IPSO), accountable to no one but itself.
True to previous form, the government then gave up, overruling the opinion of Leveson himself that further inquiry was needed to establish the “extent of unlawful or improper conduct by newspapers, including corrupt payments to the police.” Indeed, Leveson doubted whether IPSO was sufficiently different from its predecessor, the PCC, to have resulted in any “real difference in behavior” at all.
Although some British press outlets are uniquely vicious, striking the right balance between the public’s need to know and individuals’ right to privacy is a general problem, and must be continually addressed in the light of changing technology and practices. The media are still needed to protect us against abuses of state power; but we need the state to protect us from abuses of media power.