How Will New Cybersecurity Norms Develop?
In 2013, cybersecurity was named the biggest threat facing the US. But, as UN Secretary-General António Guterres argued last month, minimizing the risks will require a global effort to establish shared rules and norms.
CAMBRIDGE – Last month, United Nations Secretary-General António Guterres called for global action to minimize the risk posed by cyber warfare to civilians. Guterres lamented that “there is no regulatory scheme for that type of warfare,” noting that “it is not clear how the Geneva Convention or international humanitarian law applies to it.”
A decade ago, cybersecurity received little attention as an international issue. But, since 2013, it has been described by US intelligence officials as the biggest threat facing the United States. Although the exact numbers can be debated, the Council on Foreign Relations’ “Cyber Operations Tracker” has recorded almost 200 state-sponsored attacks by 18 countries since 2005, including 20 in 2016.
The term cybersecurity refers to a wide range of problems that were not a major concern among the small community of researchers and programmers who developed the Internet in the 1970s and 1980s. In 1996, only 36 million people, or about 1% of the world’s population, used the Internet. By the beginning of 2017, 3.7 billion people, or nearly half the world’s population, were online.
As the number of users soared after the late 1990s, the Internet became a vital substrate for economic, social, and political interactions. Along with rising interdependence and economic opportunity, however, came vulnerability and insecurity. With big data, machine learning, and the “Internet of Things,” some experts anticipate that the number of Internet connections may grow to nearly a trillion by 2035. The number of potential targets for attack, by both private and state actors, will expand dramatically, and include everything from industrial control systems to heart pacemakers and self-driving cars.
Many observers have called for laws and norms to secure this new environment. But developing such standards in the cyber domain faces a number of difficult hurdles. Although Moore’s law – the roughly biennial doubling of computing power – means that cyber time moves quickly, human habits, norms, and state practices change more slowly.
For starters, given that the Internet is a transnational network of networks, most of which are privately owned, non-state actors play a major role. Cyber tools are dual use, fast, cheap, and often deniable; verification and attribution are difficult; and entry barriers are low.
Moreover, while the Internet is transnational, the infrastructure (and people) on which it relies fall within the differing jurisdictions of sovereign states. And major states differ in their objectives, with Russia and China stressing the importance of sovereign control, while many democracies press for a more open Internet.
Nonetheless, the description of “www” as the “wild west web” is a caricature. Some norms do exist in cyberspace. It took states about two decades to reach the first cooperative agreements to limit conflict in the nuclear era. If one dates the international cybersecurity problem not from the origins of the Internet in the early 1970s but from the takeoff period since the late 1990s, intergovernmental cooperation in limiting cyber conflict is now at about the two-decade mark.
In 1998, Russia first proposed a UN treaty to ban electronic and information weapons (including for propaganda purposes). With China and other members of the Shanghai Cooperation Organization, it has continued to push for a broad UN-based treaty. The US continues to view such a treaty as unverifiable.
Instead, the Secretary-General appointed a UN Group of Governmental Experts (UNGGE), which first met in 2004 and, in July 2015, proposed a set of norms that was later endorsed by the G20. Groups of experts are not uncommon in the UN process, but only rarely does their work rise from the organization’s basement to recognition at a summit of the 20 most powerful states. The UNGGE’s success was extraordinary, but it failed to agree on its next report in 2017.
Where does the world go now? Norms can be suggested and developed by a variety of policy entrepreneurs. For example, the new non-governmental Global Commission on the Stability of Cyberspace, chaired by former Estonian Foreign Minister Marina Kaljurand, has issued a call to protect the public core of the Internet (defined to include routing, the domain name system, certificates of trust, and critical infrastructure).
Meanwhile, the Chinese government, using its Wuzhen World Internet Conference series, has issued principles endorsed by the Shanghai Cooperation Organization calling for recognition of the right of sovereign states to control online content on their territory. But this need not contradict the call to protect the public core, which refers to connectivity rather than content.
Other norm entrepreneurs include Microsoft, which has issued a call for a new Geneva Convention on the Internet. Equally important is the development of norms on privacy and security as they relate to encryption, back doors, and the removal of child pornography, hate speech, disinformation, and terrorist threats.
As member states contemplate the next steps in the development of cyber norms, the answer may be to avoid putting too much of a burden on any one institution like the UNGGE. Progress may require the simultaneous use of multiple arenas. In some cases, development of principles and practices among like-minded states can lead to norms to which others may accede at a later point. For example, China and the US reached a bilateral agreement restricting cyber espionage for commercial purposes. In other cases, such as security norms for the Internet of Things, the private sector, insurance companies, and non-profit stakeholders might take the lead in developing codes of conduct.
What is certain is that the development of cybersecurity norms will be a long process. Progress in some areas need not wait for progress in others.
Tech vs. Democracy
In an age when most people get their news from social media, mafia states have had little trouble censoring social-media content that their leaders deem harmful to their interests. But for liberal democracies, regulating social media is not so straightforward, because governments must strike a balance between competing principles.
BRUSSELS – Instagram, a photo-sharing platform owned by Facebook, recently caved in to a demand by the Russian government that it remove posts by opposition leader Alexey Navalny alleging misconduct on the part of Russian Deputy Prime Minister Sergei Prikhodko. In a YouTube video that has garnered almost six million views (and which is still available), Navalny shows Prikhodko hobnobbing with the oligarch Oleg Deripaska on a yacht in Norway, where he alleges bribery took place.
After Navalny’s posts appeared, Deripaska went to the Russian communications regulator Roskomnadzor to request that Facebook remove the content, which it immediately did. This episode has now attracted much attention, as well as criticism for Facebook. And yet there have been thousands of other cases just like it.
In an age when most people get their news from social media, mafia states have had little trouble censoring social-media content that their leaders deem harmful to their interests. But for liberal democracies, regulating social media is not so straightforward, because it requires governments to strike a balance between competing principles. After all, social-media platforms not only play a crucial role as conduits for the free flow of information; they have also faced strong criticism for failing to police illegal or abusive content, particularly hate speech and extremist propaganda.
These failings have prompted action from many European governments and the European Union itself. The EU has now issued guidelines for Internet companies, and has threatened to follow up with formal legislation if companies do not comply. As Robert Hannigan, the former director of the British intelligence agency GCHQ, recently observed, the window for tech companies to reform themselves voluntarily is quickly closing. In fact, Germany has already enacted a law that will impose severe fines on platforms that do not remove illegal user content in a timely fashion.
These ongoing measures are a response to the weaponization of social-media platforms by illiberal state intelligence agencies and extremist groups seeking to divide Western societies with hate speech and disinformation.
Specifically, we now know that the Kremlin-linked “Internet Research Agency” carried out a large-scale campaign on Facebook and Twitter to boost Donald Trump’s chances in the 2016 US presidential election. According to US Special Counsel Robert Mueller’s recent indictment of 13 Russian individuals and three organizations, an army of Russian trolls spent the months leading up to the 2016 election stoking racial tensions among Americans and, for example, discouraging minority voters from turning out for Trump’s opponent, Hillary Clinton.
Mueller’s findings obviously raise important questions about transparency and the protection of democratic institutions in the digital age. Despite having allowed themselves to become Kremlin special-operations tools, the major social-media platforms have been reluctant to provide information to democratic governments and the public.
For example, in the United Kingdom, the MP Damian Collins has launched an investigation into Russian interference in the 2016 Brexit referendum, but he has struggled to receive much cooperation from Facebook and Twitter. In December, he described Twitter’s response to his questions as “completely inadequate.” That is regrettable. When democracy itself is at stake, social-media platforms have a responsibility to be transparent.
Moreover, if Russia can interfere so thoroughly in the US democratic process, just imagine what it has been doing in Europe, where we still do not know who financed some of the online advertising campaigns in recent national elections and referenda. I suspect that we have only just scratched the surface when it comes to exposing foreign meddling in our democratic institutions and processes. With European Parliament elections due in May 2019, we must be better prepared.
The tech giants, for their part, will continue to claim that they are merely distributing information. In fact, they are acting as publishers, and they should be regulated accordingly – and not just as publishers, but also as near-monopoly distributors.
To be sure, censorship and the manipulation of information are as old as news itself. But the kind of state-sponsored hybrid warfare on display today is something new. Hostile powers have turned our open Internet into a cesspool of disinformation, much of which is spread by automated bots that the major platforms could purge without undermining open debate – that is, if they had the will to do so.
Social-media companies have the power to exert significant influence on our societies, but they do not have the right to set the rules. That authority belongs to our democratic institutions, which are obliged to ensure that social-media companies behave much more responsibly than they are now.
Cybersecurity Starts at the Top
Data breaches might feel like a fact of modern life, but they are an artifact of modern indifference. Companies, regardless of size or sector, need to recognize their responsibility, inescapable in today's technology-based economy, to be supremely vigilant and pro-active about securing their data and systems.
LONDON – Every time a major corporate cybersecurity breach occurs, the response looks pretty much the same: cry “havoc!” and call in the cyber first responders to close the breach. But by the time an executive or two stands before a few government committees, proffering some explanation and pledging to beef up security protocols, people – including the hackers – have largely moved on. And with each breach, the cycle accelerates: people either dismiss the threat – it probably won’t happen to them – or accept it as an unavoidable pitfall of modern life.
The truth is that the threat posed by cybersecurity breaches is both acute and avoidable. The key to mitigating it is to understand that cybersecurity isn’t simply a technology issue; it is also an urgent strategic issue that should be at the top of the agenda for every board and management team. After all, from Yahoo! to Equifax, data breaches have often been rooted in internal forces of human error, carelessness, or even maliciousness.
Already, the scale and speed of attacks are massive. It has now emerged that the 2013 Yahoo! data breach affected all three billion accounts. In May, the WannaCry ransomworm attack affected dozens of the UK’s National Health Service trusts, and spread globally at lightning speed.
The recently revealed Equifax data breach – which occurred during a two-month window in which a patch for a known security vulnerability was available but had not been applied – gave the hackers access to 145.5 million consumers’ personal and sensitive data. According to testimony provided by now-former Equifax CEO Richard F. Smith to the US Congress, the breach reflected the negligence of one individual in the IT department.
The risks are only growing. The United Kingdom’s National Cyber Security Centre, founded last year, has already responded to nearly 600 significant incidents. The centre’s director recently predicted that the UK’s first “category one” cyber incident would occur in the next few years.
One problem is that many organizations simply don’t have cybersecurity on their radar. They believe they are too small to be a target, or that such breaches are limited to the tech and finance sectors. But, just recently, the US fast-food chain Sonic – not exactly a tech giant – revealed that a malware attack on some of its drive-in outlets may have allowed hackers to secure customers’ credit card information.
The fact is that many types of companies use, if not depend on, technology. And they collect many types of data, about everything from customers and employees to distribution systems and transactions. Consumers often don’t comprehend the extent of companies’ data collection, failing to understand even the basics of the “cookies” being used when they surf the web. According to a March 2017 report by the Pew Research Center, many Americans, for example, “are unclear about some key cybersecurity topics, terms, and concepts.”
Of course, consumers must be informed and vigilant about their own data. But even those who are find that if they want to engage fully in modern life, they have little choice but to hand over personal data to organizations in both the private and public sectors, from utility and finance companies to hospitals and tax authorities.
With automation, this trend will only accelerate, with people counting on technology to do everything from ordering groceries to turning on the lights and even locking the doors. The power this gives to the likes of Google and Amazon, not to mention an ever-growing array of startups, is obvious. What is not obvious is whether consumers can rely on companies’ knowledge and duty of care to protect the information they collect.
No company can afford a laissez-faire attitude about cybersecurity. Yet even tech companies took some time to recognize the extent of their technical responsibilities, including the need for a C-level executive to manage their technology needs. Not long ago, such companies often maintained a “helpdesk” mindset: just make sure people could use the product and have someone to call if something went wrong.
But, with data breaches proliferating, often with business-critical consequences, there is no excuse for such inertia. Such breaches can cripple companies both operationally and financially, owing to the direct theft of funds or intellectual property and the cost of plugging the security hole or paying punitive fines. They can also diminish a company’s reputation and credibility among investors, business partners, and communities, even in cases when the breach is minor and doesn’t compromise sensitive information.
While board members do not all have to be technology experts, they do need to keep up with the state of their company’s technology, including how well secured it is. A board’s risk committee can conduct in-depth reviews. But regular status updates to the full board, like those for other crucial issues affecting the business, are also needed.
In today’s world, no organization – public or private, commercial or non-profit – has an excuse not to be supremely vigilant and pro-active about securing their data and systems. It is not enough to meet legal requirements, which don’t keep up with technological change. Instead, those requirements should be viewed as a starting point for a much more robust, closely monitored, and effectively adapted system that truly protects the data on which our societies and economies increasingly depend.
Data breaches are not a fact of modern life. They are an artifact of modern indifference.
Securing the Digital Transition
Within a few decades, the Internet has transformed the global economy and rendered the old Westphalian order increasingly obsolete. But without a new governance framework to manage cyber threats and abuses, what has been a boon to globalization could become its undoing.
NEW DELHI – Every year, the World Economic Forum publishes a Global Risks Report, which distills the views of experts and policymakers from around the world. This year, cybersecurity is high on the list of global concerns, as well it should be. In 2017, the world witnessed a continued escalation in cyber attacks and security breaches that affected all parts of society. There is no reason to believe 2018 will be different.
The implications are far-reaching. Most immediately, we must grapple with governance of the Internet as well as on the Internet. Otherwise, the opportunities afforded by digital technologies could be squandered in a regulatory and legal arms race, complete with new borders and new global tensions.
But there’s a broader issue: For all the speed with which we are racing into the digital age, efforts to ensure global stability are lagging far behind. In many respects, our world is still organized within a Westphalian framework. States with (mostly) recognized borders are the building blocks of the international system. Their interactions, and their willingness to share sovereignty, define the existing world order.
But globalization has gradually changed the realities on the ground. And while its force – waxing and waning since the decades preceding World War I – is nowadays being tempered by geopolitics, and by the impulse to slow the pace of technological change, the digital transformation will propel globalization forward, albeit in a different form. After all, the Internet’s key feature is its non-territorial architecture. By breaking down traditional borders, it poses a direct challenge to the very foundation of the Westphalian order.
This is a profoundly positive development, because it facilitates free expression and the cross-border exchange of goods and ideas. But, as with all human inventions, the Internet can be abused, as evidenced by the rise in cybercrime, online harassment, hate speech, incitement to violence, and online radicalization.
Minimizing such abuses in the years ahead will require close international cooperation to establish and enforce common rules. There can be no solution in isolation, because no single government can tackle the problem on its own.
Over time, an alphabet soup of organizations has emerged to bring together the technical community, businesses, governments, and civil society. And bodies such as ICANN (Internet Corporation for Assigned Names and Numbers), IETF (Internet Engineering Task Force), and W3C (World Wide Web Consortium) now provide de facto governance of the Internet’s architecture. But governance on the Internet is far more complex. Here, the institutional landscape is both crowded and unsettled.
It is crowded because numerous actors are competing to shape the normative framework of cyberspace. Many countries have multiple relevant ministries regulating online activity. Websites and online services have vastly different community guidelines and terms of service. Public- and private-sector developers determine the design of the Internet’s changing infrastructure. And numerous civil-society groups are proposing their own sets of cyber principles, while international organizations attempt to develop multilateral agreements.
The landscape remains unsettled because intergovernmental cooperation has largely stalled, owing to conflicting priorities among countries. Making matters worse, there are still too few dedicated spaces for different stakeholders to interact and devise operational solutions.
In the absence of mutually agreed frameworks, governments will tend to adopt short-term unilateral measures – mandatory data localization, excessive content restrictions, intrusive surveillance – to address immediate concerns, or as a response to domestic political pressure. But by doing so, they could fuel a dynamic that heightens, rather than minimizes, international tensions.
Digital governance touches on everything from cybersecurity to the economy to human rights, and uncertainty about which laws apply in different jurisdictions weakens enforcement in all of them, leaving everyone worse off. Moreover, measures to address one dimension can easily affect the others, which means that uncoordinated and rash policy decisions can have negative consequences across the board.
When I had the honor of chairing the Global Commission on Internet Governance, our 2016 report highlighted these risks, and called for “a new Social Compact” to ensure that the Internet of the future will be accessible, inclusive, secure, and trustworthy.
Progress since then has been limited. Because efforts at the United Nations to establish global cyber rules have reached an impasse, alternative initiatives will have to drive the process forward.
Fortunately, the Global Commission on the Stability of Cyberspace recently issued an important “Call to Protect the Public Core of the Internet.” And the upcoming Global Internet and Jurisdiction Conference in Ottawa will provide another valuable opportunity for policymakers to continue working toward solutions.
Such technical and legalistic proceedings are essential for shaping the global transition from the industrial to the digital era. To avoid a legal arms race, policymakers will need to develop a smart approach to a variety of tricky issues, from mutual assistance frameworks for investigations to the role of domain-name administrators and service providers in addressing abusive speech online.
Achieving policy coherence across jurisdictions should be a top priority. Doing so will require direct, sustained interactions among all stakeholders. Only then can we create a framework to preserve the cross-border nature of the Internet, protect human rights, fight abuse, and sustain a truly global digital economy.
As Kofi Annan said back in 2004, “In managing, promoting, and protecting [the Internet’s] presence in our lives, we need to be no less creative than those who invented it.” Westphalia is behind us. What comes next is up to us.
Fighting Cybercrime with Neuro-Diversity
Neurologically exceptional people, such as those with autism or Asperger syndrome, tend to be disadvantaged by the traditional interview process. But, if given the opportunity to train and work as cybersecurity professionals, they could prove integral to protecting the data that underpins the digital age.
LONDON – Cybersecurity is one of the defining challenges of the digital age. Everyone, from households to businesses to governments, has a stake in protecting our era’s most valuable commodity: data. The question is how that can be achieved.
The scale of the challenge should not be underestimated. With attackers becoming increasingly nimble and innovative, armed with an increasingly diverse array of weapons, cyber-attacks are happening at a faster pace and with greater sophistication than ever before. The security team of my company, BT, a network operator and Internet service provider, detects 100,000 malware samples every day – that’s more than one per second.
Creative thinking among cyber attackers demands creative thinking among those of us fending them off. Here, the first step is ensuring that there are enough talented and trained individuals engaged in the fight. After all, according to a recent survey by the International Data Corporation, 97% of organizations have concerns about their security skills. By 2022, another study estimates, there will be 1.8 million vacant cybersecurity jobs.
Amid this critical shortage of security specialists, it is imperative that we develop new approaches to attracting, educating, and retaining talented individuals, in order to create a deep pool of highly skilled cyber experts prepared to beat cybercriminals at their own game.
The key to success is diversity of talents and perspectives. This includes neurological diversity, such as that represented by those with autism, Asperger syndrome, and attention-deficit disorder. People with Asperger syndrome or autism, for example, tend to think more literally and systematically, making them particularly adept at mathematics and pattern recognition – critical skills for cybersecurity.
The problem is that neurologically exceptional people tend to be disadvantaged by the traditional interview process, which relies heavily on good verbal communication skills. As a result, such people often struggle to find employment, and even when they do find a job, their work environment may not be able to support them adequately.
The United Kingdom’s National Autistic Society reports that just 16% of autistic adults in Britain have full-time paid employment, and only 32% have any kind of paid work, compared to 47% for disabled people and 80% for non-disabled people. This highlights the scale of the challenge faced by such candidates, as well as the vast untapped resource that they represent.
Recognizing the potential of neurological diversity to contribute to strengthening cybersecurity, we at BT have reframed how we interact with candidates during interviews. We encourage them to talk about their interests, rather than expecting them simply to answer typical questions about their employment goals or to list their strengths and weaknesses. This approach has already been applied with great success by the likes of Microsoft, Amazon, and SAP in the areas of coding and software development, and by the UK’s GCHQ intelligence and security organization, one of the country’s biggest employers of autistic people.
Of course, an updated approach to interviewing candidates will not work for everyone. But it is a start. More broadly, we must do more not just to expand the opportunities available to neurologically exceptional candidates, but also to ensure that these opportunities are well publicized.
Delivering this change will require leadership by – and cooperation between – government and business. I am pleased to say that, on this front, BT is already taking a leading role, including by working with the British government on its Cyber Discovery program, a special initiative to attract schoolchildren into the cyber industry, and through our own apprenticeship programs.
In the digital age, neuro-diversity should be viewed as a competitive advantage, not a hindrance. We now have a chance to invest in talented people who are often left behind when it comes to work, benefiting them, business, and society as a whole. By recognizing and developing the skills of this widely overlooked talent pool, we can address a critical skills shortage in our economies and enhance our ability to fight cybercrime. Such opportunities are not to be missed.