Multilateralism in the Digital Age:
How to Make Progress on Global Digital Governance
Kenddrick Chan, Head, Digital IR in the Information Age project, LSE IDEAS
"Wild West" is a term commonly used to describe the digital domain. Yet it is not an entirely inaccurate characterisation, considering how countries operate in cyberspace. While there are agreed-upon norms that seek to ensure states behave responsibly, those norms are ultimately voluntary in nature. Despite governments signalling their agreement over the need to avoid the catastrophic security, economic, social, and humanitarian consequences that might inadvertently result from cyber operations, they continue to utilise the digital domain for grey zone operations and pursue both military and industrial applications of digital technologies to gain a competitive advantage over each other. In the first three months of 2023 alone, approximately 38 'significant cyber incidents' were detected – a rate of one attack occurring every two days. Seemingly innocuous AI technologies such as Generative AI (ChatGPT and DALL-E are popular examples) have already sparked discussions about a new frontier in information warfare. The rapid place of digital technology development, coupled with its possible military and societal implications, have led to calls for multilateral institutions and fora to arrive at a legally binding instrument that would establish codes of conduct for operating in cyberspace.
However, despite the efforts of the UN and other multilateral organisations, such a compact remains elusive. Rather than bringing states together, efforts at global digital governance appear to be in danger of producing the opposite. Two reasons stand behind this development. The first is one of terminology: AI governance often gets conflated with, or lumped under, global digital governance. This conflation is commonly reflected at the multilateral level, as in the recently released G7 Hiroshima Leaders' Communiqué. Issues of AI governance revolve around the application of machine learning, and global discussions on the topic are largely (but not exclusively) concerned with avoiding unintended biases, increasing the accountability of AI decision-making, and establishing 'guardrails' to ensure that AI does not 'go rogue' but functions within its intended parameters. Insofar as international security is concerned, the aim is to ensure that the use of AI in military applications remains bound by ethical guidelines. This is distinct from its thematic cousin, global digital governance. Efforts at global digital governance, as far as international security is concerned, are intended to secure the stability of cyberspace through the promulgation of norms, if not rules, of conduct. They involve asking questions of a broader and more traditional nature, such as what constitutes critical infrastructure, how cyber-attacks or intrusions might be defined and attributed, how to implement mechanisms for information exchange between states, or what 'responsible state conduct' might look like. It is not difficult to see why global AI governance can be easily conflated with global digital governance. Both fall under the 'digital' umbrella and involve asking what sorts of standards and best practices ought to be in place to avoid negative outcomes – such as, in the case of AI, a rogue AI that at best makes inflammatory statements and at worst constitutes the next weapon of mass destruction. However, a great deal of confusion can be mitigated by delineating the two issue spheres, and doing so should be the first step of any multilateral effort towards them.
The second reason behind the lacklustre showing of multilateral efforts to unite states around a global legally binding compact is the classic case of 'too many cooks'. There are multiple parallel initiatives in existence, all of which compete for policymaking attention and risk diverting energy away from what should be a coherent and cohesive effort. At the UN alone, there are multiple panels, groups, and initiatives that work independently of each other; given their relatively broad ambit, it would be unsurprising to find overlaps between their work. A coherent and streamlined working structure would be far more beneficial to the effort. Such fragmentation also extends to other multilateral initiatives, some of which appear to reflect wider geopolitical tensions. The G7's recent call for the establishment of the Hiroshima AI Process builds upon the work of the Global Partnership on AI (GPAI), which commits itself to 'advancing multi-stakeholder approaches' regarding AI. The GPAI is widely regarded as the most comprehensive state-led initiative, and its membership has grown from 15 states at its establishment in 2020 to 29 at the time of writing. However, most of its members are part of the 'West', with states from the Global South largely missing. This raises the question of whether such a 'Global Partnership' contributes to global digital governance efforts, or to their fragmentation. The same question can be levelled at Chinese-led parallel initiatives that have sprouted up – such as the World Internet Conference – given that they slant heavily towards China's approach to digital governance rather than reflecting a truly global effort.
Multilateral efforts at global digital governance therefore need to be more streamlined and inclusive than they currently are. While developed states may be the most active in pushing for progress on global digital governance and dominate agenda-setting, this is not to say that the concerns of the developing states of the Global South should not be tabled for discussion. States vary in their stage of digital development and national conditions, such as the type of digital infrastructure they have, the robustness of their regulatory frameworks, or their level of human capital (e.g., digital literacy, technical competence). National priorities regarding digital issues will therefore differ. For example, although most African states agree on the need to avoid a 'cyber Pearl Harbour', digital health remains their topmost priority, and they would thus benefit greatly if discussions on global digital governance aimed at creating an enabling environment for (health) information-sharing on a regional or global level. To avoid isolating states from already-underrepresented regions and to prevent further fragmentation, agenda-setting in multi-stakeholder efforts at global digital governance should account for these differing priorities.
Governments should also realise that private sector tech expertise will have a key role to play in any global digital governance effort. Private tech companies already play a critical role in the functioning of today's society by providing key financial and communication services to their customers. As stated in the 2018 Paris Call for Trust and Security in Cyberspace, private tech companies are responsible for developing the very digital technologies that states are discussing. It is very likely that they will play a key role in implementing global digital governance initiatives – something the sector appears willing to do. The Cybersecurity Tech Accord, an industry coalition that counts Dell, Microsoft, Nokia, and Oracle amongst its signatories, has actively signalled its willingness to partner on issues of global digital governance. Some, like Microsoft, have gone further, establishing a dedicated Digital Diplomacy team and pushing for a 'Digital Geneva Convention'. Incorporating private sector expertise into what is traditionally a state-dominated arena need not be a bridge too far for governments: successful examples can be drawn from the fields of nuclear non-proliferation, space cooperation, and climate change. Failure to do so would further derail efforts at effective and equitable global digital governance.
In conclusion, global digital governance faces significant challenges, but none of them are insurmountable. To overcome them, multilateral efforts should clarify the distinction between AI governance and digital governance, streamline discussions, and prioritise inclusive representation of differing national priorities. Engaging the private sector, which possesses critical tech expertise, is also essential: private companies can play a key role in implementing initiatives and have shown willingness to collaborate. Governments must recognise the importance of incorporating private sector perspectives and learn from successful examples in other domains. By addressing these challenges, the international community can work towards a regulated and secure digital domain that promotes stability and responsible state conduct in cyberspace.