Mary Shelley’s novel ‘Frankenstein; Or, The Modern Prometheus’ has become a classic reference point for the ethical questions attached to scientific research and technological advancement. Victor Frankenstein, the novel’s protagonist, is a scientist who rejects his creation, the murderous monster born of his scientific curiosity. Like a ‘Modern Prometheus’, Victor had a thirst for power, stemming from his quest for knowledge and scientific discovery, and was careless in his endeavours.
Frankenstein has been widely used as a framework for understanding the relationship between new technologies and their creators. It serves as a reminder that new technologies and scientific discoveries are not inherently ‘neutral’; their specific design and use have the capacity to affect the global environment and humankind, positively or negatively. In this context, the role of the law is pivotal. Law has the capacity to help technology and scientific research grow by shaping their background conditions, whilst at the same time mitigating the risks they pose. Because of the cross-border effects of new technologies and the activities of multinational corporations, technology has also become an important subject for all disciplines of international law. Technological advances pose challenges to the rights of individuals and groups, to the global economy and environment, and give rise to complex questions of global governance and institutional design. International treaties increasingly deal with new technologies and technology-related subjects, including but not limited to data flows, privacy, warfare drones, digital goods and services, labour standards, crypto-currencies and foreign direct investment, IP rights, artificial intelligence, deep seabed mining, climate and geo-engineering, and space technologies. International law is called upon to respond to rapid technological and scientific developments and their cross-border effects by developing regulatory and governance frameworks that address the threats posed by new and emerging technologies.
Information and communication technologies (ICTs) may also influence democratic institutions and decision-making processes. To discuss matters related to democracy, technology, and international law, the Cambridge International Law Journal (CILJ) organised the 8th Annual Cambridge International Law Conference, entitled ‘New Technologies: New Challenges for Democracy and International Law’, on 20 and 21 March 2019 in Cambridge, United Kingdom. Professor Benedict Kingsbury (NYU) delivered the keynote speech of the event, and three plenary speeches were delivered by Professor Eyal Benvenisti (University of Cambridge), Dr Louise Arimatsu (LSE), and Dr Mark Leiser (Leiden Law School). The programme of the conference was rich and diverse, with a broad selection of panels and roundtable presentations on a variety of international law topics. After the conference, the Journal accepted paper submissions from the conference participants and, following its established double-blind peer-review process, selected the finest papers presented at the conference. This post provides a succinct synopsis of the papers included in the 8(2) Special Issue of the CILJ.
The Special Issue aims to critically assess novel and complex challenges posed by new and emerging ICTs from the perspective of international law. The papers published in this issue provide a valuable analysis of a wide range of international law topics related to such ICTs and global governance, underlining challenges and suggesting solutions at both the doctrinal and normative levels. This important collection of articles combines a rich variety of research methodologies and creates an impactful mosaic of ideas aimed at shaping our understanding and influencing future policy-making and dispute settlement in the field of new ICTs and international law. Following an Introduction by the Editors-in-Chief Maayan Menashe and Eirini Kikarea (‘The Global Governance of Cyberspace: Reimagining Private Actors’ Accountability: Introduction’), the Special Issue includes eight papers on various topics by prominent academics.
Benedict Kingsbury, in his article ‘Infrastructure and InfraReg: On Rousing the International Law “Wizards of Is”’, puts forward the idea of ‘infrastructure as regulation’ as a way of ‘opening up thinking about international law and technology of all kinds’. Kingsbury observes that international law ‘has come to seem somewhat maladapted for the demands and the weight technological changes have put on it.’ His article therefore ‘explores implications for reinvigorating deliberative forward-planning international law projects to address technologically driven transformation, which follow from “thinking infrastructurally”’.
Louise Arimatsu, in her article ‘Silencing Women in the Digital Age’, explores ‘some of the ways in which developments in new digital technologies reproduce, and often amplify, the patriarchal structures, practices and culture of contemporary life and, in doing so, operate to silence women through exclusion and through violence.’ As a response, the article considers ‘how international human rights law – most notably the Convention on the Elimination of Discrimination Against Women (CEDAW) – can be harnessed to counter both forms of silencing’.
M R Leiser, in his article ‘Regulating Computational Propaganda: Lessons from International Law’, argues that there is ‘a significant lack of regulatory oversight’ over ‘computational propaganda that is disseminated as deceptive political advertising.’ The article examines ‘whether there is a right to disseminate propaganda within our free expression rights and focuses on the harms associated with the engineered polarisation’. It concludes with ‘a discussion of the implications of maintaining this status quo and some suggestions for plugging the regulatory holes identified’.
Rachel Adams and Nóra Ní Loideáin, in their article ‘Addressing Indirect Discrimination and Gender Stereotypes in AI Virtual Personal Assistants: The Role of International Human Rights Law’, critique the ‘reproduction of negative gender stereotypes in virtual personal assistants’. The article ‘explores the provisions and findings within international women’s rights law to assess both how this constitutes indirect discrimination and possible means for redress’.
Enguerrand Marique and Yseult Marique, in their article ‘Sanctions on Digital Platforms: Beyond the Public–Private Divide’, argue that platform operators ‘can have an intense power of norm-setting and sanctions, with a tendency to concentrate power within themselves or with unclear arrangements for dividing it across different entities’. The article argues that this ‘can deeply affect individual freedoms’ and it explores legitimacy questions that arise. It then considers alternative frameworks for regulating sanctions, concluding they should focus on the protection of the freedom of individual users.
Paolo Cavaliere, in his article ‘Digital Platforms and the Rise of Global Regulation of Hate Speech’, analyses the terms of service of online platforms that set standards for the blocking or removal of undesirable content, a requirement set out in the EU Code of Conduct on Countering Illegal Hate Speech Online. The article reveals that, compared to existing legal standards, ‘the scope of speech that may be removed increases significantly under the Code’s mechanism’. It therefore considers these platforms ‘as substantive regulators of speech’.
Petra Molnar, in her article ‘Technology on the Margins: AI and Global Migration Management from a Human Rights Perspective’, examines ‘how technologies used in the management of migration impinge on human rights with little international regulation, arguing that this lack of regulation is deliberate, as States single out the migrant population as a viable testing ground for new technologies’. The article concludes that ‘[m]ore oversight and issue-specific accountability mechanisms are needed to safeguard fundamental rights of migrants’.
Shannon Raj Singh, in her article ‘Move Fast and Break Societies: The Weaponisation of Social Media and Options for Accountability under International Criminal Law’, considers ‘the application of international criminal law to the role of social media entities in fuelling atrocity crimes’. The article argues that ‘it may be more productive to conceptualise social media’s role in atrocity crimes through the lens of complicity’ rather than incitement, while ‘drawing inspiration not from the media cases in international criminal law jurisprudence, but rather by evaluating the use of social media as a weapon’.
Large multinational corporations, among others, give birth to AI virtual personal assistants and complex algorithms affecting migration flows, and generate technology-related norms regulating the behaviour of users on their platforms. They develop ICTs that have the capacity to control hate speech, fuel atrocity crimes, enable the spread of political propaganda and fake news, and silence women and local communities. This Special Issue exposes the risk of these corporations transforming into the ‘Victor Frankensteins’ of contemporary societies, and reflects on the role of international law in addressing the wide range of concerns raised by new and emerging ICTs.