Nazar Nosenko
Information and Communication student at Paris-Panthéon-Assas University
Cyber threats have become widespread since the early 2000s, and significant measures have been taken against them, such as the 2001 Budapest Convention on Cybercrime, now signed by almost 70 states, which strives to establish legal measures against cybercrime globally.[1]
But new technologies bring new challenges. Cybercrime rates have doubled globally over the last decade, with a growing proportion of these crimes being AI-enabled, such as deepfakes, which can cause serious reputational damage to organisations. The threat is exacerbated by the rapid growth of generative AI, which could give rise to entirely new kinds of cyber threats.
However, governments and businesses are still not adequately equipped to address this issue. At the UN General Assembly in December 2023, 95% of member states showed a clear interest in reforming cybercrime laws, which in turn raises questions such as how AI should be regulated to address both current and future risks.
This presents a new opportunity for growth in the field of AI regulation. In this article, we will first discuss what cyber threats are (I), then ask who the perpetrators and targets are (II) in order to better frame the problem. This will lead us to the solutions proposed by different states and organisations (III).
I. The dawn of a new age of AI cyber threats
“My worst fears are that we…the technology industry, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong.” [2]
– OpenAI CEO Sam Altman, testifying before the US Senate Judiciary Committee
In the early 2000s, cyber threats ranged from text messages asking you to confirm personal information to emails promising ‘$5,000 if you click here’, which then installed a virus on your computer. The challenge of combating cyber threats has since been exacerbated by the fact that defensive technology has not kept pace with new hacking methods, and the figures show that these threats affect organisations worldwide: 37% of organisations globally have already experienced AI-enabled fraud, and 91% of US companies identify such threats as an escalating concern.[3]
This growth can be explained by two ways in which AI is being used for cyber threats:
Firstly, AI is improving old techniques used for cyber threats. Given the sheer volume of data it can process, it is a trivial task for an AI system to send 10,000 personalised messages to people with different names and contact details. Two main techniques fall into this category: phishing, which involves sending fraudulent messages that appear to come from a legitimate source in an attempt to extract or steal information from the victim,[4] and spoofing, described as a ‘cybercriminal masquerading as a trusted entity to get you to do something beneficial to the hacker’.[5] These attacks are almost certain to increase, according to the UK National Cyber Security Centre’s January 2024 report.
Secondly, AI could create entirely new cyber threats through its technological advances. AI systems, particularly large language models (LLMs), are capable of analysing vast amounts of data and generating human-sounding interactions. As a result, AI-driven technologies can clone voice patterns, fabricate audio files and create fake images and videos. One of the first cases of AI fraud was reported in 2019, when fraudsters used an AI-generated voice clip of a parent company’s CEO to instruct the CEO of its UK subsidiary to transfer €220,000 to a fictitious Hungarian supplier.[6] Another example is a fraud case reported in early 2024, in which hackers stole $25 million from a multinational company. Using a deepfake video call, the hackers fooled a Hong Kong employee into believing he was speaking with a digitally recreated version of the company’s CFO, who instructed him to transfer funds to a specific account.[7] Most importantly, the perpetrators have still not been identified, because their technical capabilities and use of encryption cannot yet be countered by public authorities.[8] Now that we have defined cyber threats, it is important to understand their wider implications for businesses and financial crime.
II. Politics behind AI threats: cyber threats from whom to whom?
It is important to distinguish the different actors at play in order to understand the magnitude of the cyber threat and the importance of finding appropriate solutions. Victims can be divided into two categories: individuals on the one hand, and companies and states on the other, because attacking these targets requires different levels of skill.
For example, novice hackers tend to have fewer tools and less experience and will typically conduct phishing attacks or plant a virus on the victim’s computer to obtain login credentials and data. A recent example comes from the 2024 US presidential election race, where an AI-generated clone of presidential candidate Joe Biden’s voice was used to place approximately 25,000 fraudulent robocalls.[9] This illustrates our earlier point that AI is amplifying existing cyber threats, which are now being exploited by opportunistic hackers.[10]
The operations of professional hackers, however, work on a larger scale. They involve various advanced techniques, such as encrypting (locking) government data and holding it for ransom, or generating malware more efficiently. These attacks generally target large companies and government agencies, which often incorporate AI into a wide range of services, including sales and customer support.[11] This leads us to the duality of AI:
AI can be considered a double-edged sword in cybersecurity operations.[12] On the one hand, it enhances the defences of companies and states, encouraging them to store more data in their systems. This, in turn, makes those systems more rewarding targets for hackers, who can use AI to develop sophisticated and stealthy attacks. In practice, one such method consists of crafting an input to another AI system that causes it to produce a wrong answer to a given question. This brings us to the issue of adversarial machine learning, in which an attacker deliberately manipulates the inputs to a victim model so that it gives ‘wrong answers’, effectively handing over data or access to the hacker.[13] A notable example is IBM’s DeepLocker, a proof of concept that demonstrates the potential of AI to create malware that evades traditional detection by keeping its payload concealed until specific trigger conditions are met.[14] This duality underscores the need for balanced regulation of AI to harness its benefits while mitigating the risks, as it can become a powerful tool in the hands of attackers. It will require not only technical but also legal, ethical and strategic solutions on a global scale.
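To make the notion of adversarial inputs more concrete, the sketch below shows, in purely illustrative Python, how a tiny perturbation can flip the verdict of a toy classifier. The ‘victim’ model, its weights and the decision threshold are hypothetical assumptions made only for this example; real attacks target far more complex systems, but the principle of nudging an input until the model answers wrongly is the same.

```python
# Purely illustrative sketch of an adversarial input against a toy "victim" model.
# The model, weights and numbers are hypothetical; no real system is represented.
import numpy as np

rng = np.random.default_rng(0)

# Toy victim: a fixed logistic-regression scorer that pretends to classify
# an input as "legitimate" (score > 0.5) or "fraudulent" (score <= 0.5).
weights = rng.normal(size=20)
bias = 0.1

def score(x: np.ndarray) -> float:
    """Probability that the victim model labels the input as legitimate."""
    return 1.0 / (1.0 + np.exp(-(weights @ x + bias)))

# An input the model currently rejects as fraudulent.
x = -0.1 * np.sign(weights) + rng.normal(scale=0.05, size=20)

# Adversarial step (fast-gradient-sign style): because the model is linear,
# the direction that raises the score fastest is simply sign(weights), so a
# small nudge in that direction changes the verdict while barely altering x.
epsilon = 0.2
x_adv = x + epsilon * np.sign(weights)

print(f"original score:  {score(x):.3f}")      # typically well below 0.5
print(f"perturbed score: {score(x_adv):.3f}")  # typically above 0.5
print(f"largest change to any single feature: {np.max(np.abs(x_adv - x)):.3f}")
```

The point of the sketch is not the code itself but the asymmetry it illustrates: the defender must secure the model against every such nudge, while the attacker only needs to find one that works.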
III. Future insights: diverse methods of AI regulation in cybersecurity
There is no standard approach to government regulation of AI, given the ever-changing nature of AI technology. However, governments generally strive to balance innovation with risk management.
As a result, AI regulation is often found in national strategies and agreed-upon ethical guidelines. Policy documents and government recommendations can be amended more easily than legislation and allow for a flexible approach that keeps pace with rapid developments in the sector. This is exemplified by the white paper (a government report) published in March 2023 by the UK Department for Science, Innovation and Technology, which introduces a non-statutory framework that sets expectations and empowers existing regulators in each sector, such as the Financial Conduct Authority (FCA).[15]
Other jurisdictions are under more pressure, such as the US following the election example mentioned above, forcing them to react quickly to contain the problem. On 8 February 2024, the US Federal Communications Commission (FCC) declared that AI-generated voices are ‘artificial’ voices within the meaning of the Telephone Consumer Protection Act 1991 (TCPA), making robocalls that use them illegal.[16] To anticipate further potential challenges, several US states, including Michigan, California, Washington, Minnesota and Texas, have enacted legislation aimed at curbing the misuse of AI technologies during election periods. These laws primarily focus on restricting the creation and distribution of deepfake content that could influence election outcomes by misrepresenting candidates or manipulating public opinion. In addition, efforts are underway in 26 other states to introduce similar regulations, demonstrating a nationwide commitment to addressing the ethical implications of AI in the political arena and protecting the integrity of elections.
Looking ahead, cyber threats will continue to grow as an international concern. While countries are advancing their own frameworks, they are also redoubling their efforts to work with one another and with international institutions to coordinate and harmonise these different approaches.
Examples of inter-state collaboration include the European Union’s NIS2 Directive and the recommendations of the US National Security Commission on Artificial Intelligence (NSCAI), chaired by former Google CEO Eric Schmidt, both of which have emerged as significant legal safeguards against AI-enabled threats.[17] While these initiatives lay the groundwork for AI regulation, they also underscore the need for a broad approach that encompasses civil rights and liberties in the digital age and aims to protect individuals from AI-related threats in all areas of society.
On the other hand, collaboration with companies and private organisations will be just as crucial: companies such as OpenAI are the ones developing AI and have more practical experience of, and access to, the technology than most countries, making direct collaboration with them a fundamental aspect of any workable AI regulation in the future. Without direct engagement with companies, regulators run the risk of tipping the scales and halting progress in this technological field, another aspect of AI’s double-edged nature. As a result, companies such as Google, Meta, Microsoft, OpenAI and TikTok have all pledged to take ‘reasonable precautions’ to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.[18]
In conclusion, the cybersecurity landscape has evolved significantly, with cyber threats becoming more pervasive and sophisticated since the early 2000s. The use of AI allows amateur hackers to increase the frequency of their attacks and enables more experienced ones to devise new techniques, forcing organisations to adapt. In response, multiple layers of protection must be put in place, ranging from national laws to international cooperation, alongside collaboration with private entities. As these efforts gain momentum, the cybersecurity market is projected to grow to $500 billion by 2029. With the imminent growth of cyber defence, it will be crucial to consider broader implications such as compliance, data protection and the appropriate distribution of responsibility.
[1] European Treaty Series No. 185, Convention on Cybercrime (Budapest, 23 November 2001)
[2] Luke Hurst, ‘OpenAI’s Sam Altman calls for regulation amid fears AI could cause “significant harm to the world”’ (Euronews Next, 17 May 2023)
[3] ‘One-Third of Global Businesses Already Hit by Voice and Video Deepfake Fraud’ (Regula Forensics press release, 27 April 2023)
[4] ‘What is phishing’ (Cisco, 2023)
[5] ‘What is Spoofing – Definition and Explanation’ (Kaspersky, 2023)
[6] Jesse Damiani, ‘A voice deepfake was used to scam a CEO out of $243,000’ (Forbes Consumer Tech, 3 September 2019)
[7] Drew Todd, ‘Hong Kong Clerk Defrauded of $25 Million in Sophisticated Deepfake Scam’ (Secure World, 13 February 2024)
[8] Laura Dobberstein, ‘Deepfake CFO tricks Hong Kong biz out of $25 million’ The Register (5 February 2024)
[9] Pranshu Verma, ‘Democratic operative admits to commissioning Biden AI robocall in New Hampshire’ The Washington Post (26 February 2024)
[10] James Pearson, ‘AI rise will lead to increase in cyberattacks, GCHQ warns’ (Reuters Cybersecurity, 24 January 2024)
[11] Ibid
[12] Tamer Charife, Michael Mossad, ‘AI in cybersecurity: A double-edged sword’ (Deloitte Insights, Fall 2023)
[13] ‘Artificial Intelligence: Adversarial Machine Learning’ (National Cybersecurity Center of Excellence)
[14] Marc Ph. Stoecklin et al, ‘DeepLocker: How AI can power a stealthy new breed of malware’ (Security Intelligence, 8 August 2018)
[15] Mark A. Prinsley et al, ‘The UK’s approach to regulating the use of AI’ (Mayer Brown, 7 July 2023)
[16] Shannon Bond, ‘The FCC says AI voices in robocalls are illegal’ (National Public Radio, 8 February 2024)
[17] ‘Directive on measures for a high common level of cybersecurity across the Union (NIS2 Directive)’ (European Commission Policies)
[18] Guardian staff and agencies, ‘Tech firms sign ‘reasonable precautions’ to stop AI-generated election chaos’ The Guardian AI (16 February 2024)