Regulation of Emerging Technologies and Sustainable Development: An EU/UK/US Perspective

Ivan Kovalenko
Organisational Sciences student at Paris Dauphine University - PSL

In a world where technology and sustainable development are reshaping our future, the European Union (EU) has made a significant contribution to international standards aimed at ethical and sustainable technology governance.

Other jurisdictions, such as the United Kingdom (UK) and the United States (US), have adopted specific standards in their regulatory frameworks and corporate governance models. Although each country has unique characteristics, they interact and inspire each other to collaborate on shared initiatives.

From a comparative law perspective, this article will discuss practices related to government technology (GovTech), which refers to the use of digital tools in administrative processes and public management, as well as corporate social responsibility (CSR) initiatives in the EU, the UK, and the US. We will highlight their similarities, differences, and mutual influences. First, we will examine the EU’s stance on GovTech and CSR (I), before comparing it with the approaches of jurisdictions such as the UK and the US (II). This comparison will allow us to identify common objectives among these three jurisdictions and to gain a better understanding of the European Union’s impact on global governance (III).

  I. The EU’s position on GovTech and CSR

The EU has established itself as a pioneer in the development of regulatory frameworks, particularly in the field of GovTech, which encompasses technologies designed to transform the public sector and increase its efficiency, making public services more accessible and effective for citizens. A notable initiative in this area is the Regulation on Artificial Intelligence (EU AI Act), adopted in May 2024. Although primarily aimed at regulating the use of AI within the EU, this legislation has a significant impact on GovTech solutions that incorporate AI systems.[1]

The main objective of this regulation is to impose transparency requirements on companies that develop AI. For example, some AI systems are classified as ‘high-risk’ due to their significant impact on people’s lives, such as those used in robot-assisted surgery or in grading school exams. These high-risk AI systems must undergo rigorous assessments to ensure compliance with current laws and regulations.

The relationship between the EU AI Act and GovTech is particularly relevant, as many of the high-risk AI applications listed in Annex III of the regulation relate to public sector uses. Examples include AI systems employed in public services to determine access to benefits, in policing for facial recognition, and in border control to assess migration risks. These areas fall directly under GovTech, and the EU AI Act requires public administrations and their technology providers to adhere to strict standards of transparency, security, and ethics. It also aims to prevent the generation of illegal content and to ensure that AI-generated material, such as deepfakes, is clearly identified, with such systems tested and certified, thereby enhancing the accountability and transparency of tech companies, including those operating in the public sector. As a result, GovTech applications built on AI models such as GPT-4 or DALL-E 2 must comply with these standards, contributing to the development of safer and more ethical systems.

In addition, the EU has implemented specific regulations to oversee the development of GovTech. For example, Directive (EU) 2016/2102 on the accessibility of public sector websites and mobile applications requires public administrations to make digital services accessible to all citizens, including those with disabilities. This reflects the EU’s commitment to the social aspect of CSR in the context of government technology. 

Beyond GovTech, the EU has also positioned itself as a leader in CSR, which the European Commission defines as “the voluntary integration by companies of social and environmental concerns into their business operations and relationships with stakeholders”.[2] Although not mandatory for all organisations, CSR is playing an increasingly important role for technology companies and public sector actors, with initiatives aimed at improving sustainability and responsible governance. The intersection between technology and CSR is particularly evident in the way companies are using technology solutions to measure their environmental impact and improve their transparency.

Indeed, the EU aims to achieve “carbon neutrality” by 2050, in line with the objectives of the SSP1-1.9 scenario, the most optimistic path calculated by the UN,[3] which aims to limit global warming to 1.5 degrees Celsius above pre-industrial levels.[4] A concrete example of this direction is the French law of 15 November 2021, which aims to reduce the environmental impact of digital technology. This law sets limits on the energy consumption of data centres and promotes “digital sobriety”, encouraging companies to align their technological innovations with environmental goals. To support these goals, the European Investment Bank has allocated over €35 billion to fund renewable energy projects over the past decade.[5]

These regulations, binding or not, are significantly impacting the strategic decisions of companies across various industries and altering global market dynamics. For example, a GovTech company developing digital solutions for public administrations will need to ensure that its technologies comply with CSR standards, which may become mandatory in the future. If the company fails to integrate sustainable practices into its operations, the perceived value of its services may fall short of market expectations, potentially affecting its competitiveness.

This model also applies to large conglomerates: the growing recognition that business success is linked to social and environmental responsibility is exemplified by influential figures such as Larry Fink, CEO of BlackRock. In his 2020 letter to CEOs, Fink urged global business leaders to focus on sustainability, environmental stewardship, and social governance.[6]

In addition, concrete initiatives are already visible, particularly with the adoption of regulatory frameworks such as the EU AI Act. Some GovTech companies, along with major tech corporations, are adjusting their strategies to comply with new sustainability and responsible governance standards by integrating environmental, social, and governance (ESG) criteria into their operations. This shift shows that awareness of these issues is not merely theoretical but is reflected in tangible action. Given the increasingly globalised nature of commerce, it is essential to examine the regulations adopted by other jurisdictions, particularly the UK and the US, to understand how these new requirements are shaping the global market.

  II. The regulatory position of the UK and the US

In terms of environmental objectives, the UK has maintained positions close to those of the EU, partly owing to its membership of the EU before Brexit (2020). Some of its corporate governance rules translate into disclosure obligations for companies. For example, since 1 October 2013, all publicly listed companies in the UK have been required to report their greenhouse gas emissions and global energy consumption in their annual directors’ reports, under the Companies Act 2006 (Strategic Report and Directors’ Report) Regulations 2013.

This reporting is crucial for two main reasons. First, measuring emissions is a fundamental step in managing them.[7] Such reporting also enhances a company’s reputation and image, which helps guide investor preferences: investors are increasingly focused on sustainable investments, favouring companies with strong reputations for environmental responsibility and long-term adaptation strategies. Second, reporting helps companies save money by identifying which business activities consume the most energy and where renewable alternatives could make operations more profitable.

The development of GovTech is closely linked to CSR, as government technology can play a crucial role in achieving social and environmental goals. For example, it allows administrations to digitise their services, reducing paper consumption and travel, which in turn reduces carbon emissions. It also promotes social inclusion by ensuring digital accessibility, enabling all citizens, including those with disabilities, to access public services. As a result, companies providing GovTech solutions are encouraged to incorporate CSR practices into their operations to meet expectations of sustainability and social responsibility. This is particularly relevant for large companies such as Amazon, which adapt their facilities to meet commercial and sustainability requirements in a model known as “built-to-suit”.[8]

In addition, these reports can support legal action, mainly class action lawsuits. In a growing trend inspired by the US, class actions allow individuals to bring claims on behalf of a group or class.[9] Company reports can serve as grounds for such claims, exposing companies to significant financial penalties. One example is the emissions-cheating scandal involving the German carmaker Volkswagen, which led to a $14.7 billion settlement approved by a federal court in San Francisco in 2016.[10]

When it comes to GovTech and AI, the UK favours a more liberal approach than that of the EU AI Act. The white paper published by the UK’s Department for Science, Innovation and Technology in March 2023 proposed a non-statutory framework rather than ‘far-reaching’ legislation to regulate AI. This framework sets out expectations while granting sector regulators, such as the Financial Conduct Authority (FCA) and the Competition and Markets Authority (CMA), the authority to oversee AI in their respective areas.[11] Given the rapid evolution of AI, this flexible approach is designed to be adaptable, creating an environment conducive to innovation.[12] However, while this flexibility encourages innovation, it does not adequately address the ecological footprint of these technologies, particularly in terms of energy consumption.

In contrast, some US states and cities have begun adopting specific laws to regulate AI. For example, New York City enacted Local Law 144, which requires employers to audit AI tools used in hiring decisions.[13] While the idea of a binding AI Bill of Rights comparable to the EU AI Act remains at the proposal stage, it is worth noting that US AI policy is largely driven by non-governmental actors. Institutional investors such as BlackRock are developing their own AI tools according to their internal standards, while companies such as Microsoft are shaping how AI is developed and used by investing billions of dollars in leading firms like OpenAI (the creator of ChatGPT).[14]

Leading academics in the US are also playing a central role in educating global leaders on AI governance. Take Stanford’s Institute for Human-Centered Artificial Intelligence (HAI), a key hub for global AI discussions. The institute serves as a forum that helps clarify that AI regulation goes beyond simply imposing “restrictions”; the safety of these technologies depends on the context in which they are used.[15] An AI model that lacks context would miss information essential for informed decision-making. For example, a self-driving car programmed to “protect the driver” might, in the absence of appropriate context, deliberately collide with a pedestrian to achieve its objective, without considering the wider implications. This kind of thinking echoes the insights of Mo Gawdat, former Chief Business Officer of X (Google), in his book Scary Smart.

While some developments appear to diverge, the EU, the UK, and the US influence each other and often collaborate on regulatory projects. As we will see, the EU’s pioneering role in regulatory development not only demonstrates its commitment to tackling complex global challenges but also serves as a catalyst, encouraging other jurisdictions to follow its lead.

  III. Global Influences and Common Goals

Achieving carbon neutrality has become a strategic target for all, with companies and governments viewing it not only as a matter of responsible business practice but also as a means of safeguarding their future operations.

This is evidenced by the commitment of the EU, UK, and US leaders to the Paris Agreement, signed at the 2015 United Nations Climate Change Conference (COP21). The EU, with its ambitious climate adaptation targets, played a key role in mobilising the international community.[16] For instance, during COP28, the EU encouraged the creation of new green investment funds in Africa, with the United States pledging tens of millions of dollars.[17] In addition to its pivotal role in shaping the Paris Agreement, the EU has advocated for global energy objectives, such as “transitioning away from fossil fuels”. But this shift will only be meaningful if it is followed by concrete action. The EU has led the way by initiating new funds, such as the Global Loss and Damage Fund, pledging more than half of its initial funding (over €400 million).[18] The results of COP28 show that international cooperation is essential to tackle pressing global challenges, and by committing to implement these decisions, the EU is reinforcing its position as a leader in global environmental governance.[19]

Another important consideration is that EU legislation also affects the way third countries deal with the EU, particularly regarding data management. A notable example is the General Data Protection Regulation (GDPR), applicable since 2018, which covers non-European companies that process the data of EU residents. This means that companies operating in jurisdictions outside the EU often need to comply with EU guidelines and reporting requirements.[20] Consequently, other jurisdictions tend to align their regulations with those of the EU, promoting regulatory harmonisation, which makes the market safer for investors and facilitates cross-border business, as demonstrated by the post-Brexit agreements between the EU and the UK. This highlights the role of European GovTech regulation in encouraging other jurisdictions to adopt certain business practices.

In conclusion, the regulatory landscape for emerging technologies and sustainable development in the EU, UK and US reveals a dynamic interplay between innovation, responsibility, and global influence. While each jurisdiction offers unique approaches, common goals are emerging, particularly in combating climate change and strengthening corporate governance practices. The EU is positioning itself as a visionary and responsible actor, pioneering legislation on carbon neutrality and promoting environmental, social and governance practices. Its role in international climate conferences demonstrates its commitment beyond its borders. As its regulations continue to evolve, the EU’s influence on global innovation, corporate governance, and environmental management will pave the way for the adoption of sustainable and responsible practices in the future. Thus, integrating new technological solutions into environmental policy will be crucial to ensure that technological progress and environmental protection go hand in hand.


[1] ‘EU AI Act: first regulation on artificial intelligence’ (European Parliament Topics, 6 August 2023)

[2] Bercy Infos, ‘Qu’est-ce que la responsabilité sociétale des entreprises (RSE) ?’ (Ministère de l’Économie, 18 July 2022)

[3] ‘What is carbon neutrality and how can it be achieved by 2050?’ (European Parliament Topics, 13 March 2019)

[4] Andrea Januta, ‘Explainer: the U.N. climate report’s five futures decoded’ (Reuters, business environment, 8 September 2023)

[5] ‘COP28: EIB to support objectives of global renewables and energy efficiency pledge’ (European Investment Bank, 2 December 2023)

[6] ‘A Fundamental Reshaping of Finance - Larry Fink’s 2020 letter to CEOs’ (BlackRock, 2020)

[7] Department for Environment, Food & Rural Affairs, ‘Benefits of reporting greenhouse gas emissions’ (UK Government Policy Papers, 8 April 2011)

[8] Matt Mellot, ‘Built-to-Suit: What Does That Even Mean?’ (Sterling CRE Advisors, 2 February 2024)

[9] Wex Definitions Team, ‘Class Action’ (Cornell Law School, April 2023)

[10] Andy Gillin, ‘Largest Class Action Lawsuits & Settlements’ (GJEL Accident Attorneys, 1 February 2024)

[11] Mark A. Prinsley et al, ‘The UK’s approach to regulating the use of AI’ (Mayer Brown, 7 July 2023)

[12] Hannah Meakin et al, ‘AI and the UK regulatory framework’ (Norton Rose Fulbright Blog, 15 May 2023)

[13] Goli Madhavi et al, ‘US state-by-state AI legislation snapshot’ (BCLP Client Intelligence, 12 February 2024)

[14] ‘Microsoft-backed OpenAI valued at $80bn after company completes deal’ The Guardian (17 February 2024)

[15] Stanford University, Institute for Human-Centered Artificial Intelligence (HAI)

[16] David Waskow et al, ‘Unpacking COP28: key outcomes from the Dubai climate talks and what comes next’ (World Resources Institute, 17 December 2023)

[17] Office of US Press Relations, ‘USAID commits $53 million to address climate change in cities’ (USAID, 6 December 2023)

[18] ‘COP28’ (European Council, 15 January 2024)

[19] ‘Causes and effects of climate change’ (United Nations)

[20] Brooke Masters, ‘BlackRock to roll out first generative AI tools to clients next month’ Financial Times (6 December 2023)
