Artificial Intelligence: Walking the tightrope between regulation and innovation

Published on 10th May 2023

As we track and analyse global regulatory changes and initiatives, we see regulators in many jurisdictions working to regulate risks, whether they be operational, prudential, market or otherwise. However, the recent discussions around the control and development of AI have thrown up a new one – existential risk. Some of the narrative surrounding the regulation of AI has more in common with a sci-fi blockbuster than with market regulation. Hyperbole or not, we are seeing regulators reacting and contending with these issues and others such as privacy, data protection, intellectual property, liability and accountability, while also balancing that with the need to encourage innovation. This post highlights some of the recent global regulatory developments in this area.

The complex technical and ethical issues surrounding AI have been addressed in a number of publications by influential global bodies and associations. In 2019, the Organisation for Economic Cooperation and Development (OECD) proposed the OECD’s Principles on AI (1), which have been influential and adopted widely. It also issued a baseline framework to assist policy makers, regulators and legislators in assessing AI opportunities and risks (2).

In 2020, the Global Partnership on Artificial Intelligence (GPAI) was launched as an international and multi-stakeholder initiative with the aim to advance the responsible and human-centric development and use of AI.

UNESCO issued a Recommendation on the Ethics of Artificial Intelligence (3), which was adopted by acclamation by 193 Member States at the general conference in November 2021. At the G7 Digital and Tech Ministers Meeting in Japan in April 2023, ministers agreed to pursue “risk-based” and “forward looking” regulation of AI (4).

In a ground-breaking step, the EU is working to formulate a comprehensive framework of AI regulation under its Digital Package. The proposed text of the AI Act would significantly impact and influence the development of AI in the EU.
Central to the proposed rules is a classification system which determines the level of risk AI technology could pose. The members of the European Parliament reached political agreement on the proposed Act at the end of April 2023 and it will now continue through to the trilogue stage.

Post-Brexit regulatory divergence is increasingly apparent between the UK and the EU, with the differing approaches to AI regulation being a good example of this. In March 2023, the UK Department for Science, Innovation and Technology and the Office for Artificial Intelligence issued a policy paper setting out proposals for “A pro-innovation approach to AI regulation” (5), with an associated consultation running until 21 June 2023. A “principles-based” regulatory framework is proposed, underpinned by the following five principles:

  • Safety, security and robustness
  • Appropriate transparency and explainability
  • Fairness
  • Accountability and governance
  • Contestability and redress

Existing regulators will be required to incorporate the principles into their existing regulations and provide sector-specific guidance focussing on the use of technology rather than the technology itself.

In the USA, we again see an approach that puts the need to encourage innovation at the heart of regulation. In April 2023, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) launched a request for comment on what policies can support the development of AI audits, assessments, certifications and other mechanisms that create earned trust that AI systems work as claimed (6). In October 2022, the Biden administration published a Blueprint for an AI Bill of Rights (7), which contains a set of five principles and associated practices to help guide the design, use and deployment of automated systems to protect the rights of the American public in the age of AI. A high-level regulatory framework for AI is also being discussed and circulated at federal level by Senator Chuck Schumer (8). At state level, a patchwork of AI regulation is emerging, with some states having already enacted or proposed legislation to regulate AI on a sectoral basis. For example, the Bauer-Kahan Bill (9), introduced in California earlier this year, aims to regulate AI algorithmic bias across a range of sectors including employment, health care and financial services.

In China, on 11 April 2023, the Cyberspace Administration of China issued draft measures on managing generative AI services. The measures again place the importance of innovation at their heart and work alongside existing legislation. Notably, under the draft measures, providers would be required to submit security assessment reports to the competent authority and fulfil certain procedures before providing services to the public using generative AI products.

As AI technology advances exponentially, regulators are grappling with complex, ground-breaking concepts and the need to balance societal consequences with the benefits and encouragement of innovation and economic success. We will be monitoring future regulatory changes with great interest.

Susie MacKenzie, Head of Legal & Regulatory Analytics.

If you’d like to know more about Corlytics and our solutions, please get in touch with us.
