An Overview of the UK’s National AI Strategy

Published on:
Pharmaceutical Technology, March 2022 Issue, Volume 46, Issue 3

The UK’s National AI Strategy underpins the government’s long-term commitment to enhancing the country’s digital ecosystem, with the health and life sciences sectors seen as pivotal contributors to meeting this ambition.

On 22 Sep. 2021, the Secretary of State for Digital, Culture, Media, and Sport (DCMS) published the United Kingdom’s National Artificial Intelligence (AI) Strategy (the ‘Strategy’). In its published form, the Strategy sets out a 10-year plan to make the UK “a global AI superpower”, building on R&D successes in the field, previous AI Sector Deal investment, and the establishment of AI bodies and structures such as the AI Council and the Centre for Data Ethics and Innovation (CDEI) (1).

The Strategy follows a raft of related plans, strategies, and roadmaps, such as the National Data Strategy (December 2020) (2), the AI Council’s AI Roadmap (January 2021) (3), the Plan for Digital Regulation (July 2021) (4), and the UK Innovation Strategy (July 2021) (5). The publication of the UK National AI Strategy follows guidance issued by the European Commission (EC), which published its own comprehensive proposals for regulating AI technologies in April 2021, and which regards health as an important application area (6).

The UK remains first in Europe and ranks third globally, behind the United States and China, for private investment in AI technologies (7). For its part, the UK government has invested more than £2.3 billion into AI since 2014, and the Strategy outlines the government’s vision to build on this progress. The Strategy reflects not just the UK’s ambition to use AI for regional prosperity and across sectors, but also to play a key part in addressing global challenges such as achieving net zero, health resilience, and environmental sustainability (8).

The essence of the Strategy

The Strategy notes that no single definition of AI is suitable for every scenario and therefore defines AI as “machines that perform tasks normally requiring human intelligence, especially when the machines learn from data how to do those tasks” (9). This definition helps distinguish AI policy from other technology or digital policy.

The three core pillars

The agenda for the next 10 years is shaped by the three core pillars of the Strategy, under which three key goals have been set.

The goals are:

  • To experience significant growth in the number and type of AI discoveries made and developed in the UK
  • To see the greatest economic and productivity growth from AI
  • To establish the most trusted and pro-innovation system for AI governance in the world (9).

The three core pillars are:

  • Investing in the long-term needs of the AI ecosystem
  • Ensuring AI benefits all sectors and regions
  • Governing AI effectively (9).

Each pillar is divided into short-term (within three months) and longer-term (3–12 months) outputs, under which the Strategy introduces numerous policies and commitments to be implemented. A summary of the key points is contained in Table I. The three pillars identify ‘people’, ‘data’, ‘computing power’, and ‘finance’ as the key drivers of progress, discovery, and strategic advantage in AI, while acknowledging that governance and regulation will need to keep pace with technological development. Accordingly, the Strategy specifies particular courses of action that will be pursued.

Strategy for AI in health and social care

In August 2019, the National Health Service (NHS) AI Lab was created to accelerate the development of safe, ethical, and effective AI-driven technologies for tackling challenges in health and social care. The NHS AI Lab operates under the auspices of NHSX, the UK government body responsible for setting national policy and developing best practice for NHS technology, digital, and data, including data sharing and transparency. Under the guidance of NHSX, the NHS AI Lab is also responsible for formulating a new National Strategy for AI in Health and Social Care, a commitment under Pillar 2 of the National AI Strategy, which places a strong emphasis on ensuring that the transition to an AI-enabled economy benefits all sectors and regions of the UK. A draft National Strategy for AI in Health and Social Care is expected in early 2022 and will set the agenda for AI in the health and social care sector towards 2030.

For companies operating in the AI, data-driven health, and life sciences sectors, it is clear that the interplay between factors such as R&D, digital technologies, data security and privacy, infrastructure, and ethics will mean that organizations will need to keep a holistic eye on the spectrum of related developments (both within the UK and abroad), particularly as the UK looks to navigate its new position post-Brexit (1).

Comparison with other national strategies

The UK National AI Strategy’s focus on economic growth is consistent with the national AI strategies adopted by countries elsewhere. Nevertheless, some features of the Strategy differentiate it from the approaches adopted by other countries, such as its aim to develop the UK’s compute infrastructure for AI applications (4). Despite the significance of such infrastructure in supporting the advancement of AI, the only other country to address compute capacity in its national AI strategy is the US.

Another distinguishing feature of the Strategy is its focus on international technical standards as a mechanism for governing AI. In this instance, the UK (along with Australia) appears more focused on influencing technical standards for AI systems, such as requirements for documentation and reliability testing, than on developing legislation itself (8). This approach underscores the British government’s intention to establish the UK as a ‘safe harbour’ for the development of AI technologies and its belief that lighter regulation will encourage innovation in the sector. This position, however, stands in stark contrast with the EC’s approach to AI regulation, which proposes to strictly regulate certain types of ‘high-risk’ AI technologies (6).

Furthermore, divergence from the European Union (EU) on key points, such as those relating to data, may also make it harder for AI developers and companies using AI technologies in the UK to operate in the EU (and vice versa). More importantly, it could put at risk the continuity of the data adequacy decision granted to the UK by the EU on 28 June 2021, following the UK’s departure from the bloc, which was designed to avoid conflict between British and European data protection legislation. The adequacy decision ensures the continued and unimpeded transfer of personal data from the EU to the UK (10).

The path ahead

The UK’s National AI Strategy, which contrasts markedly with the EU’s proposed AI rules, indicates that the country is keen to secure an economic boost post-Brexit. To this end, the UK government is attempting to strike a middle ground (11). At a national level, this will indeed help start-ups and smaller businesses focused initially on the UK market. For international businesses and those looking to scale, however, compliance costs will rise as the divergence between regulatory environments grows (12).

While the EC has also set out a plan to boost AI innovation, it intends to strictly regulate applications that could impinge on fundamental rights or product safety, and to ban some ‘unacceptable’ uses of AI, such as government-conducted social scoring (11). Implementing this approach in healthcare will be more difficult, however, as it will require careful balancing of core values with detailed consideration of the nuances of health and AI technologies (13). To this end, the EC needs to consider AI’s ability to adapt to different users and contexts, as a ‘one-size-fits-all’ approach may not yield the necessary outcomes in the context of healthcare, a notion recognized by the US Food and Drug Administration (FDA) amongst others (14).

Ultimately, the timelines for AI regulation are also crucial. The EC’s AI proposal is currently undergoing legislative scrutiny in the European Parliament and Council. If the EU adopts its legislation before the UK formally adopts its own, the EU will have first-mover advantage, depriving the UK of the opportunity to act as a trendsetter for AI regulatory standards. Instead, the UK could find itself obliged to align with EU standards (15). Nevertheless, the formulation of the UK’s National AI Strategy, and within it the anticipated new National Strategy for AI in Health and Social Care, underpins the UK government’s long-term commitment to supporting and enhancing the country’s digital ecosystem. It also signals the vital importance of the health and life sciences sectors as pivotal contributors to meeting these aims.


References

1. E. Keeling, “The UK’s National AI Strategy: Setting a 10-year Agenda to make the UK a ‘Global AI Superpower’,” Allen & Overy, Digital Hub Blogs, 24 Sep. 2021.
2. UK Gov., National Data Strategy, Policy Paper, [Updated 9 Dec. 2021].
3. UK Gov., UK AI Council. AI Roadmap, Report (January 2021).
4. UK Gov., Digital Regulation: Driving Growth and Unlocking Innovation, Policy Paper, (6 July 2021).
5. UK Gov., UK Innovation Strategy: Leading the Future by Creating It, Policy Paper, (22 July 2021).
6. O. Yaros, A.H. Bruder, and O. Hajda, “The European Union Proposes New Legal Framework for Artificial Intelligence,” Mayer Brown, 5 May 2021.
7. Business Wire, “The UK Leads Europe and Ranks Third Globally in Artificial Intelligence,” Press Release, 15 Dec. 2021.
8. CMS Law-Now, “‘National AI Strategy’: Step Change for the AI Economy in the UK,” Article, 11 Oct. 2021.
9. HM Government, National AI Strategy, Policy Paper (September 2021).
10. T. Reilly, “Data Divergence: A Brexit Dividend?” Covington, Inside Privacy Feature, 8 Sep. 2021.
11. M. Heikkila, “UK Charts Post-Brexit Path with AI Strategy,” POLITICO, 22 Sep. 2021.
12. C. Eastham, “UK AI Strategy Focussed on Economic Growth, Resilience and Ethics,” Computer Weekly, 30 Sep. 2021.
13. I.G. Cohen, et al., The Lancet, 2 (7) E376–E379 (2020).
14. FDA, Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD), Discussion Paper and Request for Feedback, (2020).
15. D. Cooper, et al., “The UK Government Publishes its AI Strategy,” Covington, Inside Privacy Feature, 4 Oct. 2021.

About the author

Bianca Piachaud-Moustakis is lead writer at Pharmavision.

Article details

Pharmaceutical Technology Europe
Vol. 34, No. 3
March 2022
Pages: 7–8


When referring to this article, please cite it as B. Piachaud-Moustakis, “An Overview of the UK’s National AI Strategy,” Pharmaceutical Technology Europe 34 (3) 2022.