The urgency of AI regulatory framework: Sunak must listen to experts

Bletchley Park

Authorities in the UK continue to downplay the urgency of AI regulation despite increasing global momentum in this space, from both supranational institutions and individual nations. The Secretary of State for Science, Innovation and Technology, Michelle Donelan, has defended this position by suggesting that it will “ease the burden on business” and help establish the “UK as an AI superpower”. Other ministers, such as Viscount Camrose, Minister for AI and Intellectual Property, have reiterated this position, stating that there will be no UK laws on AI “in the short term”.


This has created serious concern among experts that the UK is lagging behind the likes of China and the US, as well as institutions like the European Union, in regulatory practice. In October 2023, President Biden issued a “landmark executive order … seizing the promise and managing the risks of AI”, whilst the EU has passed provisional “landmark rules governing the use of AI”.


Downplaying the importance of regulation in the short term was a position the government expressed consistently throughout 2023. A white paper released in March laid out a clear commitment to opposing strong foundational regulatory policies in the short term in order to establish what it called a “pro-innovation” framework, one that largely places the regulatory onus on businesses themselves. Private organisations are to “solve important problems whilst addressing the risk of harm to civilians” without guiding input from government policy, which would instead “reduce regulatory uncertainty … to drive growth and prosperity”. Critics have suggested that this will create a situation where “the private sector are the tail wagging the dog”. In a clear departure from other burgeoning regulatory regimes, the white paper indicated that the government will not assign risk levels to entire technologies, under which a technology could be banned depending on its degree of ‘risk’. Though presented as an approach that enables innovation, we at impACT believe that in reality this leaves the door open to the building of frontier models that may pose significant humanitarian, and possibly existential, risk with no external checks.


This stands in clear juxtaposition to the attitudes expressed by many industry and academic experts at the “first-of-its-kind” AI summit in November last year. Sunak and British political leaders gathered senior officials, civil society leaders and executives of major AI companies at Bletchley Park to discuss the potential and risks of artificial intelligence, stoked by the startling acceleration of its development in recent years. Even tech-industry giants like Elon Musk have expressed a desire for regulation, stating that “having a referee is a good thing”. Musk himself “has repeatedly raised alarms about AI’s future impact on civilisation”.


The summit successfully established a joint commitment from 28 governments and leading AI companies, the Bletchley Declaration, in response to widespread concerns about the technology’s risks. The Declaration demands that new AI models be subject to a “battery of safety tests” in order to decrease the potential dangers. The summit also saw the establishment of the UK AI Safety Institute.


Outwardly, this presents the United Kingdom as a world-leading authority on AI. However, the government’s attitude to developing regulatory frameworks indicates a commitment to business interests rather than to responsible, safe, and human rights-facing artificial intelligence. Government ministers and Sunak himself have sought to downplay expert understanding and existing regulatory capabilities in order to support this business-oriented approach. At Bletchley Park, Sunak indicated that our current understanding of the technology dictates that we do not “rush” to build regulatory frameworks, asking: “how can we write laws that make sense for something we don’t yet fully understand?”. Alongside leaving businesses to assess their own models, this position has drawn criticism from experts. Brent Mittelstadt, Associate Professor and Director of Research at the Oxford Internet Institute, University of Oxford, responded to this notion:


“The idea that we do not understand AI and its impacts is overly bleak and ignores an incredible range of research undertaken in recent years to understand and explain how AI works, and to map and mitigate its greatest social and ethical risks… My worry is that with frontier AI we are effectively letting the private sector and technology development determine what is possible and appropriate to regulate”. 


Although Sunak suggests his government is attempting to build agile AI frameworks that will attract global businesses to the UK, the likely realities of implementation are more unsettling and put the United Kingdom in a precarious position. Carissa Véliz, Associate Professor at the Faculty of Philosophy and Institute for Ethics in AI at the University of Oxford, has said that such attitudes leave her “not optimistic” about the future of regulation and the practical applications of AI in the UK.


We at impACT wish to reiterate that rather than prioritising business, government attitudes must place ethical concerns at the centre of regulatory practice. In an article by Professors Alan Winfield and Marina Jirotka, it is argued that the “sound ethical principles” already present in robotics and AI should be fully translated into practice by policymakers. They lay out clear principles for a cogent regulatory regime, with “good ethical governance” at its centre.


They propose “five pillars of good ethical governance” to embed these principles at the base of a proper regulatory framework: a published ethical code of conduct (including whistleblower mechanisms); ethics training for all who operate in the space; responsible innovation, with ethical risk assessments undertaken on all new products; clear transparency about the processes of product creation; and, finally, a demonstration that companies truly value ethical AI creation rather than using it as “a smokescreen for maximising shareholder returns”.


We at impACT implore Rishi Sunak and his ministers to truly listen to experts, rather than prioritise business, in this transformative and potentially dangerous industry. If the government wishes to establish the United Kingdom as a world leader, then it must lay a world-leading foundation for a regulatory regime. Fortunately, the UK is home to a number of industry-leading academics and experts; they must be listened to if the Prime Minister wishes to achieve his goal of establishing the UK as “an AI superpower”.
