AI safety comes under further scrutiny in UK

THE UK’s Frontier AI Taskforce is establishing a safety research team to evaluate the emerging technology’s risks.

It will work with tech organisations to assess how AI can improve human capabilities in specialised fields and to identify the risks that remain under current safeguards. The findings will feed into roundtable discussions with civil society groups, government representatives, AI companies and research experts at next month’s AI Safety Summit at Bletchley Park, Buckinghamshire.

The Frontier AI Taskforce will work with tech organisations to assess how AI can improve human capabilities

ITG deputy CEO John Kirk said collaboration to tackle cautions and fears was key. AI has the potential to accelerate business operations in all areas, he said, and the team’s establishment “helps better position it for tech superpower status”. All sectors stand to benefit from its safe development, Kirk added, especially the creative industries.

The Frontier AI Taskforce recently announced the establishment of its advisory panel and the appointment of two research directors.

The AI Standards Hub is also bolstering the UK’s position, with survey respondents recognising it as a leading player in AI standards development.

A report from Oxford Information Labs found that nearly 70 per cent of respondents regarded the hub as “the top coalition in the UK” for promoting international discussion on AI standards.

The Hub was launched in October last year with the aim of giving the UK an internationally recognised voice in the development of AI standards.