UK competition watchdog opens review into AI models

It comes just as the White House confirmed its own AI oversight measures.

Image credit: Dado Ruvic / Reuters

The UK government has announced an initial impact review in response to the continued growth of, and concerns around, generative AI and large language models. The investigation will reportedly look at how the creation and distribution of AI technology affects five wide-reaching areas: appropriate transparency and explainability; accountability and governance; safety, security and robustness; fairness; and contestability and redress. Overall, the review aims to determine how AI foundation models can, and likely will, impact both competition and consumer protections.

Regulatory bodies tasked with finding the answers include the Competition and Markets Authority (CMA), which promotes competition for the benefit of people and businesses while working against unethical practices. "It's crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information," Sarah Cardell, the CMA's chief executive, said in a statement. "Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection."

New advances from leading AI companies like OpenAI, Microsoft and Google have led generative AI tools and large language models like ChatGPT, Google Bard and Bing Chat to rise in popularity. As companies race to include AI-generation tools and other model-based features, evaluations can determine whether checks need to be put in place.

The announcement follows last month's news that the UK is spending £100 million (~$125.7 million) to launch a Foundational Model Taskforce. Prime Minister Rishi Sunak and Technology Secretary Michelle Donelan aim to create "sovereign" AI technology to help the economy without falling into ethical and logistical problems that have arisen with other programs.

Similar regulations and concerns are emerging in the US, with the Biden administration also announcing sweeping efforts to evaluate and regulate AI. The US will put $140 million toward seven new research and development centers within the National Science Foundation, has garnered commitments from key AI developers to publicly evaluate their systems at DEF CON 31 and has tasked the Office of Management and Budget with establishing AI policies for federal employees. The administration's statement comes ahead of Vice President Harris' meeting with the CEOs of Microsoft, OpenAI, Alphabet and Anthropic.