American companies have fostered ethical uses of AI before. Now, however, the government itself is poised to weigh in. Politico understands that the US, fellow members of the Organization for Economic Cooperation and Development and a "handful" of other countries will adopt a set of non-binding guidelines for creating and using AI. The principles would require that AI respect human rights, democratic values and the law. It should also be safe, open and obvious to users, while those who make and use AI should be held responsible for their actions and offer transparency.
The guidelines also call on governments to boost AI funding and establish frameworks that help turn research into real-world applications. They suggest, for example, that there could be "deregulated environments" for testing AI before unleashing it in the wild.
The guidelines are expected to be released on May 22nd and come from 50 experts across the public and private sectors, including governments and tech companies.
This cooperation isn't unexpected. President Trump pushed for regulations in his executive order prioritizing AI, and American companies and institutions have pressed for positive uses of AI, too. The larger concern is that this might not translate to real action. While the principles could help shape laws, there's no obligation to honor them. These guidelines also apply to only a small number of countries. China isn't included, and it's well-known for abusing AI to erode privacy and free speech. This is a nudge in the right direction, but only a nudge.