Image credit: Vladimir Obradovic via Getty Images

The EU releases guidelines to encourage ethical AI development

They list seven requirements for trustworthy AI.

No technology raises ethical concerns (and outright fear) quite like artificial intelligence. And it's not just individual citizens who are worried. Facebook, Google and Stanford University have invested in AI ethics research centers. Late last year, Canada and France teamed up to create an international panel to discuss AI's "responsible adoption." Today, the European Commission released its own guidelines calling for "trustworthy AI."

According to the EU, AI should adhere to the basic ethical principles of respect for human autonomy, prevention of harm, fairness and accountability. The guidelines include seven requirements -- listed below -- and call particular attention to protecting vulnerable groups, like children and people with disabilities. They also state that citizens should have full control over their data.

The European Commission recommends using an assessment list when developing or deploying AI, but the guidelines aren't meant to be -- or interfere with -- policy or regulation. Instead, they offer a loose framework. This summer, the Commission will work with stakeholders to identify areas where additional guidance might be necessary and figure out how to best implement and verify its recommendations. In early 2020, the expert group will incorporate feedback from the pilot phase. As we develop the potential to build things like autonomous weapons and fake news-generating algorithms, it's likely more governments will take a stand on the ethical concerns AI brings to the table.

A summary of the EU's guidelines is below, and you can read the full PDF here.

  • Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
  • Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
  • Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
  • Transparency: The traceability of AI systems should be ensured.
  • Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
  • Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
  • Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.