ethics
OpenAI fires CEO Sam Altman as 'board no longer has confidence' in his leadership
OpenAI's board of directors announced that CEO Sam Altman is leaving both the company and the board, effective immediately.
Microsoft will phase out facial recognition AI that could detect emotions
Microsoft is shelving facial recognition AI it says could detect your emotions and age.
Engadget Podcast: Google's AI isn't sentient but we must examine the ethics
This week, Devindra and Cherlynn dig into Google engineer Blake Lemoine’s interview with the Washington Post and his belief that the company’s LaMDA language model is alive.
Apple hires former Google AI scientist who left after ethics turmoil
Apple has hired former Google Brain research manager Samy Bengio, who left the company after its firings of two female AI researchers.
Google has fired another AI ethics leader
Google has fired AI ethics researcher Margaret Mitchell, just a couple of months after forcing out her co-lead Timnit Gebru.
Hitting the Books: What do we want our AI-powered future to look like?
For example, the artificial intelligence principles of the Organisation for Economic Co-operation and Development emphasize the human ability to challenge AI-based outcomes. In my opinion, we desperately need an "off" switch for all AI and robotics.
Fairphone 3 can now be bought with a ‘de-googled’ OS
Do you want an ethical smartphone that blocks Google's services? Fairphone seems to think so.
IBM and Microsoft support the Vatican’s guidelines for ethical AI
IBM and Microsoft have signed the Vatican's "Rome Call for AI Ethics," a pledge to develop artificial intelligence in a way that protects all people and the planet, Financial Times reports. Microsoft President Brad Smith and John Kelly, IBM's executive vice-president, are among the first global tech leaders to sign the document.
White House cautions against over-regulating AI in new guidelines
Today, the White House proposed 10 principles for federal agencies to consider when regulating artificial intelligence, Reuters reports. The guidelines stress limiting regulatory "overreach" and encourage Europe and other allies to "avoid heavy handed innovation-killing models."
Former Google exec says he was pushed out for defending human rights
Google's former global head of international relations claims he was pushed out of the company for trying to protect free expression and privacy in China. In a letter shared today, Ross LaJeunesse says that, after 11 years of working to protect human rights in China, he was told there was no longer a job for him as a result of "reorganization." He says the company has strayed from its "don't be evil" motto, and rather than take a lesser role, he's leaving to run for a Senate seat in Maine.
Pentagon's draft AI ethics guidelines fight bias and rogue machines
Tech companies might have trouble establishing groundwork for the ethical use of AI, but the Defense Department appears to be moving forward. The Defense Innovation Board just published draft guidelines for AI ethics at the Defense Department that aim to keep the emerging technology in check. Some of them are more practical (such as demanding reliability) or have roots in years-old policies (demanding human responsibility at every stage), but others are relatively novel for both the public and private spheres.
Fairphone 3 is the 'ethical' smartphone you might actually buy
Doing the right thing is often framed as giving up something. You're not enjoying a vegetarian burger, you're being denied the delights of red meat. But what if the ethical, moral, right choice was also the tastiest one? What if the smartphone made by the yurt-dwelling moralists was also good-looking, inexpensive and useful? That's the question the Fairphone 3 poses.
Axon won’t use facial recognition tech in its police body cameras
Axon, a major supplier of police body cameras and software, announced today that it will not include face-matching technology in its body cameras -- at least not yet. The decision follows a report from Axon's independent AI ethics board, which concluded that face recognition technology is not reliable enough to justify its use in body cameras. According to the report, there is "evidence of unequal and unreliable performance across races, ethnicities, genders and other identity groups."
US to back international guidelines for AI ethics
American companies have fostered ethical uses of AI before. Now, however, the government itself is poised to weigh in. Politico understands that the US, fellow members of the Organisation for Economic Co-operation and Development and a "handful" of other countries will adopt a set of non-binding guidelines for creating and using AI. The principles would require that AI respects human rights, democratic values and the law. It should also be safe, open and obvious to users, while those who make and use AI should be held responsible for their actions and offer transparency.
The EU releases guidelines to encourage ethical AI development
No technology raises ethical concerns (and outright fear) quite like artificial intelligence. And it's not just individual citizens who are worried. Facebook, Google and Stanford University have invested in AI ethics research centers. Late last year, Canada and France teamed up to create an international panel to discuss AI's "responsible adoption." Today, the European Commission released its own guidelines calling for "trustworthy AI."
Google forms an external council to foster 'responsible' AI
Google is joining Facebook, Stanford and other outfits setting up institutions to support ethical AI. The company has created an Advanced Technology External Advisory Council that will shape the "responsible development and use" of AI in its products. The organization will ponder facial recognition, fair machine learning algorithms and other ethical issues. The initial council is a diverse group that tackles a range of disciplines and experiences.
Microsoft workers demand end to HoloLens contract with US Army
You can add Microsoft to the growing list of companies whose staff are objecting to the use of their technology for some military purposes. A group of Microsoft workers has published an open letter to CEO Satya Nadella and legal chief Brad Smith asking them to end a $479 million HoloLens contract with the US Army. They contended that Microsoft is effectively developing weapons by helping the Army create a platform that helps its soldiers train and fight using augmented reality. It not only helps kill people, but turns war "into a simulated 'video game'" that disconnects infantry from the "grim stakes" of combat, the workers argued.
CRISPR gene-editing experiment may have impacted twins' brains
Researchers have published research into a gene at the heart of a controversial human gene-editing experiment, lending more weight to the theory that it inhibits cognitive function. But no one knows how the method may have affected the minds of the Chinese twins at the center of the issue. One scientist involved in the study, University of California, Los Angeles neurobiologist Alcino J. Silva, said the "mutations will probably have an impact on cognitive function," but it's impossible as yet to predict the precise effects. The CRISPR-Cas9 gene-editing technique has previously been linked with unintended DNA damage.
AI can write disturbingly believable fake news
AI is getting better and better at writing convincing material, and that's leading its creators to wonder whether they should release the technology in the first place. Elon Musk's OpenAI has developed an algorithm that can generate plausible-looking fake news stories on any topic using just a handful of words as a starting point. It was originally designed as a generalized language AI that could answer questions, summarize stories and translate text, but researchers soon realized that it could be used for far more sinister purposes, like pumping out disinformation in large volumes. As a result, the team only plans to make a "simplified version" of its AI available to the public, according to MIT Technology Review.
Scientists clone gene-edited monkey for circadian disorder research
Scientists in China announced this week that they've successfully made five clones of a gene-edited monkey to aid in researching a number of conditions relating to circadian rhythms. The idea is that having a group of five genetically identical monkeys will help remove variables in research, but the whole experiment raises some rather murky ethical issues as well. Researchers at the Institute of Neuroscience (ION) of the Chinese Academy of Sciences (CAS) in Shanghai initially gene-edited a group of monkeys to make them more prone to disorders that stem from circadian rhythms. Because of this gene editing, the monkeys "exhibited a wide range of circadian disorder phenotypes, including reduced sleep time, elevated night-time locomotive activities, dampened circadian cycling of blood hormones, increased anxiety and depression, as well as schizophrenia-like behaviors." The researchers then used fibroblasts (connective tissue cells) from one monkey in that group to produce the five clones using the same technique that successfully produced the first primate clones in early 2018.