Google's AI can detect breast cancer more accurately than experts

The AI screened with fewer false positives and false negatives than humans.

DeepMind, a UK-based artificial intelligence company purchased by Google in 2014, has turned its sights to the problem of breast cancer detection. Although breast cancer is the most common type of cancer among women, detection is difficult due to high rates of false positives (when a mammogram is judged to be abnormal even though no cancer is present), which cause distress and can lead to unnecessary medical interventions. DeepMind has developed an AI model that can identify breast cancer from scans with fewer false positives and false negatives (when cancer is present but isn't detected) than experts.
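As an illustrative aside (this is not the study's methodology, and the counts below are invented for the example), false positive and false negative rates are computed from the four outcomes of a screening test:

```python
def error_rates(tp, fp, tn, fn):
    """Return (false_positive_rate, false_negative_rate).

    false positive rate: fraction of cancer-free patients whose
        mammogram was flagged as abnormal.
    false negative rate: fraction of patients with cancer whose
        mammogram was read as normal.
    """
    fpr = fp / (fp + tn)
    fnr = fn / (fn + tp)
    return fpr, fnr

# Hypothetical outcomes for 1,000 screened patients
fpr, fnr = error_rates(tp=45, fp=95, tn=855, fn=5)
print(f"False positive rate: {fpr:.1%}")  # 10.0%
print(f"False negative rate: {fnr:.1%}")  # 10.0%
```

The percentage reductions reported below are relative improvements in rates like these, compared against human readers on the same scans.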

The company trained its AI using de-identified data from patients in both the US and the UK, and showed that it could reduce false positives by 5.7 percent and false negatives by 9.4 percent in the US. Interestingly, the reductions seen in the UK were smaller -- 1.2 percent and 2.7 percent, respectively -- suggesting that the current US detection system is less accurate than the current UK system.

Unlike the human experts, who used patient histories and prior mammograms to make their assessments, the AI only had access to the most recent mammogram of each patient. Despite this, it was able to make screening decisions with greater accuracy than the experts, and the model could be generalized to different populations -- such as women in the US compared to women in the UK.

The developers of the AI emphasize that this is early stage research and that more studies and cooperation with healthcare providers will be required before the system is ready for widespread use.

DeepMind's technology has been used in the past for medical purposes, from spotting eye diseases to predicting kidney illness. However, the company has also been the subject of considerable controversy. In 2017, it was revealed that the UK's National Health Service had shared data with DeepMind on an "inappropriate legal basis," with the company receiving 1.6 million patient records without the direct consent of the patients. The UK data watchdog ruled that this broke privacy laws, and the NHS chose to continue working with DeepMind but to anonymize data in future.

In 2018, DeepMind was brought under the Google Health initiative, and concerns about privacy were not assuaged when Google dissolved the review board that was supposed to oversee the company's relationship with the NHS. For all the potential good that could be done with medical AI like DeepMind's, there remains a concerning lack of oversight over the privacy of patient data and a lack of accountability for past data privacy issues.


