Retinal images could allow computers to predict a person’s risk of an imminent heart attack. Credit: Paul Parker/SPL
Eyes are said to be the window to the soul — but researchers at Google see them as indicators of a person’s health. The technology giant is using deep learning to predict a person’s blood pressure, age and smoking status by analysing a photograph of their retina. Google’s computers glean clues from the arrangement of blood vessels — and a preliminary study suggests that the machines can use this information to predict whether someone is at risk of an impending heart attack.
The research relied on a convolutional neural network, a type of deep-learning algorithm that is transforming how biologists analyse images. Scientists are using the approach to find mutations in genomes and predict variations in the layout of single cells. Google’s method, described in a preprint in August (R. Poplin et al. Preprint at https://arxiv.org/abs/1708.09843; 2017), is part of a wave of new deep-learning applications that are making image processing easier and more versatile — and could even identify overlooked biological phenomena.
“It was unrealistic to apply machine learning to many areas of biology before,” says Philip Nelson, a director of engineering at Google Research in Mountain View, California. “Now you can — but even more exciting, machines can now see things that humans might not have seen before.”
Convolutional neural networks allow computers to process an image efficiently and holistically, without splitting it into parts. The approach took off in the tech sector around 2012, enabled by advances in computer power and storage; for example, Facebook uses this type of deep learning to identify faces in photographs. But scientists struggled to apply the networks to biology, in part because of cultural differences between fields. “Take a group of smart biologists and put them in a room of smart computer scientists and they will talk two different languages to each other, and have different mindsets,” says Daphne Koller, chief computing officer at Calico — a biotechnology company in San Francisco, California, that is backed by Google’s parent, Alphabet.
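The core idea is that a convolutional network learns a set of shared filters that slide over the entire image, rather than relying on hand-crafted, region-by-region features. The sketch below is a minimal, illustrative example — not Google’s retinal model — showing how such a network might map an image to a single predicted score; the layer sizes and output are assumptions chosen for brevity.

```python
# Minimal illustrative sketch: a tiny convolutional network that processes an
# image as a whole with shared filters, ending in one predicted value
# (e.g. a single risk-factor score). Sizes are toy, not a published model.
import torch
import torch.nn as nn

class TinyConvNet(nn.Module):
    def __init__(self, num_outputs: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # shared filters slide over the whole image
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # pool each filter's response to one number
        )
        self.head = nn.Linear(32, num_outputs)           # map pooled features to the prediction

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        return self.head(h)

model = TinyConvNet()
fake_batch = torch.randn(4, 3, 224, 224)  # four RGB images, 224x224 pixels
print(model(fake_batch).shape)            # torch.Size([4, 1])
```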
Scientists also had to identify which types of study could be conducted using networks that must be trained with huge sets of images before they can start making predictions. When Google wanted to use deep learning to find mutations in genomes, its scientists had to convert strands of DNA letters into images that computers could recognize. Then they trained their network on DNA snippets that had been aligned with a reference genome, and whose mutations were known. The end result was DeepVariant, a tool released in December that can find small variations in DNA sequences. In tests, DeepVariant performed at least as well as conventional tools.
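To give a feel for the conversion step, the snippet below sketches one way aligned reads could be stacked into an image-like array that a convolutional network can consume. The channel layout and intensity values are assumptions made for illustration only; they are not DeepVariant’s actual encoding.

```python
# Illustrative only: turn a small pileup of aligned DNA reads into a
# grey-scale "image" (one row per read, one column per position).
import numpy as np

BASE_INTENSITY = {"A": 0.25, "C": 0.50, "G": 0.75, "T": 1.00, "-": 0.0}

def reads_to_image(reads, width):
    """Stack aligned reads into a (num_reads, width) array of base intensities."""
    image = np.zeros((len(reads), width), dtype=np.float32)
    for row, read in enumerate(reads):
        for col, base in enumerate(read[:width]):
            image[row, col] = BASE_INTENSITY.get(base, 0.0)
    return image

reads = ["ACGTACGT", "ACGAACGT", "ACGTACGT"]  # middle read carries a candidate variant
pileup = reads_to_image(reads, width=8)
print(pileup.shape)  # (3, 8) -- tiny here, but the same idea scales to real pileups
```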
Cell biologists at the Allen Institute for Cell Science in Seattle, Washington, are using convolutional neural networks to convert flat, grey images of cells captured with light microscopes into 3D images in which some of a cell’s organelles are labelled in colour. The approach eliminates the need to stain cells — a process that requires more time and a sophisticated lab, and can damage the cell. Last month, the group published details of an advanced technique that can predict the shape and location of even more cell parts using just a few pieces of data — such as the cell’s outline (G. R. Johnson et al. Preprint at bioRxiv http://doi.org/chwv; 2017).
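Conceptually this is an image-to-image task: the network takes an unstained, transmitted-light image and predicts what a fluorescence channel would look like. The toy encoder-decoder below sketches that setup under stated assumptions; the Allen Institute’s published models are far larger and more sophisticated.

```python
# A minimal sketch of a label-free, image-to-image setup: map a single-channel
# unstained microscope image to one predicted stain channel. Layer sizes are toy.
import torch
import torch.nn as nn

label_free_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # encode the transmitted-light image
    nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),  # decode to a predicted fluorescence channel
)

brightfield = torch.randn(1, 1, 128, 128)       # one unstained grey-scale image
predicted_stain = label_free_net(brightfield)
print(predicted_stain.shape)                    # torch.Size([1, 1, 128, 128])
```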
“What you’re seeing now is an unprecedented shift in how well machine learning can accomplish biological tasks that have to do with imaging,” says Anne Carpenter, director of the Imaging Platform at the Broad Institute of MIT and Harvard in Cambridge, Massachusetts. In 2015, her interdisciplinary team began to process cell images using convolutional neural networks; now, Carpenter says, the networks process about 15% of image data at her centre. She predicts that the approach will become the centre’s main mode of processing in a few years.
Others are most excited by the idea that analysing images with convolutional neural networks could inadvertently reveal subtle biological phenomena, prompting biologists to ask questions they might not have considered before. “The most interesting phrase in science isn’t ‘Eureka!’, but ‘That’s weird — what’s going on?’” Nelson says.
Such serendipitous discoveries could help to advance disease research, says Rick Horwitz, the Allen Institute’s executive director. If deep learning can reveal subtle markers of cancer in an individual cell, he says, it could help to improve how researchers classify tumour progression. That could in turn trigger new hypotheses about how cancer spreads.
Other machine-learning connoisseurs in biology have set their sights on new frontiers, now that convolutional neural networks are taking flight for image processing. “Imaging is important, but so is chemistry and molecular data,” says Alex Wolf, a computational biologist at the German Research Center for Environmental Health in Neuherberg. Wolf hopes to tweak neural networks so that they can analyse gene expression. “I think there will be a very big breakthrough in the next few years,” he says, “that allows biologists to apply neural networks much more broadly.”
doi: 10.1038/d41586-018-00004-w