According to research published in the journal Circulation: Cardiovascular Imaging, cardiac MRI analysis using machine learning algorithms can be performed significantly faster than, and with precision similar to, analysis by human experts.
In the study, researchers trained a neural network to read cardiac MRI scans. Using artificial intelligence, a scan can be analyzed in approximately four seconds, compared with 13 minutes for a human reviewer. When the AI was tested for precision, researchers found no significant difference compared with humans.
Researchers have made the data used in this study available at thevolumeresource.com. The resource is also intended to test and validate new cardiac MRI post-processing technology and machine learning techniques.
Early diagnosis of Alzheimer’s is important as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.
Credit: Radiological Society of North America
“Differences in the pattern of glucose uptake in the brain are very subtle and diffuse,” said study co-author Jae Ho Sohn, M.D., from the Radiology & Biomedical Imaging Department at the University of California, San Francisco (UCSF). “People are good at finding specific biomarkers of disease, but metabolic changes represent a more global and subtle process.”
The researchers trained the deep learning algorithm on a special imaging technology known as 18F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.
The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve the prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.
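The 90/10 hold-out evaluation described above is a standard pattern. A minimal sketch of such a split in Python (the item list is just a stand-in for the ~2,100 ADNI images, not the actual data):

```python
import random

def train_test_split(items, test_frac=0.10, seed=0):
    """Shuffle a dataset and hold out test_frac of it for evaluation."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    n_test = int(len(shuffled) * test_frac)
    return shuffled[n_test:], shuffled[:n_test]

# Stand-in for the ~2,100 FDG-PET brain images in the ADNI dataset.
images = list(range(2100))
train, test = train_test_split(images)
print(len(train), len(test))  # 1890 210
```

In practice, medical-imaging splits are usually made at the patient level rather than the image level, so that no patient's scans appear in both sets.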
Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
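Sensitivity, the share of true disease cases the classifier flags, is the metric behind that 100 percent figure. A toy computation (invented labels, not the study's data):

```python
def sensitivity(true_labels, predictions):
    """TP / (TP + FN): the fraction of actual positives that were flagged."""
    flagged = [p for t, p in zip(true_labels, predictions) if t == 1]
    return sum(flagged) / len(flagged)

# 100 percent sensitivity means no true case is missed, even though
# false positives (the fourth case below) may still occur.
y_true = [1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 0]
print(sensitivity(y_true, y_pred))  # 1.0
```

Note that a classifier that flags everyone also scores 100 percent sensitivity, which is why specificity is normally reported alongside it.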
“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”
Although he cautioned that their independent test set was small and needs further validation with a larger, multi-institutional prospective study, Dr. Sohn said the algorithm could be a useful tool to complement the work of radiologists, especially in conjunction with other biochemical and imaging tests, in providing an opportunity for early therapeutic intervention.
Future research directions include training the deep learning algorithm to look for patterns associated with the accumulation of beta-amyloid and tau proteins, abnormal protein clumps and tangles in the brain that are markers specific to Alzheimer’s disease, according to UCSF’s Youngho Seo, Ph.D., who served as one of the faculty advisors of the study.
Citation: Yiming Ding, Jae Ho Sohn, Michael G. Kawczynski, Hari Trivedi, Roy Harnish, Nathaniel W. Jenkins, Dmytro Lituiev, Timothy P. Copeland, Mariam S. Aboian, Carina Mari Aparici, Spencer C. Behr, Robert R. Flavell, Shih-Ying Huang, Kelly A. Zalocusky, Lorenzo Nardo, Youngho Seo, Randall A. Hawkins, Miguel Hernandez Pampaloni, Dexter Hadley, and Benjamin L. Franc. “A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain.” Radiology, 2018, 180958. doi:10.1148/radiol.2018180958.
A team of researchers from the USC Viterbi School of Engineering has created an algorithm that can help policymakers reduce the overall spread of disease. The algorithm is optimized to make the most of limited resources, such as advertising budgets, helping cash-strapped public health agencies.
To create the artificial intelligence algorithm, the researchers used behavioral, demographic and epidemic disease trend data to generate a model of disease spread that captures underlying population dynamics and contact patterns between people. Using computer simulations, they tested the algorithm on tuberculosis (TB) spread in India and gonorrhea in the United States. In both cases, they found the algorithm did a better job of reducing disease cases than current health outreach policies by sharing information about these diseases with the individuals most at risk.
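The core optimization problem here, spending a fixed outreach budget where it averts the most infections, can be illustrated with a toy greedy allocator. The group names and averted-infection curves below are invented, and greedy allocation under diminishing returns is a deliberate simplification, not the paper's actual algorithm:

```python
import math

def allocate(budget, groups, step=1.0):
    """Greedily spend each budget increment where it averts the most infections.

    `groups` maps a group name to a curve giving total infections averted
    as a function of spending; marginal benefit is assumed to diminish.
    """
    spend = {g: 0.0 for g in groups}
    for _ in range(int(budget / step)):
        best = max(groups, key=lambda g: groups[g](spend[g] + step)
                                         - groups[g](spend[g]))
        spend[best] += step
    return spend

# Hypothetical averted-infection curves with diminishing returns.
groups = {
    "urban": lambda x: 30 * math.log1p(x),
    "rural": lambda x: 12 * math.log1p(x),
}
print(allocate(10, groups))  # {'urban': 8.0, 'rural': 2.0}
```

Even this toy version puts most, but not all, of the budget into the higher-impact group, echoing the finding that the best policy is more nuanced than simply funding the group with the most cases.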
The study was presented at the AAAI Conference on Artificial Intelligence. The authors are Bryan Wilder, a PhD candidate in computer science; Milind Tambe, the Helen N. and Emmett H. Jones Professor in Engineering, professor of computer science and industrial and systems engineering, and co-founder of the USC Center for AI in Society; and Sze-chuan Suen, an assistant professor in industrial and systems engineering. “Our study shows that a sophisticated algorithm can substantially reduce disease spread overall,” says Wilder, the first author of the paper. “We can make a big difference, and even save lives, just by being a little bit smarter about how we use resources and share health information with the public.”
The algorithm also appeared to make more strategic use of resources. The team found it concentrated heavily on particular groups and did not simply allocate more budget to groups with a high prevalence of the disease. This seems to indicate that the algorithm is leveraging non-obvious patterns and taking advantage of sometimes-subtle interactions between variables that humans may not be able to pinpoint. The team’s mathematical models also take into account that people move, age, and die, reflecting more realistic population dynamics than many existing algorithms for disease control.
Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They made a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.
This is a dermatologist using a dermatoscope. Credit: Matt Young
“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” said Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'”
The final product, the subject of a paper in Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.
Every year there are about 5.4 million new cases of skin cancer in the United States, and while the five-year survival rate for melanoma detected in its earliest stages is around 97 percent, that drops to approximately 14 percent if it’s detected in its latest stages. Early detection could likely have an enormous impact on skin cancer outcomes.
Diagnosing skin cancer begins with a visual examination. A dermatologist usually looks at the suspicious lesion with the naked eye and with the aid of a dermatoscope, which is a handheld microscope that provides low-level magnification of the skin. If these methods are inconclusive or lead the dermatologist to believe the lesion is cancerous, a biopsy is the next step.
Bringing this algorithm into the examination process follows a trend in computing that combines visual processing with deep learning, a type of artificial intelligence modeled after neural networks in the brain. Deep learning has a decades-long history in computer science but it only recently has been applied to visual processing tasks, with great success. The essence of machine learning, including deep learning, is that a computer is trained to figure out a problem rather than having the answers programmed into it.
“We made a very powerful machine learning algorithm that learns from data,” said Andre Esteva, co-lead author of the paper and a graduate student in the Thrun lab. “Instead of writing into computer code exactly what to look for, you let the algorithm figure it out.”
The algorithm was fed each image as raw pixels with an associated disease label. Compared to other methods for training algorithms, this one requires very little processing or sorting of the images prior to classification, allowing the algorithm to work off a wider variety of data. Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that had already been trained on 1.28 million images spanning 1,000 object categories. While it was primed to differentiate cats from dogs, the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
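That transfer-learning step, keeping a pretrained network's feature layers frozen and retraining only a new classification head on the medical labels, can be sketched in miniature. Everything below is synthetic: the frozen "extractor" is a random projection standing in for the Google-trained network, and the "images" are random vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the pretrained network's frozen layers. In the study this
# was a Google-trained image model; here it is just a fixed projection.
W_frozen = 0.3 * rng.normal(size=(8, 16))

def extract_features(x):
    # Frozen: these weights are never updated during fine-tuning.
    return np.tanh(x @ W_frozen)

# Synthetic "lesion images" (8 numbers each) and binary labels.
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Train only the new head (a logistic regression) on the frozen features.
feats = extract_features(X)
w = np.zeros(16)
for _ in range(500):                      # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(feats @ w)))
    w -= 0.1 * feats.T @ (p - y) / len(y)

accuracy = np.mean(((feats @ w) > 0) == (y == 1))
print(f"retrained head accuracy: {accuracy:.2f}")
```

The point of the sketch is the division of labor: the pretrained weights never change, and only the small head `w` is fit to the new task's labels.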
“There’s no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own,” said Brett Kuprel, co-lead author of the paper and a graduate student in the Thrun lab. “We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy – the labels alone were in several languages, including German, Arabic and Latin.”
After going through the necessary translations, the researchers collaborated with dermatologists at Stanford Medicine, as well as Helen M. Blau, professor of microbiology and immunology at Stanford and co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions representing over 2,000 different diseases.
During testing, the researchers used only high-quality, biopsy-confirmed images provided by the University of Edinburgh and the International Skin Imaging Collaboration Project that represented the most common and deadliest skin cancers: malignant carcinomas and malignant melanomas. The 21 dermatologists were asked whether, based on each image, they would proceed with biopsy or treatment, or reassure the patient. The researchers evaluated success by how well the dermatologists were able to correctly diagnose both cancerous and non-cancerous lesions in over 370 images.
The algorithm’s performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In all three tasks, the algorithm matched the performance of the dermatologists with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph.
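The curve itself is built by sweeping the decision threshold over the algorithm's malignancy scores, and the area under it (AUC) is the "at least 91 percent" figure. A self-contained sketch with invented scores:

```python
def roc_points(scores, labels):
    """Sweep the decision threshold to trace a sensitivity-specificity curve."""
    pairs = sorted(zip(scores, labels), reverse=True)
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    pts = [(0.0, 0.0)]
    for _, lab in pairs:
        if lab:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))  # (1 - specificity, sensitivity)
    return pts

def auc(pts):
    """Trapezoidal area under the curve."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Toy malignancy scores: higher means "more likely malignant".
scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,    0,   0,   0,   1,   0]
print(f"AUC = {auc(roc_points(scores, labels)):.2f}")  # AUC = 0.80
```

Each point on the curve corresponds to one threshold choice, which is also why the algorithm's sensitivity can be tuned up or down.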
An added advantage of the algorithm is that, unlike a person, the algorithm can be made more or less sensitive, allowing the researchers to tune its response depending on what they want it to assess. This ability to alter the sensitivity hints at the depth and complexity of this algorithm. The underlying architecture of seemingly irrelevant photos — including cats and dogs — helps it better evaluate the skin lesion images.
Although this algorithm currently exists on a computer, the team would like to make it smartphone compatible in the near future, bringing reliable skin cancer diagnoses to our fingertips.
“My main eureka moment was when I realized just how ubiquitous smartphones will be,” said Esteva. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?”
The team believes it will be relatively easy to transition the algorithm to mobile devices but there still needs to be further testing in a real-world clinical setting.
“Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients,” said Susan Swetter, professor of dermatology and director of the Pigmented Lesion and Melanoma Program at the Stanford Cancer Institute, and co-author of the paper. “However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike.”
Even in light of the challenges ahead, the researchers are hopeful that deep learning could someday contribute to visual diagnosis in many medical fields.
Citation: Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau and Sebastian Thrun. “Dermatologist-level classification of skin cancer with deep neural networks.” Nature, 2017. doi:10.1038/nature21056. Adapted from a press release by Stanford University.
UC San Francisco’s Center for Digital Health Innovation (CDHI) today announced a collaboration with Intel Corporation to deploy and validate a deep learning analytics platform designed to improve care by helping clinicians make better treatment decisions, predict patient outcomes, and respond more nimbly in acute situations.
The collaboration brings together Intel’s leading-edge computer science and deep learning capabilities with UCSF’s clinical and research expertise to create a scalable, high-performance computational environment to support enhanced frontline clinical decision making for a wide variety of patient care scenarios. Until now, progress toward this goal has been difficult because complex, diverse datasets are managed in multiple, incompatible systems. This next-generation platform will allow UCSF to efficiently manage the huge volume and variety of data collected for clinical care as well as newer “big data” from genomic sequencing, monitors, sensors, and wearables. These data will be integrated into a highly scalable “information commons” that will enable advanced analytics with machine learning and deep learning algorithms. The end result will be algorithms that can rapidly support data-driven clinical decision-making.
“While artificial intelligence and machine learning have been integrated into our everyday lives, our ability to use them in healthcare is a relatively new phenomenon,” said Michael Blum, MD, associate vice chancellor for informatics, director of CDHI and professor of medicine at UCSF. “Now that we have ‘digitized’ healthcare, we can begin utilizing the same technologies that have made the driverless car and virtual assistants possible and bring them to bear on vexing healthcare challenges such as predicting health risks, preventing hospital readmissions, analyzing complex medical images and more. Deep learning environments are capable of rapidly analyzing and predicting patient trajectories utilizing vast amounts of multi-dimensional data. By integrating deep learning capabilities into the care delivered to critically injured patients, providers will have access to real-time decision support that will enable timely decision making in an environment where seconds are the difference between life and death. We expect these technologies, combined with the clinical and scientific knowledge of UCSF, to be made accessible through the cloud to drive the transformation of health and healthcare.”
UCSF and Intel will work together to deploy the high-performance computing environment on industry-standard Intel® Xeon® processor-based platforms that will support the data management and algorithm development lifecycle, including data curation and annotation, algorithm training, and testing against labeled datasets with particular pre-specified outcomes. The collaboration will also allow UCSF and Intel to better understand how deep learning analytics and machine-driven workflows can be employed to optimize the clinical environment and patient outcomes. This work will inform Intel’s development and testing of new platform architectures for the healthcare industry. “This collaboration between Intel and UCSF will accelerate the development of deep learning algorithms that have great potential to benefit patients,” said Kay Eron, general manager of health and life sciences in Intel’s Data Center Group. “Combining the medical science and computer science expertise across our organizations will enable us to more effectively tackle barriers in directing the latest technologies toward critical needs in healthcare.”
The platform will enable UCSF’s deep learning use cases to run in a distributed fashion on a central processing unit (CPU)-based cluster. The platform will be able to handle large data sets and scale easily for future use case requirements, including supporting larger convolutional neural network models, artificial networks patterned after living organisms, and very large multidimensional datasets. In the future, Intel expects to incorporate the deep learning analytics platform with other Intel analytics frameworks, healthcare data sources, and application program interfaces (APIs) – code that allows different programs to communicate – to create increasingly sophisticated use case algorithms that will continue to raise the bar in health and healthcare.
Duke researchers have devised a computerized method to autonomously and quickly diagnose malaria with clinically relevant accuracy — a crucial step to successfully treating the disease and halting its spread.
In 2015 alone, malaria infected 214 million people worldwide, killing an estimated 438,000. While Western medicine can spot malaria with near-perfect accuracy, it can be difficult to diagnose in resource-limited areas where infection rates are highest.
Malaria’s symptoms can look like those of many other diseases, and there are simply not enough well-trained field workers and functioning microscopes to keep pace with the parasite. While rapid diagnostic tests do exist, it is expensive to continuously purchase new tests. These tests also cannot tell how severe the infection is by tallying the number of infected cells, which is important for managing a patient’s recovery.

In a new study, engineers from Duke University report a method that uses computer ‘deep learning’ and light-based, holographic scans to spot malaria-infected cells from a simple, untouched blood sample without any help from a human. The innovation could form the basis of a fast, reliable test that could be given by most anyone, anywhere in the field, which would be invaluable in the $2.7 billion-per-year global fight against the disease. The results were published online Sept. 16 in the journal PLOS ONE.

“With this technique, the path is there to be able to process thousands of cells per minute,” said Adam Wax, professor of biomedical engineering at Duke. “That’s a huge improvement over the 40 minutes it currently takes a field technician to stain, prepare and read a slide to personally look for infection.”
Cells in different stages of infection as analyzed by a new algorithm.
The new technique is based on a technology called quantitative phase spectroscopy. As a laser sweeps through the visible spectrum of light, sensors capture how each discrete light frequency interacts with a sample of blood. The resulting data captures a holographic image that provides a wide array of valuable information that can indicate a malarial infection.
“We identified 23 parameters that are statistically significant for spotting malaria,” said Han Sang Park, a doctoral student in Wax’s laboratory and first author on the paper. For example, as the disease progresses, red blood cells decrease in volume, lose hemoglobin and deform as the parasite within grows larger. This affects features such as cell volume, perimeter, shape and center of mass. “However, none of the parameters were reliable more than 90 percent of the time on their own, so we decided to use them all,” said Park.
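Why combining 23 individually unreliable parameters helps can be seen with a toy simulation. A simple majority vote over 23 independent, 85-percent-accurate signals is almost never wrong (a deliberate simplification: the study fed its parameters to a deep learning model rather than taking a vote):

```python
import random

rng = random.Random(42)

def weak_call(is_infected, accuracy=0.85):
    """One parameter's noisy verdict: correct `accuracy` of the time."""
    return is_infected if rng.random() < accuracy else not is_infected

def combined_call(is_infected, n_params=23):
    """Majority vote across all 23 parameters."""
    votes = [weak_call(is_infected) for _ in range(n_params)]
    return sum(votes) > n_params / 2

cells = [rng.random() < 0.5 for _ in range(10_000)]  # toy: half infected
single = sum(weak_call(c) == c for c in cells) / len(cells)
combined = sum(combined_call(c) == c for c in cells) / len(cells)
print(f"one parameter: {single:.1%}   all 23 combined: {combined:.1%}")
```

For 23 votes to beat one, the errors must be at least partly independent, which is why the team chose parameters that capture different physical features of the cell.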
“To be adopted, any new diagnostic device has to be just as reliable as a trained field worker with a microscope,” said Wax. “Otherwise, even with a 90 percent success rate, you’d still miss more than 20 million cases a year.”
To get a more accurate reading, Wax and Park turned to deep learning — a method by which computers teach themselves how to distinguish between different objects. By feeding data on more than 1,000 healthy and diseased cells into a computer, the deep learning program determined which sets of measurements at which thresholds most clearly distinguished healthy from diseased cells.
When they put the resulting algorithm to the test with hundreds of cells, it was able to correctly spot malaria 97 to 100 percent of the time, a number the researchers believe will increase as more cells are used to train the program. Because the technique reduces each data-rich hologram to just 23 numbers, results can easily be transmitted in bulk, which is important for locations that often lack reliable, fast internet connections. That, in turn, could eliminate the need for each location to have its own computer for processing.
Wax and Park are now looking to develop the technology into a diagnostic device through a startup company called M2 Photonics Innovations. They hope to show that a device based on this technology would be accurate and cost-efficient enough to be useful in the field. Wax has also received funding to begin exploring the use of the technique for spotting cancerous cells in blood samples.