Early diagnosis of Alzheimer’s disease using artificial intelligence

According to a study published in the journal Radiology, artificial intelligence (AI) technology can predict the development of Alzheimer’s disease early.

Early diagnosis of Alzheimer’s is important as treatments and interventions are more effective early in the course of the disease. However, early diagnosis has proven to be challenging. Research has linked the disease process to changes in metabolism, as shown by glucose uptake in certain regions of the brain, but these changes can be difficult to recognize.

Credit: Radiological Society of North America

“Differences in the pattern of glucose uptake in the brain are very subtle and diffuse,” said study co-author Jae Ho Sohn, M.D., from the Radiology & Biomedical Imaging Department at the University of California, San Francisco (UCSF). “People are good at finding specific biomarkers of disease, but metabolic changes represent a more global and subtle process.”

The researchers trained the deep learning algorithm on a special imaging technology known as 18F-fluorodeoxyglucose positron emission tomography (FDG-PET). In an FDG-PET scan, FDG, a radioactive glucose compound, is injected into the blood. PET scans can then measure the uptake of FDG in brain cells, an indicator of metabolic activity.

The researchers had access to data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI), a major multi-site study focused on clinical trials to improve the prevention and treatment of this disease. The ADNI dataset included more than 2,100 FDG-PET brain images from 1,002 patients. Researchers trained the deep learning algorithm on 90 percent of the dataset and then tested it on the remaining 10 percent of the dataset. Through deep learning, the algorithm was able to teach itself metabolic patterns that corresponded to Alzheimer’s disease.
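As a rough illustration, a 90/10 split like the one described can be sketched in a few lines of Python. The patient count comes from the article, but the patient-level grouping and all variable names below are our own assumptions, not details from the study:

```python
import numpy as np

# Hypothetical sketch of a 90/10 train/test split, grouped by patient so the
# same person's scans never land in both sets (the grouping is our assumption).
rng = np.random.default_rng(0)
patient_ids = np.arange(1002)        # ~1,002 patients in the ADNI dataset
rng.shuffle(patient_ids)

split = int(0.9 * len(patient_ids))
train_patients = set(patient_ids[:split])
test_patients = set(patient_ids[split:])

print(len(train_patients), len(test_patients))
```

Splitting by patient rather than by individual scan avoids the same person’s images leaking into both training and test sets.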

Finally, the researchers tested the algorithm on an independent set of 40 imaging exams from 40 patients that it had never studied. The algorithm achieved 100 percent sensitivity at detecting the disease an average of more than six years prior to the final diagnosis.
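Sensitivity, the figure reported above, is the share of true disease cases that the algorithm flags. A minimal sketch with invented labels:

```python
# Sensitivity (recall) = true positives / (true positives + false negatives).
# The labels here are made up; the study reported 100 percent sensitivity
# on its 40-exam hold-out set.
def sensitivity(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else float("nan")

y_true = [1, 1, 0, 1, 0]   # 1 = went on to develop the disease
y_pred = [1, 1, 1, 1, 0]   # algorithm's calls
print(sensitivity(y_true, y_pred))  # all 3 positives caught -> 1.0
```

Note that sensitivity says nothing about false alarms; that is measured separately, as specificity.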

“We were very pleased with the algorithm’s performance,” Dr. Sohn said. “It was able to predict every single case that advanced to Alzheimer’s disease.”

Although he cautioned that their independent test set was small and needs further validation with a larger multi-institutional prospective study, Dr. Sohn said that the algorithm could be a useful tool to complement the work of radiologists, especially in conjunction with other biochemical and imaging tests, in providing an opportunity for early therapeutic intervention.

Future research directions include training the deep learning algorithm to look for patterns associated with the accumulation of beta-amyloid and tau proteins, abnormal protein clumps and tangles in the brain that are markers specific to Alzheimer’s disease, according to UCSF’s Youngho Seo, Ph.D., who served as one of the faculty advisors of the study.

Citation: Yiming Ding, Jae Ho Sohn, Michael G. Kawczynski, Hari Trivedi, Roy Harnish, Nathaniel W. Jenkins, Dmytro Lituiev, Timothy P. Copeland, Mariam S. Aboian, Carina Mari Aparici, Spencer C. Behr, Robert R. Flavell, Shih-Ying Huang, Kelly A. Zalocusky, Lorenzo Nardo, Youngho Seo, Randall A. Hawkins, Miguel Hernandez Pampaloni, Dexter Hadley, and Benjamin L. Franc. “A Deep Learning Model to Predict a Diagnosis of Alzheimer Disease by Using 18F-FDG PET of the Brain.” Radiology, 2018, 180958.
doi:10.1148/radiol.2018180958.


Collaboration between UCSF, Intel to develop deep learning analytics for healthcare

UC San Francisco’s Center for Digital Health Innovation (CDHI) today announced a collaboration with Intel Corporation to deploy and validate a deep learning analytics platform designed to improve care by helping clinicians make better treatment decisions, predict patient outcomes, and respond more nimbly in acute situations.

The collaboration brings together Intel’s leading-edge computer science and deep learning capabilities with UCSF’s clinical and research expertise to create a scalable, high-performance computational environment to support enhanced frontline clinical decision making for a wide variety of patient care scenarios. Until now, progress toward this goal has been difficult because complex, diverse datasets are managed in multiple, incompatible systems. This next-generation platform will allow UCSF to efficiently manage the huge volume and variety of data collected for clinical care as well as newer “big data” from genomic sequencing, monitors, sensors, and wearables. These data will be integrated into a highly scalable “information commons” that will enable advanced analytics with machine learning and deep learning algorithms. The end result will be algorithms that can rapidly support data-driven clinical decision-making.

“While artificial intelligence and machine learning have been integrated into our everyday lives, our ability to use them in healthcare is a relatively new phenomenon,” said Michael Blum, MD, associate vice chancellor for informatics, director of CDHI and professor of medicine at UCSF. “Now that we have ‘digitized’ healthcare, we can begin utilizing the same technologies that have made the driverless car and virtual assistants possible and bring them to bear on vexing healthcare challenges such as predicting health risks, preventing hospital readmissions, analyzing complex medical images and more. Deep learning environments are capable of rapidly analyzing and predicting patient trajectories utilizing vast amounts of multi-dimensional data. By integrating deep learning capabilities into the care delivered to critically injured patients, providers will have access to real-time decision support that will enable timely decision making in an environment where seconds are the difference between life and death. We expect these technologies, combined with the clinical and scientific knowledge of UCSF, to be made accessible through the cloud to drive the transformation of health and healthcare.”

UCSF and Intel will work together to deploy the high-performance computing environment on industry-standard Intel® Xeon® processor-based platforms that will support the data management and algorithm development lifecycle, including data curation and annotation, algorithm training, and testing against labeled datasets with particular pre-specified outcomes. The collaboration will also allow UCSF and Intel to better understand how deep learning analytics and machine-driven workflows can be employed to optimize the clinical environment and patient outcomes. This work will inform Intel’s development and testing of new platform architectures for the healthcare industry.

“This collaboration between Intel and UCSF will accelerate the development of deep learning algorithms that have great potential to benefit patients,” said Kay Eron, general manager of health and life sciences in Intel’s Data Center Group. “Combining the medical science and computer science expertise across our organizations will enable us to more effectively tackle barriers in directing the latest technologies toward critical needs in healthcare.”

The platform will enable UCSF’s deep learning use cases to run in a distributed fashion on a central processing unit (CPU)-based cluster. The platform will be able to handle large data sets and scale easily for future use case requirements, including supporting larger convolutional neural network models, artificial networks patterned after living organisms, and very large multidimensional datasets. In the future, Intel expects to incorporate the deep learning analytics platform with other Intel analytics frameworks, healthcare data sources, and application program interfaces (APIs) – code that allows different programs to communicate – to create increasingly sophisticated use case algorithms that will continue to raise the bar in health and healthcare.

Adapted from press release by UCSF.

New tool to discover biomarkers for aging using In silico Pathway Activation Network Decomposition Analysis (iPANDA)

Today the Biogerontology Research Foundation announced an international collaboration on signaling pathway perturbation-based transcriptomic biomarkers of aging. On November 16th, scientists at the Biogerontology Research Foundation, alongside collaborators from Insilico Medicine, Inc., the Johns Hopkins University, Albert Einstein College of Medicine, Boston University, Novartis, Nestle and BioTime Inc., announced the publication in Nature Communications of a proof-of-concept experiment demonstrating the utility of iPANDA, a novel approach for analyzing transcriptomic, metabolomic and signalomic data sets.

“Given the high volume of data being generated in the life sciences, there is a huge need for tools that make sense of that data. As such, this new method will have widespread applications in unraveling the molecular basis of age-related diseases and in revealing biomarkers that can be used in research and in clinical settings. In addition, tools that help reduce the complexity of biology and identify important players in disease processes are vital not only to better understand the underlying mechanisms of age-related disease but also to facilitate a personalized medicine approach. The future of medicine is in targeting diseases in a more specific and personalized fashion to improve clinical outcomes, and tools like iPANDA are essential for this emerging paradigm,” said João Pedro de Magalhães, PhD, a trustee of the Biogerontology Research Foundation.

The algorithm, iPANDA, applies deep learning techniques to complex gene expression data sets and signaling pathway activation data for the purposes of analysis and integration. The proof-of-concept article demonstrates that the system is capable of significantly reducing the noise and dimensionality of transcriptomic data sets and of identifying patient-specific pathway signatures in breast cancer patients that characterize their response to Taxol-based neoadjuvant therapy.
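As a loose illustration of collapsing many genes into per-pathway scores, consider the toy sketch below. The gene names, values, memberships, and the simple averaging are all invented; iPANDA’s actual aggregation uses topology- and significance-based weighting:

```python
import numpy as np

# Hypothetical sketch: reduce per-gene expression fold-changes to one score
# per pathway, shrinking a high-dimensional transcriptome to a few values.
fold_change = {"GENE_A": 2.1, "GENE_B": 0.4, "GENE_C": 1.8, "GENE_D": 0.9}
pathways = {
    "pathway_1": ["GENE_A", "GENE_C"],   # both up-regulated
    "pathway_2": ["GENE_B", "GENE_D"],   # both down-regulated
}

def pathway_score(genes):
    # mean log2 fold-change over member genes (a crude stand-in for
    # iPANDA's weighted aggregation)
    return float(np.mean([np.log2(fold_change[g]) for g in genes]))

scores = {name: pathway_score(genes) for name, genes in pathways.items()}
print(scores)
```

A positive score marks a pathway whose member genes are, on balance, up-regulated; a negative score marks the reverse. Comparing such scores across patients is what yields pathway signatures.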

The system represents a substantially new approach to the analysis of microarray data sets, especially as it pertains to data obtained from multiple sources, and appears to be more scalable and robust than other current approaches to the analysis of transcriptomic, metabolomic and signalomic data obtained from different sources. The system also has applications in rapid biomarker development and drug discovery, discrimination between distinct biological and clinical conditions, and the identification of functional pathways relevant to disease diagnosis and treatment, and ultimately in the development of personalized treatments for age-related diseases.

“While the team predicted and compared the response of breast cancer patients to Taxol-based neoadjuvant therapy as their proof of concept, the application of this approach to patient-specific responses to biomedical gerontological interventions (e.g. to geroprotectors, a clear focus of the team’s past efforts), to the development of both generalized and personalized biomarkers of aging, and to the characterization and analysis of minute differences in aging over time, between individuals, and between different organisms would represent a promising and exciting future application,” said Franco Cortese, Deputy Director of the Biogerontology Research Foundation.

Citation: “In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development”. Ivan V. Ozerov, Ksenia V. Lezhnina, Evgeny Izumchenko, Artem V. Artemov, Sergey Medintsev, Quentin Vanhaelen, Alexander Aliper, Jan Vijg, Andreyan N. Osipov, Ivan Labat, Michael D. West, Anton Buzdin, Charles R. Cantor, Yuri Nikolsky, Nikolay Borisov, Irina Irincheeva, Edward Khokhlovich, David Sidransky, Miguel Luiz Camargo & Alex Zhavoronkov. Nature Communications 2016 vol: 7 pp: 13427.
DOI: http://dx.doi.org/10.1038/NCOMMS13427
Adapted from press release by Biogerontology Research Foundation

Computer based model InFlo predicts cell signals and network activity

A multi-institution academic-industrial partnership of researchers led by Case Western Reserve University School of Medicine has developed a new method to broadly assess cell communication networks and identify disease-specific network anomalies. The computer-based method, called InFlo, was developed in collaboration with researchers at Philips and Princeton University and predicts how cells send signals across networks to cause cancer or other disease. Details about the new method were recently published in Oncogene.

“Cellular signaling networks are the mechanisms that cells use to transfer, process, and respond to biological information derived from their immediate surroundings,” said Vinay Varadan, PhD, assistant professor at Case Western Reserve University School of Medicine, member of the Case Comprehensive Cancer Center, and senior corresponding author on the study. “InFlo can be viewed as modeling the flow of information within these signaling networks.”

InFlo works by assessing gene activity levels in tissue samples and predicting corresponding protein levels. It then uses statistical probabilities and other mathematical models to build activity webs showing how the proteins interact. Researchers can use InFlo to compare diseased and healthy tissues and pinpoint signaling differences. InFlo is tissue-specific and accounts for genetic alterations associated with disease, unlike other methods. It represents a major step forward in deciphering the activities of multi-tiered signaling networks commonly used by cells.
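A toy sketch of that comparison step might look like the following. All protein names, numbers, and the simple joint-activity score are invented for illustration; InFlo itself relies on probabilistic models over curated, tissue-specific interaction networks:

```python
# Hypothetical sketch: given per-protein activity estimates for healthy and
# diseased tissue, flag the interaction whose joint activity shifts most.
healthy = {"cAMP": 0.2, "CREB1": 0.3, "TP53": 0.8}
diseased = {"cAMP": 0.9, "CREB1": 0.85, "TP53": 0.75}
edges = [("cAMP", "CREB1"), ("CREB1", "TP53")]   # toy interaction web

def edge_shift(a, b):
    # change in an interaction's joint activity between conditions
    return diseased[a] * diseased[b] - healthy[a] * healthy[b]

# rank interactions by how strongly they differ between conditions
shifts = sorted(edges, key=lambda e: abs(edge_shift(*e)), reverse=True)
print(shifts[0])
```

In this contrived example the cAMP–CREB1 interaction shows the largest shift, echoing (by construction, not by evidence) the kind of finding the study reports.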

“Complex diseases such as cancer involve the simultaneous disruptions of multiple cellular processes acting in tandem,” said Varadan. “We developed InFlo to robustly integrate multiple molecular data streams and develop an integrative molecular portrait of an individual cancer sample.”

InFlo incorporates data related to each level of cell communication within a single sample, including DNA, RNA, proteins, and molecules commonly attached to proteins such as chemical methyl groups. The method also includes strategies to reduce “noise” and only highlight the signaling networks most likely to cause disease.

Analisa DiFeo, PhD, senior co-corresponding author on the study, Norma C. and Al I. Geller Designated Professor of Ovarian Cancer Research at Case Western Reserve University School of Medicine, and member of the Case Comprehensive Cancer Center, validated InFlo using ovarian cancer tumor cells that were resistant to platinum-based chemotherapy. InFlo pinpointed the interaction between two proteins called cAMP and CREB1 as a key mechanism associated with platinum resistance.

“Following up on InFlo’s predictions, we showed that inhibiting CREB1 potently sensitizes ovarian cancer cells to platinum therapy and is also effective in killing ovarian cancer stem cells. We are therefore excited about this discovery and are currently evaluating whether this could lead to a potential therapeutic target for the treatment of platinum-resistant ovarian cancer,” said DiFeo.

InFlo is being incorporated into Philips’ IntelliSpace Genomics platform and will soon be available for widespread use in basic and translational research settings. Case Western Reserve University researchers will continue to develop the IntelliSpace Genomics InFlo module; the next step will be to expand InFlo to incorporate other data streams. “We are currently collaborating with the Imaging Informatics research group in the Center for Computational Imaging and Personalized Diagnostics at Case Western Reserve University to integrate InFlo with imaging features derived from pathology and radiology data,” said Varadan. Such an addition would result in one of the most comprehensive tools available to researchers for inferring the mechanisms underlying complex diseases such as cancer.

Citation: Dimitrova, N., Nagaraj, A.B., Razi, A., Singh, S., Kamalakaran, S., Banerjee, N., Joseph, P., Mankovich, A., Mittal, P., DiFeo, A. and Varadan, V., 2016. InFlo: a novel systems biology framework identifies cAMP-CREB1 axis as a key modulator of platinum resistance in ovarian cancer. Oncogene.
DOI: http://dx.doi.org/10.1038/onc.2016.398
Research funding: Career Development Program of Case GI SPORE, Career Development Program in Computational Genomic Epidemiology of Cancer, Philips Healthcare, Rosalie and Morton Cohen Family Memorial Genomics Fund of University Hospitals, and others
Adapted from press release by Case Western Reserve University School of Medicine.

Researchers use multi-task deep neural networks to automatically extract data from cancer pathology reports

Despite steady progress in detection and treatment in recent decades, cancer remains the second leading cause of death in the United States, cutting short the lives of approximately 500,000 people each year. To better understand and combat this disease, medical researchers rely on cancer registry programs, a national network of organizations that systematically collect demographic and clinical information related to the diagnosis, treatment, and history of cancer incidence in the United States. The surveillance effort, coordinated by the National Cancer Institute (NCI) and the Centers for Disease Control and Prevention, enables researchers and clinicians to monitor cancer cases at the national, state, and local levels.

Much of this data is drawn from electronic, text-based clinical reports that must be manually curated, a time-intensive process, before it can be used in research.

A representation of a deep learning neural network designed to intelligently extract text-based information from cancer pathology reports. Credit: Oak Ridge National Laboratory

Since 2014, Georgia Tourassi of Oak Ridge National Laboratory (ORNL) has led a team focused on creating software that can quickly identify valuable information in cancer reports, an ability that would not only save time and worker hours but also potentially reveal overlooked avenues in cancer research. After experimenting with conventional natural-language-processing software, the team’s most recent progress has emerged via deep learning, a machine-learning technique that employs algorithms, big data, and the computing power of GPUs to emulate human learning and intelligence.

Using the Titan supercomputer at the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility located at ORNL, Tourassi’s team applied deep learning to extract useful information from cancer pathology reports, a foundational element of cancer surveillance. Working with modest datasets, the team obtained preliminary findings that demonstrate deep learning’s potential for cancer surveillance.

The continued development and maturation of automated data tools, among the objectives outlined in the White House’s Cancer Moonshot initiative, would give medical researchers and policymakers an unprecedented view of the US cancer population at a level of detail typically obtained only for clinical trial patients, historically less than 5 percent of the overall cancer population.

Creating software that can understand not only the meaning of words but also the contextual relationships between them is no simple task. Humans develop these skills through years of back-and-forth interaction and training. For specific tasks, deep learning compresses this process into a matter of hours.

Typically, this context-building is achieved through the training of a neural network, a web of weighted calculations designed to produce informed guesses on how to correctly carry out tasks, such as identifying an image or processing a verbal command. Data fed to a neural network, called inputs, and select feedback give the software a foundation to make decisions based on new data. This algorithmic decision-making process is largely opaque to the programmer, a dynamic akin to a teacher with little direct knowledge of her students’ perception of a lesson.
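The loop described above, in which inputs and label feedback gradually adjust a network’s weights, can be sketched with a single logistic neuron standing in for a full network. The data here are synthetic and the whole example is illustrative:

```python
import numpy as np

# Minimal training loop: forward pass produces guesses, the labels provide
# feedback, and the weights are nudged to reduce the error.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))                     # 200 synthetic inputs
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # synthetic target rule

w, b, lr = np.zeros(3), 0.0, 0.5
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))        # forward pass: guesses
    grad = p - y                                  # feedback from the labels
    w -= lr * X.T @ grad / len(y)                 # adjust the weights
    b -= lr * grad.mean()

accuracy = ((p > 0.5) == y).mean()
print(accuracy)
```

Real networks stack many such units in layers, but the adjust-from-feedback dynamic is the same, which is also why the learned weights are hard for a programmer to interpret directly.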

GPUs, such as those in Titan, can accelerate this training process by quickly executing many deep-learning calculations simultaneously. In two recent studies, Tourassi’s team used accelerators to tune multiple algorithms, comparing results to more traditional methods. Using a dataset composed of 1,976 pathology reports provided by NCI’s Surveillance, Epidemiology, and End Results (SEER) Program, Tourassi’s team trained a deep-learning algorithm to carry out two different but closely related information-extraction tasks. In the first task the algorithm scanned each report to identify the primary location of the cancer. In the second task the algorithm identified the cancer site’s laterality, or on which side of the body the cancer was located.

By setting up a neural network designed to exploit the related information shared by the two tasks, an arrangement known as multitask learning, the team found the algorithm performed substantially better than competing methods.
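In a multitask arrangement, the two tasks share one learned representation and differ only in their output heads. The untrained sketch below shows the wiring; the layer sizes and class counts (12 sites, 2 lateralities) are illustrative guesses, not details of the ORNL models:

```python
import numpy as np

# Sketch of multitask learning: a shared trunk feeds two task-specific heads,
# so gradients from both tasks would shape the same representation.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 300))            # 4 reports as 300-dim text features

W_shared = rng.normal(size=(300, 64))    # trunk learned jointly by both tasks
W_site = rng.normal(size=(64, 12))       # head 1: primary-site classes
W_lat = rng.normal(size=(64, 2))         # head 2: laterality classes

h = np.tanh(x @ W_shared)                # shared representation
site_logits = h @ W_site                 # per-report scores for each site
lat_logits = h @ W_lat                   # per-report scores for laterality

print(site_logits.shape, lat_logits.shape)
```

Because the two tasks are closely related, training both heads against the same trunk lets each task benefit from what the other teaches the shared layers.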

Another study carried out by Tourassi’s team used 946 SEER reports on breast and lung cancer to tackle an even more complex challenge: using deep learning to match the cancer’s origin to a corresponding topological code, a classification that’s even more specific than a cancer’s primary site or laterality, with 12 possible answers.

The team tackled this problem by building a convolutional neural network, a deep-learning approach traditionally used for image recognition, and feeding it language from a variety of sources. Text inputs ranged from general (e.g., Google search results) to domain-specific (e.g., medical literature) to highly specialized (e.g., cancer pathology reports). The algorithm then took these inputs and created a mathematical model that drew connections between words, including words shared between unrelated texts.
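The core operation of a text CNN is a filter that slides across adjacent word embeddings and responds to local word patterns. A minimal sketch with a tiny invented vocabulary and random vectors:

```python
import numpy as np

# Sketch of a 1-D convolution over word embeddings; vocabulary, embeddings,
# and filter values are invented for illustration.
rng = np.random.default_rng(0)
vocab = {"tumor": 0, "upper": 1, "lobe": 2, "left": 3, "lung": 4}
embed = rng.normal(size=(len(vocab), 8))        # 8-dim word vectors

sentence = ["tumor", "left", "upper", "lobe", "lung"]
X = embed[[vocab[w] for w in sentence]]         # (5 words, 8 dims)

filt = rng.normal(size=(3, 8))                  # filter spanning 3 words
# slide the filter over every 3-word window of the sentence
responses = [np.sum(X[i:i + 3] * filt) for i in range(len(sentence) - 2)]
feature = max(responses)                        # max-pooling over positions
print(len(responses), feature)
```

Many such filters, each tuned during training to a different local pattern, together produce the feature vector a classifier uses to pick among the topological codes.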

Comparing this approach to more traditional classifiers, such as a vector space model, the team observed incremental improvement in performance as the network absorbed more cancer-specific text. These preliminary results will help guide Tourassi’s team as they scale up deep-learning algorithms to tackle larger datasets and move toward less supervision, meaning the algorithms will make informed decisions with less human intervention.

In 2016 Tourassi’s team learned its cancer surveillance project will be developed as part of DOE’s Exascale Computing Project, an initiative to develop a computing ecosystem that can support an exascale supercomputer–a machine that can execute a billion billion calculations per second. Though the team has made considerable progress in leveraging deep learning for cancer research, the biggest gains are still to come.

Citation: Yoon, Hong-Jun, Arvind Ramanathan, and Georgia Tourassi. “Multi-task Deep Neural Networks for Automated Extraction of Primary Site and Laterality Information from Cancer Pathology Reports.” In INNS Conference on Big Data, pp. 195-204. Springer International Publishing, 2016.
DOI: http://dx.doi.org/10.1007/978-3-319-47898-2_21
Adapted from press release by US Department of Energy, Oak Ridge National Laboratory.

Novel diagnostic test for malaria using holographic imaging and deep learning

Duke researchers have devised a computerized method to autonomously and quickly diagnose malaria with clinically relevant accuracy — a crucial step to successfully treating the disease and halting its spread.

In 2015 alone, malaria infected 214 million people worldwide, killing an estimated 438,000.
While Western medicine can spot malaria with near-perfect accuracy, it can be difficult to diagnose in resource-limited areas where infection rates are highest.

Malaria’s symptoms can look like many other diseases, and there are simply not enough well-trained field workers and functioning microscopes to keep pace with the parasite. While rapid diagnostic tests do exist, it is expensive to continuously purchase new tests. These tests also cannot tell how severe the infection is by tallying the number of infected cells, which is important for managing a patient’s recovery.

In a new study, engineers from Duke University report a method that uses computer ‘deep learning’ and light-based, holographic scans to spot malaria-infected cells from a simple, untouched blood sample without any help from a human. The innovation could form the basis of a fast, reliable test that could be given by almost anyone, anywhere in the field, which would be invaluable in the $2.7 billion-per-year global fight against the disease. The results were published online Sept. 16 in the journal PLOS ONE.

“With this technique, the path is there to be able to process thousands of cells per minute,” said Adam Wax, professor of biomedical engineering at Duke. “That’s a huge improvement to the 40 minutes it currently takes a field technician to stain, prepare and read a slide to personally look for infection.”


Cells in different stages of infection as analyzed by a new algorithm.

The new technique is based on a technology called quantitative phase spectroscopy. As a laser sweeps through the visible spectrum of light, sensors capture how each discrete light frequency interacts with a sample of blood. The resulting data captures a holographic image that provides a wide array of valuable information that can indicate a malarial infection.

“We identified 23 parameters that are statistically significant for spotting malaria,” said Han Sang Park, a doctoral student in Wax’s laboratory and first author on the paper. For example, as the disease progresses, red blood cells decrease in volume, lose hemoglobin and deform as the parasite within grows larger. This affects features such as cell volume, perimeter, shape and center of mass.
“However, none of the parameters were reliable more than 90 percent of the time on their own, so we decided to use them all,” said Park.

“To be adopted, any new diagnostic device has to be just as reliable as a trained field worker with a microscope,” said Wax. “Otherwise, even with a 90 percent success rate, you’d still miss more than 20 million cases a year.”

To get a more accurate reading, Wax and Park turned to deep learning — a method by which computers teach themselves how to distinguish between different objects. By feeding data on more than 1,000 healthy and diseased cells into a computer, the deep learning program determined which sets of measurements at which thresholds most clearly distinguished healthy from diseased cells.
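The idea of combining many individually unreliable parameters into one stronger decision can be sketched with synthetic data. The 23-feature count matches the paper, but the data, weights, and learning procedure below are invented stand-ins:

```python
import numpy as np

# Sketch: no single feature separates healthy from diseased cells reliably,
# but a learned weighted combination of all 23 can.
rng = np.random.default_rng(2)
n_cells, n_params = 1000, 23
X = rng.normal(size=(n_cells, n_params))
w_true = rng.normal(scale=0.4, size=n_params)    # each feature weakly informative
y = (X @ w_true > 0).astype(float)               # synthetic healthy/diseased labels

# learn combination weights with simple logistic-regression steps
w = np.zeros(n_params)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.5 * X.T @ (p - y) / n_cells

combined_acc = (((X @ w) > 0) == (y == 1)).mean()
# best accuracy achievable by thresholding any single feature alone
single_acc = max((((X[:, j] * np.sign(w_true[j])) > 0) == (y == 1)).mean()
                 for j in range(n_params))
print(single_acc, combined_acc)
```

On this synthetic data the learned combination comfortably beats the best single feature, mirroring the researchers’ rationale for using all 23 parameters rather than any one alone.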

When they put the resulting algorithm to the test with hundreds of cells, it was able to correctly spot malaria 97 to 100 percent of the time, a number the researchers believe will increase as more cells are used to train the program. Because the technique reduces each data-rich hologram to just 23 numbers, tests can easily be transmitted in bulk, which is important for locations that often lack reliable, fast internet connections; this, in turn, could eliminate the need for each location to have its own computer for processing.

Wax and Park are now looking to develop the technology into a diagnostic device through a startup company called M2 Photonics Innovations. They hope to show that a device based on this technology would be accurate and cost-efficient enough to be useful in the field. Wax has also received funding to begin exploring the use of the technique for spotting cancerous cells in blood samples.

Publication: Automated Detection of P. falciparum Using Machine Learning Algorithms with Quantitative Phase Images of Unstained Cells.
DOI: http://dx.doi.org/10.1371/journal.pone.0163045
Adapted from press release by Duke University