Machine learning to predict the clinical utility of biomedical research

Ian Hutchins and colleagues in the Office of Portfolio Analysis (OPA), a team led by George Santangelo at the National Institutes of Health (NIH), have developed a machine learning model to predict which scientific advances are likely to eventually translate to the clinic.

This work, published in the journal PLOS Biology, aims to decrease the interval between scientific discovery and clinical application. The model estimates the likelihood that a research article will be cited by a future clinical trial or guideline, an early indicator of translational progress.

Researchers have quantified these predictions as a novel metric called “Approximate Potential to Translate” (APT). APT values can be used by researchers and decision-makers to focus attention on areas of science with strong signatures of translational potential. Although numbers alone should never be a substitute for evaluation by human experts, the APT metric has the potential to accelerate biomedical progress as one component of data-driven decision-making.

The model that computes APT values makes predictions based on the content of research articles and their citations. A long-standing barrier to research and development of metrics like APT is that such citation data have remained hidden behind proprietary, restrictive, and often costly licensing agreements. To remove this impediment for the scientific community, to increase transparency, and to facilitate reproducibility, OPA has aggregated citation data from publicly available resources to create the NIH Open Citation Collection (NIH-OCC).

The NIH-OCC currently comprises over 420 million citation links and will be updated monthly. For publications since 2010, it is already more comprehensive than leading proprietary sources of citation data. Citation data from the NIH-OCC are used to calculate both APT values and Relative Citation Ratios (RCRs); the latter is a measure of scientific influence at the article level, normalized for field of study and time since publication.

APT values and the NIH-OCC are publicly available as components of the iCite webtool, which will continue to serve as the primary source of RCR data.
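
For readers who want to work with these metrics programmatically, the sketch below queries the iCite web service for article-level values such as RCR and APT. The endpoint and field names shown are assumptions based on iCite's public API and should be checked against the current documentation; this is an illustrative sketch, not an official client.

```python
# Hypothetical sketch: fetch article-level metrics (RCR, APT) from the iCite API.
# The endpoint and field names below are assumptions and may not match the
# current iCite interface exactly.
import requests

def fetch_icite_metrics(pmids):
    """Query iCite for a list of PubMed IDs and return per-article metrics."""
    url = "https://icite.od.nih.gov/api/pubs"  # assumed endpoint
    response = requests.get(url, params={"pmids": ",".join(map(str, pmids))})
    response.raise_for_status()
    records = response.json().get("data", [])
    return [
        {
            "pmid": rec.get("pmid"),
            "relative_citation_ratio": rec.get("relative_citation_ratio"),  # assumed field name
            "apt": rec.get("apt"),                                          # assumed field name
        }
        for rec in records
    ]

if __name__ == "__main__":
    # Replace the placeholder IDs with PubMed IDs of interest.
    for article in fetch_icite_metrics([12345678, 23456789]):
        print(article)
```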

Machine learning algorithms to speed up image biomarker analysis in heart MRI scans

According to research published in the journal Circulation: Cardiovascular Imaging, cardiac MRI analysis using machine learning algorithms can be performed significantly faster than, and with precision similar to, analysis by human experts.

In the study, researchers trained a neural network to read cardiac MRI scans. Using the trained model, a scan can be analyzed in approximately four seconds, compared with about 13 minutes for a human reviewer. When the AI was tested for precision, researchers found no significant difference compared with human experts.

The researchers have made the data used in this study available at thevolumeresource.com, a resource that also aims to test and validate new cardiac MRI post-processing technology and machine learning techniques.

Computational model to track flu using Twitter data

An international team led by Alessandro Vespignani from Northeastern University has developed a computational model to predict the spread of the flu in real time. This unique model uses posts on Twitter in combination with key parameters of each season’s epidemic, including the incubation period of the disease, the immunization rate, how many people an individual with the virus can infect, and the viral strains present. When tested against official influenza surveillance systems, the model has been shown to forecast the disease’s evolution up to six weeks in advance with 70 to 90 percent accuracy.
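
The published model combines Twitter-derived signals with a mechanistic epidemic model. The sketch below is only a generic SEIR-style system meant to illustrate how parameters such as the incubation period, the immunization rate, and the reproduction number enter such models; it is not the authors' model, and all parameter values are placeholders.

```python
# Minimal SEIR sketch illustrating the kind of parameters the model ingests:
# incubation period, immunization (vaccination) rate, and the reproduction number.
# Generic illustration only, not the published Twitter-driven model.
from scipy.integrate import solve_ivp

def seir(t, y, beta, sigma, gamma, nu):
    """S: susceptible, E: exposed, I: infectious, R: recovered/immunized (fractions)."""
    S, E, I, R = y
    dS = -beta * S * I - nu * S      # new infections plus vaccination of susceptibles
    dE = beta * S * I - sigma * E    # 1/sigma = incubation period
    dI = sigma * E - gamma * I       # 1/gamma = infectious period
    dR = gamma * I + nu * S
    return [dS, dE, dI, dR]

# Placeholder parameters: R0 ~ 1.3, 2-day incubation, 3-day infectious period,
# and a small daily immunization rate.
gamma, sigma, nu = 1 / 3, 1 / 2, 0.001
beta = 1.3 * gamma
y0 = [0.999, 0.0, 0.001, 0.0]
sol = solve_ivp(seir, (0, 180), y0, args=(beta, sigma, gamma, nu), dense_output=True)
print("Peak infectious fraction:", sol.y[2].max())
```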

Following its presentation at the 2017 International World Wide Web Conference last month, the paper describing the model received a coveted Best Paper Honorable Mention award.

While the paper reports results using Twitter data, the researchers note that the model can also work with data from many other digital sources, including online surveys of individuals such as Influenzanet, which is very popular in Europe.

“Our model is a work in progress,” emphasizes Vespignani. “We plan to add new parameters, for example, school and workplace structure.”

Adapted from press release by Northeastern University.

Artificial intelligence project to identify skin cancer based on a machine learning algorithm works as well as a doctor

Universal access to health care was on the minds of computer scientists at Stanford when they set out to create an artificially intelligent diagnosis algorithm for skin cancer. They made a database of nearly 130,000 skin disease images and trained their algorithm to visually diagnose potential cancer. From the very first test, it performed with inspiring accuracy.

A dermatologist using a dermatoscope. Credit: Matt Young

“We realized it was feasible, not just to do something well, but as well as a human dermatologist,” said Sebastian Thrun, an adjunct professor in the Stanford Artificial Intelligence Laboratory. “That’s when our thinking changed. That’s when we said, ‘Look, this is not just a class project for students, this is an opportunity to do something great for humanity.'”

The final product, the subject of a paper in Nature, was tested against 21 board-certified dermatologists. In its diagnoses of skin lesions, which represented the most common and deadliest skin cancers, the algorithm matched the performance of dermatologists.

Every year there are about 5.4 million new cases of skin cancer in the United States, and while the five-year survival rate for melanoma detected in its earliest stages is around 97 percent, it drops to approximately 14 percent if the cancer is detected in its latest stages. Early detection could therefore have an enormous impact on skin cancer outcomes.

Diagnosing skin cancer begins with a visual examination. A dermatologist usually looks at the suspicious lesion with the naked eye and with the aid of a dermatoscope, which is a handheld microscope that provides low-level magnification of the skin. If these methods are inconclusive or lead the dermatologist to believe the lesion is cancerous, a biopsy is the next step.

Bringing this algorithm into the examination process follows a trend in computing that combines visual processing with deep learning, a type of artificial intelligence modeled after neural networks in the brain. Deep learning has a decades-long history in computer science, but only recently has it been applied to visual processing tasks, with great success. The essence of machine learning, including deep learning, is that a computer is trained to figure out a problem rather than having the answers programmed into it.

“We made a very powerful machine learning algorithm that learns from data,” said Andre Esteva, co-lead author of the paper and a graduate student in the Thrun lab. “Instead of writing into computer code exactly what to look for, you let the algorithm figure it out.”

The algorithm was fed each image as raw pixels with an associated disease label. Compared to other methods for training algorithms, this one requires very little processing or sorting of the images prior to classification, allowing the algorithm to work off a wider variety of data.

Rather than building an algorithm from scratch, the researchers began with an algorithm developed by Google that was already trained to identify 1.28 million images from 1,000 object categories. While it was primed to be able to differentiate cats from dogs, the researchers needed it to know a malignant carcinoma from a benign seborrheic keratosis.
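
The transfer-learning strategy described here can be sketched roughly as follows. This is a minimal illustration in PyTorch with a generic ImageNet-pretrained backbone, not the authors' actual Google Inception v3 pipeline; the class count and hyperparameters are placeholders.

```python
# Transfer-learning sketch (PyTorch/torchvision; not the paper's actual code):
# start from an ImageNet-pretrained backbone and swap the 1,000-way classifier
# for a skin-lesion classification head, then fine-tune on the new dataset.
import torch.nn as nn
import torch.optim as optim
from torchvision import models

NUM_LESION_CLASSES = 2032  # placeholder; the paper's taxonomy covered roughly 2,000 diseases

# Illustrated with ResNet-50 for brevity; the study used Google's Inception v3.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, NUM_LESION_CLASSES)  # new classification head

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)  # fine-tune all layers

# Training loop (dataloader construction omitted):
# for images, labels in train_loader:
#     optimizer.zero_grad()
#     loss = criterion(model(images), labels)
#     loss.backward()
#     optimizer.step()
```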

“There’s no huge dataset of skin cancer that we can just train our algorithms on, so we had to make our own,” said Brett Kuprel, co-lead author of the paper and a graduate student in the Thrun lab. “We gathered images from the internet and worked with the medical school to create a nice taxonomy out of data that was very messy – the labels alone were in several languages, including German, Arabic and Latin.”

After going through the necessary translations, the researchers collaborated with dermatologists at Stanford Medicine, as well as Helen M. Blau, professor of microbiology and immunology at Stanford and co-author of the paper. Together, this interdisciplinary team worked to classify the hodgepodge of internet images. Many of these, unlike those taken by medical professionals, were varied in terms of angle, zoom and lighting. In the end, they amassed about 130,000 images of skin lesions representing over 2,000 different diseases.

During testing, the researchers used only high-quality, biopsy-confirmed images provided by the University of Edinburgh and the International Skin Imaging Collaboration Project that represented the most common and deadliest skin cancers: malignant carcinomas and malignant melanomas. The 21 dermatologists were asked whether, based on each image, they would proceed with biopsy or treatment, or reassure the patient. The researchers evaluated success by how well the dermatologists were able to correctly diagnose both cancerous and non-cancerous lesions in over 370 images.

The algorithm’s performance was measured through the creation of a sensitivity-specificity curve, where sensitivity represented its ability to correctly identify malignant lesions and specificity represented its ability to correctly identify benign lesions. It was assessed through three key diagnostic tasks: keratinocyte carcinoma classification, melanoma classification, and melanoma classification when viewed using dermoscopy. In all three tasks, the algorithm matched the performance of the dermatologists with the area under the sensitivity-specificity curve amounting to at least 91 percent of the total area of the graph.
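
For readers unfamiliar with this kind of evaluation, the sketch below shows how a sensitivity-specificity (ROC) curve and its area under the curve are typically computed from predicted malignancy probabilities. The labels and scores here are synthetic and unrelated to the study's data.

```python
# Sketch: compute a sensitivity-specificity (ROC) curve and its AUC from
# predicted malignancy probabilities; labels and scores below are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)                                 # 1 = malignant, 0 = benign
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)    # toy classifier output

fpr, tpr, _ = roc_curve(y_true, y_score)
sensitivity = tpr            # true positive rate: malignant lesions correctly flagged
specificity = 1 - fpr        # true negative rate: benign lesions correctly cleared
auc = roc_auc_score(y_true, y_score)
print(f"Area under the sensitivity-specificity curve: {auc:.2f}")

# Tuning the algorithm's sensitivity, as described above, amounts to choosing a
# different operating threshold along this curve.
```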

An added advantage of the algorithm is that, unlike a person, the algorithm can be made more or less sensitive, allowing the researchers to tune its response depending on what they want it to assess. This ability to alter the sensitivity hints at the depth and complexity of this algorithm. The underlying architecture of seemingly irrelevant photos — including cats and dogs — helps it better evaluate the skin lesion images.

Although this algorithm currently exists on a computer, the team would like to make it smartphone compatible in the near future, bringing reliable skin cancer diagnoses to our fingertips.

“My main eureka moment was when I realized just how ubiquitous smartphones will be,” said Esteva. “Everyone will have a supercomputer in their pockets with a number of sensors in it, including a camera. What if we could use it to visually screen for skin cancer? Or other ailments?”

The team believes it will be relatively easy to transition the algorithm to mobile devices but there still needs to be further testing in a real-world clinical setting.

“Advances in computer-aided classification of benign versus malignant skin lesions could greatly assist dermatologists in improved diagnosis for challenging lesions and provide better management options for patients,” said Susan Swetter, professor of dermatology and director of the Pigmented Lesion and Melanoma Program at the Stanford Cancer Institute, and co-author of the paper. “However, rigorous prospective validation of the algorithm is necessary before it can be implemented in clinical practice, by practitioners and patients alike.”

Even in light of the challenges ahead, the researchers are hopeful that deep learning could someday contribute to visual diagnosis in many medical fields.

Citation: Andre Esteva, Brett Kuprel, Roberto A. Novoa, Justin Ko, Susan M. Swetter, Helen M. Blau and Sebastian Thrun. “Dermatologist-level classification of skin cancer with deep neural networks.” Nature 2017.
DOI: 10.1038/nature21056
Adapted from press release by Stanford University.

Collaboration between UCSF, Intel to develop deep learning analytics for healthcare

UC San Francisco’s Center for Digital Health Innovation (CDHI) today announced a collaboration with Intel Corporation to deploy and validate a deep learning analytics platform designed to improve care by helping clinicians make better treatment decisions, predict patient outcomes, and respond more nimbly in acute situations.

The collaboration brings together Intel’s leading-edge computer science and deep learning capabilities with UCSF’s clinical and research expertise to create a scalable, high-performance computational environment to support enhanced frontline clinical decision making for a wide variety of patient care scenarios. Until now, progress toward this goal has been difficult because complex, diverse datasets are managed in multiple, incompatible systems. This next-generation platform will allow UCSF to efficiently manage the huge volume and variety of data collected for clinical care as well as newer “big data” from genomic sequencing, monitors, sensors, and wearables. These data will be integrated into a highly scalable “information commons” that will enable advanced analytics with machine learning and deep learning algorithms. The end result will be algorithms that can rapidly support data-driven clinical decision-making.

“While artificial intelligence and machine learning have been integrated into our everyday lives, our ability to use them in healthcare is a relatively new phenomenon,” said Michael Blum, MD, associate vice chancellor for informatics, director of CDHI and professor of medicine at UCSF. “Now that we have ‘digitized’ healthcare, we can begin utilizing the same technologies that have made the driverless car and virtual assistants possible and bring them to bear on vexing healthcare challenges such as predicting health risks, preventing hospital readmissions, analyzing complex medical images and more. Deep learning environments are capable of rapidly analyzing and predicting patient trajectories utilizing vast amounts of multi-dimensional data. By integrating deep learning capabilities into the care delivered to critically injured patients, providers will have access to real-time decision support that will enable timely decision making in an environment where seconds are the difference between life and death. We expect these technologies, combined with the clinical and scientific knowledge of UCSF, to be made accessible through the cloud to drive the transformation of health and healthcare.”

UCSF and Intel will work together to deploy the high-performance computing environment on industry-standard Intel® Xeon® processor-based platforms that will support the data management and algorithm development lifecycle, including data curation and annotation, algorithm training, and testing against labeled datasets with particular pre-specified outcomes. The collaboration will also allow UCSF and Intel to better understand how deep learning analytics and machine-driven workflows can be employed to optimize the clinical environment and patient outcomes. This work will inform Intel’s development and testing of new platform architectures for the healthcare industry.

“This collaboration between Intel and UCSF will accelerate the development of deep learning algorithms that have great potential to benefit patients,” said Kay Eron, general manager of health and life sciences in Intel’s Data Center Group. “Combining the medical science and computer science expertise across our organizations will enable us to more effectively tackle barriers in directing the latest technologies toward critical needs in healthcare.”

The platform will enable UCSF’s deep learning use cases to run in a distributed fashion on a central processing unit (CPU)-based cluster. The platform will be able to handle large data sets and scale easily for future use case requirements, including supporting larger convolutional neural network models, artificial networks patterned after living organisms, and very large multidimensional datasets. In the future, Intel expects to incorporate the deep learning analytics platform with other Intel analytics frameworks, healthcare data sources, and application program interfaces (APIs) – code that allows different programs to communicate – to create increasingly sophisticated use case algorithms that will continue to raise the bar in health and healthcare.

Adapted from press release by UCSF.

Dutch universities collaborate on big data in health to understand disease process

Patients with the same illness often receive the same treatment, even if the cause of the illness is different for each person.

Six Dutch universities are combining forces to chart the different disease processes for a range of common conditions. This represents a new step towards ultimately being able to offer every patient more personalized treatment. The results of this study have been published in two articles in the authoritative scientific journal Nature Genetics.

The researchers were able to make their discoveries thanks to new techniques that make it possible to simultaneously measure the regulation and activity of all the genes of thousands of people, and to link these data to millions of genetic differences in their DNA. The combined analysis of these ‘big data’ made it possible to determine which molecular processes in the body become dysregulated in a range of disparate diseases, from prostate cancer to ulcerative colitis, before the individuals concerned actually become ill.
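
At its core, this kind of analysis asks whether a DNA variant is statistically associated with the activity of a gene, an expression quantitative trait locus (eQTL) analysis. The sketch below illustrates the idea with a simple regression of expression on genotype dosage using synthetic data; the consortium's actual pipeline is far more elaborate, with covariate correction and multiple-testing control across millions of variant-gene pairs.

```python
# Toy eQTL sketch: test whether genotype dosage (0/1/2 copies of an allele)
# is associated with a gene's expression level across individuals.
# Synthetic data only; illustrative of the statistical idea, not the study's pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 1000
genotype = rng.integers(0, 3, size=n)                    # allele dosage per individual
expression = 0.4 * genotype + rng.normal(0, 1, size=n)   # simulated variant effect plus noise

slope, intercept, r, p_value, stderr = stats.linregress(genotype, expression)
print(f"Effect size per allele: {slope:.2f}, p-value: {p_value:.2e}")
```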

“The emergence of ‘big data’, ever faster computers and new mathematical techniques means it’s now possible to conduct extremely large-scale studies and gain an understanding of many diseases at the same time,” explains Lude Franke (UMCG), head of the research team in Groningen. The researchers show how thousands of disease-related DNA differences disrupt the internal working of a cell and how their effect can be influenced by environmental factors. And all this was possible without the need for a single lab experiment.

The success of this research is the result of the decision taken six years ago by biobanks throughout the Netherlands to share data and biomaterials within the BBMRI consortium. This decision meant it became possible to gather, store and analyze data from blood samples of a very large number of volunteers. The present study illustrates the tremendous value of large-scale collaboration in the field of medical research in the Netherlands.

Heijmans (LUMC), research leader in Leiden and initiator of the partnership, says: “The Netherlands is leading the field in sharing molecular data. This enables researchers to carry out the kind of large-scale studies that are needed to gain a better understanding of the causes of diseases. This result is only just the beginning: once they have undergone a screening, other researchers with a good scientific idea will be given access to this enormous bank of anonymized data. Our Dutch ‘polder mentality’ is also advancing science.”

Mapping the various molecular causes for a disease is the first step towards a form of medical treatment that better matches the disease process of individual patients. To reach that ideal, however, we still have a long way to go. The large-scale molecular data that have been collected for this research are the cornerstone of even bigger partnerships, such as the national Health-RI initiative. The third research leader, Peter-Bram ’t Hoen (LUMC), says: “Large quantities of data should eventually make it possible to give everyone personalized health advice, and to determine the best treatment for each individual patient.”

The research has been made possible thanks to the cooperation within the BBMRI biobank consortium of six long-running Dutch population studies carried out by the university medical centres in Groningen (LifeLines), Leiden (Leiden Longevity Study), Maastricht (CODAM Study), Rotterdam (Rotterdam Study), Utrecht (Netherlands Prospective ALS Study) and by the Vrije Universiteit (Netherlands Twin Register). The molecular data were generated in a standardized fashion at a central site (Human Genomics Facility HuGE-F, ErasmusMC) and subsequently securely stored and analyzed at a second central site (SURFSara). The study links in with the Personalised Medicine route of the National Research Agenda and the Health-RI and M3 proposals on the large-scale research infrastructure agenda of the Royal Netherlands Academy of Arts and Sciences (KNAW).

Citations:
1. Bonder, Marc Jan, René Luijk, Daria Zhernakova, Matthijs Moed, Patrick Deelen, Martijn Vermaat, Maarten van Iterson, et al. “Disease variants alter transcription factor levels and methylation of their binding sites.” Nature Genetics 2016.
DOI: 10.1038/ng.3721

2. Zhernakova, Daria V., Patrick Deelen, Martijn Vermaat, Maarten van Iterson, Michiel van Galen, Wibowo Arindrarto, et al. “Identification of context-dependent expression quantitative trait loci in whole blood.” Nature Genetics 2016.
DOI: 10.1038/ng.3737

Adapted from press release by Leiden University.

Dynamic undocking, a new computational method for efficient drug research

Researchers of the University of Barcelona have developed a more efficient computational method to identify new drugs. The study, published in the scientific journal Nature Chemistry, proposes a new way of facing the discovery of molecules with biological activity.

Researchers devised dynamic undocking (DUck), a fast computational method to calculate the work necessary to reach a quasi-bound state at which the ligand has just broken the most important native contact with the receptor. Since it is based on a different principle, this method complements conventional tools and represents a step forward on the path of rational drug design. ICREA researcher Xavier Barril, from the Faculty of Pharmacy and Food Sciences and the Institute of Biomedicine of the University of Barcelona (IBUB), led this project, in which professor Francesc Xavier Luque and PhD student Sergio Ruiz Carmona, members of the same Faculty, also participated.

Improving the efficiency and effectiveness of drug discovery is a key goal in pharmaceutical research. In this process, the aim is to find molecules that can bind to a target protein and modify its behavior according to clinical needs. “All current methods to predict whether a molecule will bind to the desired protein are based on affinity, that is, on the complex’s thermodynamic stability. What we are proving is that molecules have to form complexes that are structurally stable, and that it is possible to distinguish between active and inactive compounds by looking at which specific interactions are hard to break,” says Professor Xavier Barril.

This approach has been implemented in software that identifies the molecules most likely to bind to the targeted protein. “The method allows us to select molecules that can serve as starting points for new drugs,” says Barril. “Moreover,” he continues, “the process is complementary to existing methods and increases the efficiency of current workflows roughly fivefold at a lower computational cost. We are already using it successfully in several projects in the fields of cancer and infectious diseases, among others.”

This work introduces a new way of thinking about the ligand-protein interaction. “We don’t just look at the equilibrium situation, where the two molecules make the best possible interactions; we also consider how the complex will break, where the breaking points are, and how we can improve the drug to make it more resistant to separation. Now we have to focus on this phenomenon to understand it better and see whether, by creating more complex models, we can further improve our predictions,” says the researcher. The University of Barcelona team is already using the method, which is open to the entire scientific community.

Citation: “Dynamic undocking and the quasi-bound state as tools for drug discovery”. Sergio Ruiz-Carmona, Peter Schmidtke, F. Javier Luque, Lisa Baker, Natalia Matassova, Ben Davis, Stephen Roughley, James Murray, Rod Hubbard & Xavier Barril. Nature Chemistry 2016.
DOI: 10.1038/nchem.2660
Adapted from press release by University of Barcelona.

Computer models to analyze Huntington disease pathology

Rice University scientists have uncovered new details about how a repeating nucleotide sequence in the gene for a mutant protein may trigger Huntington’s and other neurological diseases. Researchers used computer models to analyze proteins suspected of misfolding and forming plaques in the brains of patients with neurological diseases. Their simulations confirmed experimental results by other labs that showed the length of repeating polyglutamine sequences contained in proteins is critical to the onset of disease. The study led by Rice bioscientist Peter Wolynes appears in the Journal of the American Chemical Society.

Glutamine is the amino acid coded for by the genomic trinucleotide CAG. Repeating glutamines, called polyglutamines, are normal in huntingtin proteins, but when the DNA is copied incorrectly, the repeating sequence of glutamines can become too long. The result can be diseases like Huntington’s or spinocerebellar ataxia.

Simulations at Rice show how a repeating sequence in a mutant protein may trigger Huntington’s and other neurological diseases. Credit: Mingchen Chen/Rice University

The number of repeats of glutamine can grow as the genetic code information is passed down through generations. That means a healthy parent whose huntingtin gene encodes proteins with 35 repeats may produce a child with 36 repeats. A person having the longer repeat is likely to develop Huntington’s disease.

Aggregation in Huntington’s typically begins only when polyglutamine chains reach a critical length of 36 repeats. Studies have demonstrated that longer repeat chains can make the disease more severe and its onset earlier.
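
This repeat-length threshold can be illustrated with a short script that scans a coding sequence for its longest uninterrupted run of CAG trinucleotides and compares it with the roughly 36-repeat disease threshold. The example sequence below is synthetic.

```python
# Sketch: find the longest uninterrupted CAG repeat in a coding sequence and
# compare it with the ~36-repeat threshold associated with Huntington's disease.
# The example sequence is synthetic, not the real huntingtin gene.
import re

def longest_cag_run(sequence: str) -> int:
    """Return the length (in repeats) of the longest consecutive CAG run."""
    runs = re.findall(r"(?:CAG)+", sequence.upper())
    return max((len(run) // 3 for run in runs), default=0)

PATHOGENIC_THRESHOLD = 36

example = "ATG" + "CAG" * 40 + "CAACAGCCGCCA"   # synthetic sequence with 40 CAG repeats
n_repeats = longest_cag_run(example)
print(f"{n_repeats} CAG repeats -> "
      f"{'above' if n_repeats >= PATHOGENIC_THRESHOLD else 'below'} the disease threshold")
```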

The paper builds upon techniques used in an earlier study of amyloid beta proteins. That study was the lab’s first attempt to model the energy landscape of amyloid aggregation, which has been implicated in Alzheimer’s disease. This time, Wolynes and his team were interested in knowing how the varying length of repeats, from as few as 20 to as many as 50, influenced how aggregates form.

The Rice team found that at intermediate lengths between 20 and 30 repeats, polyglutamine sequences can choose between straight or hairpin configurations. While longer and shorter sequences form aligned fiber bundles, simulations showed intermediate sequences are more likely to form disordered, branched structures.

Mutations that would encourage polyglutamine sequences to remain unfolded would raise the energy barrier to aggregation, they found. “What’s ironic is that while Huntington’s has been classified as a misfolding disease, it seems to happen because the protein, in the bad case of longer repeats, carries out an extra folding process that it wasn’t supposed to be doing,” Wolynes said.

The team’s ongoing study is now looking at how the complete huntingtin protein, which contains parts in addition to the polyglutamine repeats, aggregates.

Citation: Chen, Mingchen, MinYeh Tsai, Weihua Zheng, and Peter G. Wolynes. “The Aggregation Free Energy Landscapes of Polyglutamine Repeats.” Journal of the American Chemical Society (2016).
DOI: 10.1021/jacs.6b08665
Research funding: NIH/National Institute of General Medical Sciences, Ministry of Science and Technology of Taiwan
Adapted from press release by Rice University.

New tool to discover bio-markers for aging using In-silico Pathway Activation Network Decomposition Analysis (iPANDA)

Today the Biogerontology Research Foundation announced an international collaboration on signaling pathway perturbation-based transcriptomic biomarkers of aging. On November 16th, scientists at the Biogerontology Research Foundation, alongside collaborators from Insilico Medicine, Inc., the Johns Hopkins University, Albert Einstein College of Medicine, Boston University, Novartis, Nestle and BioTime Inc., announced the publication in Nature Communications of a proof-of-concept experiment demonstrating the utility of iPANDA, a novel approach for analyzing transcriptomic, metabolomic and signalomic data sets.

“Given the high volume of data being generated in the life sciences, there is a huge need for tools that make sense of that data. As such, this new method will have widespread applications in unraveling the molecular basis of age-related diseases and in revealing biomarkers that can be used in research and in clinical settings. In addition, tools that help reduce the complexity of biology and identify important players in disease processes are vital not only to better understand the underlying mechanisms of age-related disease but also to facilitate a personalized medicine approach. The future of medicine is in targeting diseases in a more specific and personalized fashion to improve clinical outcomes, and tools like iPANDA are essential for this emerging paradigm,” said João Pedro de Magalhães, PhD, a trustee of the Biogerontology Research Foundation.

The algorithm, iPANDA, applies deep learning algorithms to complex gene expression data sets and signal pathway activation data for the purposes of analysis and integration. The proof-of-concept article demonstrates that the system is capable of significantly reducing the noise and dimensionality of transcriptomic data sets and of identifying patient-specific pathway signatures in breast cancer patients that characterize their response to Taxol-based neoadjuvant therapy.

The system represents a substantially new approach to the analysis of microarray data sets, especially as it pertains to data obtained from multiple sources, and appears to be more scalable and robust than other current approaches to the analysis of transcriptomic, metabolomic and signalomic data obtained from different sources. The system also has applications in rapid biomarker development and drug discovery, discrimination between distinct biological and clinical conditions, and the identification of functional pathways relevant to disease diagnosis and treatment, and ultimately in the development of personalized treatments for age-related diseases.
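
As a simplified illustration of pathway-level scoring, the sketch below aggregates gene-level expression changes into a single activation score per pathway. It is not the actual iPANDA algorithm, and the gene and pathway names are hypothetical.

```python
# Simplified pathway-activation sketch (not the actual iPANDA algorithm):
# aggregate gene-level log2 fold changes into one score per pathway by
# averaging the values of each pathway's member genes.
import numpy as np

def pathway_activation_scores(log2_fold_changes, pathways):
    """log2_fold_changes: dict gene -> log2 fold change (case vs. control).
    pathways: dict pathway name -> list of member genes."""
    scores = {}
    for name, genes in pathways.items():
        values = [log2_fold_changes[g] for g in genes if g in log2_fold_changes]
        scores[name] = float(np.mean(values)) if values else float("nan")
    return scores

# Toy data with hypothetical gene and pathway names.
fold_changes = {"GENE_A": 1.2, "GENE_B": -0.4, "GENE_C": 0.9, "GENE_D": 0.1}
pathways = {"Pathway_X": ["GENE_A", "GENE_C"], "Pathway_Y": ["GENE_B", "GENE_D"]}
print(pathway_activation_scores(fold_changes, pathways))
```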

“While the team predicted and compared the response of breast cancer patients to Taxol-based neoadjuvant therapy as their proof of concept, the application of this approach to patient-specific responses to biomedical gerontological interventions (e.g. to geroprotectors, which is a clear focus of the team’s past efforts), to the development of both generalized and personalized biomarkers of aging, and to the characterization and analysis of minute differences in aging over time, between individuals, and between different organisms would represent a promising and exciting future application,” said Franco Cortese, Deputy Director of the Biogerontology Research Foundation.

Citation: “In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development”. Ivan V. Ozerov, Ksenia V. Lezhnina, Evgeny Izumchenko, Artem V. Artemov, Sergey Medintsev, Quentin Vanhaelen, Alexander Aliper, Jan Vijg, Andreyan N. Osipov, Ivan Labat, Michael D. West, Anton Buzdin, Charles R. Cantor, Yuri Nikolsky, Nikolay Borisov, Irina Irincheeva, Edward Khokhlovich, David Sidransky, Miguel Luiz Camargo & Alex Zhavoronkov. Nature Communications 2016 vol: 7 pp: 13427.
DOI: 10.1038/ncomms13427
Adapted from press release by Biogerontology Research Foundation.

Virtual clinical trials use mathematical modelling to predict melanoma response

Researchers from Moffitt Cancer Center’s Integrated Mathematical Oncology (IMO) Department are overcoming the limitations of common preclinical experiments and clinical trials by studying cancer through mathematical modeling. A study led by Alexander “Sandy” Anderson, Ph.D., chair of IMO, and Eunjung Kim, Ph.D., an applied research scientist, shows how mathematical modeling can accurately predict patient responses to cancer drugs in a virtual clinical trial. This study was recently published in the November issue of the European Journal of Cancer.

Cancer is a complicated process based on evolutionary principles and develops as a result of changes in both tumor cells and the surrounding tumor environment. Similar to how animals can change and adapt to their surroundings, tumor cells can also change and adapt to their surroundings and to cancer treatments. Those tumor cells that adapt to their environment or treatment will survive, while tumor cells that are unable to adapt will die.

Preclinical studies with tumor cell models cannot measure these changes and adaptations in a context that accurately reflects what occurs in patients. “Purely experimental approaches are impractical given the complexity of interactions and timescales involved in cancer. Mathematical modeling can capture the fine mechanistic details of a process and integrate these components to extract fundamental behaviors of cells and of the interactions between cells and their environment,” said Anderson.

The research team wanted to demonstrate the power of mathematical modeling by developing a model that predicts the responses of melanoma to different drug treatments: no treatment, chemotherapy alone, AKT inhibitors, and AKT inhibitors plus chemotherapy in sequence and in combination. They then tested the model predictions in laboratory experiments with Keiran Smalley, Ph.D., director of the Donald A. Adam Comprehensive Melanoma and Skin Cancer Research Center of Excellence at Moffitt, to confirm that their model was accurate.

To determine the long-term outcome of therapy in different patients, the researchers developed a virtual clinical trial that tested different combinations of AKT inhibitors and chemotherapy in virtual patients. The researchers show that this Phase i trial (i for in silico, and representing the imaginary number), or virtual clinical trial, was able to reproduce patient responses comparable to those observed in the published results of an actual clinical trial. Importantly, their approach was able to stratify patient responses and predict a better treatment schedule for AKT inhibitors in melanoma patients, one that improves outcomes and reduces toxicities.
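
The flavor of such a virtual trial can be conveyed with a toy ordinary-differential-equation model in which each virtual patient is a parameter draw and treatment schedules are compared on simulated tumor burden. The equations, parameters, and schedules below are generic placeholders, not the authors' published melanoma model.

```python
# Generic virtual-patient sketch (not the authors' published melanoma model):
# simulate tumor burden under logistic growth with a drug-induced kill term,
# drawing each "virtual patient" from a distribution of growth rates.
import numpy as np
from scipy.integrate import solve_ivp

def tumor(t, y, growth_rate, capacity, kill_rate, drug_on):
    """Logistic tumor growth reduced by treatment whenever the drug is on."""
    (burden,) = y
    treatment = kill_rate * burden if drug_on(t) else 0.0
    return [growth_rate * burden * (1 - burden / capacity) - treatment]

def simulate_patient(growth_rate, schedule, days=120):
    sol = solve_ivp(tumor, (0, days), [0.1],
                    args=(growth_rate, 1.0, 0.08, schedule), dense_output=True)
    return sol.y[0][-1]  # final tumor burden at the end of the trial

rng = np.random.default_rng(1)
continuous = lambda t: True               # drug given continuously
intermittent = lambda t: (t % 28) < 14    # two weeks on, two weeks off
cohort = rng.normal(0.05, 0.01, size=100) # growth rates for 100 virtual patients
for name, schedule in [("continuous", continuous), ("intermittent", intermittent)]:
    outcomes = [simulate_patient(g, schedule) for g in cohort]
    print(f"{name}: mean final tumor burden {np.mean(outcomes):.3f}")
```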

“By using a range of mathematical modeling approaches targeted at specific types of cancer, Moffitt’s IMO Department is aiding in the development and testing of new treatment strategies, as well as facilitating a deeper understanding of why they fail. This multi-model, multi-scale approach has led to a diverse and rich interdisciplinary environment within our institution, one that is creating many novel approaches for the treatment and understanding of cancer,” Anderson said.

Citation: Kim, Eunjung, Vito W. Rebecca, Keiran SM Smalley, and Alexander RA Anderson. “Phase i trials in melanoma: A framework to translate preclinical findings to the clinic.” European Journal of Cancer 2016 vol: 67 pp: 213-222.
DOI: 10.1016/j.ejca.2016.07.024
Adapted from press release by Moffitt Cancer Center.