A new algorithm to solve memory problems in large-scale human brain simulations

Researchers have moved a step closer to the technology needed to simulate brain-scale neuronal networks on exascale-class supercomputers. Their findings are published in the journal Frontiers in Neuroinformatics.

Computer brain simulations
Credit: Forschungszentrum Jülich

The human brain is a complex network of approximately 100 billion neurons. With current computing power, it is impossible to simulate the brain in its entirety. Researchers instead use simulation software called NEST, a free, open-source simulation code in widespread use by the neuroscientific community and a core simulator of the European Human Brain Project.

“Since 2014, our software can simulate about one percent of the neurons in the human brain with all their connections,” says Markus Diesmann, Director at the Jülich Institute of Neuroscience and Medicine (INM-6). To achieve this impressive feat, the software requires the entire main memory of petascale supercomputers.

With NEST, the behavior of each neuron in the network is represented by a handful of mathematical equations. Future exascale computers, such as the post-K computer planned in Kobe and JUWELS in Jülich, will exceed the performance of today’s high-end supercomputers by 10- to 100-fold. For the first time, researchers will have the computer power available to simulate neuronal networks on the scale of the human brain.

While current simulation technology enabled researchers to begin studying large neuronal networks, it also represented a dead end on the way to exascale technology. Supercomputers are composed of about 100,000 small computers, called nodes, each equipped with many processors doing the actual calculations.

“Before a neuronal network simulation can take place, neurons and their connections need to be created virtually, which means that they need to be instantiated in the memory of the nodes. During the simulation, a neuron does not know on which of the nodes it has target neurons. Therefore, its short electric pulses need to be sent to all nodes. Each node then checks which of all these electric pulses are relevant for the virtual neurons that exist on this node,” explains Susanne Kunkel of KTH Royal Institute of Technology in Stockholm.

The current algorithm for network creation is efficient because all nodes construct their particular part of the network at the same time. However, sending all electric pulses to all nodes is not suitable for simulations on exascale systems.

“Checking the relevance of each electric pulse efficiently requires one Bit of information per processor for every neuron in the whole network. For a network of 1 billion neurons, a large part of the memory in each node is consumed by just this single Bit of information per neuron,” adds Markus Diesmann.

This is the main problem when simulating even larger networks: the amount of memory required per processor for these extra bits grows with the size of the neuronal network. At the scale of the human brain, each processor would need about 100 times more memory than is available in today's supercomputers. This, however, is unlikely to be the case in the next generation of supercomputers: the number of processors per compute node will increase, but the memory per processor and the number of compute nodes are expected to stay roughly the same.
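As a rough back-of-the-envelope illustration of that scaling (illustrative only; the real NEST bookkeeping data structures are more involved), one bit per neuron per process means the per-process memory for this bookkeeping grows linearly with the total number of neurons, no matter how many nodes share the work:

```python
# Rough, illustrative estimate of the "one bit per neuron per process" cost.
# Numbers are for illustration only; NEST's real data structures are more complex.

def bookkeeping_gigabytes(num_neurons):
    """Memory per process (in GB) if every process keeps one bit per neuron."""
    bits = num_neurons          # one bit for each neuron in the whole network
    return bits / 8 / 1e9       # bits -> bytes -> gigabytes

for n in [1e9, 10e9, 86e9]:     # 1 billion, 10 billion, roughly human-brain scale
    print(f"{n:.0e} neurons -> {bookkeeping_gigabytes(n):.2f} GB per process")
```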

The breakthrough published in Frontiers in Neuroinformatics is a new way of constructing the neuronal network in the supercomputer. With the new algorithm, the memory required on each node no longer increases with network size. At the beginning of the simulation, the new technology lets the nodes exchange information about who needs to send neuronal activity data to whom. Once this knowledge is available, the exchange of neuronal activity data between nodes can be organized such that a node receives only the information it requires. An additional bit for each neuron in the network is no longer necessary.
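The toy sketch below contrasts the two exchange schemes in miniature (a simplification for illustration, not the actual NEST implementation): in the old scheme every spike is delivered to every node, which then filters for relevance, while in the new scheme a spike is sent only to the nodes that host target neurons, based on connectivity information exchanged once at construction time.

```python
# Toy illustration of the two spike-exchange schemes; not the real NEST code.

NUM_NODES = 4

# Connectivity: source neuron -> set of nodes hosting at least one of its targets.
# In the new scheme this table is assembled once, when the network is constructed.
target_nodes_of = {
    "neuron_A": {0, 2},
    "neuron_B": {3},
}

def old_scheme(spiking_neuron):
    """Broadcast: every node receives every spike and filters it locally."""
    return {node for node in range(NUM_NODES)}

def new_scheme(spiking_neuron):
    """Targeted: a spike goes only to the nodes that actually need it."""
    return target_nodes_of[spiking_neuron]

print(old_scheme("neuron_A"))   # {0, 1, 2, 3} -- all nodes, relevant or not
print(new_scheme("neuron_A"))   # {0, 2}      -- only nodes with target neurons
```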

While testing their new ideas, the scientists made an additional key insight, reports Susanne Kunkel: “When analyzing the new algorithms we realized that our novel technology would not only enable simulations on exascale systems, but it would also make simulations faster on presently available supercomputers.”

In fact, as memory consumption is now under control, the speed of simulations becomes the main focus of further technological developments. For example, a large simulation of 0.52 billion neurons connected by 5.8 trillion synapses running on the supercomputer JUQUEEN in Jülich previously required 28.5 minutes to compute one second of biological time. With the improved data structures, the simulation time is reduced to 5.2 minutes.

“With the new technology we can exploit the increased parallelism of modern microprocessors a lot better than previously, which will become even more important in exascale computers,” remarks Jakob Jordan, lead author of the study, from Forschungszentrum Jülich.

“The combination of exascale hardware and appropriate software brings investigations of fundamental aspects of brain function, like plasticity and learning unfolding over minutes of biological time, within our reach,” adds Markus Diesmann.

With one of the next releases of the simulation software NEST, the researchers will make their achievement freely available to the community as open source.

“We have been using NEST for simulating the complex dynamics of the basal ganglia circuits in health and Parkinson’s disease on the K computer. We are excited to hear the news about the new generation of NEST, which will allow us to run whole-brain-scale simulations on the post-K computer to clarify the neural mechanisms of motor control and mental functions,” says Kenji Doya of Okinawa Institute of Science and Technology (OIST).

“The study is a wonderful example of the international collaboration in the endeavor to construct exascale computers. It is important that we have applications ready that can use these precious machines from the first day they are available,” concludes Mitsuhisa Sato of the RIKEN Advanced Institute for Computer Science in Kobe.

Citation: Jordan, Jakob, Tammo Ippen, Moritz Helias, Itaru Kitayama, Mitsuhisa Sato, Jun Igarashi, Markus Diesmann, and Susanne Kunkel. “Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers.” Frontiers in Neuroinformatics 12 (2018). doi:10.3389/fninf.2018.00002.

Research funding: Helmholtz Portfolio Supercomputing and Modeling for the Human Brain (SMHB), Helmholtz young investigator group, EU 7th Framework Programme (Human Brain Project), EU Horizon 2020 research and innovation programme (Human Brain Project).

Adapted from press release by Frontiers.

New flu simulation map using health and social media data

Researchers at the University of Chicago have created computer simulations to predict the spread of flu across the United States, drawing on nine years of data covering demographics, healthcare visits, and the geographic movements of 150 million people. The study is published in the journal eLife.

Simulation map of flu spread. Credit: Andrey Rzhetsky, UChicago

The researchers used deidentified patient data from more than 40 million US families in the Truven MarketScan database to analyze insurance claims for treatment of flu-like conditions from 2003 to 2009. For data on people’s movements, they used 1.7 billion geolocated Twitter posts. They also incorporated data on “social connectivity” (how often people visit friends and neighbors), air travel, weather, vaccination rates, and changes in the flu virus itself.

The study results show that seasonal flu outbreaks originate in warm, humid areas of the southern and southeastern U.S. and move northward. The team built models based on all of the above variables to understand what factors drive the northward spread of the flu each year; in the paper, they liken the typical outbreak to a forest fire.
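As a generic illustration of how several such data streams might be combined (this sketch is not the model used in the study, and the data below are synthetic), one could fit a simple regression relating weekly flu activity to weather, travel, and social-connectivity features:

```python
# Generic sketch of combining data streams to predict flu activity.
# Synthetic data and a plain linear model for illustration only; the study's
# actual model is far more detailed.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_weeks = 200
humidity = rng.uniform(0.2, 0.9, n_weeks)   # weather
travel   = rng.uniform(0.0, 1.0, n_weeks)   # incoming air travel (scaled)
visits   = rng.uniform(0.0, 1.0, n_weeks)   # social-connectivity proxy

# Synthetic "ground truth": drier weeks with more mixing -> more flu cases.
flu_cases = 100 * (1 - humidity) + 50 * travel + 30 * visits + rng.normal(0, 5, n_weeks)

X = np.column_stack([humidity, travel, visits])
model = LinearRegression().fit(X, flu_cases)
print("fitted coefficients:", model.coef_)
```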

The researchers were able to use these models to recreate three years of historical flu data fairly accurately.

Citation: Chattopadhyay, Ishanu, Emre Kiciman, Joshua W. Elliott, Jeffrey L. Shaman, and Andrey Rzhetsky. “Conjunction of factors triggering waves of seasonal influenza.” ELife 7 (2018). doi:10.7554/elife.30756.

Research funding: National Institutes of Health, Defense Advanced Research Projects Agency, Liz and Kent Dauten.

Adapted from press release by the University of Chicago.

Computer simulations help understand lung cancer drug resistance

Scientists from the Universities of Bristol and Parma, Italy, have used computer simulations to understand drug resistance to osimertinib. The findings of this study are published in the journal Chemical Science.

Osimertinib works by binding to the epidermal growth factor receptor (EGFR), which is overexpressed in many tumors. EGFR is involved in a pathway that signals for cell proliferation, and so is a target for drugs. Blocking the action of EGFR (inhibiting it) switches this signaling off, and so is a good way to treat tumors. Osimertinib is used to treat non-small-cell lung cancer (NSCLC) in cases where the cancer cells have a particular (T790M) mutant form of EGFR.

Osimertinib drug resistance computer simulations
Credit: University of Bristol

Although patients generally respond well to osimertinib, most acquire drug resistance within one year of treatment, so the drug stops working. Drug resistance arises because the EGFR protein mutates, so that the drug binds less tightly. One such mutation, called L718Q, was recently discovered in patients by the Medical Oncology Unit of the University Hospital of Parma. In this drug-resistant mutant, a single amino acid is changed. Unlike for other drug-resistant mutants, it was not at all clear how this change stops the drug from binding effectively, information potentially crucial for developing new drugs that overcome resistance.

The researchers used a range of advanced molecular simulation techniques: the chemical step of inhibition was modelled with combined quantum mechanics/molecular mechanics (QM/MM) methods, and the recognition step with molecular dynamics simulations and free energy calculations. These calculations gave the researchers an understanding of the molecular basis of the drug resistance, knowledge that could be exploited in the future to design inhibitors that overcome it.
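As a generic thermodynamic illustration of what such free energy calculations quantify (the numbers below are hypothetical, not results from the paper), a computed loss of binding free energy can be translated into a fold-change in binding affinity via K ∝ exp(−ΔG/RT):

```python
# Illustrative only: converting a hypothetical loss of binding free energy
# (delta-delta-G) caused by a resistance mutation into a fold-change in affinity.
import math

R = 1.987e-3      # gas constant in kcal/(mol*K)
T = 310.0         # approximate body temperature in K

def affinity_fold_change(ddG_kcal_per_mol):
    """Fold-weakening of binding for a destabilization ddG > 0 kcal/mol."""
    return math.exp(ddG_kcal_per_mol / (R * T))

# A hypothetical 2 kcal/mol penalty from a mutation in the binding site:
print(f"{affinity_fold_change(2.0):.0f}-fold weaker binding")
```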

Citation: Callegari, D., K. E. Ranaghan, C. J. Woods, R. Minari, M. Tiseo, M. Mor, A. J. Mulholland, and A. Lodola. “L718Q mutant EGFR escapes covalent inhibition by stabilizing a non-reactive conformation of the lung cancer drug osimertinib.” Chemical Science, 2018. doi:10.1039/c7sc04761d.

Adapted from press release by University of Bristol.

Finding new uses for old medication using computer program DrugPredict

Researchers at Case Western Reserve University School of Medicine have developed a computer program called DrugPredict to discover new indications for old drugs. This program matches existing data about FDA-approved drugs to diseases, and predicts potential drug efficacy.

In a recent study published in Oncogene, the researchers successfully translated DrugPredict results into the laboratory and showed that common pain medications, non-steroidal anti-inflammatory drugs (NSAIDs), could have applications for epithelial ovarian cancer.

DrugPredict was developed by co-first author QuanQiu Wang of ThinTek, LLC, and co-senior author Rong Xu, PhD, associate professor of biomedical informatics in the department of population and quantitative health sciences at Case Western Reserve University School of Medicine. The program works by connecting computer-generated drug profiles including mechanisms of action, clinical efficacy, and side effects with information about how a molecule may interact with human proteins in specific diseases, such as ovarian cancer.

DrugPredict searches databases of FDA-approved drugs, chemicals, and other naturally occurring compounds. It finds compounds with characteristics related to a disease-fighting mechanism. These include observable characteristics (phenotypes) and genetic factors that may influence drug efficacy. Researchers can collaborate with Xu to input a disease into DrugPredict and receive an output list of drugs or potential drugs with molecular features that correlate with strategies to fight the disease.
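A toy sketch of the ranking idea (purely illustrative; DrugPredict's actual scoring integrates many more data types, including mechanisms of action and side effects) is to score each candidate drug by the overlap between its profile and the feature set associated with the disease, then sort:

```python
# Toy illustration of ranking drugs by overlap with disease-associated features.
# The feature sets below are made up for illustration.

disease_features = {"inflammation", "COX2", "prostaglandin", "angiogenesis"}

drug_profiles = {
    "drug_X": {"COX2", "prostaglandin", "inflammation"},
    "drug_Y": {"serotonin", "dopamine"},
    "drug_Z": {"angiogenesis", "VEGF"},
}

def overlap_score(profile, features):
    """Jaccard-style overlap between a drug profile and the disease features."""
    return len(profile & features) / len(profile | features)

ranked = sorted(drug_profiles,
                key=lambda d: overlap_score(drug_profiles[d], disease_features),
                reverse=True)
print(ranked)   # drugs most related to the disease mechanism come first
```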

In the Oncogene study, DrugPredict produced a prioritized list of 6,996 chemicals with potential to treat epithelial ovarian cancer. At the top of the list were 15 drugs already FDA-approved to treat the cancer, helping to validate the DrugPredict approach. Of other FDA-approved medications on the list, NSAIDs ranked significantly higher than other drug classes. The researchers combined the DrugPredict results with anecdotal evidence about NSAIDs and cancer before confirming DrugPredict results in their laboratory experiments.

Citation: Nagaraj, A. B., Q. Q. Wang, P. Joseph, C. Zheng, Y. Chen, O. Kovalenko, S. Singh, A. Armstrong, K. Resnick, K. Zanotti, S. Waggoner, R. Xu, and A. Difeo. “Using a novel computational drug-repositioning approach (DrugPredict) to rapidly identify potent drug candidates for cancer treatment.” Oncogene, 2017. doi:10.1038/onc.2017.328.

Research funding: Gynecological Cancer Translation Research Program, Case Comprehensive Cancer Center, The Mary Kay Foundation, NIH/Eunice Kennedy Shriver National Institute of Child Health & Human Development.

Adapted from press release by Case Western Reserve University.

Computational model uncovers progression of HIV infection in brain

A University of Alberta research team has uncovered the progression of HIV infection in the brain using a new mathematical model. The team is utilizing this model to develop a nasal spray to administer antiretroviral medication more effectively. Their research is published in the Journal of NeuroVirology.

The research was done by PhD student Weston Roda and Prof. Michael Li. They used data from patients who died five to 15 years after they were infected, as well as known biological processes of HIV, to build a model that predicts the growth and progression of HIV in the brain from the moment of infection onward. It is the first model of an infectious disease in the brain.

“The nature of the HIV virus allows it to travel across the blood-brain barrier in infected macrophages, or white blood cells, as early as two weeks after infection. Antiretroviral drugs, the therapy of choice for HIV, cannot enter the brain so easily,” said Roda. This creates what is known as a viral reservoir, a place in the body where the virus can lie dormant and is relatively inaccessible to drugs.

Prior to this study, scientists could only study brain infection at autopsy. The new model allows scientists to backtrack, seeing the progression and development of HIV infection in the brain. Using this information, researchers can determine what level of effectiveness is needed for antiretroviral therapy in the brain to decrease active infection.
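A minimal compartmental sketch of the kind of dynamics such a model can describe is shown below. This is a generic toy with made-up parameters and simplified compartments, not the published model: susceptible brain target cells become infected in proportion to the viral load, infected cells produce virus, and antiretroviral therapy scales down the infection rate.

```python
# Generic toy ODE model of a brain viral reservoir; NOT the published model,
# only an illustration of the compartmental-modelling approach.
import numpy as np
from scipy.integrate import odeint

def brain_hiv(y, t, influx, beta, death, clearance, efficacy):
    target, infected, virus = y
    infection = (1 - efficacy) * beta * target * virus   # therapy lowers infection rate
    d_target   = influx - infection - 0.01 * target
    d_infected = infection - death * infected
    d_virus    = 100 * infected - clearance * virus
    return [d_target, d_infected, d_virus]

t = np.linspace(0, 3650, 1000)                  # ten years, in days
y0 = [1e5, 10, 1.0]                             # initial cell and virus counts
solution = odeint(brain_hiv, y0, t, args=(1e3, 1e-7, 0.1, 3.0, 0.5))
print("state after ten years:", solution[-1])
```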

“The more we understand and can target treatment toward viral reservoirs, the closer we get to developing total suppression strategies for HIV infection,” said Roda. A research team led by Chris Power, Roda’s co-supervisor who is a professor in the Division of Neurology, is planning clinical trials for a nasal spray that would get the drugs into the brain faster, with critical information on dosage and improvement rate provided by Roda’s model.

“Our next steps are to understand other viral reservoirs, like the gut, and develop models similar to this one, as well as understand latently infected cell populations in the brain,” said Roda. “With antiretroviral therapy, infected cells can go into a latent stage. The idea is to determine the size of the latently infected population so that clinicians can develop treatment strategies.”

Citation: Roda, Weston C., Michael Y. Li, Michael S. Akinwumi, Eugene L. Asahchop, Benjamin B. Gelman, Kenneth W. Witwer, and Christopher Power. “Modeling brain lentiviral infections during antiretroviral therapy in AIDS.” Journal of NeuroVirology, 2017. doi:10.1007/s13365-017-0530-3.
Adapted from press release by University of Alberta.

Computational model to track flu using Twitter data

An international team led by Alessandro Vespignani from Northeastern University has developed a computational model to predict the spread of the flu in real time. The model uses posts on Twitter in combination with key parameters of each season’s epidemic, including the incubation period of the disease, the immunization rate, how many people an individual with the virus can infect, and the viral strains present. When tested against official influenza surveillance systems, the model has been shown to forecast the disease’s evolution up to six weeks in advance with 70 to 90 percent accuracy.
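Those parameters map naturally onto a compartmental epidemic model. The minimal SEIR sketch below (illustrative only; the published model additionally assimilates Twitter-derived signals and strain information) shows where the incubation period, immunization rate, and transmissibility enter:

```python
# Minimal SEIR sketch showing where the listed epidemic parameters enter.
# Illustrative only; the actual model also incorporates Twitter data.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, incubation_days, infectious_days):
    S, E, I, R = y
    sigma = 1.0 / incubation_days      # rate of becoming infectious
    gamma = 1.0 / infectious_days      # recovery rate
    N = S + E + I + R
    new_infections = beta * S * I / N  # "how many people one case can infect"
    return [-new_infections,
            new_infections - sigma * E,
            sigma * E - gamma * I,
            gamma * I]

N, vaccinated_fraction = 1e6, 0.3                     # immunization rate
y0 = [N * (1 - vaccinated_fraction), 10, 1, N * vaccinated_fraction]
t = np.linspace(0, 180, 180)                          # one flu season, in days
solution = odeint(seir, y0, t, args=(0.5, 2.0, 4.0))  # beta, incubation, infectious
print(f"peak number infectious: {solution[:, 2].max():.0f}")
```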

The paper on the novel model received a coveted Best Paper Honorable Mention award at the 2017 International World Wide Web Conference last month following its presentation.

While the paper reports results using Twitter data, the researchers note that the model can work with data from many other digital sources, too, as well as online surveys of individuals such as influenzanet, which is very popular in Europe.

“Our model is a work in progress,” emphasizes Vespignani. “We plan to add new parameters, for example, school and workplace structure.”

Adapted from press release by Northeastern University.

Dynamic undocking, a new computational method for efficient drug research

Researchers of the University of Barcelona have developed a more efficient computational method to identify new drugs. The study, published in the scientific journal Nature Chemistry, proposes a new way of facing the discovery of molecules with biological activity.

The researchers devised dynamic undocking (DUck), a fast computational method to calculate the work necessary to reach a quasi-bound state at which the ligand has just broken the most important native contact with the receptor. Since it is based on a different principle, this method complements conventional tools and advances the path toward rational drug design. ICREA researcher Xavier Barril, from the Faculty of Pharmacy and Food Sciences and the Institute of Biomedicine of the University of Barcelona (IBUB), led this project, in which Professor Francesc Xavier Luque and PhD student Sergio Ruiz-Carmona, members of the same faculty, also participated.

Improving the efficiency and effectiveness of drug discovery is a key goal in pharmaceutical research. In this process, researchers seek molecules that can bind to a target protein and modify its behavior according to clinical needs. “All current methods to predict whether a molecule will bind the desired protein are based on affinity, that is, on the complex’s thermodynamic stability. What we are proving is that molecules have to create complexes that are structurally stable, and that it is possible to distinguish between active and inactive by looking at which specific interactions are hard to break,” says Professor Xavier Barril.
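The quantity at the heart of the method can be pictured as the work needed to pull the ligand away until that key contact breaks, W = ∫ F dx. The sketch below integrates a made-up force profile along the pulling distance; it is a schematic illustration, not the published DUck protocol:

```python
# Schematic estimate of the work to break a key ligand-receptor contact,
# W = integral of force over distance. The force profile is made up.
import numpy as np

distance = np.linspace(0.0, 0.25, 200)      # nm, from bound to quasi-bound state
# Toy force profile: resistance rises, peaks as the contact breaks, then vanishes.
force = 400.0 * np.exp(-((distance - 0.10) ** 2) / 0.002)

# Trapezoidal integration of force over distance gives the work.
work = np.sum(0.5 * (force[1:] + force[:-1]) * np.diff(distance))
print(f"work to reach the quasi-bound state: {work:.1f} (arbitrary units)")
```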

This approach has been applied in software that identifies the molecules most likely to bind the targeted protein. “The method allows selecting molecules that can be starting points to create new drugs,” says Barril. “Moreover,” he continues, “the process is complementary with existing methods and allows multiplying the efficiency of current processes fivefold at a lower computational cost. We are actually using it successfully in several projects in the field of cancer and infectious diseases, among others.”

This work introduces a new way of thinking about the ligand-protein interaction. “We don’t look only at the equilibrium situation, where the two molecules make the best possible interactions; we also think about how the complex will break, where the breaking points are, and how we can improve the drug to make it more resistant to separation. Now we have to focus on this phenomenon to understand it better and see if, by creating more complex models, we can still improve our predictions,” says the researcher. The team at the University of Barcelona is already using this method, which is open to the whole scientific community.

Citation: Ruiz-Carmona, Sergio, Peter Schmidtke, F. Javier Luque, Lisa Baker, Natalia Matassova, Ben Davis, Stephen Roughley, James Murray, Rod Hubbard, and Xavier Barril. “Dynamic undocking and the quasi-bound state as tools for drug discovery.” Nature Chemistry, 2016. doi:10.1038/nchem.2660.
Adapted from press release by University of Barcelona.

Computer models to analyze Huntington disease pathology

Rice University scientists have uncovered new details about how a repeating nucleotide sequence in the gene for a mutant protein may trigger Huntington’s and other neurological diseases. Researchers used computer models to analyze proteins suspected of misfolding and forming plaques in the brains of patients with neurological diseases. Their simulations confirmed experimental results by other labs that showed the length of repeating polyglutamine sequences contained in proteins is critical to the onset of disease. The study led by Rice bioscientist Peter Wolynes appears in the Journal of the American Chemical Society.

Glutamine is the amino acid coded for by the genomic trinucleotide CAG. Repeating glutamines, called polyglutamines, are normal in huntingtin proteins, but when the DNA is copied incorrectly, the repeating sequence of glutamines can become too long. The result can be diseases like Huntington’s or spinocerebellar ataxia.
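As a small illustration of the genetics just described (a toy example on a made-up sequence, not patient data), one can count the longest uninterrupted run of CAG codons in a stretch of DNA and compare it with the roughly 36-repeat threshold discussed below:

```python
# Toy example: count the longest run of CAG codons in a DNA sequence.
# The sequences below are made up for illustration.
import re

def longest_cag_run(dna):
    """Length, in repeats, of the longest uninterrupted CAG tract."""
    runs = re.findall(r"(?:CAG)+", dna)
    return max((len(run) // 3 for run in runs), default=0)

normal_range = "ATG" + "CAG" * 21 + "CAACAGCCGCCA"
expanded     = "ATG" + "CAG" * 41 + "CAACAGCCGCCA"

for label, seq in [("normal-range", normal_range), ("expanded", expanded)]:
    repeats = longest_cag_run(seq)
    print(f"{label}: {repeats} CAG repeats (disease-associated: {repeats >= 36})")
```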

Simulations at Rice show how a repeating sequence in a mutant protein may trigger Huntington’s and other neurological diseases.
Credit: Mingchen Chen/Rice University

The number of repeats of glutamine can grow as the genetic code information is passed down through generations. That means a healthy parent whose huntingtin gene encodes proteins with 35 repeats may produce a child with 36 repeats. A person having the longer repeat is likely to develop Huntington’s disease.

Aggregation in Huntington’s typically begins only when polyglutamine chains reach a critical length of 36 repeats. Studies have demonstrated that longer repeat chains can make the disease more severe and its onset earlier.

The paper builds upon techniques used in an earlier study of amyloid beta proteins, the lab’s first attempt to model the energy landscape of amyloid aggregation, which has been implicated in Alzheimer’s disease. This time, Wolynes and his team wanted to know how the varying length of the repeats, from as few as 20 to as many as 50, influences how aggregates form.

The Rice team found that at intermediate lengths between 20 and 30 repeats, polyglutamine sequences can choose between straight or hairpin configurations. While longer and shorter sequences form aligned fiber bundles, simulations showed intermediate sequences are more likely to form disordered, branched structures.

Mutations that would encourage polyglutamine sequences to remain unfolded would raise the energy barrier to aggregation, they found. “What’s ironic is that while Huntington’s has been classified as a misfolding disease, it seems to happen because the protein, in the bad case of longer repeats, carries out an extra folding process that it wasn’t supposed to be doing,” Wolynes said.

The team’s ongoing study is now looking at how the complete huntingtin protein, which contains parts in addition to the polyglutamine repeats, aggregates.

Citation: Chen, Mingchen, MinYeh Tsai, Weihua Zheng, and Peter G. Wolynes. “The Aggregation Free Energy Landscapes of Polyglutamine Repeats.” Journal of the American Chemical Society (2016). doi:10.1021/jacs.6b08665.
Research funding: NIH/National Institute of General Medical Sciences, Ministry of Science and Technology of Taiwan.
Adapted from press release by Rice University.

New tool to discover biomarkers of aging using In silico Pathway Activation Network Decomposition Analysis (iPANDA)

The Biogerontology Research Foundation has announced an international collaboration on signaling pathway perturbation-based transcriptomic biomarkers of aging. On November 16th, scientists at the Biogerontology Research Foundation, alongside collaborators from Insilico Medicine, Inc., the Johns Hopkins University, Albert Einstein College of Medicine, Boston University, Novartis, Nestle and BioTime Inc., announced the publication in Nature Communications of their proof-of-concept study demonstrating the utility of iPANDA, a novel approach for analyzing transcriptomic, metabolomic and signalomic data sets.

“Given the high volume of data being generated in the life sciences, there is a huge need for tools that make sense of that data. As such, this new method will have widespread applications in unraveling the molecular basis of age-related diseases and in revealing biomarkers that can be used in research and in clinical settings. In addition, tools that help reduce the complexity of biology and identify important players in disease processes are vital not only to better understand the underlying mechanisms of age-related disease but also to facilitate a personalized medicine approach. The future of medicine is in targeting diseases in a more specific and personalized fashion to improve clinical outcomes, and tools like iPANDA are essential for this emerging paradigm,” said João Pedro de Magalhães, PhD, a trustee of the Biogerontology Research Foundation.

The algorithm, iPANDA, applies deep learning algorithms to complex gene expression data sets and signaling pathway activation data for the purposes of analysis and integration. The proof-of-concept article demonstrates that the system is capable of significantly reducing the noise and dimensionality of transcriptomic data sets and of identifying patient-specific pathway signatures in breast cancer patients that characterize their response to Taxol-based neoadjuvant therapy.

The system represents a substantially new approach to the analysis of microarray data sets, especially as it pertains to data obtained from multiple sources, and appears to be more scalable and robust than other current approaches to the analysis of transcriptomic, metabolomic and signalomic data obtained from different sources. The system also has applications in rapid biomarker development and drug discovery, discrimination between distinct biological and clinical conditions, and the identification of functional pathways relevant to disease diagnosis and treatment, and ultimately in the development of personalized treatments for age-related diseases.
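A highly simplified sketch of the pathway-scoring idea appears below (illustrative only; the published iPANDA method additionally weights genes by pathway topology and groups co-expressed genes into modules): gene-level expression fold-changes are aggregated into one activation score per pathway.

```python
# Simplified illustration of pathway activation scoring: aggregate gene-level
# expression fold-changes into one score per pathway. Gene lists and values
# are made up; the published iPANDA method also uses topology-based weights.

# log2 fold-changes of genes in a tumour sample versus normal tissue (made up).
log2_fold_change = {"EGFR": 2.1, "ERBB2": 1.4, "TP53": -1.8, "BRCA1": -0.6, "MYC": 0.9}

pathways = {
    "EGFR_signalling": ["EGFR", "ERBB2", "MYC"],
    "DNA_repair":      ["TP53", "BRCA1"],
}

def pathway_score(genes):
    """Mean fold-change of a pathway's genes: a crude activation score."""
    values = [log2_fold_change.get(gene, 0.0) for gene in genes]
    return sum(values) / len(values)

for name, genes in pathways.items():
    print(f"{name}: {pathway_score(genes):+.2f}")
```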

“While the team predicted and compared the response of breast cancer patients to Taxol-based neoadjuvant therapy as their proof of concept, the application of this approach to patient-specific responses to biomedical gerontological interventions (e.g. to geroprotectors, a clear focus of the team’s past efforts), to the development of both generalized and personalized biomarkers of aging, and to the characterization and analysis of minute differences in aging over time, between individuals, and between different organisms would represent a promising and exciting future application,” said Franco Cortese, Deputy Director of the Biogerontology Research Foundation.

Citation: Ozerov, Ivan V., Ksenia V. Lezhnina, Evgeny Izumchenko, Artem V. Artemov, Sergey Medintsev, Quentin Vanhaelen, Alexander Aliper, Jan Vijg, Andreyan N. Osipov, Ivan Labat, Michael D. West, Anton Buzdin, Charles R. Cantor, Yuri Nikolsky, Nikolay Borisov, Irina Irincheeva, Edward Khokhlovich, David Sidransky, Miguel Luiz Camargo, and Alex Zhavoronkov. “In silico Pathway Activation Network Decomposition Analysis (iPANDA) as a method for biomarker development.” Nature Communications 7 (2016): 13427. doi:10.1038/ncomms13427.
Adapted from press release by the Biogerontology Research Foundation.

Virtual clinical trials use mathematical modelling to predict melanoma response

Researchers from Moffitt Cancer Center’s Integrated Mathematical Oncology (IMO) Department are overcoming the limitations of common preclinical experiments and clinical trials by studying cancer through mathematical modeling. A study led by Alexander “Sandy” Anderson, Ph.D., chair of IMO, and Eunjung Kim, Ph.D., an applied research scientist, shows how mathematical modeling can accurately predict patient responses to cancer drugs in a virtual clinical trial. This study was recently published in the November issue of the European Journal of Cancer.

Cancer is a complicated process based on evolutionary principles and develops as a result of changes in both tumor cells and the surrounding tumor environment. Just as animals can change and adapt to their surroundings, tumor cells can change and adapt to their surroundings and to cancer treatments. Tumor cells that adapt to their environment or treatment survive, while tumor cells that are unable to adapt die.

Preclinical studies with tumor cell models cannot accurately measure these changes and adaptations in a context that reflects what occurs in patients. “Purely experimental approaches are impractical given the complexity of interactions and timescales involved in cancer. Mathematical modeling can capture the fine mechanistic details of a process and integrate these components to extract fundamental behaviors of cells and between cells and their environment,” said Anderson.

The research team wanted to demonstrate the power of mathematical modeling by developing a model that predicts the responses of melanoma to different drug treatments: no treatment, chemotherapy alone, AKT inhibitors, and AKT inhibitors plus chemotherapy in sequence and in combination. They then tested the model predictions in laboratory experiments with Keiran Smalley, Ph.D., director of the Donald A. Adam Comprehensive Melanoma and Skin Cancer Research Center of Excellence at Moffitt, to confirm that their model was accurate.

To determine the long-term outcome of therapy in different patients, the researchers developed a virtual clinical trial that tested different combinations of AKT inhibitors and chemotherapy in virtual patients. The researchers show that this Phase i trial (i for in silico, and representing the imaginary number), or virtual clinical trial, reproduced patient responses comparable to those observed in the published results of an actual clinical trial. Importantly, their approach was able to stratify patient responses and predict a better treatment schedule for AKT inhibitors in melanoma patients, one that improves patient outcomes and reduces toxicities.
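The sketch below is a toy version of a “virtual patient” experiment (purely illustrative, with made-up parameters; the published work uses a calibrated mechanistic model of melanoma cell populations): tumour burden is simulated under different treatment schedules by switching a drug-induced kill term on and off over time.

```python
# Toy "virtual patient": exponential tumour growth with a drug-induced kill
# term that follows a treatment schedule. Parameters are made up; this is
# not the published melanoma model.
import math

def simulate(schedule, growth=0.05, kill=0.12, days=120, initial_volume=1.0):
    """Tumour volume over time; schedule(day) returns True when the drug is on."""
    volume = initial_volume
    history = []
    for day in range(days):
        net_rate = growth - (kill if schedule(day) else 0.0)
        volume *= math.exp(net_rate)          # one-day growth/shrinkage step
        history.append(volume)
    return history

continuous   = simulate(lambda day: True)
intermittent = simulate(lambda day: (day // 14) % 2 == 0)  # two weeks on, two off
print(f"final volume, continuous dosing:   {continuous[-1]:.3f}")
print(f"final volume, intermittent dosing: {intermittent[-1]:.3f}")
```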

“By using a range of mathematical modeling approaches targeted at specific types of cancer, Moffitt’s IMO Department is aiding in the development and testing of new treatment strategies, as well as facilitating a deeper understanding of why they fail. This multi-model, multi-scale approach has led to a diverse and rich interdisciplinary environment within our institution, one that is creating many novel approaches for the treatment and understanding of cancer,” Anderson said.

Citation: Kim, Eunjung, Vito W. Rebecca, Keiran S. M. Smalley, and Alexander R. A. Anderson. “Phase i trials in melanoma: A framework to translate preclinical findings to the clinic.” European Journal of Cancer 67 (2016): 213-222. doi:10.1016/j.ejca.2016.07.024.
Adapted from press release by Moffitt Cancer Center.