Imaging and Artificial Intelligence Tools Help Predict Response to Breast Cancer Therapy

Source: Memorial Sloan Kettering - On Cancer
Date: 10/23/2020

For people with breast cancer, biopsies have long been the gold standard for characterizing the molecular changes in a tumor, which can guide treatment decisions. Biopsies remove a small piece of tissue from the tumor so pathologists can study it under the microscope and make a diagnosis. Thanks to advances in imaging technologies and artificial intelligence (AI), however, experts can now assess the characteristics of the whole tumor rather than relying only on the small sample removed during biopsy.

In a study published October 8, 2020, in EBioMedicine, a team led by experts from Memorial Sloan Kettering reports that, for breast cancers with high levels of a protein called HER2, AI-enhanced imaging tools may also be useful for predicting how patients will respond to the targeted chemotherapy given before surgery to shrink the tumor (called neoadjuvant therapy). Ultimately, these tools could help guide treatment and make it more personalized.

“We’re not aiming to replace biopsies,” says MSK radiologist Katja Pinker, the study’s corresponding author. “But because breast tumors can be heterogeneous, meaning that not all parts of the tumor are the same, a biopsy can’t always give us the full picture.”

Harnessing the Power of Machine Learning

The study looked at data from 311 patients who had already been treated at MSK for early-stage breast cancer. All the patients had HER2-positive tumors — meaning that the tumors had high levels of the protein HER2, which can be targeted with drugs like trastuzumab (Herceptin®). The researchers wanted to see if AI-enhanced magnetic resonance imaging (MRI) could help them learn more about each specific tumor’s HER2 status.

One goal was to look at factors that could predict response to neoadjuvant therapy in people whose tumors were HER2-positive. “Breast cancer experts have generally believed that people with heterogeneous HER2 disease don’t do as well, but recently a study suggested they actually did better,” says senior author Maxine Jochelson, Director of Radiology at MSK’s Breast and Imaging Center. “We wanted to find out if we could use imaging to take a closer look at heterogeneity and then use those findings to study patient outcomes.”

The MSK team took advantage of AI and radiomics analysis, which uses computer algorithms to uncover disease characteristics. The computer helps reveal features on an MRI scan that can’t be seen with the naked eye.
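At their simplest, radiomic features are statistics computed over the voxels inside a segmented tumor. Purely as a rough illustration (not the study's actual pipeline, and with invented data and a hypothetical function name), a minimal Python sketch of a few first-order features might look like this:

```python
import numpy as np

def first_order_radiomics(image, mask):
    """Minimal first-order radiomic features from voxels inside a tumor mask
    (hypothetical example, not the study's feature set)."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=32)
    p = hist[hist > 0] / voxels.size
    return {
        "mean": voxels.mean(),               # average signal intensity
        "variance": voxels.var(),            # spread of intensities
        "skewness": ((voxels - voxels.mean()) ** 3).mean() / voxels.std() ** 3,
        "entropy": -(p * np.log2(p)).sum(),  # histogram randomness, a texture proxy
        "energy": (voxels ** 2).sum(),       # total squared intensity
    }

# Example: a simulated 3-D MRI volume and a cubic "tumor" mask
rng = np.random.default_rng(0)
volume = rng.normal(100, 15, size=(64, 64, 64))
mask = np.zeros_like(volume, dtype=bool)
mask[24:40, 24:40, 24:40] = True
print(first_order_radiomics(volume, mask))
```

Real radiomics pipelines compute hundreds of such features, including texture and shape descriptors, but the principle is the same: reduce the whole tumor volume to numbers an algorithm can learn from.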

Using an Algorithm to Personalize Treatment

In this study, the researchers used machine learning to combine radiomics analysis of the entire tumor with clinical findings and biopsy results. They took a closer look at the HER2 status of the 311 patients, with the aim of predicting their response to neoadjuvant chemotherapy. By comparing the computer models to actual patient outcomes, they were able to verify that the models were effective.
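To illustrate the general idea only (with synthetic data and a generic classifier standing in for the study's model; only the cohort size of 311 is taken from the article), combining radiomic and clinical features and checking the model against outcomes by cross-validation might be sketched like this:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n_patients = 311                          # cohort size from the article

# Hypothetical feature matrix: whole-tumor radiomic features from MRI
# plus clinical/biopsy variables (e.g., age, receptor status).
radiomic = rng.normal(size=(n_patients, 20))
clinical = rng.normal(size=(n_patients, 5))
X = np.hstack([radiomic, clinical])
y = rng.integers(0, 2, size=n_patients)   # 1 = responded to neoadjuvant therapy

# A simple logistic-regression pipeline stands in for the study's model.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated AUC mirrors the idea of verifying predictions
# against actual patient outcomes.
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {auc.mean():.2f}")
```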

“Our next step is to conduct a larger multicenter study that includes different patient populations treated at different hospitals and scanned with different machines,” Dr. Pinker says. “I’m confident that our results will be the same, but these larger studies are very important to do before you can apply these findings to patient treatment.”

“Once we’ve confirmed our findings, our goal is to perform risk-adaptive treatment,” Dr. Jochelson says. “That means we could use it to monitor patients during treatment and consider changing their chemotherapy if their early response is not ideal.”

Dr. Jochelson adds that conducting more frequent scans and using them to guide therapies has improved treatments for people with other cancers, including lymphoma. “We hope that this will get us to the next level of personalized treatment for breast cancer,” she concludes.

Artificial intelligence can diagnose and triage retinal diseases

Source: Cell Press
Date: 02/22/2018

While we might trust virtual assistants to give us directions or recommend a spot for lunch, trusting artificial intelligence (AI) with something as important as a medical diagnosis is a step that many people are not yet willing to take. A team of scientists in the United States and China aim to change that. In the February 22 issue of Cell, they describe a platform that uses big data and AI not only to recognize two of the most common retinal diseases but also to rate their severity. It can also distinguish between bacterial and viral pneumonia in children based on chest X-ray images.

“Macular degeneration and diabetic macular edema are the two most common causes of irreversible blindness but are both very treatable if they are caught early,” says senior author Kang Zhang, a professor of ophthalmology at the University of California, San Diego’s Shiley Eye Institute. “Deciding how and when to treat patients has historically been handled by a small community of specialists who require years of training and are concentrated mostly in urban areas. In contrast, our AI tool can be used anywhere in the world, especially in the rural areas. This is important in places like China, India, and Africa, where there are relatively fewer medical resources.”

The platform looked at more than 200,000 optical coherence tomography (OCT) images collected with a noninvasive scan that uses light waves to image the layers of the retina. Earlier studies have used machine learning to study retinal images, but the authors of the new study say their platform goes a step further by using a technique called transfer learning. This is a type of machine learning in which general knowledge related to classification can be transferred from one disease area to another and can enable the AI system to learn effectively with a much smaller dataset than traditional methods. In addition to making a medical diagnosis, this AI platform also can make referral and treatment recommendations, which is another step that goes beyond previous studies.
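As a rough illustration of transfer learning (not the authors' exact architecture; the network, class count, and training data below are stand-ins), the sketch starts from a model pretrained on everyday photographs, freezes its general-purpose visual layers, and trains only a small new classification head. That reuse of general knowledge is what allows learning from a comparatively small set of OCT images:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18, ResNet18_Weights

# Start from a network pretrained on everyday images (ImageNet); its early
# layers already encode general features such as edges and textures.
backbone = resnet18(weights=ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so only the new classifier head is trained,
# which is what lets transfer learning work with a small medical dataset.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final layer with a head for a hypothetical set of OCT classes.
num_classes = 4
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of OCT-sized images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
print(f"batch loss: {loss.item():.3f}")
```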

The researchers also used occlusion testing, which allowed them to show areas of greatest importance when reviewing the scan images. “Machine learning is often like a black box, where we don’t know exactly what is happening,” Zhang explains. “With occlusion testing, the computer can tell us where it is looking in an image to arrive at a diagnosis, so we can figure out why the system got the result it did. This makes the system more transparent and increases our trust in the diagnosis.”
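Occlusion testing itself is conceptually simple: hide one patch of the image at a time and watch how much the model's confidence drops. A minimal sketch, assuming a PyTorch image classifier such as the one above (the function name and patch sizes are invented for illustration):

```python
import torch

def occlusion_map(model, image, target_class, patch=32, stride=32):
    """Slide a gray patch across the image and record how much the model's
    confidence in `target_class` drops; large drops mark the regions
    the network relies on for its diagnosis."""
    model.eval()
    with torch.no_grad():
        base = torch.softmax(model(image.unsqueeze(0)), dim=1)[0, target_class]
        _, H, W = image.shape
        heatmap = torch.zeros((H - patch) // stride + 1,
                              (W - patch) // stride + 1)
        for i, y in enumerate(range(0, H - patch + 1, stride)):
            for j, x in enumerate(range(0, W - patch + 1, stride)):
                occluded = image.clone()
                occluded[:, y:y + patch, x:x + patch] = 0.5  # gray patch
                prob = torch.softmax(model(occluded.unsqueeze(0)),
                                     dim=1)[0, target_class]
                heatmap[i, j] = base - prob  # confidence lost when hidden
    return heatmap

# e.g., with the model sketched above:
# heatmap = occlusion_map(backbone, torch.randn(3, 224, 224), target_class=0)
```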

In the study, the researchers compared the diagnoses from the computer with those from five ophthalmologists who reviewed the scans. “With simple training, the machine could perform to the level of a well-trained ophthalmologist. It could generate a decision on whether or not the patient should be referred for treatment within 30 seconds and with more than 95% accuracy,” Zhang says.

He explains that diagnosing and treating retinal diseases normally involves visiting a general medical doctor or an optometrist, then a general ophthalmologist, and finally a retina specialist. This referral process can waste valuable time and resources for a disease in which prompt treatment can mean the difference between going blind and retaining sight. “Having an automated diagnosis could enable patients who would benefit from treatment to see a specialist and get that treatment much sooner and change outcomes,” he says.

Zhang estimates that the AI test will cost only a fraction of current diagnostic methods. “In addition to the economic benefit, there are significant non-economic benefits in increased personal and societal productivity, reduced patient wait times to see a doctor, and better access to care in remote areas,” he says.

The researchers also applied the tool to childhood pneumonia. By reviewing chest X-rays, the computer was able to determine the difference between viral and bacterial pneumonia with greater than 90% accuracy. Viral pneumonia is treated mainly with supportive care, whereas bacterial pneumonia requires swift initiation of antibiotic treatment. This showed that the tool is adaptable and can be used effectively with multiple types of medical images.

Zhang says this technology has many other potential applications, such as distinguishing between cancerous and noncancerous lesions on CT scans or MRIs, and his group has made their data and tools open source so that other groups can use them. “If we all work together as a community, we can develop better and better tests with higher computational power,” he says. “The future is more data, more computational power, and more experience of the people using this system, so that we can provide the best patient care possible, while still being cost effective.”

DeepPET Uses Artificial Intelligence to Generate Images of the Body’s Internal Activities

Source: Memorial Sloan Kettering - On Cancer
Date: 04/19/2019

Recently, the first-ever image of a black hole was splashed across front pages and filled up news feeds around the world. The image was made in part thanks to tremendous computing power that analyzed millions of gigabytes of data that had been collected from space.

Research that uses computer algorithms to create pictures from massive volumes of data is also going on at Memorial Sloan Kettering. Instead of probing the outer limits of the universe, this work seeks new ways to see what’s going on inside our bodies.

In a paper published in the May 2019 issue of Medical Image Analysis, MSK investigators led by medical physics researcher Ida Häggström report the details of a new method they developed for PET imaging. The system generates images more than 100 times faster than conventional techniques. The images are also of higher quality.

“Using deep learning, we trained our convolutional neural network to transform raw PET data into images,” Dr. Häggström says. “No one has done PET imaging in this way before.” Convolutional neural networks are computer systems that mimic aspects of human vision, learning which shapes and features in an image are important.

Deep learning is a type of artificial intelligence. In this technique, a computer system learns to recognize features in the training data and apply that knowledge to new, unseen data. This allows the system to solve tasks, such as classifying cancerous lesions, predicting treatment outcomes, or interpreting medical charts. The MSK researchers, including medical physicist Ross Schmidtlein and data scientist Thomas Fuchs, the study’s senior author, named their new technique DeepPET.
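The details of DeepPET's architecture are in the paper; purely as an illustration of the general idea, a drastically simplified encoder-decoder network that maps raw projection data straight to an image might look like the sketch below (all layer and data sizes are invented, not the published design):

```python
import torch
import torch.nn as nn

class TinyPETNet(nn.Module):
    """A much-simplified network in the spirit of DeepPET: it maps raw
    projection data (a sinogram) directly to a reconstructed image."""
    def __init__(self):
        super().__init__()
        # Encoder: compress the sinogram into a dense feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: expand the features back out into image space.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )

    def forward(self, sinogram):
        return self.decoder(self.encoder(sinogram))

net = TinyPETNet()
sinogram = torch.randn(1, 1, 128, 128)  # dummy raw PET data
image = net(sinogram)                   # reconstruction in one forward pass
print(image.shape)                      # torch.Size([1, 1, 128, 128])
```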

Peering into the Body’s Inner Workings

PET, short for positron-emission tomography, is one of several imaging technologies that have changed the diagnosis and treatment of cancer, as well as other diseases, over the past few decades. Other imaging technologies, such as CT and MRI, generate pictures of anatomical structures in the body. PET, on the other hand, allows doctors to see functional activity in cells.

The ability to see this activity is especially important for studying tumors, which tend to have dynamic metabolisms. PET uses biologically active molecules called tracers that can be detected by the PET scanner. Depending on which tracers are used, PET can image the uptake of glucose or cell growth in tissues, among other phenomena. Revealing this activity can help doctors distinguish between a rapidly growing tumor and a benign mass of cells.

PET is often used along with CT or MRI. The combination provides comprehensive information about a tumor’s location as well as its metabolic activity. Dr. Häggström says that if DeepPET can be developed for clinical use, it also could be combined with these other methods.

Improving on an Important Technique

There are drawbacks to PET as it’s currently performed. Processing the data and creating images can take a long time. Additionally, the images are not always clear. The researchers wanted to look for a better approach.

The team began by training the computer network using large amounts of PET data, along with the associated images. “We wanted the computer to learn how to use data to construct an image,” Dr. Häggström notes. The training used simulated data, designed to look like scans of a human body while remaining entirely artificial.

The images from the new system were not only generated much faster than with current PET technologies, but they were also clearer.

Conventionally, PET images are generated through an iterative process in which the current image estimate is repeatedly updated to better match the measured data. In DeepPET, the system has already learned the PET scanner’s physical and statistical characteristics, as well as how typical PET images look, so no iterations are required: the image is generated in a single, fast computation.
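For contrast, the conventional iterative approach can be sketched with MLEM, a standard textbook reconstruction algorithm (not necessarily the exact baseline used in the paper). The image estimate is repeatedly forward-projected, compared with the measured data, and multiplicatively corrected; each loop pass is one of the iterations DeepPET avoids:

```python
import numpy as np

def mlem(system_matrix, measured, n_iter=20):
    """Classic MLEM reconstruction: start from a uniform image and nudge it
    until its forward projection matches the measured data."""
    A = system_matrix
    sensitivity = A.T @ np.ones(A.shape[0])   # how often each pixel is seen
    image = np.ones(A.shape[1])               # uniform initial estimate
    for _ in range(n_iter):
        projection = A @ image                # forward-project the estimate
        ratio = measured / np.maximum(projection, 1e-12)
        image *= (A.T @ ratio) / sensitivity  # multiplicative MLEM update
    return image

# Toy example: a random 60-measurement system over a 4x4 (16-pixel) image.
rng = np.random.default_rng(1)
A = rng.random((60, 16))
truth = rng.random(16)
measured = rng.poisson(A @ truth * 50) / 50.0   # noisy measured data
print(mlem(A, measured).round(2))
```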

Dr. Häggström’s team is currently getting the system ready for clinical testing. She notes that MSK is the ideal place to do this kind of research. “MSK has clinical data that we can use to test this system. We also have expert radiologists who can look at these images and interpret what they mean for a diagnosis.

“By combining that expertise with the state-of-the-art computational resources that are available here, we have a great opportunity to have a direct clinical impact,” she adds. “The gain we’ve seen in reconstruction speed and image quality should lead to more efficient image evaluation and more reliable diagnoses and treatment decisions, ultimately leading to improved care for our patients.”

Machine Learning May Help Classify Cancers of Unknown Primary

Source: Memorial Sloan Kettering - On Cancer
Date: 11/14/2019

Experts estimate that between 2 and 5% of all cancers are classified as cancer of unknown primary (CUP), also called occult primary cancer. This means that the place in the body where the cancer began cannot be determined. Despite many advances in diagnostic technologies, the original site of some cancers will never be found. However, characteristic patterns of genetic changes occur in cancers of each primary site, and these patterns can be used to infer the origin of individual cases of CUP.

In a study published November 14 in JAMA Oncology, a team from Memorial Sloan Kettering reports that it has harnessed data from MSK-IMPACT to develop a machine-learning algorithm that helps determine where a tumor originates. MSK-IMPACT is a test that detects mutations and other critical changes in the genes of tumors. When combined with other pathology tests, the algorithm may be a valuable addition to the tool kit used to make more-accurate diagnoses.

“This tool will provide additional support for our pathologists to diagnose tumor types,” says geneticist Michael Berger, one of the senior authors of the new study. “We’ve learned through clinical experience that it’s still important to identify a tumor’s origin, even when conducting basket trials involving therapies targeting genes that are mutated across many cancers.”

Basket trials are designed to take advantage of targeted treatments by assigning drugs to people based on the mutations found in their tumors rather than where in the body the cancer originated. Yet doctors who prescribe these treatments have learned that, in many cases, the tissue or organ in which the tumor started is still an important factor in how well targeted therapies work. Vemurafenib (Zelboraf®) is one drug where this is the case. It is effective at treating melanoma with a certain mutation but doesn’t provide the same benefit in colon cancer, even when it’s driven by the same mutation.

Harnessing Valuable Data

Since MSK-IMPACT launched in 2014, more than 40,000 people have had their tumors tested. The test is now offered to all people treated for advanced cancer at MSK.

In addition to providing detailed information about thousands of patients’ tumors, the test has led to a wealth of genomic data about cancers. It has become a major research tool for learning more about cancer’s origins.

The primary way that pathologists diagnose tumors is to look through a microscope at tissue samples. They also examine the specific proteins expressed by cancers, which can help predict a cancer’s origin. But these tests do not always allow a definitive conclusion.

“However, there are occasionally cases where we think we know the diagnosis based on the conventional pathology analysis, but the molecular pattern we observe with MSK-IMPACT suggests that the tumor is something different,” Dr. Berger explains. “This new tool is a way to computationally formalize the process that our molecular pathologists have been performing based on their experience and knowledge of genomics. Going forward, it can help them confirm these diagnoses.”

“Because cancers that have spread usually retain the same pattern of genetic alterations as the primary tumor, we can leverage the specific genetic changes to suggest a cancer site that was not apparent by imaging or conventional pathologic testing,” says co-author David Klimstra, Chair of MSK’s Department of Pathology.

“Usually the first question from patients and doctors alike is: ‘Where did this cancer start?’ ” says study co-author Anna Varghese, a medical oncologist who treats many people with CUP. “Although even with MSK-IMPACT we can’t always determine where the cancer originated, the MSK-IMPACT results can point us in a certain direction with respect to further diagnostic tests to conduct or targeted therapies or immunotherapies to use.”

Collecting Data on Common Cancers

In the current study, the investigators used data from nearly 7,800 tumors representing 22 cancer types to train the algorithm. The researchers excluded rare cancers, for which not enough data were available at the time. But all the most common types are represented, including lung cancer, breast cancer, prostate cancer, and colorectal cancer.

The analysis incorporated not only individual gene mutations but also more complex genomic changes, including chromosomal gains and losses, changes in gene copy numbers, structural rearrangements, and broader mutational signatures.
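As a self-contained illustration of this setup (synthetic features and a generic random-forest classifier, not the study's actual model; only the tumor and class counts are taken from the article), training a tumor-type classifier on such genomic features might look like this:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n_tumors, n_types = 7800, 22          # cohort and class count from the article

# Hypothetical feature matrix: binary gene mutations plus continuous values
# standing in for copy-number changes and mutational-signature weights.
mutations = rng.integers(0, 2, size=(n_tumors, 300))
copy_number = rng.normal(size=(n_tumors, 40))
signatures = rng.random(size=(n_tumors, 30))
X = np.hstack([mutations, copy_number, signatures])
y = rng.integers(0, n_types, size=n_tumors)   # tumor-type labels (0..21)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A generic classifier stands in for the study's algorithm.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")

# For a tumor of unknown primary, class probabilities rank candidate origins.
probs = clf.predict_proba(X_te[:1])[0]
print("top candidate type:", clf.classes_[probs.argmax()],
      "with probability", round(probs.max(), 2))
```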

“The type of machine learning we use in this study requires a lot of data to train it to perform accurately,” says computational oncologist Barry Taylor, the study’s other senior author. “It would not have been possible without the large data set that we have already generated and continue to generate with MSK-IMPACT.”

Both Drs. Berger and Taylor emphasize that this is still early research that will need to be validated with further studies. In addition, because the method was developed specifically using test results from MSK-IMPACT, it may not be as accurate for genomic tests developed by other companies or institutions.

Improving Diagnosis for Cancer of Unknown Primary

MSK’s pathologists and other experts hope this tool will be particularly valuable in diagnosing tumors in people who have CUP. Up to 50,000 people in the United States are diagnosed with CUP every year. If validated for this purpose, MSK-IMPACT could make it easier to select the best therapies and to enroll people in clinical trials.

“This study emphasizes that the diagnosis and treatment of cancer is truly a multidisciplinary effort,” Dr. Taylor says. “We want to get all the data we can from each patient’s tumor so we can inform the diagnosis and select the best therapy for each person.”

This work was funded in part by Illumina, the Marie-Josée and Henry R. Kravis Center for Molecular Oncology, Cycle for Survival, National Institutes of Health grants (P30-CA008748, R01 CA204749, and R01 CA227534), an American Cancer Society grant (RSG-15-067-01-TBG), the Sontag Foundation, the Prostate Cancer Foundation, and the Robertson Foundation.

Dr. Varghese has received institutional research support from Eli Lilly and Company, Bristol-Myers Squibb, Verastem Oncology, BioMed Valley Discoveries, and Silenseed. Dr. Klimstra reports equity in Paige.AI, consulting activities with Paige.AI and Merck, and publication royalties from UpToDate and the American Registry of Pathology. Dr. Berger reports research funding from Illumina and advisory board activities with Roche. All stated activities were outside of the work described in this study.