August 18, 2020
FastMRI, a joint research initiative from Facebook AI and NYU Langone Health, aims to develop new ways to use AI to accelerate the MRI scanning process. Unlike most AI medical imaging projects, which try to use AI to automatically review images to detect anomalies, fastMRI is using AI to create images in a new way that requires much less data.
In a rigorous new clinical study, radiologists found fastMRI’s AI-generated images — created from roughly one-fourth of the data from the scanning machine — were diagnostically interchangeable with traditional MRIs. This means fastMRI can make the scanning process much faster.
By speeding up MRI scans, fastMRI could improve patients’ experience, help expand access to the technology, and open up new use cases. FastMRI will also help advance AI research in other domains.
In the new study, radiologists produced the same diagnoses with the AI-accelerated images and could not distinguish which images were produced with AI and which came from the slower traditional scans. All examiners rated the AI-generated images as being higher in quality than traditional scans.
As we’ve done previously with fastMRI and other Facebook AI research projects, we’re sharing our models and data because we believe open collaboration will accelerate progress and lead to future breakthroughs that will benefit everyone.
Magnetic resonance imaging (MRI) scanners, the machines that doctors use to image the body’s soft tissue and organs, have been crucial tools for medical diagnoses for almost half a century. But one of their major shortcomings is the time it takes to carry out a scan. Clinicians can spend up to an hour gathering sufficient data for a diagnostic MRI examination, which eats into a hospital’s demanding schedule and can feel like an eternity for patients who are anxious or in pain.
Today Facebook AI and NYU Langone Health are announcing a major research milestone that could significantly improve the patient experience, expand access to MRIs, and potentially enable new use cases for MRIs.
In a first-of-its-kind clinical study, the team of computer scientists and radiologists demonstrated that AI can generate equally accurate and detailed MRIs using about one-fourth of the raw data traditionally required for a full MRI. Since less data is required, MRI scans could run nearly 4x faster.
In the interchangeability study, which was published in the American Journal of Roentgenology, radiologists reviewed both traditional MRIs and images generated with an AI model from about 75 percent less raw data. The radiologists produced the same diagnoses with both and could not tell which were created using the new method.
The results represent the culmination of nearly two years of open research from Facebook AI and NYU Langone’s fastMRI initiative, a collaborative effort to improve medical imaging technology and advance research on using AI to generate images from limited data.
“This study is an important step toward clinical acceptance and utilization of AI-accelerated MRI scans, because it demonstrates for the first time that AI-generated images are indistinguishable in appearance from standard clinical MR exams and are interchangeable in regards to diagnostic accuracy,” says Michael P. Recht, MD, Louis Marx Professor and Chair of Radiology at NYU Langone.
Capturing all the raw (“k-space”) data for a diagnostic MRI examination can take up to an hour. Accelerating the process will mean patients will spend less time in a noisy, claustrophobia-inducing tube during scanning — an uncomfortable experience for everyone but especially difficult or even impossible for young children and the seriously ill.
Faster scanning could also expand access to MRIs by increasing the number of patients each machine can serve. Further, it could enable expanded uses for MRIs, potentially allowing doctors to use MRIs in place of X-rays and CT scans for some cases. This is particularly exciting because, unlike those other forms of scans, MRIs don’t use ionizing radiation.
Achieving these benefits poses a novel interdisciplinary challenge. To solve it, Facebook AI and NYU Langone Health have closely collaborated to combine our mutual expertise and resources across AI, medical imaging, radiology, and diagnostic evaluations. Since the start of fastMRI, we’ve also committed to working openly and sharing our tools, models, and datasets so that other researchers can learn from our approaches and contribute new ideas of their own. Following fastMRI’s previous releases of the largest-ever datasets of knee and brain MRIs complete with k-space data and our launch of the fastMRI image reconstruction challenge, we are now publishing our work and sharing our model code on GitHub. Sharing this suite of resources reflects fastMRI’s mission to engage the larger AI and medical imaging research community rather than to develop proprietary methods for accelerating MR imaging.
We hope this approach will have a far-reaching, lasting positive impact on the world and provide a model for future AI research collaborations.
Standard MRI reconstruction takes the k-space data collected by the scanner and applies a mathematical technique, such as an inverse Fourier transform, to generate MR images. FastMRI takes an entirely different approach, feeding a much smaller amount of k-space data into an AI model that has been trained to create an image that matches the ground truth.
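At its core, the classical pipeline is just an inverse Fourier transform of fully sampled k-space. A minimal 1-D sketch in pure Python (a toy stand-in for the 2-D transforms a real scanner uses — array sizes and values here are made up for illustration):

```python
import cmath

def dft(x):
    """Forward discrete Fourier transform: image -> k-space
    (models what the scanner measures)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT: the classical step that turns k-space back into an image."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

image = [0.0, 1.0, 2.0, 1.0]   # toy 1-D "image"
kspace = dft(image)            # what the scanner acquires
recon = [abs(v) for v in idft(kspace)]
# with the full k-space available, recon matches the original image
# up to floating-point error
```

When every k-space sample is present, this inversion is exact; the difficulty fastMRI tackles begins when most of those samples are missing.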
This poses a difficult AI challenge. The neural network must be able to effectively and reliably bridge the gaps in scanning data without sacrificing accuracy. While previous computer vision techniques in other domains have succeeded in generating believable images, the fastMRI model must take incomplete data and generate an image that both looks plausible and accurately matches the ground truth. A few missing or incorrectly modeled pixels could mean the difference between an all-clear scan and one in which radiologists find a torn ligament or a possible tumor.
Generating an accurate image isn’t the only challenge, however. The AI model must also create images that are visually indistinguishable from traditional MRI images. Radiologists spend many hours carefully analyzing these images, and an unfamiliar look and feel could make radiologists less likely to adopt fastMRI in their practices.
The interchangeability study published today was designed expressly to show that AI-generated images will reliably result in the same diagnoses and meet radiologists’ needs just as traditional images would. For the study, six expert radiologists reviewed two sets of de-identified and anonymized knee MRIs of 108 test patients who had been evaluated at NYU Langone Health. The radiologists in the study completed fellowships specifically in diagnosing and treating musculoskeletal disorders.
Two sets of MRIs were generated for each patient case: one set using the standard imaging techniques widely used in hospitals and clinics today, and one set using the fastMRI AI model with about 4x undersampled k-space data. The evaluators were not told which images were created with AI, and to limit the potential for recall bias, the evaluations of the standard images and AI-accelerated images were spaced at least one month apart.
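To make “4x undersampled” concrete, here is one common way such a mask is built: fully sample a small band of central (low-frequency) k-space lines and keep only every fourth line elsewhere. This is a generic illustration — the `center_fraction` value and the equispaced pattern are hypothetical choices, not necessarily the study’s exact sampling scheme:

```python
def cartesian_mask(num_lines, accel=4, center_fraction=0.08):
    """Illustrative Cartesian undersampling mask. Fully samples a small
    central band of k-space lines (which carry most image contrast),
    then keeps every `accel`-th line in the periphery. Parameters are
    hypothetical, for demonstration only."""
    center = max(1, int(round(num_lines * center_fraction)))
    start = (num_lines - center) // 2
    return [start <= i < start + center or i % accel == 0
            for i in range(num_lines)]

mask = cartesian_mask(320)      # 320 phase-encode lines, a typical order of magnitude
sampled = sum(mask)             # roughly 100 of the 320 lines are acquired
```

Lines where the mask is `False` are simply never measured, which is where the time savings come from — and where the AI model has to fill in the gaps.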
The radiologists systematically evaluated the images for pathology, such as meniscal tears, ligament abnormalities, and cartilage defects, and noted these in a structured report. Reviewers were also asked to grade image quality and to say whether they believed that the image had been created with AI. After the radiologists had reviewed the AI-accelerated and traditional MRIs for each case, results were compared to see whether there were discrepancies in their diagnoses.
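Comparing paired readings like this is a standard agreement problem. As a simplified illustration (the published study uses a formal interchangeability analysis, not this exact statistic), Cohen’s kappa measures chance-corrected agreement between two sets of binary findings:

```python
def cohens_kappa(a, b):
    """Chance-corrected agreement between two lists of binary readings
    (1 = pathology noted, 0 = not). Illustrative only; the published
    study's statistical methodology differs."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    p_a1, p_b1 = sum(a) / n, sum(b) / n
    expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)  # agreement by chance
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

# toy data: the same reader's findings on standard vs. AI-accelerated images
standard = [1, 0, 1, 1, 0, 0, 1, 0]
accelerated = [1, 0, 1, 1, 0, 0, 1, 0]   # identical diagnoses
kappa = cohens_kappa(standard, accelerated)  # 1.0 = perfect agreement
```

A kappa of 1 means the two image sets led to exactly the same findings, which is the outcome the interchangeability study was designed to test for.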
The study found there were no significant differences in the radiologists’ evaluations. They found the same abnormalities and pathology regardless of whether they were examining the standard or AI-generated MRIs. All the examiners judged the AI-accelerated images to be of better overall quality than the traditional ones. Five of the six radiologists were not able to correctly discern which images were generated using AI.
This is important because it confirms that the AI represents the ground truth as well as the traditional image does. The AI didn’t miss or add anything, and the images are just as good for diagnostic purposes as traditional MRIs.
This image shows the knee of a young patient who suffered an acute injury. It was generated using the standard, fully sampled MRI method. There is a bone bruise of the lateral femoral condyle (arrowheads) and an avulsion fracture of the lateral proximal tibia (“Segond” fracture) (arrow), which are associated with an anterior cruciate ligament tear (not seen on these images).
This image shows the same patient’s knee, but it was generated using the fastMRI model from 4x undersampled k-space data, which requires far less raw data. The pathology is seen equally well on both the standard, fully sampled image (above) and the accelerated image here. Note the decreased noise on the accelerated image.
The scans did not represent just one age group or gender. The patients’ ages ranged from 18 to 89, and there were 57 females and 51 males. Institutional review board approval was obtained for the data collection. The fastMRI data used in the project, including the scans used for the study, come from the open-source dataset that NYU Langone created in 2018. Before open-sourcing the data, NYU Langone ensured that all scans were de-identified, and no patient information was available to reviewers or researchers working on the fastMRI project. No Facebook user data was contributed to the creation of the fastMRI dataset.
Building a model that generates complete and highly accurate MRIs from far less raw data poses a significant challenge. Facebook AI researchers experimented with thousands of model variations and refined our approach in close collaboration with our NYU Langone partners, who provided feedback and identified qualitative issues that would not be clear to nonexperts and wouldn’t be measurable with standard metrics such as SSIM.
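SSIM, the metric mentioned above, compares the means, variances, and covariance of two images rather than raw pixel differences. Real implementations slide a local window over the image; this single-window sketch just shows the structure of the formula (the constants are conventional stabilizers for intensities in [0, 1]):

```python
def ssim_global(x, y, c1=1e-4, c2=9e-4):
    """Single-window structural similarity between two equal-length
    intensity lists in [0, 1]. Real SSIM averages this over local
    sliding windows; this global version is for illustration."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                       # means
    vx = sum((v - mx) ** 2 for v in x) / n                # variances
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

img = [0.1, 0.5, 0.9, 0.4]
noisy = [0.2, 0.4, 0.8, 0.5]
perfect = ssim_global(img, img)    # identical images score 1.0
degraded = ssim_global(img, noisy) # anything else scores below 1.0
```

The limitation the NYU Langone radiologists helped overcome is exactly what this formula hides: two reconstructions can have nearly identical SSIM scores while differing in subtle, diagnostically relevant ways.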
The resulting variational model can reconstruct MR images directly from undersampled raw data, using multicoil undersampled k-space data as input and applying a sequence of 12 refinement steps, known as cascades. This sequence of cascades allows the network to iteratively fill in missing data points. The structure of the neural network is informed by the underlying MR physics, which provides the right inductive bias for efficient training. The network was trained end-to-end to maximize the structural similarity of its output to gold-standard images using the largest open-source dataset of knee MRIs, which fastMRI released in 2018. The k-space data for the approximately 1,200 scans in the dataset was retrospectively undersampled for use in training the network. More details on our method and results are available in this paper.
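The cascade idea can be sketched in a few lines: each step refines the current image estimate, then a data-consistency step restores the k-space values that were actually measured, so the network can only invent what the scanner never acquired. The following toy 1-D, single-coil version uses a smoothing filter as a stand-in for the learned refinement network; everything here is a simplified illustration of the architecture’s shape, not the actual fastMRI model:

```python
import cmath

def dft(x):
    """Forward discrete Fourier transform: image -> k-space."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    """Inverse DFT: k-space -> image."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def data_consistency(x, measured, mask):
    """Overwrite the estimate's k-space values at sampled locations with
    the actually measured data -- the MR-physics constraint that anchors
    each cascade to the acquisition."""
    X = dft(x)
    return idft([measured[k] if mask[k] else X[k] for k in range(len(X))])

def refine(x):
    """Stand-in for the learned refinement step; here just a tiny circular
    smoothing filter. In the real model this is a trained network."""
    n = len(x)
    return [(x[(i - 1) % n] + 2 * x[i] + x[(i + 1) % n]) / 4 for i in range(n)]

def cascade_reconstruct(measured, mask, num_cascades=12):
    """Alternate refinement and data consistency, mimicking the cascaded
    structure of a variational-network-style reconstruction."""
    x = idft([measured[k] if mask[k] else 0.0 for k in range(len(mask))])
    for _ in range(num_cascades):
        x = data_consistency(refine(x), measured, mask)
    return [abs(v) for v in x]

image = [0.0, 1.0, 2.0, 1.0]
kspace = dft(image)
mask = [True, False, True, False]          # keep only half the samples
recon = cascade_reconstruct(kspace, mask)  # rough estimate from partial data
```

Because data consistency always reinstates the measured samples, the reconstruction is exact wherever the scanner actually looked; the cascades only have to fill in the rest.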
Our partnership also enabled us to adjust our model to create images that truly meet radiologists’ needs. Perhaps counterintuitively, we found that carefully adding low levels of noise to the AI-accelerated images actually enhanced the perceived sharpness and made the images appear more similar to images reconstructed using the slower traditional techniques. We developed a new kind of diagnostically aware noise addition in order to avoid corrupting or otherwise obscuring diagnostic content.
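The diagnostically aware scheme itself is described in the paper; as a generic illustration of the basic idea — matching the texture radiologists expect by adding a controlled, low level of noise — here is a plain Gaussian version, with a made-up `sigma` and no diagnostic awareness:

```python
import random

def add_low_level_noise(image, sigma=0.01, seed=0):
    """Add mild Gaussian noise to a reconstructed image. This is a plain,
    generic illustration only: the fastMRI work uses a diagnostically
    aware scheme (see the paper), and sigma here is a hypothetical
    parameter chosen for demonstration."""
    rng = random.Random(seed)  # seeded for reproducibility
    return [v + rng.gauss(0.0, sigma) for v in image]

smooth = [0.2, 0.4, 0.6, 0.4]          # toy "too clean" reconstruction
textured = add_low_level_noise(smooth)  # slightly noisier, more familiar look
```

The counterintuitive point stands regardless of the exact scheme: a small amount of added noise can make an AI-reconstructed image look sharper and more familiar to a trained eye than a perfectly clean one.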
To bring AI-accelerated MRI scans to clinical practice, fastMRI will need to show that the technology works equally well for all use cases and with other types of MRI scanning machines. We are planning additional studies with additional datasets, and we are developing alternative architectures for generating MRIs, such as integrating traditional parallel imaging methods into deep neural networks. We will work to validate fastMRI for brain scans, using the neuroimaging dataset we open-sourced earlier this year, as well as scans for other parts of the body. While this clinical study was performed using 3 Tesla (3 T) machines, which use an extremely high-powered magnet, we have also developed techniques to use fastMRI with 1.5 T machines, which are also in common use around the world.
Since fastMRI is an open source project, other researchers and the companies that build MRI scanners can build on our work and test our code on their machines. Our hope is that hardware vendors will get FDA approvals to bring these algorithms into production. This open approach — with the fastMRI Challenge, shared datasets, and open source tools — has been a driving force for fastMRI.
We look forward to sharing updates and learning how other researchers leverage our work to help us all improve MRI technology for the benefit of patients everywhere.