  • Writer: Campbell Arnold
  • Sep 9
  • 5 min read

Updated: Sep 10



“[This] is really about giving MR scanners a memory. One day, the more we see you, the faster we will be able to scan you.”


Dan Sodickson, Chief of Innovation, NYU Radiology Department



Welcome to Radiology Access, your biweekly newsletter on the people, research, and technology transforming global imaging access.


In this issue, we cover:

  • OmniMRI: One Model to Rule Them All?

  • Trust, but Verify: Smarter MR Reconstruction

  • AI-Guided, On-the-Fly Protocol Adaptation


If you want to stay up-to-date with the latest in Radiology and AI, then don't forget to subscribe!



OmniMRI: One Model to Rule Them All?

How a single framework could unify fragmented datasets into a radiology foundation model.



A recent arXiv publication introduced OmniMRI, an ambitious vision-language foundation model designed to assist radiologists across the entire MRI pipeline. What made this work stand out to me was its sheer scope: the model was trained on data curated from 60 public sources, spanning 220,000 MRI volumes (over 19 million slices!). Unlike most task-specific AI models, OmniMRI is built to handle a wide range of tasks through natural language instructions, including image reconstruction, segmentation, abnormality detection, diagnostic suggestion, and report generation.


To me, the major contribution of this article is the description of their multi-stage training paradigm, which is capable of flexibly integrating virtually any MRI dataset into the training process. Their Instruction-Response paradigm is capable of encoding virtually any task, from image reconstruction to report generation. They also provide a structured method for image-text pretraining that takes advantage of metadata and any available labels, while leveraging another vision-language model to fill in the gaps.
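To make the idea concrete, here is a toy sketch of what an instruction-response training record might look like. The field names and example values are hypothetical illustrations of the general paradigm, not OmniMRI's actual data schema.

```python
# Illustrative sketch of instruction-response records for multi-task
# training. Field names and values are hypothetical, not OmniMRI's schema.

def make_record(task, image_id, instruction, response):
    """Bundle one training example as an instruction-response pair."""
    return {
        "task": task,               # e.g. "segmentation", "report_generation"
        "image": image_id,          # reference to an MRI volume or slice
        "instruction": instruction, # natural-language task description
        "response": response,       # target output (image tokens or text)
    }

records = [
    make_record("reconstruction", "vol_0001",
                "Reconstruct the image from this undersampled k-space.",
                "<image tokens>"),
    make_record("report_generation", "vol_0002",
                "Describe any abnormalities visible in this brain MRI.",
                "<report text>"),
]

# The appeal of the paradigm: any labeled dataset can be folded into
# training by phrasing its labels as responses to an instruction.
for r in records:
    print(r["task"], "->", r["instruction"])
```

Because every task reduces to the same record shape, heterogeneous public datasets can share one training loop, which is what makes the 60-source curation tractable.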


The brief article shows some promising qualitative results, with a few strong caveats:

  • The paper currently only offers qualitative examples, and the accompanying GitHub repository is empty. The authors have stated that a quantitative benchmarking study as well as code and models are forthcoming.

  • Image-based results (i.e., reconstruction and segmentation) look reasonable and are shown for multiple anatomies, but text-based results show inconsistencies that highlight the model’s immaturity for clinical use.

  • There’s no evidence yet of generalization to unseen tasks beyond those represented in training.


While there are likely major kinks to be worked out, the development of OmniMRI is a noteworthy achievement. Creating a unified training paradigm for such a diverse set of tasks and processing over 220,000 MRI volumes is an impressive feat on its own. OmniMRI may represent a significant step toward a generalist radiology foundation model. For now, this is a project to follow with cautious optimism (or healthy skepticism) as we await the quantitative results and model release. 


Bottom line: OmniMRI could represent a step toward general-purpose radiology foundation models, but its true impact will depend on forthcoming benchmarking and real-world validation.



Trust, but Verify: Smarter MR Reconstruction

How your old images could speed up your next MRI.



A new IEEE-TMI study introduces the Trust-Guided Variational Network (TGVN), a deep learning reconstruction framework designed to harness “side information” such as images from complementary contrasts, prior scans, or even other modalities. Unlike most generative methods, which risk over-relying on the auxiliary data and hallucinating false structures, TGVN learns to trust side information only when it is relevant, while defaulting back to measured k-space data when it isn’t.
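The core intuition, gating side information by how well it agrees with the measured data, can be illustrated with a deliberately simple 1-D toy. This is a hand-rolled sketch of the general "trust-guided" idea, not the TGVN architecture or its learned components; the gating rule and all constants here are invented for illustration.

```python
import numpy as np

# Toy 1-D "reconstruction": recover a signal from undersampled
# measurements, optionally pulled toward a side-information signal.
# The pull is scaled by a trust gate that shrinks when the side
# information disagrees with the measurements.

rng = np.random.default_rng(0)
n = 64
truth = np.sin(np.linspace(0, 4 * np.pi, n))        # the "anatomy"
mask = rng.random(n) < 0.4                          # undersampling pattern
measured = np.where(mask, truth, 0.0)               # sparse measurements

def reconstruct(measured, mask, side, steps=200, lr=0.5):
    x = measured.copy()
    # Trust gate in [0, 1]: high only if side info matches measurements
    # at the sampled locations. (A crude stand-in for a learned gate.)
    residual = np.abs(mask * (side - measured)).mean()
    trust = np.exp(-10.0 * residual)
    for _ in range(steps):
        grad_dc = mask * (x - measured)             # data consistency
        grad_side = trust * (x - side)              # gated side-info pull
        x = x - lr * (grad_dc + 0.1 * grad_side)
    return x

good_side = truth + 0.05 * rng.normal(size=n)       # relevant prior scan
bad_side = rng.normal(size=n)                       # irrelevant input

err_good = np.mean((reconstruct(measured, mask, good_side) - truth) ** 2)
err_bad = np.mean((reconstruct(measured, mask, bad_side) - truth) ** 2)
# Relevant side information fills in the unsampled gaps; irrelevant
# side information is down-weighted, so it cannot inject structure.
```

The point of the toy is the failure mode it avoids: without the gate, the irrelevant input would be blended in and hallucinate structure; with it, the reconstruction falls back on measured data, mirroring the behavior TGVN reports.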


The model was tested across knee and brain MRI with high acceleration factors (up to 20× undersampling). Compared to other algorithms, TGVN produced images with higher SSIM and PSNR scores and better preservation of fine anatomical and pathological features. Importantly, the framework proved robust even when side information was degraded or misaligned. In ablation studies it avoided hallucinations by down-weighting irrelevant inputs. The algorithm also demonstrated quantitative superiority over prior methods for meniscus tear segmentation, showcasing clinical utility.


TGVN pushes MRI reconstruction beyond simple acceleration, toward context-aware imaging that can integrate prior exams, multi-contrast data, and potentially even clinical notes. This approach could not only shorten scan times, but also make MRI more broadly accessible by enabling diagnostic-quality imaging from limited or lower-SNR acquisitions, such as low-field scanners. The code is openly available on GitHub.


Bottom line: Trust-guided MRI reconstruction offers a safer way to use side information to accelerate scans while reducing the risk of hallucinations.




AI-Guided, On-the-Fly Protocol Adaptation

Using AI to adapt MRI protocols in real time to catch lesions & optimize workflows.



A new study in the European Journal of Radiology explores whether AI can adapt brain MRI protocols in real time, potentially reducing inefficiencies in consultation workflows. Researchers at Copenhagen University Hospital and collaborators evaluated Cerebriu’s Apollo algorithm for detecting infarcts, intracranial hemorrhages, and tumors. The tool uses a 3- or 4-sequence abbreviated protocol and can run while the patient is still in the scanner. If abnormalities are flagged, the algorithm can tailor the protocol to account for the findings, such as recommending contrast-enhancement for a suspected tumor.
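The workflow described above can be sketched as a simple mapping from flagged findings to follow-up sequences. The sequence names and all rules other than "suspected tumor → contrast enhancement" (which the article mentions) are hypothetical placeholders, not Apollo's actual logic.

```python
# Toy sketch of AI-guided protocol adaptation: a detection result
# triggers extra sequences while the patient is still on the table.
# Findings and follow-up rules below are illustrative placeholders.

BASE_PROTOCOL = ["DWI", "T2-FLAIR", "T2*"]          # abbreviated screen

# Hypothetical finding -> follow-up sequence mapping; only the tumor ->
# contrast rule is drawn from the article.
FOLLOW_UP = {
    "tumor": ["T1 post-contrast"],
    "infarct": ["MR angiography"],
    "hemorrhage": ["SWI"],
}

def adapt_protocol(findings):
    """Extend the abbreviated protocol based on AI-flagged findings."""
    protocol = list(BASE_PROTOCOL)
    for finding in findings:
        for seq in FOLLOW_UP.get(finding, []):
            if seq not in protocol:
                protocol.append(seq)
    return protocol

print(adapt_protocol(["tumor"]))
# ['DWI', 'T2-FLAIR', 'T2*', 'T1 post-contrast']
```

The key design constraint is latency: the decision has to complete within the running exam, so the detection model operates on the first few fast sequences rather than the full study.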


The team tested the tool on 414 brain MRI exams. Compared with consultant neuroradiologists, the AI achieved similar sensitivities (94% for infarcts, 82% for hemorrhages, 74% for tumors) but lower specificities, leading to increased false positives. While AI support didn’t significantly improve overall reader performance, it did highlight missed lesions in some cases and performed better than MR technologists for tumor detection.


The study underscores both the promise and the challenges of AI-driven, adaptive MRI workflows. Real-time protocol adjustment could accelerate patient throughput while cutting down on missed findings and reducing costly patient recalls. However, the lower specificity means radiologist oversight remains essential. The authors suggest further refinements will be key before these systems can be broadly deployed. Still, this represents an important step toward a future where scanners become smarter, context-aware, and dynamically guided by AI.


Bottom line: AI-driven, real-time MRI protocol adjustments show promise, but lower specificity means radiologist oversight remains essential.



Feedback


We’re eager to hear your thoughts as we continue to refine and improve RadAccess. Is there an article you expected to see but didn’t? Have suggestions for making the newsletter even better? Let us know! Reach out via email, LinkedIn, or X—we’d love to hear from you.



Disclaimer: There are no paid sponsors of this content. The opinions expressed are solely those of the newsletter authors, and do not necessarily reflect those of referenced works or companies.



 
 

©2024 by Radiology Access. All rights reserved.
