- Campbell Arnold
- May 20
Updated: May 21
“It's not just about image quality—it’s about preserving clinically relevant features that doctors rely on.”
— Long Wang, Subtle Medical ML Engineer
Welcome to Radiology Access, your biweekly newsletter on the people, research, and technology transforming global imaging access.
In this issue, we cover:
- Vendors showcase MRI of the future at ISMRM
- Super-resolution boosts low-field MRI stroke sensitivity
- ViT-Fuser: leveraging prior scans to accelerate low-field MRI
- The new DEAL checklist for LLM research reporting
- Resource highlight: The Imaging Wire
If you want to stay up-to-date with the latest in Radiology and AI, then don't forget to subscribe!
Vendors showcase MRI of the future at ISMRM
How hardware and AI will change in the next generation of devices.

ISMRM wrapped up in Honolulu last week, with major vendors presenting a shared vision for the future of MRI — one driven by AI reconstruction, operational flexibility, and next-generation hardware performance. Across all plenary sessions, leading OEMs like GE, Siemens, Philips, Canon, and United Imaging emphasized common pillars for the MRI systems of tomorrow:
1. Deep Learning Reconstruction Is the New Standard of Care
Every major vendor presented advancements in deep learning-based reconstruction — signaling its transition from innovation to baseline expectation for new systems.
GE showcased AIR DL and Sonic DL, delivering sharp images from scans accelerated up to 10x.
Canon highlighted AiCE and PIQE, a combination of denoising, zero-padding, and deblurring models.
Philips introduced SmartSpeed Precision with dual AI — separate models for denoising and super-resolution.
Siemens featured DeepResolve Pro, the latest branding of its DL reconstruction platform.
United Imaging debuted uAiFI Live, the first time-resolved DL recon platform capable of visualizing real-time movements like eye motion.
2. Helium-Free and Portable Systems to Reduce Siting Costs
Vendors are also focused on making MRI easier to access and operate by lowering siting costs. Helium-free systems are on the horizon, which can help reduce dependency on scarce cryogens. Additionally, portable and compact MRIs were a continued theme, pointing toward broader deployment in new environments, like outpatient and remote settings.
3. An Autonomous Scanning Process
Canon, Philips, and United Imaging all demonstrated steps toward autonomous scanning — with a focus on auto-positioning and cardiac MRI automation. Researchers also presented auto-positioning solutions for complex use cases like fetal MRI, suggesting a future with fewer operator dependencies and greater consistency.
4. Higher-Performance Hardware to Enable More Specialized Imaging
New scanner platforms with high-gradient performance are pushing the limits of microstructure and neuroimaging:
GE’s MAGNUS 3T offers gradient amplitudes up to 300 mT/m and slew rates up to 750 T/m/s.
Siemens’ MAGNETOM Cima.X 3T provides 200 mT/m and 200 T/m/s — enabling advanced brain studies and diffusion imaging.
Researchers working on Hyperfine’s Swoop (64 mT) also demonstrated diffusion tensor imaging for the first time, unlocking more advanced sequences in the low-field regime.
5. More Variety in Field Strengths to Meet Varied User Needs
Despite high costs and operational complexity, vendors remain committed to ultra-high field for pushing the state of the art in MRI. Siemens and United Imaging showed new results on their 7T and 5T systems, underscoring ongoing research and niche clinical adoption. At the other end of the spectrum, low-field systems are gaining traction, with growing interest from vendors and academic groups alike.
Bottom line: Vendors at ISMRM projected a future MRI that is faster, smarter, and more accessible — driven by deep learning, automation, and hardware innovation.
Super-Resolution Boosts Low-Field MRI Stroke Sensitivity
And how AI could help clinicians make the right call when every minute matters.

While low-field MRI scanners offer a more affordable and accessible alternative to high-field systems, their lower image quality limits their diagnostic value—particularly in the early stages of stroke, when timely intervention is critical. A recent study published in Stroke introduces a deep learning–based super-resolution model designed to enhance low-field images and improve ischemic stroke detection.
The model, called SCUNet, uses a hybrid convolution-transformer architecture (Swin-Conv UNet) to boost spatial resolution and suppress noise. To train and validate the model, the researchers did the following (a simplified training sketch appears after the list):
Pretrained SCUNet on open-source MRI datasets (IXI and M4Raw).
Fine-tuned and tested it using paired 0.23T (ACUTA Elfin; Rayplus) and 3T (Prisma; Siemens) scans from 282 stroke patients.
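For a concrete picture of this two-stage setup, here is a minimal, hypothetical PyTorch sketch: a toy restoration network stands in for SCUNet, and random tensors stand in for the open-source and paired clinical data. It illustrates pretraining followed by fine-tuning at a lower learning rate, and is not the authors' implementation (their actual code is linked below).

```python
# Illustrative two-stage training loop (not the authors' code). The model and data
# here are toy placeholders; only the pretrain-then-fine-tune workflow is the point.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class SimpleSRNet(nn.Module):
    """Toy stand-in for SCUNet (Swin-Conv UNet): residual restoration of a 2D slice."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # predict a residual correction

def run_epoch(model, loader, optimizer, loss_fn):
    model.train()
    for degraded, target in loader:  # (low-quality, reference) slice pairs
        optimizer.zero_grad()
        loss = loss_fn(model(degraded), target)
        loss.backward()
        optimizer.step()

def toy_pairs(n):  # random tensors standing in for real MRI slice pairs
    return TensorDataset(torch.randn(n, 1, 64, 64), torch.randn(n, 1, 64, 64))

model, loss_fn = SimpleSRNet(), nn.L1Loss()

# Stage 1: pretrain on open-source data (IXI and M4Raw in the paper).
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(2):
    run_epoch(model, DataLoader(toy_pairs(32), batch_size=8, shuffle=True), opt, loss_fn)

# Stage 2: fine-tune on paired 0.23T / 3T patient scans at a lower learning rate.
opt = torch.optim.Adam(model.parameters(), lr=1e-5)
for _ in range(2):
    run_epoch(model, DataLoader(toy_pairs(16), batch_size=4, shuffle=True), opt, loss_fn)
```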
Beyond technical optimization, the team emphasized the importance of clinical integration. “It’s not just about image quality—it’s about preserving clinically relevant features that doctors rely on,” said author Long Wang. In collaboration with clinical team members, the authors assessed performance using sensitivity, specificity, lesion volume, ADC values, and standard stroke scoring metrics, with 3T MRI as the reference (a toy example of these metrics follows the results below). The results were striking:
Super-resolved images significantly outperformed native low-field scans, with higher sensitivity (89% vs. 77%) and specificity (91% vs. 71%) for lesion detection.
They also showed excellent agreement with high-field MRI in both qualitative and quantitative metrics, including stroke scoring (ICC > 0.95), lesion volume (r = 0.98 vs. 0.27), and ADC measurements (r = 0.78 vs. 0.36).
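For readers less familiar with these agreement metrics, here is a toy example with made-up numbers (not the study's data) showing how sensitivity, specificity, and Pearson correlation against a 3T reference are computed:

```python
# Toy illustration of the reported metrics using hypothetical numbers.
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Per-patient lesion detection vs. the 3T reference standard."""
    y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
    tp = np.sum(y_true & y_pred)    # lesions correctly detected
    tn = np.sum(~y_true & ~y_pred)  # lesion-free cases correctly called negative
    fn = np.sum(y_true & ~y_pred)   # missed lesions
    fp = np.sum(~y_true & y_pred)   # false alarms
    return tp / (tp + fn), tn / (tn + fp)

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # lesion present on 3T (reference)
reads = [1, 1, 1, 0, 0, 0, 0, 1]   # read on super-resolved low-field images
sens, spec = sensitivity_specificity(truth, reads)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")

# Pearson r between lesion volumes (mL) measured on the two image types.
vol_3t = np.array([2.1, 5.4, 0.8, 12.0, 3.3])
vol_sr = np.array([2.0, 5.1, 1.0, 11.4, 3.6])
print(f"Pearson r={np.corrcoef(vol_3t, vol_sr)[0, 1]:.2f}")
```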
This study demonstrates how deep learning can dramatically expand the clinical value of portable low-field MRI. By narrowing the performance gap with high-field systems, tools like SCUNet may unlock broader access to high-quality stroke imaging where it’s needed most. The authors shared the code for this project on GitHub.
Bottom line: Deep learning–based super-resolution can significantly enhance low-field MRI for stroke detection, bringing high-field image quality to more affordable, accessible scanners.
ViT-Fuser: Leveraging Prior Scans to Accelerate Low-Field MRI
Why start from scratch when you’re collecting a follow-up scan?

Traditional high-field MRI systems, while providing high-quality images, are expensive and less accessible, especially for patients requiring frequent scans. Low-field MRI offers a cost-effective alternative but suffers from lower image quality and longer scan times. A recent arXiv paper introduces a promising new approach that leverages a patient’s prior high-field MRI to enhance subsequent low-field scans. Author Efrat Shimron said, “Our idea was to explore how those two types of systems could be used in tandem to enable more accessible MRI, without compromising image quality.”
The proposed method, ViT-Fuser, is a vision transformer-based model that extracts personalized features from past scans to improve the quality of current acquisitions. Key features include the following (a minimal sketch of the fusion idea appears after this list):
Feature-level fusion of prior high-field and current low-field images.
Compatibility with prior scans of any field strength, vendor, or sequence type.
Strong generalizability to new patients and datasets—no retraining required.
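To make the fusion idea concrete, here is a minimal, hypothetical sketch of feature-level fusion with ViT-style encoders: the current low-field scan and a registered prior high-field scan are each encoded into patch tokens, the token features are concatenated and projected, and a decoder produces a residual enhancement. This illustrates the general approach only, not the actual ViT-Fuser architecture.

```python
# Minimal feature-level fusion sketch (illustrative only; not the authors' code).
import torch
import torch.nn as nn

class PatchEncoder(nn.Module):
    """Embed an image into patch tokens and run a small transformer encoder."""
    def __init__(self, img=64, patch=8, dim=128, depth=2, heads=4):
        super().__init__()
        self.to_tokens = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, (img // patch) ** 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 2, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, x):
        tokens = self.to_tokens(x).flatten(2).transpose(1, 2) + self.pos
        return self.encoder(tokens)  # (batch, n_patches, dim)

class FusionReconstructor(nn.Module):
    """Encode the current low-field scan and the prior high-field scan separately,
    fuse their token features, and decode a residual enhancement."""
    def __init__(self, img=64, patch=8, dim=128):
        super().__init__()
        self.enc_low = PatchEncoder(img, patch, dim)
        self.enc_prior = PatchEncoder(img, patch, dim)
        self.fuse = nn.Linear(2 * dim, dim)
        self.decode = nn.ConvTranspose2d(dim, 1, kernel_size=patch, stride=patch)
        self.grid = img // patch

    def forward(self, low, prior):
        fused = self.fuse(torch.cat([self.enc_low(low), self.enc_prior(prior)], dim=-1))
        fmap = fused.transpose(1, 2).reshape(low.size(0), -1, self.grid, self.grid)
        return low + self.decode(fmap)  # enhanced low-field image

model = FusionReconstructor()
low_field = torch.randn(2, 1, 64, 64)  # current low-field acquisition (toy data)
prior_hf = torch.randn(2, 1, 64, 64)   # registered prior high-field scan (toy data)
print(model(low_field, prior_hf).shape)  # torch.Size([2, 1, 64, 64])
```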
The authors validated their approach on both simulated and real-world low-field data, including 47 mT and 6.5 mT acquisitions. Results showed that ViT-Fuser could:
Enhance signal-to-noise ratio and tissue contrast in low-field scans.
Support up to 4× accelerated acquisitions while maintaining image quality.
Outperform two state-of-the-art reconstruction models.
Improve visibility of clinically relevant features such as tumors.
This work represents an important step toward expanding the clinical viability of low-field MRI. By personalizing image reconstruction using prior scans, ViT-Fuser could greatly enhance longitudinal monitoring for diseases like multiple sclerosis and help unlock low-field MRI’s potential as a scalable, cost-effective screening tool. Efrat Shimron thinks their article could motivate a new line of studies that “explore new workflows, where low-field and high-field systems are used alternately.”
Bottom line: ViT-Fuser leverages a patient’s prior high-field MRI to significantly enhance low-field image quality, accelerating scans and improving diagnostic utility.
The New DEAL for Large Language Models
A checklist to improve LLM research reporting.

Large language models (LLMs) are quickly becoming embedded in radiology workflows — generating reports, editing and summarizing notes, and supporting patient communication. As their use cases expand, the need for rigorous, transparent reporting has become urgent. A new technical report in NEJM AI introduces the Development, Evaluation, and Assessment of Large Language Models (DEAL) checklist, aimed at improving the consistency, reproducibility, and clinical relevance of LLM research across healthcare.
Many LLM applications currently operate outside regulatory purview, as they are not classified as medical devices by the FDA, despite calls from many in the regulatory community for tighter oversight and device classification. Checklists like DEAL can help bridge the gap by defining best research practices for reporting key model training and data details — the kind of information likely to be required for future regulatory pathways.
“As LLMs began moving rapidly from lab prototypes to clinical use cases, we noticed that most studies lacked consistent reporting of key details,” said co-author Satvik Tripathi. DEAL was built to fill that gap, offering two pathways:
DEAL-A: For advanced use cases such as model development, fine-tuning, or retrieval-augmented systems.
DEAL-B: For applied studies using pretrained models, such as those focused on prompt engineering.
In both pathways, the checklist aims to ensure transparency, reproducibility, and real-world applicability. DEAL is currently under review for inclusion in the EQUATOR Network and is expected to be used in peer review workflows, much like STARD, CONSORT, or CLAIM in other areas of medical AI.
Bottom line: As LLMs become more common in radiology, the DEAL checklist offers a critical framework for ensuring transparency, reproducibility, and regulatory readiness.
Resource Highlight: The Imaging Wire
For this week's resource highlight, I’m recommending you subscribe to The Imaging Wire. It’s one of my favorite ways to stay updated on industry trends. This publication posts twice a week, offering an in-depth exploration of a single topic while also providing a brief overview of other notable developments.
Feedback
We’re eager to hear your thoughts as we continue to refine and improve RadAccess. Is there an article you expected to see but didn’t? Have suggestions for making the newsletter even better? Let us know! Reach out via email, LinkedIn, or X—we’d love to hear from you.
References
Oved, Tal, et al. "Deep learning of personalized priors from past MRI scans enables fast, quality-enhanced point-of-care MRI with low-cost systems." arXiv preprint arXiv:2505.02470 (2025).
Bian, Yueyan, et al. "Quantitative Ischemic Lesions of Portable Low–Field Strength MRI Using Deep Learning–Based Super-Resolution." Stroke (2024).
Tripathi, Satvik, et al. "Development, Evaluation, and Assessment of Large Language Models (DEAL) Checklist: A Technical Report." NEJM AI (2025): AIp2401106.
Disclaimer: There are no paid sponsors of this content. The opinions expressed are solely those of the newsletter authors, and do not necessarily reflect those of referenced works or companies.