- Campbell Arnold
- Apr 22
- 7 min read
Welcome to Radiology Access, your biweekly newsletter on the people, research, and technology transforming global imaging access.
You might notice a few changes in this issue—thanks to a thoughtful reader who reached out with some great suggestions! I always love hearing from people who read the newsletter, and I’m incredibly grateful for the feedback. Love it? Hate it? Let me know—just shoot me an email!
In this issue, we cover:
Want to stay up-to-date with the latest in Radiology and AI? Then don't forget to subscribe!
RAM: A Unified Model for Image Reconstruction
And how it could impact AI algorithm development.

In AI-based image reconstruction, most approaches fall into two camps: task-specific unrolled architectures or iterative methods like diffusion models. Task-specific networks often achieve the highest performance, but are computationally intensive and have poor generalization. Diffusion models, on the other hand, offer improved flexibility and require less data, but they tend to be slower at inference and less accurate—making them a tough fit for clinical settings where turnaround time and accuracy matter.
Fortunately for data-strapped researchers and impatient developers, a team from France recently introduced the Reconstruct Anything Model (RAM)—a fast, generalizable approach that can handle diverse modalities (e.g., CT, MRI, microscopy) and tasks (e.g., denoising, inpainting, super-resolution) without extensive retraining.
Methodological Summary
Model Architecture: RAM uses a lightweight, non-iterative design based on DRUNet that embeds physics-based priors directly into the network, achieving up to 8× lower computational complexity than unrolled methods while maintaining strong performance.
Training Data: The model was trained across multiple modalities (natural images, MRI, and CT) and tasks (e.g., reconstruction, denoising, deblurring, super-resolution, and inpainting).
Validation: RAM was benchmarked on both in-distribution and out-of-distribution datasets, achieving performance on par with or better than existing methods across MRI and CT, and generalizing well to electron microscopy, photon imaging, and satellite data with minimal fine-tuning.
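The physics-embedded design summarized above can be sketched in miniature. The snippet below is a toy illustration of the general idea behind such models—alternating a data-consistency step on the acquisition physics (forward operator A) with a learned denoising prior—not RAM's actual architecture; the operators, step size, and "denoiser" are placeholders.

```python
import numpy as np

def data_consistency_step(x, y, A, At, step=1.0):
    """Gradient step on the data-fidelity term ||A(x) - y||^2."""
    return x - step * At(A(x) - y)

def toy_denoiser(x):
    """Stand-in for a learned prior (e.g., a DRUNet-style network):
    here just a light moving-average smoothing."""
    kernel = np.array([0.25, 0.5, 0.25])
    return np.convolve(x, kernel, mode="same")

def reconstruct(y, A, At, n_iters=10):
    """Alternate physics-based data consistency with the denoising prior."""
    x = At(y)  # simple initialization from the measurements
    for _ in range(n_iters):
        x = data_consistency_step(x, y, A, At, step=0.5)
        x = toy_denoiser(x)
    return x

# Toy inverse problem: recover a 1-D signal through a masking
# (inpainting) operator that zeroes out a few samples.
mask = np.ones(32)
mask[10:15] = 0.0
A = lambda x: mask * x
At = lambda r: mask * r
truth = np.sin(np.linspace(0, 2 * np.pi, 32))
y = A(truth)
x_hat = reconstruct(y, A, At)
```

Because the forward operator is an explicit input, the same loop handles denoising, inpainting, or undersampled reconstruction just by swapping `A`—which is the flexibility the unified-model approach trades on.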
RAM’s ability to perform well across a wide range of imaging tasks and modalities with a single, efficient model could reshape how we approach AI in medical imaging. Instead of building and tuning task-specific architectures, researchers and developers can use more generalizable tools like RAM, lowering both the technical barrier and development costs. That could translate into faster innovation, easier deployment, and broader access—especially in data-scarce or resource-limited environments.
Bottom line: RAM offers a fast and generalizable alternative to traditional reconstruction methods and can serve as a robust starting point for developing more task-specific algorithms.
New Challenge Launched to Advance Low-Field MRI Quality
Can your algorithm take the prize?

Low-field MRI systems hold tremendous promise for expanding access to medical imaging in resource-limited settings. However, these systems produce lower-quality images than their high-field counterparts, which hinders their diagnostic utility. To address this gap, researchers (including me!) have been developing enhancement algorithms that can translate low-field acquisitions into high-field-like images. This year, researchers at Monash University in Australia are hosting the first low-field image enhancement competition for the Hyperfine system, with results to be presented at MICCAI 2025 in South Korea.
Key Challenge Details
Task: Participants will develop an image enhancement algorithm that takes in a low-field image and outputs a high-field-like image, while maintaining structural integrity.
Dataset: The dataset includes paired T1, T2, and FLAIR scans from Siemens Skyra (3T) and Hyperfine Swoop (64mT) scanners collected at a single site. Data will be split into 50 training, 10 validation, and 15 test cases.
Prizes:
🥇 1st place: $1,500 AUD
🥈 2nd place: $750 AUD
🥉 3rd place: $500 AUD
Important Deadlines:
Submission Deadline: July 31
Winners Announced: September 1
MICCAI Presentation: September 23-27
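Since the task scores enhanced images against paired high-field references, fidelity metrics like PSNR are a natural yardstick. The sketch below shows a minimal PSNR computation; note that the actual evaluation protocol is defined by the challenge organizers, and this metric choice is an assumption for illustration.

```python
import numpy as np

def psnr(reference, enhanced, data_range=1.0):
    """Peak signal-to-noise ratio between a high-field reference scan
    and an enhanced low-field scan (both scaled to [0, data_range])."""
    mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

# Toy example: an "enhanced" volume that is off by a constant 0.1 everywhere.
ref = np.zeros((8, 8, 8))
enh = np.full((8, 8, 8), 0.1)
print(round(psnr(ref, enh), 1))  # 20.0
```

A constant error of 0.1 on a unit-range image gives an MSE of 0.01, hence 10·log10(1/0.01) = 20 dB.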
Improving image quality for low-field MRI has the potential to dramatically broaden access to high-quality diagnostic imaging in under-resourced settings around the world. If you're passionate about medical imaging, AI, or global health, this is a great opportunity to make a meaningful impact.
Bottom line: New MICCAI 2025 competition to develop low-field MRI enhancement algorithms offers a cash prize and publication opportunity.
More Human Than Human: LLMs Pass the Turing Test
And how that could impact future clinical care.

Perhaps to few people's surprise, a recent arXiv article by UC San Diego researchers presents empirical evidence that certain large language models (LLMs) can pass the Turing Test—the classic benchmark for assessing whether a machine can convincingly mimic human intelligence. In the experiments, participants held 5-minute conversations with both a human and an AI, then tried to identify which was which. GPT-4.5 was mistaken for a human 73% of the time—more often than the actual human! LLaMa-3.1 reached 56%, while baseline systems ELIZA and GPT-4o scored significantly lower.
These results mark the first documented instance of an LLM passing a standard three-party Turing Test, signaling just how humanlike today’s AI systems have become. The study also underscores the importance of understanding the implications of AI systems that can convincingly mimic human behavior, raising questions about their role in society and the potential need for guidelines to manage their integration into various aspects of our lives.
For the medical imaging and broader healthcare communities, this study raises critical questions about how we should deploy this technology. As AI tools become more deeply embedded in healthcare systems—whether in patient communication portals, scheduling systems, or clinical decision support—there is a growing need to clarify when patients or providers are interacting with a human clinician versus an automated system. A patient might ask a question about their MRI report and assume they’re speaking with a radiologist, when in fact it’s a conversational AI. While this can help streamline care by reducing workloads and delays, it also highlights the need for transparency and oversight for responsible deployment of LLMs in clinical environments.
Bottom line: Large language models like GPT-4.5 are now capable of passing the Turing Test, effectively making us all a little less special.
Integrating Multimodal Models into Clinical Care
Without taking the human out of the loop.
A recent Perspective article in Nature, authored by titans of the field, explores the potential of multimodal generative AI for automating clinical tasks, with a particular focus on radiology report generation. These models integrate both images and text to directly produce full-length diagnostic reports, representing a major step beyond traditional AI systems that focus only on classification or segmentation.
In radiology, this could mean generating draft reports for chest X-rays or CT scans that radiologists can review and finalize—much like a resident's preliminary read. A large number of US radiologists already rely on AI support for aspects of report generation, which has helped to streamline workflows and reduce reporting fatigue. These multimodal vision language models could further amplify this trend.
The authors emphasize that while this technology could ease clinical burdens and expand access to diagnostic expertise, particularly in under-resourced settings, it must be rigorously validated and thoughtfully integrated. Transparency, real-world performance evaluation, and a clear role for human oversight are critical. For instance, in a busy emergency department generative AI could provide initial chest x-ray interpretations to help triage patients—provided a radiologist remains in the loop to ensure diagnostic accuracy and patient safety.
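The human-in-the-loop gating described above can be made concrete with a small sketch: an AI draft exists in a distinct state and cannot be released until a radiologist signs off. This is a hypothetical workflow illustration, not any vendor's or the article's actual system; all names here are invented.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftReport:
    """An AI-generated draft that cannot be released without human sign-off."""
    study_id: str
    ai_draft: str
    finalized_text: Optional[str] = None
    signed_off_by: Optional[str] = None

    def finalize(self, radiologist: str, edited_text: Optional[str] = None):
        """Radiologist reviews, optionally edits, and signs the report."""
        self.finalized_text = edited_text if edited_text is not None else self.ai_draft
        self.signed_off_by = radiologist

    @property
    def releasable(self) -> bool:
        return self.signed_off_by is not None

report = DraftReport("CXR-001", "No acute cardiopulmonary abnormality.")
assert not report.releasable  # the AI draft alone cannot be released
report.finalize("Dr. Example", edited_text="Mild cardiomegaly; otherwise clear.")
assert report.releasable
```

The design point is simply that release is gated on the sign-off field, so the automated draft can never reach the patient record without an accountable human reviewer.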
Bottom line: Multimodal generative AI could transform radiology workflows by drafting full diagnostic reports from imaging data, but expert oversight remains essential.
Phantom Improves DTI Data Harmonization & Quantitative Analyses
Potentially helping researchers improve large consortium data collections.
As an imaging community, we have done an excellent job of banding together to build large data consortia. These open-source datasets have given researchers around the world access to high-quality imaging and have helped make radiology the premier specialty for AI development. However, harmonizing data across multiple sites comes with its own challenges.
A recent study published in MAGMA evaluated the consistency of diffusion tensor imaging (DTI) measurements across different MRI systems using a specialized quality control phantom designed to mimic white matter fiber tracts. Researchers scanned the phantom on 3T systems from GE, Siemens, and Philips, assessed the impact of various motion-probing gradients, and performed repeated scans to analyze variability in key diffusion metrics. The analysis revealed highly consistent measurements across vendors, probing gradients, and repeated measures.
These findings have important implications for the standardization of large diffusion MRI datasets. Consistency is crucial for quantitative MRI analyses, as it can enhance the comparability of data and methods across different scanners and institutions. Approaches that improve data harmonization can help researchers develop more accurate models that can generalize across many vendors and settings.
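A standard way to summarize the kind of cross-scanner consistency described above is the coefficient of variation (CV) of a metric such as fractional anisotropy (FA) across vendors. The sketch below uses made-up FA values for illustration—they are not the study's reported numbers.

```python
import statistics

def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean — a common
    summary of measurement consistency across scanners."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical FA readings of the same phantom region on three
# vendors' 3T systems (illustrative numbers only).
fa_by_vendor = {"GE": 0.512, "Siemens": 0.508, "Philips": 0.515}
cv = coefficient_of_variation(list(fa_by_vendor.values()))
print(f"Cross-vendor CV: {cv:.2f}%")
```

A CV well under a few percent is what "highly consistent across vendors" looks like in practice, and it is the kind of threshold consortia can use when pooling multi-site diffusion data.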
Bottom line: Standardizing diffusion MRI using phantoms can improve the reliability of large consortium datasets, laying the groundwork for more robust quantitative imaging.
Feedback
We’re eager to hear your thoughts as we continue to refine and improve RadAccess. Is there an article you expected to see but didn’t? Have suggestions for making the newsletter even better? Let us know! Reach out via email, LinkedIn, or X—we’d love to hear from you.
References
Terris, Matthieu, et al. "Reconstruct Anything Model: a lightweight foundation model for computational imaging." arXiv preprint arXiv:2503.08915 (2025).
MICCAI 2025 - Ultra-Low-Field MRI Image Enhancement Challenge (ULF-EnC). synapse.org.
Jones, Cameron R., and Benjamin K. Bergen. "Large language models pass the turing test." arXiv preprint arXiv:2503.23674 (2025).
Rao, Vishwanatha M., et al. "Multimodal generative AI for medical image interpretation." Nature 639.8056 (2025): 888-896.
Simard, Nicholas, et al. "Assessing measurement consistency of a diffusion tensor imaging (DTI) quality control (QC) anisotropy phantom." Magnetic Resonance Materials in Physics, Biology and Medicine (2025): 1-17.
Disclaimer: There are no paid sponsors of this content. The opinions expressed are solely those of the newsletter authors, and do not necessarily reflect those of referenced works or companies.