  • Writer: Campbell Arnold
  • Jan 13
  • 4 min read


“It is increasingly unjustifiable to use radiologists' time for manual image annotation.”


— Wood et al., Radiology: AI 2026



Welcome to Radiology Access, your biweekly newsletter on the people, research, and technology transforming global imaging access!


In this issue, we cover:

  • Teaching MRI to Fix Itself: AI Denoising Without Massive Training Datasets

  • The End of Labeling? How AI is Learning Neuroradiology on Its Own

  • Resource Highlight: Free MRI Textbook


If you want to stay up-to-date with the latest in Radiology and AI, then don't forget to subscribe!



🚀 We’re Hiring at Subtle Medical!


Interested in building AI that actually ships into clinical practice and impacts patient care at scale? Subtle Medical is hiring US-based Research Scientists and Machine Learning Software Engineers to work on cutting-edge algorithms used by hospitals around the world.

 

If you’re excited about deep learning, medical imaging, and translating research into real products, I’d love to hear from you. Email me directly with your resume attached.



Teaching MRI to Fix Itself

AI Denoising Without Massive Training Datasets.



The dream of low-field MRI is often dampened by one grainy reality: noise. Although deep learning-based denoising has transformed conventional imaging, these algorithms don’t generalize to novel scanners, and training new versions requires massive “clean” training datasets that simply don’t exist for new, unique low-field systems.


To solve this, a team at the University of Aberdeen developed Zero-Shot Noise-as-Clean (ZS-NAC), a method that bypasses these hurdles by training a neural network directly on the acquired scan itself, eliminating the need for external data entirely. The algorithm employs a simple yet clever self-supervised training mechanism:

  1. Synthetic Noise Injection: Add synthetic Gaussian noise to the acquired image to create a "noisier" version.

  2. Mapping Task: Train a network to map this noisier input back to the acquired image.

  3. Inference: Run inference on the original image to predict a cleaner, denoised output.
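Conceptually, the three steps fit in a few lines. The NumPy sketch below is a toy illustration, not the paper's method: it substitutes a single least-squares-fit 3x3 linear kernel for the U-Net, and the phantom image and noise levels are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "acquired" scan: a smooth phantom plus scanner noise.
x = np.linspace(-1, 1, 64)
clean = np.exp(-4 * (x[:, None] ** 2 + x[None, :] ** 2))
acquired = clean + 0.1 * rng.standard_normal(clean.shape)

# Step 1: synthetic noise injection -> a "noisier" copy of the scan.
noisier = acquired + 0.1 * rng.standard_normal(acquired.shape)

def patch_matrix(img):
    """Flatten each pixel's 3x3 neighborhood into one feature row."""
    p = np.pad(img, 1, mode="edge")
    n = img.shape[0]
    return np.stack(
        [p[i:i + n, j:j + n].ravel() for i in range(3) for j in range(3)],
        axis=1,
    )

# Step 2: "train" the denoiser -- here a single linear 3x3 kernel fit by
# least squares -- to map the noisier image back to the acquired one.
kernel, *_ = np.linalg.lstsq(patch_matrix(noisier), acquired.ravel(), rcond=None)

# Step 3: inference -- apply the learned mapping to the original acquired
# image to produce the denoised estimate.
denoised = (patch_matrix(acquired) @ kernel).reshape(acquired.shape)
```

Because the injected noise is statistically independent of the scan, the best-fit mapping learns to suppress it, and applying that same mapping to the acquired image removes a comparable amount of the real noise.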


To ensure rapid results, the team deployed a modified residual U-Net with a single downsampling/upsampling step. Denoising a typical low-field volumetric brain scan (128×128×14) takes just ~2 seconds on a GPU and ~22 seconds on a CPU. The study found that training on just 20% of the image slices yielded a 10x acceleration in training time with a negligible drop in image quality. The code is available on GitHub.


Bottom Line: ZS-NAC offers fast and effective MRI denoising using only the acquired scan, making it a quick option for novel, data-strapped low-field systems.



The End of Labeling?

How AI is Learning Neuroradiology on Its Own.



As the global shortage of radiologists collides with an explosion in MRI demand, the bottleneck increasingly isn’t hardware; it’s expert time. While AI offers a potential lifeline, traditional deep learning models hit a massive wall: they typically require thousands of expert-labeled images for training and only detect a limited number of pathologies. A recent Radiology: AI paper offers an elegant solution to a pressing question: Can we build AI that meaningfully helps without requiring massive, manually labeled datasets that radiologists simply don’t have time to create?


In the multicenter study, dubbed ALIGN, the authors proposed a self-supervised text-vision framework that learns directly from paired images and free-text radiology reports. By aligning embeddings of 3D MRI scans with report embeddings generated using NeuroBERT, the system teaches itself to detect abnormalities by "reading" existing reports and correlating them with images, bypassing the expensive human annotation process entirely.
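Pairwise image-text alignment of this kind is often trained with a CLIP-style symmetric InfoNCE objective; whether ALIGN uses exactly this loss is an assumption here, and the random arrays below merely stand in for real NeuroBERT and image-encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

def clip_style_loss(img_emb, txt_emb, tau=0.07):
    """Symmetric InfoNCE: row i of each matrix is a matched scan/report pair."""
    img, txt = l2norm(img_emb), l2norm(txt_emb)
    logits = img @ txt.T / tau                       # cosine similarity / temperature
    # Image -> matching report: softmax over each row, diagonal is the match.
    log_p_i2t = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Report -> matching image: softmax over each column.
    log_p_t2i = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -0.5 * (np.mean(np.diag(log_p_i2t)) + np.mean(np.diag(log_p_t2i)))

# Toy stand-ins for encoder outputs: 8 matched scan/report embedding pairs.
txt = rng.standard_normal((8, 32))
aligned_loss = clip_style_loss(txt + 0.05 * rng.standard_normal(txt.shape), txt)
random_loss = clip_style_loss(rng.standard_normal((8, 32)), txt)
```

Minimizing this loss pulls each scan embedding toward its own report and away from every other report in the batch, which is what lets the model later score an image against arbitrary text.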


Because the model aligns images with the semantic meaning of reports rather than predefined labels, it offers a flexible approach that allows clinicians to query an image for virtually any pathology present in the training dataset. Additionally, the system enables visual-semantic retrieval, meaning a radiologist can pull up relevant example images from the database simply by typing a text description.
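Once the embeddings live in a shared space, that retrieval step reduces to nearest-neighbor search by cosine similarity. A minimal sketch with toy embeddings (the encoders, database, and dimensions are all illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def l2norm(v):
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

# Toy shared embedding space: report embedding i matches scan embedding i.
reports = l2norm(rng.standard_normal((5, 64)))
scans = l2norm(reports + 0.05 * rng.standard_normal(reports.shape))

def retrieve(query_emb, scan_embs, k=3):
    """Rank stored scans by cosine similarity to a text-query embedding."""
    sims = scan_embs @ l2norm(query_emb)
    return np.argsort(sims)[::-1][:k]

# Querying with report 2's embedding should surface scan 2 first.
top = retrieve(reports[2], scans)
```

The same similarity score doubles as a zero-shot detector: comparing a scan's embedding against embeddings of phrases like "acute infarct" versus "normal study" yields a ranking without any labeled training examples.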


Here is a breakdown of the key study details:

  • Zero-Label Training: The model trained on 63,178 unlabeled examinations, learning to map images to clinical text without a single human-annotated category.

  • High-Accuracy Triage: It achieved an AUC of 0.95 for distinguishing normal from abnormal scans on internal data.

  • Robust Generalization: When tested on data from four external hospitals, the model maintained high performance, with AUCs ranging from 0.85 to 0.90.

  • Versatile "Zero-Shot" Detection: Beyond simple triage, the system accurately detected specific conditions it wasn't explicitly trained to find, such as stroke, multiple sclerosis, and hemorrhage (mean AUC 0.89).


This approach powerfully reframes the anomaly detection problem: by enabling automated triage without painstaking manual labels, the ALIGN framework offers a clear path to more scalable AI solutions.


Bottom Line: The ALIGN framework achieves high-accuracy, zero-shot MRI abnormality detection and visual-semantic retrieval without requiring a single manually labeled training image.




Resource Highlight: Free MRI Textbook



If you’re in need of a solid refresher on MR physics, check out the brand-new, freely available MRI textbook released this week by Peder Larson from UCSF. It covers all the key concepts of MRI from basic physics to advanced imaging techniques, making it an ideal reference text regardless of your background.


What you’ll find inside:

  • Clear explanations of MRI principles

  • Diagrams and illustrations to demystify complex topics

  • Links to additional educational resources

  • A format built for self-study and teaching


Whether you’re just starting out or deepening your understanding of MRI, this is a fantastic no-cost resource from one of the field’s leading educators.




Feedback


We’re eager to hear your thoughts as we continue to refine and improve RadAccess. Is there an article you expected to see but didn’t? Have suggestions for making the newsletter even better? Let us know! Reach out via email, LinkedIn, or X—we’d love to hear from you.





Disclaimer: There are no paid sponsors of this content. The opinions expressed are solely those of the newsletter authors, and do not necessarily reflect those of referenced works or companies.



 
 

©2024 by Radiology Access. All rights reserved.
