  • Writer: Campbell Arnold
  • Aug 26
  • 4 min read

Updated: Aug 27



“In healthcare, adoption happens at the speed of trust.”


Kevin Field, CRO of Rad AI



Welcome to Radiology Access, your biweekly newsletter on the people, research, and technology transforming global imaging access!


In this issue, we cover:

  • Dialing in Image Generation: Metadata-Guidance during Synthesis

  • Harrison.ai reveals preliminary foundation model trial results

  • Halve Your Scan Time, Same Great Quality

  • Rad AI Launches New Blog, The Readout


If you want to stay up-to-date with the latest in Radiology and AI, then don't forget to subscribe!



Dialing in Image Generation: Metadata-Guidance during Synthesis

Be honest with me: have you been throwing away your metadata?



Not all MRI sequences are created equal. Acquisition parameters for a T1-weighted scan can vary widely across sites. Yet most image synthesis algorithms ignore this variability, simply taking an input image and producing an output, without any control over the “acquisition” parameters. In a recent Cell Reports Medicine article, researchers took a major step toward solving this problem by introducing a metadata-guided image synthesis framework.


Unlike standard generative methods, their approach incorporates both an image encoder and a text encoder that embeds metadata (e.g., patient demographics and imaging parameters) directly into the synthesis process. The result is not just high-fidelity synthetic images, but also the ability to customize outputs based on desired attributes such as voxel size, echo time, or even patient age.
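To make the conditioning idea concrete, here is a minimal, purely illustrative sketch (not the authors' code; the field names and normalization ranges are assumptions): metadata is mapped to a fixed-length vector and concatenated with image features, which is the general shape of embedding acquisition parameters and demographics into a synthesis model.

```python
# Hypothetical sketch of metadata conditioning, NOT the published framework.
# A real implementation would use learned image/text encoders; here we just
# show how metadata can become a conditioning vector alongside image features.

def encode_metadata(meta):
    """Map a metadata dict to a fixed-length conditioning vector.

    The normalization ranges below are illustrative assumptions,
    not values taken from the paper.
    """
    return [
        meta["voxel_size_mm"] / 2.0,   # assume voxel size in (0, 2] mm
        meta["echo_time_ms"] / 100.0,  # assume echo time (TE) in (0, 100] ms
        meta["age_years"] / 100.0,     # assume patient age in (0, 100] years
    ]

def condition(image_features, meta):
    """Concatenate image features with the metadata embedding."""
    return image_features + encode_metadata(meta)

# Stand-in for an image encoder's output:
features = [0.4, 0.9]
vec = condition(features, {"voxel_size_mm": 1.0,
                           "echo_time_ms": 30.0,
                           "age_years": 65.0})
print(vec)  # [0.4, 0.9, 0.5, 0.3, 0.65]
```

Changing any metadata field changes the conditioning vector, which is how the synthesis process can be steered toward a desired voxel size, echo time, or patient age.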


The authors validated their framework using more than 30K 3D brain scans from 13 public datasets. In benchmarking studies, their method consistently outperformed or matched other state-of-the-art algorithms, while maintaining strong alignment with the conditioned metadata. They also showed how synthetic datasets can be tailored to reflect specific demographics or acquisition settings, opening the door to broader research and clinical applications.


The implications are far-reaching. Metadata-driven synthesis could give radiologists more realistic, parameter-specific images, while offering researchers a powerful data augmentation tool for algorithm training. Best of all, the code is available on GitHub. This approach marks a major step toward synthetic medical images that are not only realistic, but also clinically relevant and customizable.


Bottom line: Metadata-guided synthesis could unlock a powerful new way to generate images while controlling imaging parameters, demographics, and potentially much more.



Harrison.ai reveals preliminary foundation model trial results

Could AI be drafting your next X-ray report?



Harrison.ai, an Australian company that develops both radiology and pathology applications, recently announced that its foundation model, Harrison.rad.1, is being evaluated in the US Healthcare AI Challenge. Harrison.rad.1 is a vision-language foundation model designed to enhance radiography workflows by generating draft reports and assisting with diagnostic interpretation.


The US Healthcare AI Challenge is a nationwide assessment led by the American College of Radiology and the Mass General Brigham AI Arena that directly pits AI-generated chest X-ray reports against those written by human radiologists. While the study is still ongoing, the preliminary results cover nearly three thousand evaluations by 113 participants on 117 real-world cases. For each evaluation, radiologists rate whether a report is clinically acceptable, then are shown whether it was generated by a human or by AI.


Thus far, Harrison.rad.1 has achieved a 65% acceptability rate, compared to nearly 80% for clinician-authored reports. This demonstrates strong performance, but also shows that AI-generated reports still require human review and revision. Importantly, these results reflect real-world cases and interpretations, meaning that roughly two-thirds of Harrison.rad.1’s reports were acceptable in clinically realistic scenarios involving a wide range of pathologies, not just simple binary or multiclass tasks.


Harrison.rad.1 highlights the potential utility of AI in drafting preliminary reports and streamlining workflows. For radiologists, this represents a step toward AI as an integral supportive tool that can not only provide diagnostic classifications, but can also enhance read efficiency by drafting reports.


Bottom line: Harrison.rad.1 achieved a 65% acceptability rate in a real-world report-generation scenario, highlighting its utility for automated report drafting.




Halve Your Scan Time, Same Great Quality

How AI can accelerate acquisition times across neuroimaging exams.



With ever-increasing demand for imaging services, getting more out of existing equipment is essential. In a recent Radiology Advances article, researchers directly compared conventional MRI with DL-accelerated sequences using Siemens' Deep Resolve. The team analyzed 113 paired sequences from 26 patients across a wide range of routine neuro exams, including brain, spine, head, and neck scans. Four neuroradiologists, blinded to the technique used, rated the images.


Here are the key takeaways from their findings:

  • Speed Boost: DL-accelerated sequences were 51.6% faster than conventional ones on average (54s versus 111s).

  • Image Quality Maintained: On average, all quality metrics slightly favored DL-accelerated images over conventional MRI.

  • New Artifacts Noted: The readers observed DL-acceleration can produce distinct artifact patterns (increased CSF pulsation or Gibbs ringing), though differences did not lower diagnostic quality ratings.
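As a quick sanity check on the speed figure, the reported mean times imply roughly the stated reduction (the study's 51.6% is presumably averaged per sequence rather than computed from the mean times, hence the small difference):

```python
def percent_reduction(conventional_s, accelerated_s):
    """Percent reduction in scan time from conventional to accelerated."""
    return 100.0 * (conventional_s - accelerated_s) / conventional_s

# Mean sequence times reported in the study: 111 s conventional, 54 s DL-accelerated.
print(round(percent_reduction(111, 54), 1))  # 51.4
```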


The implications are significant, as increased scanner throughput could ease scheduling bottlenecks and reduce wait times. Additionally, faster scans can lead to a better patient experience and reduce motion artifacts.


Bottom line: Siemens' Deep Resolve cut scan time roughly in half while preserving diagnostic image quality.



Resource Highlight: The Readout by Rad AI



Rad AI, one of the largest radiology generative AI companies, serving nearly half of all U.S. health systems, has launched a new blog called The Readout. The blog debuted in August and aims to bring clarity to the rapidly evolving world of imaging. Rather than serving as another corporate product feed, The Readout offers diverse perspectives on AI's role in clinical workflows and insights into industry trends. It also provides a unique window into Rad AI itself, with articles from authors ranging from research scientists to the chief revenue officer. The content is longer-form and packed with in-depth detail; it's well worth checking out a few posts.




Feedback


We’re eager to hear your thoughts as we continue to refine and improve RadAccess. Is there an article you expected to see but didn’t? Have suggestions for making the newsletter even better? Let us know! Reach out via email, LinkedIn, or X—we’d love to hear from you.



Disclaimer: There are no paid sponsors of this content. The opinions expressed are solely those of the newsletter authors, and do not necessarily reflect those of referenced works or companies.




©2024 by Radiology Access. All rights reserved.
