Most RSNA 2025-affiliated radiologists report unapproved AI tools in daily use; fewer than one in five say their organizations strongly enforce a policy on public LLMs.
CHICAGO, IL / ACCESS Newswire / December 1, 2025 / Black Book Market Research today released results from a flash poll of 208 radiologists who are current or prior RSNA attendees, revealing that "shadow AI" (AI tools used outside formally approved enterprise programs) has become routine in imaging departments while governance, policy and validation lag behind.
The poll, conducted among radiologists and imaging leaders in the weeks leading up to the RSNA annual meeting and via follow‑up online outreach, found that 78% of respondents say their organizations are using at least some imaging or reporting AI tools that are not part of a formally approved enterprise program. More than a third (37%) characterize that usage as involving "several" unapproved tools or "widespread but ungoverned" adoption.
"On the RSNA show floor, AI looks fully governed and tightly integrated," said Douglas Brown, Founder of Black Book Research. "But when we directly polled radiologists from RSNA's global community, we saw a very different reality: unofficial pilots, generic chatbots and home‑grown tools in daily use, with policies struggling to keep pace."
Shadow AI concentrated in reporting and education
Shadow AI use is emerging across multiple parts of the imaging workflow, but is most concentrated in text‑heavy tasks:
63% say shadow AI tools most commonly appear in report drafting or impression assistance (including generic large language models).
47% report shadow AI use in teaching and resident education.
39% cite triage and worklist prioritization.
28% mention protocol selection and decision support.
22% report billing or coding assistance.
14% say patient communications (letters and portal messages) are touched by shadow AI tools.
"Radiologists are experimenting first where AI feels least intrusive: drafting text, teaching and coding," Brown added. "But these tools still touch PHI and clinical decision‑making, and that raises a governance bar most organizations haven't cleared."
Policy and validation lag behind rapid, informal adoption
Despite the spread of unofficial AI tools, only a minority of organizations report a strong policy posture:
Just 19% of respondents say their organization has a formal policy on public large language model (LLM) use that is consistently enforced for radiologists.
27% report a policy that is in place but weakly enforced.
31% say a policy is still in development.
23% admit there is no policy at all on radiologists' use of generic chatbots and public LLMs in clinical work.
Validation and ROI assessment are similarly limited:
29% say none of the AI tools in their environment have completed a full clinical validation and ROI assessment.
Another 33% report only one or two tools have been validated and evaluated for financial impact.
Just 18% report three or more fully validated tools.
20% of respondents are unsure how many AI solutions have gone through a formal validation and ROI process.
"Radiology departments are piloting faster than they are validating," Brown said. "For many organizations, AI is everywhere, but the number of tools that have been rigorously vetted and measured for return on investment is surprisingly small."
Legal and privacy fears dominate; governance ownership is unclear
When asked for their single biggest concern about ungoverned AI use:
38% chose PHI leakage and privacy risk.
21% cited malpractice liability.
16% pointed to data quality and reliability.
14% named regulatory compliance (e.g., HIPAA, FDA).
7% selected reputational risk.
4% reported no clear concern or "don't know."
AI governance responsibility remains fragmented:
26% say the radiology department clearly owns AI governance for imaging.
17% point to an enterprise digital or AI office.
14% name the IT department.
19% report a shared governance body.
4% say their vendors effectively own governance.
20% acknowledge no clear owner for imaging AI governance in their organization.
"In one out of five organizations, nobody clearly owns imaging AI risk," Brown said. "In another segment, governance is effectively outsourced to vendors. That's a serious misalignment with the level of clinical and legal exposure leaders say they're worried about."
Implications for radiology leaders
The findings do not argue for or against the use of AI in imaging; instead, they highlight a widening gap between how radiologists are actually using AI and how formally it is being governed. As AI capabilities accelerate and participation in RSNA's AI and informatics programs continues to grow, the results point to three practical areas many organizations are now focusing on:
1. Create a real inventory of AI use, including "shadow" tools.
Rather than assuming all AI is centrally managed, many leaders start by documenting which tools are in use at the radiologist and department level: commercial tools, home‑grown solutions and generic public LLMs.
2. Clarify governance ownership and policy without slowing innovation.
Assigning clear responsibility for imaging AI governance (whether within radiology, an enterprise AI office or a shared committee) can help organizations set and enforce policies that protect patients and clinicians while still allowing thoughtful experimentation.
3. Expand validation and monitoring beyond a few flagship tools.
Moving more AI tools through structured clinical validation, ROI assessment and ongoing performance monitoring can help align day‑to‑day practice with the risk and benefit profile that boards, regulators and patients expect.
Black Book will continue to track how radiology organizations evolve their AI strategies, governance structures and investment priorities as the technology matures across imaging service lines.
About the Black Book RSNA 2025 Shadow AI in Radiology Flash Poll
The Black Book Shadow AI in Radiology Flash Poll was conducted among 208 radiologists who are current or prior RSNA attendees, drawn from a broader contact group of 250 radiology and imaging leaders, in November 2025. Based on an estimated 20,000 radiologist attendees at RSNA, the theoretical margin of sampling error for a simple random sample of 208 would be approximately ±6.8 percentage points at the 95% confidence level for proportions near 50%. Because the poll used a convenience sample of RSNA‑attending and RSNA‑affiliated radiologists, results should be interpreted as directional rather than strictly representative of all radiology practices.
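The ±6.8-point figure above follows from the standard margin-of-sampling-error formula for a simple random sample. As a minimal sketch (assuming the inputs stated in the methodology: n = 208 responses, roughly 20,000 attendees as the population, a worst-case proportion of 50%, and a 95% confidence level), the arithmetic works out as:

```python
import math

# Assumed inputs, taken from the methodology note above.
n = 208       # completed responses
N = 20_000    # estimated RSNA radiologist attendee population
p = 0.5       # worst-case proportion (maximizes the margin of error)
z = 1.96      # z-score for a 95% confidence level

# Standard error of a sample proportion under simple random sampling.
se = math.sqrt(p * (1 - p) / n)

# Finite-population correction; nearly 1 here since n is small relative to N.
fpc = math.sqrt((N - n) / (N - 1))

moe = z * se * fpc
print(f"±{moe * 100:.1f} percentage points")  # → ±6.8 percentage points
```

Because n is well under 5% of the population, the finite-population correction barely moves the result; dropping it still yields approximately ±6.8 points, consistent with the figure reported in the poll methodology.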
More information and complimentary industry reports are available to download at www.blackbookmarketresearch.com or by emailing research@blackbookmarketresearch.com.
Contact Information
Press Office
research@blackbookmarketresearch.com
800-863-7590
SOURCE: Black Book Research
View the original press release on ACCESS Newswire
