Hello Cori,
Great questions. You're smart to think beyond just a policy. Honestly, a policy alone won't solve the problems you're describing. Policies tell people what they should do, but people have to remember them, and they're subject to interpretation at the point of use. They don't help your staff and providers actually evaluate whether AI-generated content meets your health literacy standards and brand requirements, or catch the moments where AI gets something wrong, or simply misses the mark, in patient-facing materials.
Based on my experience working with teams using AI in the health literacy space over the past 18 months, here are a few things I'd suggest considering alongside a policy:
Training that's specific to creating and evaluating AI-generated content within your health literacy and brand context. Generic AI training won't address the health literacy, accuracy, and appropriateness concerns you're raising. Your teams need a shared framework for creating with AI and evaluating AI outputs critically, especially when the content goes to patients and families.
Documented standards, frameworks, artifacts, and workflows that connect your existing health literacy standards and branding requirements to AI workflows. If your readability standards and brand guidelines aren't built into how teams use AI, every person is making judgment calls independently, which is how inconsistent messaging happens across a system.
Context and guardrails at the point of use. Ideally, the tools and workflows your teams are using should have your standards embedded in them so that the guidance is active while people are creating content with AI.
(This is actually the core of what my firm works on. We developed a framework called HumanLens™ that helps teams, including healthcare and health literacy teams, do exactly what you're describing: create with AI responsibly and evaluate its outputs against the standards that matter to them.)
If a 30-minute conversation would be useful to you, I'm happy to share what we've seen work at other organizations and hear more about your vision for this.
Best, Temese Szalai
temese@subtextive.com
------------------------------
Temese Szalai
Principal
Subtextive
temese@subtextive.com
------------------------------
Original Message:
Sent: 04-01-2026 08:02 AM
From: Cori Gibson
Subject: Organizational AI policies
As more staff, providers, and teams start using AI in healthcare to create patient and family education, resources, and graphics, and to assist with translations, our organization needs to develop an organizational AI policy. We want to provide guidance to ensure we have safeguards in place to verify the accuracy of content and make sure we are meeting our health literacy standards and branding requirements as a system. We also want to have some oversight of what is being shared with our patients, clients, and families to help ensure we have clear, consistent messaging across the system on similar topics. Does anyone have an organizational AI in healthcare policy they'd be willing to share? Does anyone have any suggestions on what should be included in this type of policy? Besides a policy, are there other things we should be considering as a system? Thank you in advance for any guidance you can share.
------------------------------
Cori Gibson, MSN, RN, CNL
Children's Wisconsin
Program Manager for Health Literacy
Director of the Health Literacy Task Force
cgibson@childrenswi.org
------------------------------