AI: Project Listings and Collaborations

  • 1.  Organizational AI policies

    Posted 19 days ago

    As more staff, providers, and teams start using AI in healthcare to create patient and family education, resources, and graphics, and to assist with translations, our organization needs to develop an organizational AI policy. We want to provide guidance to ensure we have safeguards in place to verify the accuracy of content and to make sure we are meeting our health literacy standards and branding requirements as a system. We also want some oversight of what is being shared with our patients, clients, and families to help ensure clear, consistent messaging across the system on similar topics. Does anyone have an organizational AI in healthcare policy they'd be willing to share? Does anyone have suggestions on what should be included in this type of policy? Besides a policy, are there other things we should be considering as a system? Thank you in advance for any guidance you can share.



    ------------------------------
    Cori Gibson, MSN, RN, CNL
    Children's Wisconsin
    Program Manager for Health Literacy
    Director of the Health Literacy Task Force
    cgibson@childrenswi.org
    ------------------------------


  • 2.  RE: Organizational AI policies

    Posted 18 days ago
    Edited by Chris Trudeau 18 days ago

    Thanks for your post, Cori. I have been thinking a lot about this over the past few months. I am a law professor, plain language lawyer, and responsible AI-use advocate, so my role is different from yours as a health literacy practitioner dealing with these problems in practice. I want to dive into this more specifically this summer, but I have already started to do some research into what's out there. The problem, of course, is that much of what is out there is not written in a health literate way.

    To start, I suggest looking at two templates that already address the technical and ethical risks inherent in healthcare AI.

    The Kansas Health Institute (KHI) Template is the one I like best so far. You can pull the specific sections that might work for your system. It touches on bias and transparency; it still reads like a typical policy document and isn't especially health literate, but it's better written than the next one.

    For a more specific (and more difficult) governance approach, look at the Coalition for Health AI (CHAI) Policy Template. Their template is very granular and covers practical steps like testing tools in "silent mode" and training staff properly before any technology goes live. It may be more than you need, but it will give you plenty to think about. 

    Best of luck with it, 

    Chris Trudeau, JD 



    ------------------------------
    Chris Trudeau

    professortrudeau@GMAIL.COM
    ------------------------------



  • 3.  RE: Organizational AI policies

    Posted 18 days ago

    Hello Cori,

    Great questions. You're smart to think beyond just a policy. Honestly, a policy alone won't solve the problems you're describing. Policies tell people what they should do, but people have to remember them, and they're subject to interpretation at the point of use. They don't help your staff and providers actually evaluate whether AI-generated content meets your health literacy standards and brand requirements, or catch the moments where AI gets something wrong, or simply misses the mark, in patient-facing materials.

    Based on my experience working with teams using AI in the health literacy space over the past 18 months, here are a few things I'd suggest considering alongside a policy:

    Training that's specific to creating and evaluating AI-generated content within your context of health literacy and brand requirements. Generic AI training won't address the health literacy, accuracy and appropriateness concerns you're raising. Your teams need a shared framework for creating with AI and looking at AI outputs critically, especially when the content goes to patients and families.

    Documented standards, frameworks, artifacts and workflows that connect your existing health literacy standards and branding requirements to AI workflows. If your readability standards and brand guidelines aren't built into how teams use AI, every person is making judgment calls independently, which is how inconsistent messaging happens across a system.

    Context and guardrails at the point of use. Ideally, the tools and workflows your teams are using should have your standards embedded in them so that the guidance is active while people are creating content with AI.
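    To make the "guardrails at the point of use" idea concrete, here is a minimal sketch of what an embedded check might look like. This is purely illustrative and is not our HumanLens framework: the function names, the use of the Flesch-Kincaid formula, the rough syllable counter, and the grade-8 reading-level target are all assumptions. A real system would use a validated readability tool and your organization's own standards.

    ```python
    import re

    def syllable_count(word: str) -> int:
        """Rough syllable estimate: count vowel groups, adjust for a silent 'e'."""
        word = word.lower()
        count = len(re.findall(r"[aeiouy]+", word))
        if word.endswith("e") and count > 1:
            count -= 1
        return max(count, 1)

    def fk_grade(text: str) -> float:
        """Flesch-Kincaid grade level for a block of text."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        if not sentences or not words:
            return 0.0
        syllables = sum(syllable_count(w) for w in words)
        return (0.39 * len(words) / len(sentences)
                + 11.8 * syllables / len(words) - 15.59)

    def check_patient_material(text: str, max_grade: float = 8.0) -> list[str]:
        """Return guardrail warnings for a draft; an empty list means it passes."""
        warnings = []
        grade = fk_grade(text)
        if grade > max_grade:
            warnings.append(
                f"Reading level {grade:.1f} exceeds target grade {max_grade}")
        return warnings
    ```

    The point isn't this particular check; it's that a rule like "write at or below an 8th-grade level" stops being a line in a policy document and starts being feedback the writer sees while the content is still a draft.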

    (This is actually the core of what my firm works on. We developed a framework called HumanLens™ that helps teams, including healthcare and health literacy teams, do exactly what you're describing: create with AI responsibly and evaluate its outputs against the standards that matter.) 

    If a 30-minute conversation would be useful to you, I'm happy to share what we've seen work at other organizations and hear more about your vision for this.   

    Best, Temese Szalai
    temese@subtextive.com



    ------------------------------
    Temese Szalai
    Principal
    Subtextive
    temese@subtextive.com
    ------------------------------



  • 4.  RE: Organizational AI policies

    Posted 17 days ago

    Hello Cori,

    I really appreciate the way you're approaching this - especially the instinct to look beyond a standalone policy. That tells me you're thinking about how this actually plays out for real teams, under real constraints, rather than how it reads on paper.

    From my experience building and operationalizing AI frameworks at Highmark, I've seen the same pattern Temese describes. Policies are necessary, but they're not sufficient on their own. They set intent, but they don't reliably change outcomes at the point where content is created, reviewed, and released, particularly when that content is patient- or member-facing.

    Where I've found traction is in treating AI use less like a compliance artifact and more like a governed system:

    • Shared mental models for how teams create with and evaluate AI-generated content.
    • Explicit translation of health literacy standards, brand voice, and risk considerations directly into usable frameworks and workflows.
    • Guidance that's present at the moment of creation, not just documented elsewhere or applied after the creative work is done.

    Without those elements, the burden shifts to individual choice. That's where inconsistency creeps in: different interpretations of readability, tone, appropriateness, or "good enough" accuracy, all applied under time pressure.

    I also want to underscore the training point. Generic AI education helps people understand what AI is, but it rarely helps them judge whether a specific output is acceptable in a healthcare context. The most effective training I've seen is context-specific and evaluative: it gives teams a consistent lens for assessing accuracy, clarity, equity, and brand alignment before content ever reaches patients or families.

    For me, this work ultimately sits at the intersection of empathy, governance, and trust. If we want AI to support, not erode, patient understanding and confidence, then our standards have to be embedded in how our colleagues actually work, not just articulated in principle.

    I appreciate Temese naming this so clearly, and I'm glad you're creating space for this level of thinking.  

    Best,
    George Myers

    Highmark Health



    ------------------------------
    George Myers
    Senior Copywriter
    Highmark Health
    PA United States
    George.myers@ucci.com
    ------------------------------