eLearning · Scenario-Based · HIPAA Compliance

Trust But Verify: Using AI Safely in Clinical Settings

Hospital nursing staff are encountering AI-generated clinical content — care summaries, medication lists, discharge documents — with little guidance on how to evaluate it before acting. This module moves learners from passive recipients to active, confident verifiers equipped with a repeatable process they can apply under real working conditions.

[Image: Trust But Verify module — stylized EHR interface showing a three-step AI content verification checklist]
Deliverable: Scenario-Based eLearning
Platform: Articulate Rise
Duration: 20–25 minutes
Audience: Hospital Nursing Staff
Compliance: HIPAA · AI Acceptable Use
Context

The Performance Gap

Hospital nursing staff are increasingly encountering AI-generated clinical content embedded directly into their EHR workflows: care summaries, medication lists, and discharge documents. Most have received little to no guidance on how to evaluate that content before acting on it.

The gap isn't awareness. Most nurses know AI exists. The gap is behavior: the tendency to either over-trust AI-generated content because it looks authoritative, or dismiss it entirely because it feels unfamiliar. Neither response is safe.

The performance goal was to move learners from passive recipients to active, confident verifiers, equipped with a repeatable process they could apply under real working conditions: time pressure, competing priorities, and low comfort with technology.

Design Rationale

Why Scenario-Based Learning

A traditional compliance format — slides, policy language, a multiple-choice quiz — would have checked the box without changing behavior. For a learner like Maya, mandatory training delivered as a lecture is something to get through, not something to learn from.

Scenario-based learning was the right fit, though the reasons aren't as obvious.

1. The stakes are real.

When acting on inaccurate AI-generated content can mean patient harm or a HIPAA violation, learners need to feel the weight of that decision in a safe environment before they face it on the floor. A scenario creates that. A slide deck doesn't.

2. The audience is skeptical.

A nurse with eight years of clinical experience and low enthusiasm for mandatory training isn't going to be persuaded by policy language. She needs to see herself in the content, face a decision that feels like her actual job, and discover through her own choices why the verification process matters. That only happens through decision-based design.

3. Behavior change requires practice, not just exposure.

The verification checklist at the center of this module is only useful if learners have already practiced applying it before they need it. The scenarios provide practice in a low-stakes environment with immediate, coaching-oriented feedback. In my experience, that piece is almost always skipped in compliance training.

Process

How AI Was Used in the Design Workflow

AI was used deliberately, at specific stages, with human judgment applied at every decision point.

Stage | AI's Role
Research & Content Development | Generated an initial framework of common AI-related risks in clinical settings. The output was reviewed, validated against current HIPAA guidance, and refined against instructional design best practices. AI didn't determine what was clinically accurate or compliant.
Persona & Scenario Development | Generated initial drafts of the learner persona, scenario setups, and decision-point options. Each draft was evaluated for clinical realism, instructional integrity, and alignment with learning objectives before being revised.
Copywriting | Generated first drafts of feedback language and knowledge check questions. Drafts were rewritten to match the coaching tone the audience required and to ensure clinical accuracy.
What AI didn't do | Make instructional design decisions, determine the branching logic, validate clinical accuracy, or write the final content. Those calls were human throughout.
Evaluation

How Effectiveness Would Be Measured

Measurement would follow the Kirkpatrick Model, though the levels worth watching most closely depend on the deployment context.

Level 1: Reaction

A brief post-module survey: learner confidence in applying the verification checklist, and whether the scenarios felt relevant to actual work. For a skeptical, time-pressured audience like Maya, perceived relevance is a leading indicator of whether anything transfers.

Level 2: Learning

Scenario accuracy scores from the two branching decision points and the knowledge check. The more useful data is which distractors learners choose most frequently — that's where the misconceptions are, and where the content may need refinement.

Level 3: Behavior

Observations or supervisor check-ins at 30 and 90 days to assess whether nurses are actually applying the checklist when they encounter AI-generated content. A brief self-report survey asking learners to describe a specific situation where they applied what they learned can also uncover further revision opportunities.

Level 4: Results

Tracked in partnership with compliance and patient safety teams: AI-related documentation errors, PHI incidents tied to AI-generated content, and help desk escalations around AI content uncertainty. A meaningful reduction across subsequent training deployments would indicate the module hit its target.

Module Design Document

Trust But Verify: Using AI Safely in Clinical Settings

Format: Scenario-based eLearning (Articulate Rise)
Duration: 20–25 minutes
Audience: Hospital/acute care nursing staff
Prerequisites: None
Compliance Tags: HIPAA · PHI Handling · AI Acceptable Use
Learner Persona

Registered Nurse

Maya Chen, RN
Setting: Med-surg unit, mid-sized hospital
Experience: 8 years of nursing; solid clinical instincts, minimal enthusiasm for mandatory training
Tech Comfort: Uses the EHR because she has to; prefers paper when possible
Attitude Toward AI: Mildly suspicious; hasn't seen it help her yet
Goal: Get through the training and get back to her patients
Learning Objectives

By the end of this module, learners will be able to:

1. Identify at least three situations where AI-generated clinical content requires human verification before acting on it

2. Apply a simple verification checklist to evaluate AI-generated patient information for accuracy and HIPAA compliance

3. Recognize and respond appropriately to a PHI risk in an AI-generated document

Section 1 · 3–4 min

What You Need to Know Before You Click Accept

What AI-Generated Clinical Content Is and Where It Comes From

You may have already seen it without realizing it. A care summary that appeared in the EHR faster than usual. A discharge document that seemed almost too complete. A medication list that pulled together information from multiple visits in seconds.

That's AI-generated clinical content.

Hospitals are integrating AI tools directly into the platforms you already use — Epic, Oracle Health (formerly Cerner), and others. These tools analyze patient data and generate summaries, recommendations, and documentation to support clinical workflows. The content looks clinical, uses the right terminology, and is formatted the way you'd expect.

That's exactly why it requires your attention.

AI generates content by identifying patterns in existing data. It is not applying clinical judgment. It doesn't know about the conversation you had with your patient this morning, or that her daughter mentioned she stopped taking one of her medications last week. That context lives with you. Not with the AI.

Why Hospitals Are Using It

AI-generated clinical content exists to reduce the administrative burden that pulls nurses and physicians away from direct patient care. Hospitals use it to summarize patient histories, flag potential medication interactions, generate draft discharge summaries, and surface relevant data at the point of care.

When it works well, it saves time and reduces documentation errors. When it doesn't — when it pulls from outdated records or generates a summary that looks complete but isn't — the consequences can be serious. That's where you come in.

Why Human Oversight Is Not Optional

AI tools are built to be helpful. They are not built to be accountable. That accountability belongs to you.

When an AI-generated document enters your workflow, it carries no clinical or legal weight until a qualified human reviews it. Your review is not a formality. It is the safeguard between a pattern-matching algorithm and your patient's safety.

HIPAA has no exception for AI. If a document contains PHI that was generated, stored, or transmitted incorrectly, the legal and ethical responsibility falls on the organization and the individuals in that workflow. That includes you.

You don't need to be a technology expert to use AI-generated content safely. You need a process. That's what this module gives you.

Section 2 · 5–6 min

The Verification Checklist: Three Steps Before You Act

You're busy. Your patients need you. The last thing you have time for is a complicated process that slows you down every time an AI-generated document lands in your queue.

This checklist has three steps. It takes less time than you think. And it could be the difference between a safe patient outcome and a serious clinical or compliance error. Use it every time.

1 Is the Patient Information Accurate?

AI tools pull from existing data. If that data is outdated, incomplete, or entered incorrectly at some point in the patient's history, the AI will reproduce those errors — confidently and in a format that looks authoritative.

Before you act on any AI-generated document, cross-reference it against the current EHR record. Look specifically for discrepancies in medications and dosages, allergies and adverse reactions, diagnoses, the active problem list, and recent lab values or test results.

If anything doesn't match, stop. Do not act on the document until the discrepancy is resolved and documented.

2 Is PHI Handled Correctly?

AI tools process large amounts of patient data to generate their outputs. That creates opportunities for PHI to appear where it shouldn't — in a document intended for a different patient, sent to an unauthorized recipient, or generated using data outside approved parameters.

Before you approve or forward any AI-generated document, ask yourself: Does this document contain information that belongs to a different patient? Is it going only to authorized recipients? Does anything feel inconsistent with what I know about this patient?

If you spot a PHI concern, do not try to fix it yourself and move on. Stop, report it to your supervisor and the compliance team, and document the incident every time.

3 Does This Align With Your Clinical Judgment?

This is the step no algorithm can replace.

AI summarizes patterns across data. It does not know your patient as an individual. It did not hear her describe her pain level this morning. It did not notice that she seemed more confused than yesterday, or that she mentioned she ran out of one of her prescriptions two weeks ago. You did.

Before acting on any AI-generated clinical content, ask yourself: Does this feel right? Does it align with what I know about this patient from direct observation and care? If something feels off, trust that instinct. Your clinical judgment is not a backup to the AI. It is the primary check.

Rise build note: Tabbed interactive block, one tab per step, with a short real-world example for each. Build as a standalone accordion or tab block — extractable as a job aid later.
Section 3 · 5–6 min

Scenario 1: The Medication Discrepancy

Scene Setup

It's 7:15 AM. Maya is twenty minutes into her shift and already behind. She pulls up the EHR for Mr. Alderman, a 74-year-old patient admitted two days ago following a cardiac event. An AI-generated care summary is flagged at the top of his record.

She scans it quickly. Diagnoses look right. Allergies match. Then she gets to the medication list.

The AI summary lists Mr. Alderman's blood thinner — warfarin — at 7.5mg daily. But the current EHR record shows 5mg daily, updated by the attending physician at yesterday's evening rounds following an elevated INR result. The AI pulled from an earlier record. The dosage shown is outdated.

Maya has four other patients waiting. The summary looks complete and professional.

Decision point: What does Maya do?

Choice A — Incorrect

The AI has access to more data than I do and is probably pulling from the most recent source. She updates the EHR to match the summary and prepares to administer the higher dose.

Administering warfarin at 7.5mg when the physician adjusted the dose to 5mg based on yesterday's INR result puts Mr. Alderman at significant risk of a bleeding event. The AI wasn't wrong based on the data it had — but it didn't know about the physician's update from last night. When AI-generated content conflicts with the current EHR record, the EHR is your source of truth until a qualified clinician tells you otherwise.

Choice B — Correct

She flags the discrepancy, holds the medication, and contacts the attending physician before proceeding. She documents the inconsistency in the EHR.

Holding the medication, contacting the attending, and documenting the discrepancy protects Mr. Alderman, creates a clear record of clinical reasoning, and flags a data issue the AI system may repeat with other patients. You applied Step 1 of the verification checklist exactly as intended. That's what human oversight looks like in practice.

Choice C — Incorrect

She ignores the AI summary entirely and goes with what's in the EHR, but doesn't document that she identified a discrepancy.

Using the correct EHR dosage protects Mr. Alderman in this moment — but failing to document the discrepancy means the issue goes unreported. The next nurse who sees that summary may not catch the same error. Documentation is not optional, even when your instinct is right.

Section 4 · 5–6 min

Scenario 2: The PHI Risk

Scene Setup

It's 11:30 AM. Maya is reviewing discharge paperwork for Mr. Alderman, who is being released later this afternoon. The AI has generated a discharge summary to be sent to his primary care physician.

The document looks good at first glance. Diagnoses accurate. Medications reflect the corrected warfarin dosage. Follow-up instructions clear and complete. Then she notices something in the third paragraph.

Embedded in the care narrative are lab results and a diagnosis that don't belong to Mr. Alderman. The patient in room 412 — his neighbor for the past two days — shares the same attending physician. The AI pulled from both records and combined them into a single document.

Mr. Alderman's discharge summary contains another patient's protected health information.

Decision point: What does Maya do?

Choice A — Incorrect

She's already running late, and the error is small. She sends the summary as is — the PCP will probably recognize that the information doesn't belong and disregard it.

This is a HIPAA violation — and assuming the PCP will sort it out doesn't reduce that risk. Sending a document containing another patient's PHI to an unauthorized recipient is a reportable breach regardless of how small the error seems. Time pressure is never a justification for a compliance violation.

Choice B — Correct

She stops the send, flags the document to her supervisor and the compliance team, documents the incident in the EHR, and waits for guidance before proceeding.

Stopping the send prevents the breach from becoming a disclosure. Reporting it ensures the incident is investigated and documented as required under HIPAA. Flagging it means the AI system's error gets investigated — so it doesn't happen again with a different patient and a different nurse who doesn't catch it. Your thoroughness today protects more than Mr. Alderman.

Choice C — Incorrect

She manually deletes the other patient's information, confirms the rest of the summary is accurate, and sends the corrected version without reporting the incident.

This comes from the right instinct — but fixing it without reporting it is not enough. Manually correcting the document doesn't create a record of the breach, notify the compliance team as required under HIPAA, or flag the AI system error for investigation. Under HIPAA, a breach identified and corrected before disclosure still requires reporting through proper channels. Good intentions are not a substitute for the process.

Section 5 · 3–4 min

Knowledge Check

Three questions applying the verification checklist to new situations, mapping directly to the learning objectives.

Question 1 — Multiple Choice

Maya receives an AI-generated allergy list for a new admission. The list shows no known drug allergies. During her intake assessment, the patient tells Maya she had a severe reaction to penicillin two years ago that sent her to the emergency room.

What should Maya do first?

Trust the AI-generated list — it pulls from the full medical record and is likely more complete than the patient's recall.
Document the patient-reported allergy in the EHR immediately, flag the discrepancy, and notify the attending physician before any medications are administered.
Make a note of the patient's report and plan to update the record after her other admissions are complete.
Ask the patient if she's sure about the reaction before making any changes to the official record.
Why B: Patient-reported information is always clinically significant, especially for allergies. If the penicillin reaction was never recorded, the AI has no way of knowing about it. A patient's direct report is primary source information. Delaying documentation while managing other admissions creates serious risk; asking for confirmation before acting introduces unnecessary delay.
Question 2 — Multiple Choice (identify what is NOT acceptable)

A nurse on Maya's unit receives an AI-generated care summary that contains what appears to be lab results for a patient who was discharged last week. The current patient in that room has a similar name.

Which of the following is NOT an acceptable response?

Stop and cross-reference the summary against the current patient's EHR before taking any action.
Assume the AI pulled the correct record because the names are similar and the document looks complete.
Flag the document to the supervisor and compliance team, and document the potential PHI concern before proceeding.
Hold all clinical actions based on the summary until the discrepancy is resolved by a qualified clinician.
Why B is the unacceptable response: Similar names and a complete-looking document are not sufficient grounds to assume accuracy. AI tools can pull from incorrect records when patient names, room numbers, or identifiers are similar. Proceeding without verification puts the current patient at risk and potentially exposes a previous patient's PHI to an unauthorized workflow.
Question 3 — Scenario-Based

Review the following AI-generated medication summary for Eleanor Voss, 68, admitted for management of Type 2 diabetes and hypertension.

AI Summary | Current EHR
Metformin 1000mg twice daily | Metformin 1000mg twice daily
Lisinopril 10mg once daily | Lisinopril 10mg once daily
Atorvastatin 40mg once daily | Atorvastatin 40mg once daily
Warfarin 5mg daily — last INR 2.4, checked 14 months ago | No warfarin on active list · No INR results

Which of the following best describes the appropriate next step?

The AI summary and EHR mostly match — the warfarin discrepancy is minor and the INR is within therapeutic range, so it's safe to proceed.
Flag the warfarin entry as a significant discrepancy. An anticoagulant appearing in an AI summary but not in the active medication list requires immediate clarification from the attending physician before any clinical action is taken.
Add warfarin to the active medication list based on the AI summary since it appears to have a documented clinical history behind it.
Disregard the AI summary entirely and rely only on the EHR going forward.
Why B: A discrepancy involving an anticoagulant is never minor — warfarin has a narrow therapeutic window. The INR being checked 14 months ago makes this more concerning, not less. Never add medications to an active record based solely on an AI summary. But discarding the summary entirely is also wrong: it surfaced a potential clinical concern worth investigating.
Section 6 · 2–3 min

Summary: What You're Taking Back to the Floor

Closing Scene

Maya's shift is winding down. Mr. Alderman is safely discharged, his corrected warfarin dosage documented and communicated to his primary care physician. The PHI incident is reported, documented, and in the hands of the compliance team.

She ran behind today. Her patients were safe. That's the job.

AI-generated clinical content is already part of your workflow, whether you've noticed it or not. It's designed to support you, not replace you. And it requires your review every single time — not because the technology is bad, but because no algorithm can account for what you know, what you observed, and what your patient told you this morning.

  • AI is a tool, not a decision-maker. Every AI-generated document requires a qualified human review before it becomes part of patient care.
  • When in doubt, report it. Fixing an error quietly is not the same as reporting it correctly. A compliance flag filed today prevents a patient safety incident tomorrow.
  • Your instincts are an asset. Years of clinical experience, direct patient observation, and the conversation you had at the bedside this morning are things no algorithm can replicate. Use them.
Build Specifications

Design Notes for Rise

  • At course end: capture learner feedback with a blank field — directed questions, open response, or both
  • Use Rise's scenario block for both decision points — handles branching cleanly without Storyline
  • Keep the visual tone calm and clinical — avoid overly bright or gamified aesthetics for this audience
  • Use a consistent character image for Maya throughout to build familiarity
  • Write all feedback in a coaching tone, not a corrective one — Maya is competent, she just needs guidance
  • Build the verification checklist as a standalone accordion or tab block so it can be extracted as a job aid — two portfolio artifacts from one build
Primary Federal Sources

U.S. Department of Health and Human Services

All sources below are official publications from HHS and its Office for Civil Rights (OCR). URLs verified as of May 2026.

Verify all URLs remain active prior to module deployment.

Recommended for SME Review

Additional Sources

Not cited directly in the module, but recommended for SME review and further validation of clinical and technical content.

Office of the National Coordinator for Health Information Technology (ONC): Health IT topics including AI, EHR integration, and interoperability.
healthit.gov
American Nurses Association (ANA): Position statements and resources on AI and nursing practice.
nursingworld.org
National Institute of Standards and Technology (NIST): AI Risk Management Framework (AI RMF 1.0).
nist.gov — AI RMF 1.0 (PDF)
The Joint Commission: Standards and resources related to clinical documentation, patient safety, and healthcare technology.
jointcommission.org
Transparency

Note on AI-Assisted Content Development

This module was developed with the assistance of AI tools as part of an intentional instructional design workflow. AI was used to generate initial content drafts, scenario frameworks, and feedback language. All content was reviewed, revised, and validated by the instructional designer before finalization.

Clinical accuracy, compliance language, and scenario realism should be validated by qualified subject matter experts — clinical, compliance, and health IT — before live deployment. See the SME Review tab for the recommended review protocol.

This documentation reflects the instructional designer's professional commitment to accuracy, transparency, and responsible AI use in healthcare learning design.

Document prepared May 2026 · Trust But Verify: Using AI Safely in Clinical Settings · Vanessa Schill, ATD Master Trainer

Case Study Addendum

Why SME Review Matters for This Module

AI literacy training for clinical staff touches two domains where getting it wrong has real consequences — healthcare compliance and emerging technology. Instructionally sound content that is clinically inaccurate puts learners at risk. Compliance language that is even slightly outdated exposes organizations to liability. And in acute care settings, the patients at the end of that chain are real people.

SME review was never optional here. It was a design requirement. The question is never whether healthcare content needs expert validation — it always does. The question is how to structure that process so it actually strengthens the content.

Reviewer Roles

Who Should Review This Module

Three separate reviewers ensure the training is evaluated through three distinct lenses. None of them are interchangeable.

Clinical SME

A registered nurse or nurse educator with current acute care experience. This person confirms whether Maya's decision points reflect what nurses actually face on a med-surg unit, whether the medication details hold up clinically, and whether the verification checklist is achievable under real working conditions — not just ideal ones.

The right candidate is someone currently working in, or closely adjacent to, a hospital environment that already uses AI-assisted documentation tools. Clinical experience that predates EHR integration is useful context, but it is not sufficient for validating this particular content.

Compliance SME

A HIPAA compliance officer or healthcare privacy attorney. Their job is to confirm that all PHI guidance, breach notification language, and compliance requirements are accurate and current — and to catch anything that overstates or understates legal obligations, even unintentionally. In compliance training, the difference between "must" and "should" matters.

The right candidate has current working knowledge of OCR enforcement priorities, not just familiarity with the statute. Regulations evolve. The reviewer should reflect where the guidance actually stands today.

AI and Health IT SME

A health informatics professional or clinical informaticist with hands-on EHR implementation experience. This reviewer validates the technical accuracy of how AI-generated content is described — how it is generated, where errors typically originate, and how it behaves in clinical workflows. They also confirm that the scenarios are technically realistic, not just plausible-sounding.

This is the reviewer most ID teams overlook. In my experience, it is often the most valuable one. A scenario can be clinically accurate and legally sound and still lose learner trust if the technology doesn't behave the way nurses actually see it behave in their EHR.

Process

The SME Review Process

1. Prepare the Review Package

Before reaching out to any SME, I put together a structured review package: complete module content, the design document, learning objectives, and a review guide that tells each reviewer exactly what I need from them.

That last piece is critical. Without clear direction, a clinical SME will default to editing for tone and clarity. A compliance SME will flag language they personally prefer rather than language that creates legal risk. Each reviewer gets explicit guidance on what they are evaluating — and explicit guidance on the difference between a critical error and a suggestion for improvement.

The review guide for this module includes a brief overview of purpose and audience, the three learning objectives, domain-specific review questions for each SME, a clear timeline and feedback format, and instructions for prioritizing feedback by severity.

2. Conduct Individual Reviews

Each SME reviews their package independently before we talk. Written feedback is a useful starting point. The debrief conversation is where the real value happens.

One question I ask every reviewer without exception: Is there anything in this module that could cause harm if a learner acted on it? That question focuses the conversation quickly and surfaces the concerns that matter most.

3. Reconcile and Revise

After all three reviews are complete, I reconcile the feedback. Not all of it will be compatible — a clinical SME may push for scenario language that a compliance SME finds too ambiguous for a training context. My job is to find the version that is both clinically realistic and compliance-accurate, and to document the reasoning behind every revision.

A revision log maintained throughout the process demonstrates rigor, supports future updates, and protects the designer professionally if the content is ever questioned after deployment.

4. Final Sign-Off

Written confirmation from each SME that the content within their domain is accurate and appropriate for deployment. An email is enough. What matters is that it is dated and retained with the project documentation.

If you are building this as a portfolio piece and a formal review is not feasible, document the process you would follow for a live deployment and be transparent about the context. That kind of honesty reads well to hiring managers in healthcare tech — it shows you understand the stakes of the work even when the current project is a demonstration.

Design Intent

What This Process Signals

Most instructional designers building healthcare compliance training engage a subject matter expert, collect feedback, and revise. That works for low-stakes content. For training that directly shapes how clinical staff interact with AI tools in patient care settings, it is not enough.

A three-party review with distinct roles, structured guidance, and documented sign-off signals something specific: that I understand the difference between instructionally sound and clinically safe, and that I know how to close that gap through structured collaboration.
