How to Create an AI Governance Policy for Your Clinic in 2026
Dr. Sajad Zalzala
2026-04-23
If your practice uses AI tools — any AI tools, including ambient scribes, clinical decision support, patient communication assistants, or even ChatGPT for administrative tasks — and you don't have a written AI governance policy, you're exposed.
Not theoretically exposed. Practically, legally, financially exposed.
I'm a physician licensed in all 50 states with a computer science degree. I've built software products, I run a telemedicine company, and I've spent the last year helping physicians integrate AI into clinical workflows. The single most common gap I see isn't technical — it's governance. Physicians are adopting tools faster than they're adopting the policies those tools require.
This article gives you the complete framework. By the end, you'll have a five-component AI governance policy you can implement this month.
Why You Need This Now
Three developments have made AI governance urgent in 2026:
1. The liability landscape shifted. Bernstein et al.'s January 2026 study in Nature Health found that 74.7% of mock jurors attributed negligence to physicians who used AI without independent review. Malpractice carriers are starting to ask about AI use in renewal applications. Having a policy is becoming a condition of coverage.
2. State medical boards are acting. California, New York, Texas, and Washington have issued guidance on AI use in clinical practice. While no state has passed comprehensive AI-in-medicine legislation yet, the regulatory trajectory is clear: documentation requirements are coming, and practices with existing policies will be ahead of the curve.
3. Your staff is already using AI. The Doximity 2026 Physician Survey (n=3,151) found that 94% of physicians are using or interested in AI. But it's not just physicians — your nurses, MAs, billing staff, and front desk are using ChatGPT, AI scheduling tools, and AI-generated patient communications. Without a policy, you have no visibility into what's happening with your patients' data.
The 5 Components of a Clinical AI Governance Policy
Component 1: Tool Approval Process
Every AI tool that touches patient data — or could touch patient data — must go through a formal approval process before use. "Formal" doesn't mean a committee of 12 meeting quarterly. It means a checklist that someone with authority completes and signs.
The approval checklist:
- [ ] Tool name, vendor, version
- [ ] Purpose (documentation, clinical decision support, administrative, patient communication)
- [ ] Data classification: Does this tool process PHI? PII? De-identified data only?
- [ ] BAA status: Is a Business Associate Agreement executed? (If PHI is involved and no BAA exists, the tool is rejected. Full stop.)
- [ ] Security certification: SOC 2 Type II? HITRUST? ISO 27001?
- [ ] Data training: Does the vendor use input data for model training? (Must be "no" for any tool processing PHI)
- [ ] EHR integration method: Direct write, API, copy-paste, manual entry?
- [ ] Cost: Per-provider or per-practice pricing?
- [ ] Approved by: Name, title, date
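If you'd rather enforce the checklist in software than on paper, here's a minimal sketch of the approval record as a Python data structure, with the two hard rejection rules applied automatically. The field names and the `is_approvable` helper are my own illustration, not a required format:

```python
# Minimal sketch: the approval checklist as a structured record.
# Field names and values are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class ToolApproval:
    tool: str                  # name, vendor, version
    purpose: str               # documentation, CDS, administrative, patient communication
    processes_phi: bool        # data classification
    baa_executed: bool         # Business Associate Agreement on file?
    certifications: list[str]  # e.g., ["SOC 2 Type II"]
    trains_on_inputs: bool     # vendor uses input data for model training?
    ehr_integration: str       # direct write, API, copy-paste, manual entry
    approved_by: str           # name and title
    approved_on: date

    def is_approvable(self) -> tuple[bool, str]:
        """Apply the two hard rejection rules from the checklist."""
        if self.processes_phi and not self.baa_executed:
            return False, "Rejected: PHI without an executed BAA. Full stop."
        if self.processes_phi and self.trains_on_inputs:
            return False, "Rejected: tools processing PHI must not train on input data."
        return True, "Eligible for sign-off."
```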
Who approves: Designate one person. In a solo practice, it's you. In a group, it's the managing partner, compliance officer, or IT director. The point is accountability — one person owns the list of approved tools.
The approved tools list: Maintain a living document. Post it where providers can see it. Update it when tools are added, removed, or change their terms. I recommend reviewing the list quarterly.
The critical rule: no shadow AI. Any tool not on the approved list is prohibited for use with practice data. This includes free tools that staff download on their personal phones. "I didn't know" is not a defense under HIPAA. Your policy needs to make this explicit.
Component 2: Documentation Requirements
When AI generates or assists in creating clinical content, your documentation must reflect that. This isn't about transparency for its own sake — it's about legal defensibility and clinical integrity.
What to document:
- In the medical record:
  - That AI was used in generating the documentation (e.g., "Note generated with AI-assisted ambient documentation, reviewed and edited by [physician name]")
  - Any modifications made to the AI output
  - Clinical reasoning that supports the assessment and plan, independent of the AI suggestion
- In your practice records (not the patient chart):
  - Which tool was used
  - Date and encounter type
  - Any errors or hallucinations identified during review
  - Time saved (optional, but useful for ROI analysis)
The review protocol:
Every AI-generated clinical document must be reviewed by the signing provider before it enters the medical record. This is non-negotiable. The Bernstein study demonstrated that jurors expect independent review, and signing an unreviewed AI note is functionally identical to signing a note you didn't read — except with the added risk that an algorithm wrote it.
I recommend a three-tier review approach:
- Routine encounters (well visits, stable chronic disease follow-up): Quick review for accuracy, edit as needed, sign.
- Complex encounters (new diagnoses, multiple active problems, medication changes): Detailed review of every assessment and plan element. Verify that the AI captured the clinical reasoning, not just the facts.
- High-risk encounters (controlled substance prescriptions, psychiatric evaluations, disability assessments): Manual documentation preferred. If AI is used, line-by-line review with explicit documentation of independent clinical judgment.
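For practices that triage notes in software, here's a minimal sketch of the tier assignment. The encounter categories are illustrative; map them to your own specialty's visit types:

```python
# Minimal sketch: mapping encounter types to review tiers.
# Categories are illustrative, not exhaustive.
ROUTINE = {"well visit", "stable chronic follow-up"}
HIGH_RISK = {"controlled substance", "psychiatric evaluation", "disability assessment"}

def review_tier(encounter_type: str) -> str:
    """Return the review tier the three-tier protocol assigns."""
    if encounter_type in HIGH_RISK:
        return "high-risk: manual documentation preferred; line-by-line review"
    if encounter_type in ROUTINE:
        return "routine: quick accuracy review, edit, sign"
    # Anything not explicitly routine gets the detailed review.
    return "complex: detailed review of every assessment and plan element"

print(review_tier("psychiatric evaluation"))
# -> high-risk: manual documentation preferred; line-by-line review
```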
Component 3: Incident Response
What happens when AI makes a mistake that affects patient care? You need a plan before it happens.
Define what constitutes an AI incident:
- AI-generated note contains a factual error that was signed without correction (e.g., wrong medication, wrong allergy, fabricated history element)
- AI clinical decision support provides a recommendation that contradicts standard of care, and it's followed without independent verification
- PHI is entered into a non-approved AI tool
- AI tool suffers a data breach
The response protocol:
1. Identify and contain. Who found the error? What's the scope? Is the erroneous information in the patient's chart? Has it affected clinical decisions?
2. Correct the record. Amend the chart using your EHR's amendment process. Do not delete — amend with documentation of what was incorrect and why.
3. Notify. If the error affected clinical care, the patient should be informed consistent with your state's disclosure requirements. If PHI was breached, follow the HIPAA Breach Notification Rule: notify affected individuals without unreasonable delay and no later than 60 days after discovery; breaches affecting 500 or more individuals also require contemporaneous HHS notification (and media notice), while smaller breaches go on the log you report to HHS annually.
4. Root cause analysis. Was this a tool failure, a process failure, or a human failure? Did the provider review the note? Was the tool configured correctly? Is this a known limitation of the tool?
5. Policy update. Does this incident reveal a gap in your governance policy? Update accordingly.
Document everything. The incident, the response, the root cause, the policy update. This documentation is your evidence that you have a functioning governance framework — which matters when regulators, malpractice carriers, or plaintiff attorneys come asking.
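Here's a minimal sketch of what a structured incident log entry might look like, including the outer-limit HIPAA deadlines. The field names are my own; a well-organized spreadsheet works just as well:

```python
# Minimal sketch: an incident log entry plus outer-limit HIPAA deadlines.
# Field names are illustrative. Deadlines reflect the Breach Notification
# Rule: individual notice without unreasonable delay, no later than 60 days
# after discovery; 500+ affected individuals triggers contemporaneous HHS
# (and media) notice; smaller breaches go on the annual HHS log.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class AIIncident:
    discovered_on: date
    tool: str
    description: str              # what happened, scope, charts affected
    phi_breached: bool
    individuals_affected: int = 0
    root_cause: str = ""          # tool, process, or human failure
    policy_update: str = ""       # gap revealed and change made

    def notification_deadlines(self) -> dict[str, str]:
        """Outer-limit deadlines if PHI was breached; sooner is always better."""
        if not self.phi_breached:
            return {}
        outer = str(self.discovered_on + timedelta(days=60))
        deadlines = {"individual notice by": outer}
        if self.individuals_affected >= 500:
            deadlines["HHS and media notice by"] = outer
        else:
            deadlines["HHS annual log"] = "submit within 60 days of year end"
        return deadlines
```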
Component 4: Staff Training
A policy that nobody knows about protects nobody. Training must be:
Initial: Every new employee, provider, and contractor receives AI governance training during onboarding. Cover the approved tools list, the documentation requirements, the prohibition on shadow AI, and the incident reporting process.
Annual: Yearly refresher that covers policy updates, new tools added or removed, incidents from the past year (anonymized), and emerging regulatory requirements.
Role-specific:
- Providers: Review protocol, documentation requirements, clinical liability implications
- Clinical staff (RNs, MAs): Approved tools for their role, PHI handling, when to escalate AI-related questions
- Administrative staff: Approved tools for billing, scheduling, patient communication. Emphasis on the shadow AI prohibition, since this is where most violations occur.
- IT/technical staff: Tool configuration, security requirements, BAA management, audit log review
Training documentation: Keep attendance records and signed acknowledgments. These are your evidence of a "reasonable effort" to ensure compliance — a standard that matters in enforcement actions.
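If you track training in software rather than a binder, a sketch like the one below can flag overdue annual refreshers and missing acknowledgments. The record shape is illustrative:

```python
# Minimal sketch: flag staff whose annual AI governance refresher is overdue
# or who lack a signed acknowledgment. Record shape is illustrative.
from datetime import date, timedelta

training_log = [
    {"name": "A. Provider", "role": "physician",
     "last_trained": date(2025, 3, 1), "acknowledgment_signed": True},
    {"name": "B. Front Desk", "role": "admin",
     "last_trained": date(2026, 1, 15), "acknowledgment_signed": True},
]

def overdue(log: list[dict], as_of: date) -> list[str]:
    """Anyone past the one-year window or missing a signed acknowledgment."""
    return [r["name"] for r in log
            if as_of - r["last_trained"] > timedelta(days=365)
            or not r["acknowledgment_signed"]]

print(overdue(training_log, date(2026, 4, 23)))  # -> ['A. Provider']
```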
Component 5: Patient Disclosure
This is the newest and most evolving component. As of April 2026, there is no federal requirement to disclose AI use to patients. However:
- Several states are considering disclosure requirements
- CMS has indicated interest in AI transparency for Medicare/Medicaid encounters
- The AMA's AI principles recommend transparency about AI use in clinical care
- Malpractice carriers increasingly view proactive disclosure as risk-reducing
My recommendation: Get ahead of the mandate. Add a simple AI disclosure to your patient intake or consent process:
*"Our practice uses artificial intelligence tools to assist with clinical documentation and decision support. All AI-generated content is reviewed and approved by your treating physician. Your protected health information is only processed by HIPAA-compliant, approved tools. If you have questions about our use of AI, please ask your provider."*
This isn't legally required in most jurisdictions yet. But it's coming, and practices that implement it now will avoid the scramble when it becomes mandatory. It also builds patient trust — most patients are fine with AI as long as their doctor is still making the decisions.
Putting It All Together
Your AI governance policy doesn't need to be a 50-page document. For a small practice, a well-structured 3-5 page policy covering all five components is sufficient. Here's the structure:
- Page 1: Scope and Definitions
  - What this policy covers (all AI tools used in the practice)
  - Key definitions (AI tool, PHI, BAA, approved tools list)
  - Effective date and review schedule (annual)
- Page 2: Tool Approval and Approved Tools List
  - Approval checklist
  - Current approved tools with BAA status
  - Shadow AI prohibition
- Page 3: Documentation and Review
  - AI-assisted documentation notation standards
  - Three-tier review protocol
  - Signing requirements
- Page 4: Incidents, Training, and Disclosure
  - Incident definition and response protocol
  - Training schedule and requirements
  - Patient disclosure language
- Page 5: Signatures
  - Practice leadership sign-off
  - Annual review acknowledgment
The Malpractice Dimension
I want to address this directly because it's the question I get most often: "Will using AI increase my malpractice risk?"
The answer is nuanced. Using AI without governance increases your risk. Using AI with governance may actually decrease it.
Here's why: if you have a written policy, trained staff, documented review processes, and an incident response plan, you've demonstrated a standard of care that most practices haven't achieved yet. When — not if — an AI-related adverse event occurs in your practice, the difference between "we had a policy and followed it" and "we were just kind of using whatever" is the difference between a defensible case and a devastating one.
The Bernstein study tells us what jurors think. Give them evidence that you were thoughtful, systematic, and professional about AI integration. That's your best defense.
Next Steps
1. Download the template. I've created a free, editable AI governance policy template at [practicefrontier.com](https://practicefrontier.com). Fill in your practice name, your approved tools, and your contact information. You'll have a working policy in under an hour.
2. Audit your current state. Before you implement the policy, find out what's actually happening. What tools are people using? Is PHI going into non-approved tools? You can't govern what you haven't inventoried.
3. Set a deadline. Pick a date — 30 days from now — by which the policy is finalized, the approved tools list is posted, and initial training is completed. Without a deadline, governance becomes "something we'll get to eventually."
4. Need help? If you want guidance tailored to your practice size, specialty, and EHR, I offer consulting through Practice Frontier. Book a session at [practicefrontier.com/consulting](https://practicefrontier.com/consulting).
AI governance isn't bureaucracy. It's the operating system that lets you use powerful tools safely. Build it once, maintain it quarterly, and you'll be ahead of 90% of practices in the country.
*Dr. Sajad Zalzala is a physician licensed in all 50 states, the founder of AgelessRx, and the creator of Practice Frontier, an AI education platform for physicians. He holds degrees in both medicine and computer science.*