Order of Operations: The Foundation Risk Healthcare AI Is Running Past

By Sean Martin, CISSP

Lens Four — Where business, innovation, and messaging come into focus


March 21, 2026

Listen to this article, read by TAPE9

Watch the video summary: Why Healthcare AI Fails Before It Even Starts ▶ https://youtu.be/4sEcO_QMskI

Healthcare AI ambition is moving faster than data infrastructure can support it. The real risk isn’t in the technology — it’s in the foundation.

Healthcare organizations rushing into AI without proper data governance face systemic risks that compound with every patient interaction. The convergence of identity management, data integrity protocols, and policy frameworks creates the load-bearing structure that successful healthcare AI requires.


I look at cybersecurity and technology through three lenses — how organizations are running their programs and connecting security to real business outcomes, where market innovation is changing what’s possible, and how the language around technology shapes what gets funded and what gets deferred. Right now, all three lenses are landing on the same problem in healthcare — and the fact that they’re landing together is the signal worth paying attention to.

The program lens shows organizations approving AI deployments on foundations that aren’t ready: identity layers with known gaps, vendor integrations running capabilities that procurement never evaluated, supply chain dependencies that haven’t been stress-tested. The market lens shows vendors, investors, and a $1.7 trillion federal policy agenda all accelerating the pressure to deploy faster. And the messaging lens shows a vocabulary — “transformation,” “scale,” “pilot to production” — that is doing more work to describe ambition than it is to describe sequence. When the language of readiness and the language of ambition stop meaning the same thing, the gap between them is where the risk lives. In healthcare, the patient is sitting in that gap — sometimes willingly, having chosen the wearable, the app, the proactive care model. Sometimes with no choice at all, simply receiving care in a system where AI is already making decisions about them. Either way, they’re in the rapid. The question is whether anyone upstream has checked the water.


LENS ONE — BUSINESS PROGRAMS

What is the actual state of the foundation healthcare AI is being built on?

The program data says most organizations are deploying AI ahead of the readiness their own frameworks would require.

There is a framework for this. Jon McNeill — who helped scale Tesla and Lyft before turning that operating experience into a repeatable methodology — calls it “the algorithm”: question every requirement, remove unnecessary steps, simplify broken workflows, and then apply technology as the accelerant for the repaired process.1 John Halamka’s work at Mayo Clinic Platform is the healthcare proof of concept — platform-based AI produces reliable clinical insights when the data underneath it is governed, consistent, and trustworthy.1 The framework is well understood. The sequence is not being followed at scale.

A survey of 2,041 healthcare leaders across 90 countries, conducted by the Digital Medicine Society (DiMe) and Google for Health, found that 87% of executives cited a lack of guidelines as a moderately or very important barrier to AI adoption, and 88% cited resource allocation.2 A separate DiMe analysis found that 82% of more than 230 health systems have limited or no governance processes for AI in place.3 These are not organizations that haven’t heard the argument for AI governance. They are organizations that have heard it and haven’t yet built it — while the deployments proceed.

The workforce data makes the governance gap visible at the clinical level. A Wolters Kluwer survey of 518 healthcare providers and administrators found that 58% of frontline staff had used unsanctioned AI tools for work at least monthly, with the primary drivers being faster workflows and the absence of any approved alternative.4 That is not a workforce compliance failure. It is a program design failure. Organizations that have announced AI strategies have not built the governed AI infrastructure their clinical staff needs to do the work. The staff found their own solution. The organization retained the liability.

The vendor trust gap is the harder version of the same problem — harder because it arrives through a channel organizations already trust. Trusted vendors are adding AI capabilities to products already deployed inside health systems, after contracts are signed, after integrations are built, after due diligence has closed. As Jason Kor of HITRUST described in a conversation recorded for the Redefining CyberSecurity Podcast, most procurement processes aren’t built to close this gap — and most health systems have no mechanism to detect when it happens.5 In a general enterprise context, an unevaluated feature is a risk management problem. In a clinical context, where that AI is helping a provider determine a treatment path, it is a patient safety problem.

The supply chain failure mode arrived in concrete form when the Stryker attack became public — a nation-state operation that created a live disruption for hospitals depending on Stryker products and services to function.6 The hospitals were not breached. Their supplier was. As Ryan Patrick of HITRUST analyzed in a post-incident conversation recorded for the Redefining CyberSecurity Podcast, third-party-related breaches have doubled in the last twelve months, and availability of services has moved into the same risk tier as confidentiality of data.6 That shift matters specifically in the AI context: a system operating on corrupted, incomplete, or unavailable data does not produce a visible error. It produces a confident wrong answer. The CIA triad — confidentiality, integrity, availability — exists precisely because all three pillars matter. Healthcare’s AI programs are being designed as if only one of them does.


BETWEEN THE LENSES

Who owns the data the AI is running on?

Every stakeholder in healthcare has a claim on the patient’s data. The patient is rarely the one who controls it.

The provider collected it. The insurer paid for the encounter that generated it. The vendor’s platform stored it. The device manufacturer’s hardware captured it. The government program funded the care. And the patient — whose body produced all of it — typically has the least visibility into where it goes and what it’s used for.

In practice, ownership is asserted by whoever controls access. That is rarely the patient.

Vendors are not passive custodians of that data. The platforms running inside health systems are learning from the data flowing through them — using provider workflows, patient interactions, and claims patterns to train models, refine algorithms, and build capabilities that become competitive advantages. That can create genuine value: better inference, smarter defaults, more accurate clinical decision support. But it is happening largely without explicit authorization from the health system, without visibility to the patient, and without an audit trail that would tell anyone what the data is actually being used to build. As the HIPAA Journal has documented, the arrival of AI in trusted vendor products often comes via notification — a letter or email explaining that AI will now be part of the service — without a meaningful mechanism for the health system to evaluate what that means for its patients’ data or its own liability.7

TEFCA — the Trusted Exchange Framework and Common Agreement, now operational with eleven Qualified Health Information Networks serving as the national exchange backbone — defines six permitted exchange purposes: treatment, payment, healthcare operations, public health, government benefits determination, and individual access.8 What it does not define is who owns the data once a vendor operating as a QHIN participant receives it, processes it, and builds on top of it. The interoperability agenda moves data across systems. It does not move the ownership rights with it. The vendor that adds AI to an integrated product after the contract is signed is making a decision about data use that the health system never authorized and the patient never knew was on the table.

The value and the risk are running in the same data flows. The accountability structure has not caught up to either one.


LENS TWO — INNOVATION AND MARKET SHIFTS

What is the policy agenda requiring — and is the infrastructure positioned to absorb it?

The CMS agenda is directionally correct and technically demanding. The data infrastructure it requires is still mid-build.

The Centers for Medicare & Medicaid Services has put on the table a policy agenda covering $1.7 trillion in spending and 160 million Americans: the CMS Interoperability Framework to break down data silos, AI-powered fraud and waste elimination, and a patient-provider partnership model built on unprecedented data access and transparent pricing.9 CMS Administrator Dr. Mehmet Oz, alongside Amy Gleason of the U.S. DOGE Service and Kim Brandt, CMS Deputy Administrator and COO, has made the direction clear. What the agenda requires technically is work that most of the sector is still mid-stream on.

Fraud detection at CMS scale requires claims data that is accurate. TEFCA is moving data across systems at a scale that would have been technically impossible five years ago.8 What it does not do is repair the identity errors embedded in that data before it starts moving. A record with a mismatched patient identifier does not become accurate because it now travels faster and farther. Patient data access at the scale CMS is describing requires accurate patient matching across every system that holds a record — which is precisely the identity problem health IT has been managing imperfectly for two decades. The policy is writing a check. The infrastructure is still mid-build on the account it is drawing from.

The implementation picture is more fractured than the federal agenda implies. A cross-government analysis of digital health transformation across federal, state, and tribal systems — including CMS, the VA’s Office of Information and Technology, and the Indian Health Service — makes the coordination problem visible: modernization is underway at every level, but it is happening in parallel, not in partnership.10 The communities where the coordination gap is widest — rural, tribal, underserved — are the same communities where infrastructure deficits are deepest and the consequences of data errors are most immediate. The policy agenda reaches them last. The risks reach them first.

Sumbul Ahmad Desai at Apple articulated the consumer health version of the same argument: wearables are enabling a genuine shift from reactive to proactive care models, with patients owning their health data and feeding it into personalized care plans and clinical research.11 Every part of that model — the AI inference, the clinical integration, the personalized care pathway — is downstream of the identity and data integrity layer. More data moving faster into a poorly governed infrastructure does not improve patient outcomes. It amplifies the underlying problem with a more capable interface and a faster clock.

Identity is the load-bearing wall. Everything built on top of it inherits whatever errors are embedded in it. That is not an infrastructure opinion. It is a program risk calculation.


LENS THREE — LANGUAGE, MESSAGING AND MARKET NARRATIVE

How is the market narrating this — and what is the framing leaving out?

The language of AI transformation is doing real work. Some of it is covering the sequencing problem it should be naming.

Healthcare’s AI conversation has a vocabulary problem. “Transformation” implies a completed state. “Deployment” implies the hard work is behind the organization rather than in front of it. “Pilot to production” frames the move to scale as an achievement rather than a risk event. The investor community is hearing a version of the market that emphasizes capital efficiency, proof of value, and speed to scale — the venture logic of moving fast.12 That logic runs directly into an operational reality where health systems are simultaneously trying to modernize legacy infrastructure, close identity gaps, govern AI for the first time, and resolve data ownership questions that have been deferred for years. The language is not lying. It is selecting. And what it is not selecting for is the sequence.

The Zero Trust conversation in healthcare is one place where the language is starting to catch up to the risk. Security practitioners who have been framing Zero Trust Architecture and identity-based access controls as ransomware defenses are now framing them as the infrastructure conditions that make trustworthy AI deployment possible in the first place.13 That reframe is significant. It connects the security program to the AI program in a way that makes both more defensible — and makes accountability clearer. If the identity layer is ungoverned, the AI program built on top of it is ungoverned. The CISO and the CIO share that exposure with the business leader who approved the deployment timeline.

The vendor trust gap requires the most scrutiny at the market level. Vendors have significant commercial incentive to describe their AI capabilities in terms of what is possible rather than what is governed. The quiet addition of AI features to integrated products is partly a product velocity decision and partly a market narrative decision: if the feature is already deployed, the conversation shifts from “should we evaluate this?” to “how do we govern what’s already running?” That is a different conversation with a different power dynamic. Health system leadership approved the original vendor relationship. They are accountable for what that relationship is now delivering into their clinical environment — whether they were told about it or not. And if a patient outcome suffers because of a capability that was never evaluated, the accountability chain does not end at the vendor’s door. It starts there and runs back through every decision that allowed the deployment to happen without scrutiny.


THE FOURTH LENS

Healthcare’s AI ambition and its data infrastructure are moving at different speeds — and the patient is where those speeds collide.

Here is the pattern, viewed across all three lenses at once.

The program layer shows organizations approving AI on foundations they know are incomplete — identity gaps unresolved, vendor integrations unevaluated, supply chain dependencies untested, data ownership questions deferred. The decision to proceed is not ignorance. It is a sequence choice: move the AI forward and address the foundation in parallel. Any program manager who has shipped something complex knows what happens when you run critical-path items in parallel that should be sequential. The schedule looks faster. The risk does not go away. It defers — and it compounds.

The market layer shows vendors, investors, and the policy agenda all accelerating the pressure to deploy. The vendor that quietly ships AI into a trusted product is making a sequence choice too — ship first, seek approval if asked. The CMS agenda sets a policy clock that does not wait for the identity infrastructure to catch up. TEFCA is expanding the reach of that clock, moving data nationally at speed while the governance layer is still being assembled. The investor asking whether the product is differentiated is not asking whether the foundation is ready. These are not the same question. They are rarely in the same conversation.

The messaging layer shows a vocabulary that has optimized for ambition and is underweighted on sequence. “Transformation” and “scale” are the words doing the work. “Prerequisites,” “dependencies,” and “order of operations” are not in the same sentence as the AI roadmap announcement. When the vocabulary of ambition is doing more work than the vocabulary of readiness, the accountability structure gets blurry — and in healthcare, blurry accountability has a patient sitting in the middle of it.

None of this is an argument against ambition. Innovation in healthcare is not optional — the status quo has its own costs, and the potential of AI to improve outcomes, reduce burden, and reach underserved populations is real and worth pursuing hard. The argument is about discipline. A-to-Z is not just a start and an end. It is every dependency in between. Every ambiguity that needs a decision before it becomes a failure. Every fragility that looks like a detail until it becomes the reason the whole thing stops. The organizations that will get this right are the ones that can hold the ambition and the sequence at the same time — and that require their vendors to hold it too.

Healthcare AI is a complex program running on an incomplete foundation, with contested data ownership, a national exchange infrastructure still finding its governance footing, accelerating external pressure, and a patient sitting in the middle of it. Some of those patients chose to be there — the wearable, the app, the engaged care model. Others are simply in the system, receiving care, with no awareness that the rapid has already begun. Either way, the question is the same: does anyone upstream know what’s in the water — and who is accountable if they didn’t check?

The conversations with Jason Kor and Ryan Patrick of HITRUST were recorded for the Redefining CyberSecurity Podcast as part of ITSPmagazine’s coverage surrounding HIMSS26. Explore the full coverage at itspmagazine.com and connect at seanmartin.com.


References

  1. “Opening Keynote: Jon McNeill and John Halamka, MD, MS,” HIMSS Global Health Conference & Exhibition 2026, Session #1 — app.himssconference.com
  2. Digital Medicine Society and Google for Health, “3 Key Insights for the 2026 Health AI Horizon,” survey of 2,041 healthcare leaders across 90 countries — dimesociety.org
  3. Digital Medicine Society, “Operationalizing AI Governance in Healthcare,” survey of 230+ health systems — dimesociety.org
  4. Wolters Kluwer Shadow AI Survey, 518 healthcare providers and administrators (December 2025) — wolterskluwer.com — HIMSS 2026: Trusted Clinical AI
  5. “Tackling Third-Party Risk and AI Security in Healthcare,” Brand Spotlight with Jason Kor, Principal, HITRUST, Redefining CyberSecurity Podcast — youtu.be/EgrdPV8L2Qc | Jason Kor’s HIMSS26 session, Session CS03 — app.himssconference.com
  6. “HIMSS Recap with Ryan Patrick,” EVP TPRM Customer Solutions, HITRUST, Redefining CyberSecurity Podcast — youtu.be/V__prVJThn4
  7. “When AI Technology and HIPAA Collide,” HIPAA Journal — hipaajournal.com
  8. Trusted Exchange Framework and Common Agreement (TEFCA), Assistant Secretary for Technology Policy / ONC — healthit.gov/policy/tefca
  9. “A Revolutionary Vision for American Healthcare Transformation: CMS’s Roadmap for Now and the Future,” Dr. Mehmet Oz, Amy Gleason, Kim Brandt, HIMSS26, Session #154 — app.himssconference.com
  10. “Cross-Government Digital Health Transformation: Lessons from Federal, State, and Tribal Systems,” HIMSS26, Session #157 — app.himssconference.com
  11. “Scientific Excellence Meets Digital Innovation: Human-Centered Healthcare,” Sumbul Ahmad Desai, Vice President of Health and Fitness, Apple; Clinical Associate Professor, Stanford, HIMSS26, Session #77 — app.himssconference.com
  12. “The Fix: What Healthcare Really Needs from Innovation and Investment,” Daymond John, Emerge Experience Keynote, HIMSS26, Session EMG-1 — app.himssconference.com
  13. HIMSS Global Health Conference & Exhibition 2026 — cybersecurity track — himssconference.com

About the Author

Sean Martin is a cybersecurity market analyst, content strategist, and advisor with 30+ years across engineering, product development, marketing, and media. Co-founder of ITSPmagazine and Studio C60, host of the Redefining CyberSecurity Podcast and the Music Evolves Podcast. Sean works with CISOs and security leaders, vendors and service providers, go-to-market and marketing teams, and analyst firms to connect technology operations and cybersecurity programs to business outcomes. Connect at seanmartin.com.

Subscribe to Lens Four — Where business, innovation, and messaging come into focus.


Topics Covered in This Analysis

healthcare AI governance, AI deployment readiness, data foundation healthcare, order of operations AI, healthcare data quality, identity management healthcare, patient identity matching, FHIR interoperability, TEFCA, health information exchange, QHINs, national health data exchange, healthcare interoperability, digital health transformation, Shadow AI healthcare, vendor trust gap, vendor AI risk, patient data ownership, AI agent identity, Zero Trust healthcare, supply chain resilience healthcare, third-party risk management, TPRM healthcare, Stryker cyberattack, nation-state healthcare attack, CIA triad healthcare, data integrity AI, healthcare cybersecurity, patient data security, CMS interoperability framework, patient data access, healthcare data transparency, federal health IT, tribal health IT, Indian Health Service, government digital health, accountability healthcare AI, clinical AI governance, agentic AI healthcare, program management healthcare AI, AI prerequisites healthcare, Jon McNeill, John Halamka, Mayo Clinic Platform, Sumbul Ahmad Desai, Apple Health, wearables healthcare, Daymond John, Dr. Mehmet Oz, Amy Gleason, Kim Brandt, DOGE healthcare, HITRUST, Jason Kor, Ryan Patrick, Wolters Kluwer, Digital Medicine Society, DiMe, Google for Health, healthcare workforce governance, HIPAA AI compliance, Sean Martin, Redefining CyberSecurity Podcast, ITSPmagazine, HIMSS26, HIMSS Global Health Conference 2026


Frequently Asked Questions

Q: What does “order of operations” mean in the context of healthcare AI?
It means the sequence in which things need to happen before AI can be deployed safely and effectively. Identity infrastructure first. Data governance before data movement. Vendor evaluation before vendor integration. Supply chain resilience before supply chain dependency. Healthcare organizations are frequently running these steps in parallel — or skipping them entirely — because the pressure to deploy is faster than the discipline to sequence. That is the risk this analysis is describing.

Q: What is the vendor trust gap and why does it matter in clinical settings?
The vendor trust gap is the window between when a health system establishes a trusted relationship with a vendor and when that vendor adds AI capabilities to its integrated products — without the health system’s knowledge or evaluation. Most procurement processes weren’t built to catch this. In a general enterprise context, it is a risk management problem. In a clinical context, where that AI may be influencing treatment decisions, it is a patient safety problem. Health system leadership is accountable for what trusted vendor relationships are delivering into their clinical environment, whether they were informed about it or not.

Q: What is TEFCA and how does it relate to data ownership in healthcare?
TEFCA — the Trusted Exchange Framework and Common Agreement — is the national health information exchange backbone, now operational with eleven Qualified Health Information Networks (QHINs). It defines the purposes for which health data can be exchanged nationally: treatment, payment, healthcare operations, public health, government benefits, and individual access. What it does not define is who owns the data once a vendor operating as a QHIN participant receives it and builds on top of it. TEFCA solves the data movement problem. It does not solve the data ownership problem.

Q: Who is actually accountable when AI produces a wrong clinical output?
The accountability chain starts with the health system leadership that approved the deployment timeline — including any dependencies that weren’t resolved before go-live. But it doesn’t end there. Vendors that ship unevaluated AI into trusted integrations carry exposure when that AI contributes to a harmful outcome. The question this analysis leaves open is not whether accountability exists — it does. The question is whether the organizations making deployment decisions today understand where accountability lands when the sequence is wrong.

Q: Is this analysis arguing against AI adoption in healthcare?
No. The argument is about discipline, not speed. Innovation in healthcare is not optional — the status quo has its own costs, and the potential of AI to improve outcomes and reach underserved populations is real and worth pursuing. The argument is that A-to-Z requires every step in between — every dependency, ambiguity, and fragility identified and addressed before it becomes a failure. The organizations that will get this right are the ones that can hold the ambition and the sequence at the same time.

Next

Task by Task: The Workflows We're Handing to AI — One Decision at a Time