Jul 3, 2025
Can AI Coexist With HIPAA? How Collaboration Can Resolve the Tech-Compliance Conundrum

By John Murray, senior director, SAP.
From the dawn of the internet to the arrival of electronic health records, the healthcare industry historically has been slow to embrace new technologies and the improvements they can bring. One reason is the perceived risks associated with these technologies. Another is the perceived costs of implementing them.
The rise of cloud computing and artificial intelligence presents healthcare providers (traditional ones like hospitals and health systems, along with medical device makers and other entities that meet the "provider" definition) with a similar tech conundrum. As new players join more conventional providers in reshaping the patient care ecosystem, opportunities abound for them to leverage the cloud, AI and other tools to reinvent healthcare business processes, services and the patient experience.
But with these upside opportunities come potential new risks and costs, including compliance challenges with HIPAA, a law that doesn't readily reconcile with technologies like AI or cloud computing, which weren't around when it was enacted, nor with the growing number of entities now defined as patient care providers.
For this growing class of providers, the applications for AI and other intelligent technologies are indeed promising, for things like predicting certain elevated risks for patients, diagnosing issues and recommending treatments. Generative AI (genAI) copilots driven by large language models could support decision-making about diagnoses and treatments. GenAI also shows great promise for improving clinician and clinical productivity. As versatile as it is, AI also can help companies manage their compliance obligations, and the data required to meet them, across multiple jurisdictions.
What's more, AI shows potential for connecting patient health with marketing, where, for example, based on an analysis of patient data, AI-powered capabilities serve shopping list recommendations to patients for vitamins, supplements, over-the-counter medications, etc., when they're in-store or shopping online. This intelligent health-based marketing looks like a highly promising frontier for companies that can get it right.
Risk and reward
AI's huge potential clearly isn't lost on healthcare companies. In a 2024 survey of 100 upper-level U.S. healthcare executives conducted by McKinsey, 72% of respondents said their organizations are either already using genAI tools or are testing them. Another 17% said they were planning to pursue genAI proofs of concept. And now their AI investments have begun to pay off. About 60% of those that have implemented genAI solutions are either already seeing a positive ROI or expect to.
This growing embrace of AI and cloud computing introduces a whole new set of issues, risks and responsibilities that healthcare providers, and their regulators, must contemplate. Ensuring patient privacy and data security in compliance with HIPAA is perhaps the most pressing of these issues. Because HIPAA became law in 1996, well before Amazon, Google, the cloud and AI entered the tech mainstream, and well before medical device companies, insurers and the Walmarts of the world were providing some form of care directly to patients, its provisions aren't equipped to discern how compliance obligations and liability should be shared among the various parties that now touch patient data, including covered entities and their business associates. As the definition of "provider" changes, companies in many more industries now may touch patient data in some way.
The increasing use of AI by patient care providers brings new categories of associated entities into the compliance mix. That includes the hyperscalers that host the cloud-based AI capabilities and large language models providers are using, the software/tech companies that build and sell these systems, and the system integrators that are helping providers implement them. Who's responsible for a data breach? Who owns the risk associated with protecting patient information in this broader care ecosystem? It's a true legal quagmire with few clear answers.
The perception of AI as an untested technology (at least in a healthcare context) is also part of the risk equation. How to manage potential bias and hallucination risk in large language models, for example? The cost of implementing cloud-based AI and other tech infrastructure, along with internal resistance to embracing these new technologies, also factors into that equation.
Maximizing tech's potential
A 2023 article in the Harvard Business Review contends that implementing cloud-based AI capabilities in a compliant way will require extensive cooperation among stakeholders across the healthcare landscape. "Payers, health systems, and providers need to come to a common understanding about when it is appropriate to use an AI application, how it should be used, and how potential side effects will be identified and mitigated."
That's a necessary and worthwhile endeavor, the article's author concludes. "It would be sadly ironic if the U.S. health sector lagged in reaping the benefits of this transformative new technology."
The challenge here is a big one: establishing widely accepted practices, standards and guardrails around cloud computing and AI so regulation can catch up to and keep pace with technology and the ethical and security issues it raises, as well as with the shifting patient care ecosystem.
The most viable vehicle for doing so, at least here in the U.S., may be to establish some form of broad stakeholder consortium, perhaps led by the U.S. government (the FDA and/or HHS, for example) and including medical colleges/boards, along with covered entities and their business associates under HIPAA. The goal: develop consensus about how the responsibilities and liabilities associated with HIPAA will be divided and executed in the AI era.
A broader embrace of the cloud and AI within the patient care ecosystem increases the universe of covered entities and business associates that likely will be touching, or at least will have some role, direct or indirect, in handling, patient data. That in turn necessitates the formation of business networks within which data can flow unimpeded, transparently and securely between associated entities in the patient care ecosystem.
So, for example, in the case of cell and gene therapies, a business network would enable the various stakeholders handling a patient's treatment, from drawing a blood sample to producing, delivering and administering the actual therapy, to connect securely to share and analyze information in a timely and compliant manner to yield the best possible patient outcome. Each member of the value chain thus must have the security and data-management capabilities in place to viably participate in such a network. This same concept would also apply to clinical networks.
As daunting as some of this may sound, technology like AI won't stand still. So neither should members of the patient care value chain in laying the necessary groundwork (standards, networks, etc.) to take full advantage of intelligent technologies in a way that's compliant, profitable and, most importantly, beneficial for patients.
