Are AI Applications HIPAA Compliant?

Artificial intelligence (AI) is revolutionizing healthcare, offering many opportunities to improve patient care, streamline operations, and support medical professionals. However, as exciting as these advancements are, they raise important questions about data privacy and compliance. Are AI applications truly HIPAA-compliant?
Below, we’ll address the pros and cons of AI in healthcare, as well as the complexities of ensuring HIPAA compliance with the use of AI applications.
The Potential of AI in Healthcare
Before we dive into the current challenges in leveraging artificial intelligence in the healthcare industry, let’s discuss why this class of tools is so tempting in the first place.
The analytic power of AI has – in theory – transformative potential when applied to the flaws and challenges of healthcare.
Potential use cases of artificial or augmented intelligence in medicine include:
- Accelerating and improving the diagnostic process
- Highlighting patterns, trends, and anomalies in massive quantities of data
- Increasing early detection of disease
- Speeding drug development through molecular modeling and analysis
- Making it easier to share important data between systems and departments
- Automating administrative tasks to save time and smooth the patient experience
- Closing the language gap between doctors and patients
In some cases, the promise of this technology for the practice of medicine is already being fulfilled. Chatbots are already in use in limited capacities for tasks like appointment scheduling and prescription refills. AI-augmented surgical robots are used in hospitals worldwide to improve precision and reduce complications. Deep learning models are improving the accuracy and timeliness of anomaly detection in medical imaging.
We’re living in an exciting time, but as yet, serious obstacles remain for more extensive use of artificial intelligence in healthcare.
The Risks and Dangers of AI in Healthcare
This Forbes article provides a fairly comprehensive summary of the potential threats associated with the projected growth and prevalence of AI technologies.
It outlines several key risks, including:
- Lack of transparency in AI decision-making processes
- Amplification of societal biases from training data
- Privacy and security concerns stemming from extensive personal data collection and analysis
- Accelerated spread of misinformation and public manipulation
- Job displacement due to AI automation
- Advanced security threats posed by misuse
- Exacerbated inequality based on access to AI
- Entrenchment of power concentrated in a few large corporations
- Inability of legal and regulatory systems to keep pace with rapid AI innovation
- Difficulty of encoding ethics and values into AI systems
- Harmful consequences of an AI arms race
- Overreliance on AI leading to a loss of human skills
- Loss of human connection
- Unpredictable, unintended consequences of AI decision-making
- Existential threat posed by Artificial General Intelligence (AGI) surpassing human intelligence
Forbes’ list covers the general, society-wide problems posed by AI, but it also previews healthcare-specific concerns about applying the technology.
While some of these threats – like the prospect of AI replacing doctors – remain distant concerns, others have already manifested and demand solutions. For example, inherited bias in medical algorithms is well documented, and because AI can hallucinate facts and evidence, misinformation is a huge concern for expanding the use of chatbots in healthcare beyond administrative tasks. The “black-box” nature of an AI’s decision process is also extremely problematic when its decisions mean life or death.
But by far, the most immediate and intractable barrier to the use of AI in medicine is the problem of data privacy and security. Thanks to HIPAA, any AI innovation not specifically designed for healthcare applications is an absolute non-starter.
HIPAA’s Core Principles for Patient Data Protection
Let’s take a step back to review HIPAA’s requirements for patient data protection.
The Health Insurance Portability and Accountability Act (HIPAA) is a pivotal piece of regulatory legislation in the United States. Its main claim to fame is protecting confidential patient data, though it contains several administrative provisions that serve other purposes. It is a cornerstone of the U.S. healthcare system, aimed at protecting patient data privacy and establishing overall healthcare information security.
Core requirements for HIPAA-bound parties that relate to data privacy and security include:
- Patients have the right to access their medical records, request changes, and restrict certain disclosures of protected health information (PHI).
- Parties bound by HIPAA should only disclose the minimum amount of PHI necessary to achieve a task.
- Access to PHI must be limited strictly to authorized parties with the proper credentials.
- Physical patient records must be safeguarded with secure storage and access controls.
- Electronic PHI must be safeguarded with firewalls, encryption, and audit trails.
- Healthcare providers must notify affected patients of any data breach involving their PHI.
- Healthcare providers must establish administrative policies and procedures to manage access controls, workforce training, and data privacy practices.
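As a loose illustration of how the “minimum necessary” standard, access controls, and audit trails above might interact in software, here is a minimal Python sketch. The role names, record fields, and the `minimum_necessary` helper are all hypothetical, not part of HIPAA itself: the function filters a patient record down to only the fields a given role is authorized to see and records each disclosure in an append-only audit trail.

```python
from datetime import datetime, timezone

# Hypothetical role-to-field mapping illustrating the "minimum necessary"
# standard: each role sees only the PHI fields required for its task.
ROLE_FIELDS = {
    "billing": {"patient_id", "insurance_id", "billing_codes"},
    "physician": {"patient_id", "name", "diagnosis", "medications"},
    "scheduler": {"patient_id", "name", "appointment_time"},
}

AUDIT_LOG = []  # append-only audit trail of PHI disclosures


def minimum_necessary(record: dict, role: str) -> dict:
    """Return only the PHI fields the given role may see, and log the access."""
    allowed = ROLE_FIELDS.get(role)
    if allowed is None:
        raise PermissionError(f"Unknown or unauthorized role: {role}")
    disclosed = {k: v for k, v in record.items() if k in allowed}
    AUDIT_LOG.append({
        "role": role,
        "fields": sorted(disclosed),
        "at": datetime.now(timezone.utc).isoformat(),
    })
    return disclosed


record = {
    "patient_id": "P-1001",
    "name": "Jane Doe",
    "diagnosis": "J45.909",
    "medications": ["albuterol"],
    "insurance_id": "INS-42",
    "billing_codes": ["94010"],
}

# Billing staff receive insurance and billing fields, but never the diagnosis.
print(minimum_necessary(record, "billing"))
```

A real system would back this with authenticated identities and tamper-evident log storage, but the shape of the problem – restrict, then record – is the same.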
Any third-party services – referred to by HIPAA as Business Associates (BAs) – must be properly vetted by a covered entity to ensure that HIPAA standards will be met. The two parties must also enter into a contract – a Business Associate Agreement (BAA) – that specifies how each will protect PHI in the course of doing business.
Is AI HIPAA Compliant?
There are mutual barriers to HIPAA compliance when using AI: AI (for the most part) isn’t designed to comply with HIPAA, and HIPAA is certainly not designed to account for AI.
The biotech department at Harvard Law has already publicly labeled HIPAA as outdated and inadequate when it comes to AI-related privacy concerns. While HIPAA regulations were updated to account for electronic health records by the HITECH Act in 2009, AI wasn’t even close to being a player at the time.
Certain AI applications have been designed for the healthcare industry and take HIPAA compliance into account. However, forms of generative AI targeted at the general public do not – nor do they intend to. OpenAI, for example, will not sign a BAA covering ChatGPT and has shown no intention of changing that policy.
It’s not pure stubbornness – with generative AI in its infancy, there are just a lot of challenges and unanswered questions about how to pull off HIPAA compliance while optimizing AI functionality.
Complexities in Ensuring AI Applications Adhere to HIPAA
One of the primary reasons behind the complexity of HIPAA compliance in AI applications is the vast amount of data that algorithms require to perform well – data often characterized by the “three Vs” of big data: Volume, Variety, and Velocity.
An AI system may need to access multiple health records and various sources of personal information to accomplish complex analytic goals, and all of that data would have to be handled and processed in a manner that adheres to the stringent standards set forth by HIPAA. Currently, AI technologies can make it difficult to track and control the flow and usage of PHI.
User consent is another critical factor in determining how patient data may be used within AI systems and ensuring HIPAA compliance. Obtaining informed consent not only grants healthcare organizations the required permissions to utilize patient data but also builds trust between the healthcare provider and the patient.
Informed consent involves clearly communicating the purpose and scope of data usage, explaining the benefits and risks involved, and empowering patients to make informed decisions regarding their PHI. At this time, scope, benefits, and risks are debatable and evolving, making it a complicated conversation.
Finally, the fact that AI technologies are developed and owned by private entities – whose primary goal is often monetizing the value of data – introduces a whole new set of challenges for privacy and data protection.
How To Stay HIPAA Compliant in the Age of AI
All of that sounds pretty discouraging – does it mean that generative AI simply can’t be used in the healthcare ecosystem?
No. Or, not necessarily. But any HIPAA-bound organization needs to take a careful look at any AI technologies before putting them to use. You need to perform risk assessments, ensure the use of strong encryption, and exercise transparency.
Perform Risk Assessments
Risk assessments play a vital role in ensuring that AI applications comply with HIPAA. These assessments involve a comprehensive analysis of an AI system's data-handling procedures, security measures, and potential vulnerabilities.
By identifying and addressing areas of risk, healthcare organizations can enhance the privacy and security of their AI applications, adhering more closely to the principles of HIPAA. These assessments also help organizations understand how PHI flows across AI applications, which can be particularly challenging given the evolving nature and complexity of AI technologies.
Beyond that, the use of risk assessments can facilitate the timely detection and resolution of potential HIPAA violations before they escalate into more significant compliance issues. For example, a thorough assessment might reveal data security weaknesses that could expose patient data to unauthorized access or potential breaches.
In such cases, necessary safeguards, such as encryption or user access controls, can be implemented. Plus, ongoing risk assessments are a requirement under the HIPAA Security Rule, which mandates that healthcare organizations regularly evaluate their security controls and policies to align with the ever-evolving landscape of health information technology and threats.
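As a simplified sketch of how an assessment might prioritize its findings, the snippet below assumes a basic likelihood-times-impact scoring model on a 1-to-5 scale. The findings, scales, and threshold are illustrative only; the HIPAA Security Rule requires a risk analysis but does not prescribe this scoring scheme.

```python
# Illustrative risk register: each finding gets a likelihood and an
# impact score (1-5), and risk = likelihood * impact.
findings = [
    {"issue": "PHI sent to third-party AI API without a BAA", "likelihood": 4, "impact": 5},
    {"issue": "Model training data not de-identified", "likelihood": 3, "impact": 5},
    {"issue": "No audit logging on inference endpoint", "likelihood": 3, "impact": 3},
]


def score(finding: dict) -> int:
    """Simple likelihood x impact risk score."""
    return finding["likelihood"] * finding["impact"]


# Rank findings so the highest-risk items are remediated first,
# and flag anything at or above an (arbitrary) threshold of 15.
ranked = sorted(findings, key=score, reverse=True)
high_risk = [f["issue"] for f in ranked if score(f) >= 15]
print(high_risk)
```

Even this toy version captures the point of the exercise: the assessment's output is a prioritized remediation list, not just a document on a shelf.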
Ensure Strong Encryption
Strong encryption is crucial for AI applications in the healthcare sector that handle vast amounts of sensitive electronic PHI. Encryption converts data from its original, readable form (plaintext) into an unreadable format (ciphertext) that can only be restored through decryption with the proper key.
By encrypting patient data, both in transit and at rest, healthcare organizations protect against potential breaches. Encryption also mitigates the damage of a potential attack, should malicious actors gain access to the system's infrastructure. This is why HIPAA’s Security Rule strongly encourages robust and up-to-date encryption algorithms and keys.
Implementation of transport-layer protections such as Transport Layer Security (TLS), the modern successor to Secure Sockets Layer (SSL), along with sound key management through Public Key Infrastructure (PKI), can provide an additional layer of security.
Strong encryption, in combination with other safeguarding measures, is integral in creating a HIPAA-compliant secure environment in AI applications handling sensitive healthcare data.
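To make the plaintext-to-ciphertext idea concrete, here is a deliberately minimal Python sketch using a one-time-pad-style XOR with a random key from the standard library's `secrets` module. This is a teaching toy only, not a production cipher: real systems protecting PHI at rest should use a vetted algorithm such as AES-256 (for example, via an established library's high-level recipes).

```python
import secrets

# Toy illustration of symmetric encryption at rest: plaintext is combined
# with a random key to produce ciphertext, and only the key recovers it.
# XOR with a random one-time key is for illustration ONLY; production
# systems must use a vetted cipher such as AES-256.


def encrypt(plaintext: bytes, key: bytes) -> bytes:
    assert len(key) == len(plaintext), "one-time key must match data length"
    return bytes(p ^ k for p, k in zip(plaintext, key))


decrypt = encrypt  # XOR is its own inverse

phi = b"patient_id=P-1001;diagnosis=J45.909"
key = secrets.token_bytes(len(phi))  # random key, stored separately from data

ciphertext = encrypt(phi, key)
assert ciphertext != phi                 # the stored form is unreadable
assert decrypt(ciphertext, key) == phi   # only the key recovers the original
```

The property worth noticing is the last assertion: a breach that exposes only the ciphertext (and not the separately stored key) exposes nothing readable, which is exactly the damage-mitigation benefit described above.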
Exercise Transparency
HIPAA's Notice of Privacy Practices (NPP) requirement mandates that organizations provide clear, detailed information on how patient data is used and protected.
As AI becomes more ingrained in healthcare operations, healthcare entities will have to disclose how these systems use, store, and protect patient data. Transparency also must extend to the use of AI in analyzing and processing information. Healthcare providers will have to disclose the extent to which AI influences medical decisions and the mechanisms in place to safeguard against errors or privacy breaches.
Thorough documentation of these processes and open communication about AI's role can ensure appropriate use, continued alignment with HIPAA regulations, and maintenance of patient trust.
Conclusion
While AI offers transformative potential for healthcare, any organizations developing or using AI in healthcare should work closely with legal and compliance teams to ensure their applications meet HIPAA requirements. It’s also important to keep up with updates to HIPAA and related laws.
Having an organizational strategy for AI and HIPAA compliance is one thing, but if your staff are using non-compliant tools without permission, you’re still in hot water. That’s why regular and thorough HIPAA compliance training is crucial to avoiding fines, jail time, and erosion of public trust.
As an online training provider with over 20 years of experience educating the workforce in regulatory compliance, we offer a full catalog of HIPAA courses tailored to the needs of various healthcare roles. Employees can study at their own pace, whenever and wherever works best, giving them the freedom to focus on critical training topics.
Enroll today to get started!