Overcoming GenAI challenges in healthcare cybersecurity
In this Help Net Security interview, Assaf Mischari, Managing Partner, Team8 Health, discusses the risks associated with GenAI healthcare innovations and their impact on patient privacy.
What are the key cybersecurity challenges in healthcare in the context of GenAI, and how can they be effectively addressed?
The healthcare industry faces many of the same challenges other industries face in the wake of emerging technologies, with subtle differences that need to be considered and addressed.
One example is the difference between the fundamental data points we want to keep private. Comparing PII to PHI, PII has a broader scope, is less regulated, is handled and accessed by a wider range of organizations, and is (at this point) easier to monetize. PHI, however, is richer in content and could be used more effectively for phishing and medical fraud.
Health providers are also lagging when it comes to modern infrastructure and cybersecurity measures. This combination makes PHI an easier target for attackers.
When we look at how AI models are developed, many historical and societal biases related to race, ethnicity, and gender can be reflected in them. Algorithmic fairness in healthcare is critical because AI models’ decisions can directly impact patient health, treatment recommendations, and overall well-being. Biased or unfair models in healthcare can lead to misdiagnosis and improper treatment, making the consequences of algorithmic bias more severe.
How do you see GenAI transforming healthcare operations and patient care, especially regarding efficiency and decision-making?
GenAI will have a profound impact on healthcare professionals. The administrative burdens that often prevent them from working at “the top of their license” will be alleviated by adopting these tools.
For example, AI-powered tools can automate the data entry, extraction, and analysis of electronic health records (EHRs). NLP techniques can extract relevant information from unstructured clinical notes and populate structured fields in the EHR. With predictive analytics, healthcare professionals can anticipate patient flow, staffing needs, and resource utilization, enabling proactive decision-making and resource allocation. Appointment scheduling and reminders, prior authorization and claims processing, and workflow optimization are all ripe for digital transformation.
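As a simplified illustration of that kind of extraction, here is a minimal sketch that pulls a few structured fields out of a free-text note with plain pattern matching. The note, field names, and patterns are hypothetical; a production system would rely on trained clinical NLP models rather than hand-written rules.

```python
import re

# Hypothetical clinical note; field names and patterns are illustrative only.
note = "Pt is a 64 y/o male. BP 142/88 mmHg, HR 78 bpm. Dx: type 2 diabetes."

def extract_structured_fields(text: str) -> dict:
    """Pull a few structured EHR fields out of a free-text clinical note."""
    fields = {}
    if m := re.search(r"(\d{1,3})\s*y/o", text):
        fields["age"] = int(m.group(1))
    if m := re.search(r"BP\s*(\d{2,3})/(\d{2,3})", text):
        fields["bp_systolic"] = int(m.group(1))
        fields["bp_diastolic"] = int(m.group(2))
    if m := re.search(r"HR\s*(\d{2,3})", text):
        fields["heart_rate"] = int(m.group(1))
    if m := re.search(r"Dx:\s*([^.]+)", text):
        fields["diagnosis"] = m.group(1).strip()
    return fields

print(extract_structured_fields(note))
# {'age': 64, 'bp_systolic': 142, 'bp_diastolic': 88, 'heart_rate': 78, 'diagnosis': 'type 2 diabetes'}
```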
GenAI will ultimately be the preferred method for diagnosis because AI algorithms can analyze vast amounts of patient data, including medical records, imaging scans, and lab results. Machine learning algorithms can identify subtle patterns and correlations in data that may be difficult for humans to detect, and AI models can provide consistent and objective diagnoses based on the input data, reducing the potential for human bias.
It is important to recognize that, at this point, humans still have the advantage of context when collecting data face to face with a patient, as opposed to analyzing pure data. This suggests that the future of diagnosis will likely be collaborative.
Given the sensitivity of healthcare data, what measures should be in place to ensure that GenAI innovations do not compromise patient privacy?
While PHI is largely unstructured, many measures can be used to reduce the risk while still allowing innovation. For example, data anonymization and de-identification of both PII and PHI will be needed.
Anonymization and de-identification techniques (data masking, tokenization, or encryption) that remove personally identifiable information (PII) from healthcare data before it is used for GenAI training will help ensure compliance with privacy regulations such as HIPAA and GDPR, and lead to greater adoption and trust of GenAI tools in healthcare.
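As a minimal sketch of what masking and tokenization can look like before records reach a training pipeline (the record, salt, and patterns below are illustrative; real de-identification has to cover the full set of HIPAA identifiers):

```python
import hashlib
import re

# Hypothetical record and salt; a real pipeline would follow HIPAA Safe Harbor or
# Expert Determination rules and handle far more identifier types.
SALT = "replace-with-a-secret-salt"

record = {
    "patient_id": "MRN-0012345",
    "name": "Jane Doe",
    "note": "Jane Doe, DOB 03/14/1961, seen on 2024-05-02. Phone 555-123-4567.",
}

def tokenize(value: str) -> str:
    """Replace an identifier with a salted, irreversible token (pseudonymization)."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:12]

def deidentify(rec: dict) -> dict:
    note = rec["note"]
    note = note.replace(rec["name"], "[NAME]")                # direct identifier
    note = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", note)   # DOB-style dates
    note = re.sub(r"\b\d{4}-\d{2}-\d{2}\b", "[DATE]", note)   # visit dates
    note = re.sub(r"\b\d{3}-\d{3}-\d{4}\b", "[PHONE]", note)  # phone numbers
    return {"patient_token": tokenize(rec["patient_id"]), "note": note}

print(deidentify(record))
```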
Improved privacy-preserving techniques that allow for the training of GenAI models without directly accessing or sharing raw patient data should also be utilized. For example, federated learning, differential privacy, and homomorphic encryption all offer security benefits.
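To make the differential privacy idea concrete, here is a minimal sketch of releasing a cohort statistic with Laplace noise calibrated to a privacy budget epsilon. The cohort data and parameters are synthetic, and real deployments (or federated and homomorphic setups) involve considerably more machinery.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic cohort: 1 = patient has the condition, 0 = does not.
cohort = rng.integers(0, 2, size=500)

def dp_count(values, epsilon: float) -> float:
    """Release a count with Laplace noise; the sensitivity of a counting query is 1."""
    true_count = float(np.sum(values))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print("true count:", int(np.sum(cohort)))
print("epsilon=0.1:", round(dp_count(cohort, 0.1), 1))  # more noise, stronger privacy
print("epsilon=1.0:", round(dp_count(cohort, 1.0), 1))  # less noise, weaker privacy
```

A smaller epsilon means more noise and stronger privacy guarantees; choosing that budget is ultimately a policy decision, not just an engineering one.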
And it should go without saying that secure data storage and access controls are non-negotiable. Healthcare data must be stored in secure, encrypted databases with strict access controls and strong authentication methods, such as multi-factor authentication (MFA).
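As a toy illustration of enforcing that combination in application code (the roles and session model are hypothetical; real systems would delegate this to an identity provider and audit every access):

```python
from dataclasses import dataclass

# Hypothetical clinical roles allowed to read PHI.
PHI_ROLES = {"physician", "nurse"}

@dataclass
class Session:
    user: str
    role: str
    mfa_verified: bool

def can_access_phi(session: Session) -> bool:
    """Allow PHI reads only for clinical roles with a completed MFA challenge."""
    return session.role in PHI_ROLES and session.mfa_verified

print(can_access_phi(Session("dr_lee", "physician", mfa_verified=True)))   # True
print(can_access_phi(Session("analyst1", "analyst", mfa_verified=True)))   # False
print(can_access_phi(Session("dr_lee", "physician", mfa_verified=False)))  # False
```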
Can you discuss GenAI’s ethical implications in healthcare, particularly regarding data bias and ensuring equitable treatment outcomes?
Historically, healthcare data has many built-in biases when it comes to race, ethnicity, and gender, but bias in GenAI can also result from the training dataset, feature selection, data collection, the labeling process, or even the model architecture itself.
The decision-making process of GenAI models should be transparent and explainable to healthcare providers and patients. Black-box models lacking interpretability can hinder trust and accountability, making it challenging to identify and address biases or errors. Developing explainable AI techniques and providing clear explanations for GenAI-generated recommendations can foster trust and enable informed decision-making.
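One lightweight way to start surfacing the kind of bias described above is to audit model outputs per demographic group. The sketch below computes the true positive rate by group on synthetic predictions (the groups, labels, and numbers are made up) so that large gaps can be flagged for review; it is a starting point, not a complete fairness evaluation.

```python
from collections import defaultdict

# Synthetic model outputs for illustration: (group, true_label, predicted_label).
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 1), ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 0, 0),
]

def true_positive_rate_by_group(rows):
    """Share of actual positives the model catches, computed per demographic group."""
    hits, positives = defaultdict(int), defaultdict(int)
    for group, y_true, y_pred in rows:
        if y_true == 1:
            positives[group] += 1
            hits[group] += int(y_pred == 1)
    return {g: hits[g] / positives[g] for g in positives}

tpr = true_positive_rate_by_group(results)
print({g: round(v, 3) for g, v in tpr.items()})   # {'group_a': 0.667, 'group_b': 0.333}
print("TPR gap:", round(max(tpr.values()) - min(tpr.values()), 3))
```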
How do you think the current regulatory frameworks need to evolve to accommodate GenAI’s rapid advancement in healthcare?
The healthcare industry has taken a massive leap, but not its first. We experienced a leap of similar magnitude years ago with the introduction of clinical trials for devices and drugs. We might require a more robust infrastructure of standardized “ML clinical sites” that are not dependent on data collected by vendors. For example, a controlled sandbox with data sets that have been vetted for bias, and tests for transparency and explainability, would accelerate the creation of ML models and boost overall quality.
What steps should healthcare organizations take to build and maintain public trust using GenAI technologies?
In my opinion, healthcare organizations need to be transparent about their GenAI usage and the implications adopting this technology may have for patients. Prioritizing patient safety and privacy, and continuously monitoring, improving, and responsively addressing AI concerns, is a must.