AI has been revolutionary for healthcare. However, as is often the case with innovative technology, there are privacy concerns when AI is applied to such a highly regulated industry.
Companies can benefit greatly from AI, but there are some risks, too. What if the technology isn’t managed properly? What if cybercriminals hack and compromise sensitive data? And what about the ethical concerns?
These security-related questions are keeping business leaders up at night. Indeed, enterprises recognize the importance of having the right security measures in place. According to our research on digital transformation and product development, 59% of companies plan to improve their cybersecurity posture this year (2024), up from the 32% that said the same in 2023.
Growing use of AI in healthcare
Healthcare has transformed in recent years. Today, patients can receive the treatment they need more efficiently. One reason for this is the application of AI.
According to IBM, the use of AI has allowed healthcare professionals to do the following:
- Improve workflow processes
- Create virtual nurses and assistants
- Reduce errors in treatment
- Reduce the need for invasive surgeries
- Highlight potential fraud
Because AI algorithms are built on machine learning models, it’s no surprise that they can identify patterns and trends more efficiently than humans, allowing doctors to make timely, informed diagnoses for their patients.
As AI technology continues to improve, healthcare will continue to transform as well. What might not be possible today could be achieved in the near future. With so much potential to offer patients the very best care, it’s no wonder there is a lot of excitement about AI’s powerful capabilities.
How is AI a threat to privacy?
Unfortunately, along with the enthusiasm, there are privacy concerns with AI in healthcare. AI has become a target for cybercriminals, and cybersecurity threats pose a danger to businesses and individuals, especially if their data is stolen.
When it comes to healthcare providers, it’s only right to expect them to protect data fiercely. Patient confidentiality is something that we all want. In a world where security threats are constantly shifting, it’s no surprise that the data privacy concerns about AI and healthcare are serious.
Examples of data privacy concerns
A 2022 study by the American Medical Association reports that 75% of patients are concerned about protecting the privacy of personal health data, and only 20% claim to know about the companies and individuals with access to their data.
For many, the main concern is how sensitive patient information is handled. Data that AI collects, stores, and uses will contain information about a person’s identity (name, address, date of birth), physical details (height, weight, and other identifiable features), and health history. If this data isn’t protected, it can be used in harmful ways.
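One common safeguard for exactly this kind of identifying data is pseudonymization: replacing direct identifiers with irreversible tokens before records enter an AI pipeline. The sketch below is a hypothetical illustration (the field names, `pseudonymize` function, and salt are all assumptions, not any specific vendor’s implementation):

```python
import hashlib

# Hypothetical sketch: pseudonymize direct identifiers before a patient
# record is stored or passed to an AI pipeline, so analysis can proceed
# without exposing who the patient is.
def pseudonymize(record: dict, secret_salt: str) -> dict:
    direct_identifiers = {"name", "address", "date_of_birth"}
    safe = {}
    for field, value in record.items():
        if field in direct_identifiers:
            # Replace the raw value with a salted one-way hash token
            digest = hashlib.sha256((secret_salt + str(value)).encode()).hexdigest()
            safe[field] = digest[:12]
        else:
            # Non-identifying clinical fields pass through unchanged
            safe[field] = value
    return safe

patient = {"name": "Jane Doe", "address": "1 Main St",
           "date_of_birth": "1980-05-01", "height_cm": 170}
print(pseudonymize(patient, secret_salt="clinic-secret"))
```

Note that pseudonymized data can sometimes still be re-identified by combining fields, which is why it complements, rather than replaces, the access controls discussed later.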
Major pharmaceutical companies have been adopting approaches to combat these concerns. For example, one has used security assessments to identify and address existing risks and vulnerabilities. This could include looking at areas where passwords aren’t stored correctly and data is vulnerable. Ultimately, the goal is to alleviate patients’ concerns and provide them with the best possible healthcare experience.
Potential legal and ethical challenges
There are several examples of data privacy concerns with AI in healthcare, including legal and ethical challenges.
As noted by our Director of Security, William Reyor, AI technology can pose a risk to privacy and compliance within regulatory frameworks. For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) establishes the standards for health data privacy and security. It regulates the use and disclosure of protected health information (PHI) and creates programs to control fraud and abuse.
Since AI became prevalent in healthcare, glaring loopholes and grey areas have created concerns. HIPAA compliance is required only of certain types of organizations: coverage depends on the specific activities an organization conducts, not solely on its profession.
Given the capabilities of AI, several ethical considerations and legal challenges have been raised, as laws and regulations have been unable to keep pace with the technology’s evolution. In fact, HIPAA has been described as “obsolete” in a study conducted by the University of California, Berkeley.
Anil Aswani, the engineer who led the study, also suggested that businesses not covered by HIPAA could be tempted to use AI illegally. Aswani highlights how companies could discriminate against people based on the data they collect.
Examples of legal and ethical concerns
Aswani’s reasoning rings true, and privacy concerns with AI are already well documented.
Lexalytics has compiled scenarios of AI being used by companies that aren’t covered by HIPAA regulations. They highlighted the suicide detection algorithm that the world’s biggest social media company rolled out in 2017. The site used AI to gather data and predict whether a user’s mental state made an attempted suicide likely. Some have raised ethical concerns because the company used data collected without affirmative consent.
Elsewhere, there have been legal and ethical concerns regarding the selling of data. Sites that allow users to investigate their genetics and family history, for example, also don’t fall under the jurisdiction of HIPAA. As a result, these sites can sell this information to companies that might use the data to research and develop new products.
AI bias in healthcare
In addition to the many privacy, legal, and ethical concerns with AI, it’s also important to recognize AI bias.
IBM refers to AI bias as the occurrence of biased results due to human biases that skew the original training data or AI algorithm. According to IBM, this leads to “distorted outputs and potentially harmful outcomes.”
Regarding healthcare, AI bias can occur when a particular demographic is underrepresented in data. The level of accuracy surrounding results could be lower for one demographic because not enough information is available or has been collected.
This could pose problems for the healthcare industry, as it could lead to misdiagnosis, mistreatment, or other harm to patients. This is perhaps why at least 55% of medical professionals believe the healthcare industry is not ready for AI.
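One practical way to surface this kind of bias is to measure a model’s accuracy separately for each demographic group rather than in aggregate. The sketch below is a minimal, hypothetical example with toy data (the `accuracy_by_group` function and group labels are assumptions for illustration):

```python
from collections import defaultdict

# Hypothetical sketch: compute per-demographic accuracy so that a gap
# caused by underrepresentation in the training data becomes visible.
def accuracy_by_group(records):
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    # Accuracy per group, not one blended number that can hide a gap
    return {g: correct[g] / total[g] for g in total}

# Toy records: (demographic group, model prediction, true diagnosis)
results = [
    ("group_a", "flu", "flu"), ("group_a", "flu", "flu"),
    ("group_a", "cold", "flu"), ("group_a", "flu", "flu"),
    ("group_b", "cold", "flu"), ("group_b", "flu", "flu"),
]
print(accuracy_by_group(results))  # {'group_a': 0.75, 'group_b': 0.5}
```

An aggregate accuracy here would read as roughly 67% and mask the fact that the model performs noticeably worse for the underrepresented group.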
How can healthcare companies combat AI privacy concerns?
With the numerous examples of data privacy concerns with AI, it’s clear that companies across the healthcare industry need to maintain compliance and protect their customers.
There are many advantages when companies introduce AI to the digital product development process. Nonetheless, companies should adopt the following best practices if they want to mitigate risks.
GRC – governance, risk management, and compliance
When mitigating data privacy concerns with AI and healthcare, a strategic approach to Governance, Risk Management, and Compliance (GRC) is essential.
- Governance refers to setting rules and norms for how AI is developed and used without overstepping ethical boundaries.
- Risk management is about being proactive in identifying potential AI risks early and acting quickly to eliminate them.
- Compliance means sticking to laws that have been implemented to regulate AI, such as GDPR and the EU AI Act. These have both recently been tightened due to ethical and legal concerns regarding the technology and what it can potentially put at risk.
Transparency
One of the best ways to ease fears and concerns about data privacy is through transparency. As data is continually absorbed and used in new ways, patients should know how their data is being used and stored. Organizations across the healthcare industry should disclose this information to build trust.
Protection of data
The healthcare industry should already be using the latest security safeguards given the sensitivity of the data. Organizations should conduct an audit to strengthen their security posture and ensure they have the right protections to safeguard against attacks.
Several controls should be implemented, including:
- Firewalls
- Physical barriers
- Access control
- Log monitoring
- Incident alerting
- Internal/external reviews and audits
These should all be frequently checked and modernized to guarantee the required level of protection is in place.
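As a concrete illustration of the “log monitoring” and “incident alerting” controls above, consider a minimal sketch that scans authentication logs for repeated failed logins and flags accounts worth investigating. The log format, threshold, and `accounts_to_alert` function are all hypothetical assumptions, not a real product’s behavior:

```python
from collections import Counter

# Hypothetical sketch: flag any account with repeated failed logins,
# a simple form of log monitoring feeding incident alerting.
FAILED_LOGIN_THRESHOLD = 3

def accounts_to_alert(log_lines, threshold=FAILED_LOGIN_THRESHOLD):
    failures = Counter()
    for line in log_lines:
        if "LOGIN_FAILED" in line:
            # Assumed log format: "<timestamp> LOGIN_FAILED user=<account>"
            account = line.split("user=")[1].strip()
            failures[account] += 1
    # Accounts at or above the threshold should trigger an alert
    return sorted(a for a, n in failures.items() if n >= threshold)

logs = [
    "09:01 LOGIN_FAILED user=alice",
    "09:02 LOGIN_FAILED user=alice",
    "09:02 LOGIN_OK user=bob",
    "09:03 LOGIN_FAILED user=alice",
]
print(accounts_to_alert(logs))  # ['alice']
```

In production this role is typically filled by a SIEM platform, but the principle is the same: continuously watch the logs, and alert before an anomaly becomes a breach.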
Staff training
Another way to effectively manage AI risk is through staff training. Properly training healthcare professionals on the latest AI developments and processes can help maximize data security and safety. Staff will have a greater understanding of the importance of safeguarding the data that is compiled, which can mitigate future attacks.
WHO: An example of AI governance in healthcare
The World Health Organization (WHO) has researched the concerns of AI within the healthcare sector. They spent 18 months with leading experts across various fields, such as ethics, digital technology, law, and human rights to create a report identifying the privacy challenges that stem from using AI.
WHO’s Ethics & Governance of Artificial Intelligence for Health report outlines the following six principles to ensure the use of AI works to the public’s benefit:
- Protecting autonomy
- Promoting human safety and well-being
- Ensuring transparency
- Fostering accountability
- Ensuring equity
- Promoting tools that are responsive and sustainable
In the report, the WHO also recommended solutions and methods for maximizing AI’s potential while holding professionals within the healthcare industry accountable and responsive.
The future of AI in healthcare
It’s important to remember that AI is still learning; it is not a person, and it lacks characteristics inherent in humans.
Humans are sensitive to ethics and morality, while machines are not, so humans need to maintain oversight to ensure AI behaves ethically. This will increase trust in the technology and help alleviate concerns.
Humans need to be responsible for how AI is used across healthcare and what the data is used for. To do this, they can:
- Make sure users know how their data is being used
- Implement safeguards
- Follow legal frameworks
While it is clear that there are benefits to AI, there are significant challenges to overcome. Regarding AI’s impact on healthcare, Scott Snyder, Chief Digital Officer at EVERSANA, sums it up well: “While I am super optimistic about the promise of AI, we still have a lot to learn.” The time is now for healthcare executives to lead the way.
To learn more about how your organization can strengthen security defenses, get in touch today.
Modus Create