New research tracking the adoption and implementation of large language models and generative artificial intelligence carries a warning for healthcare IT security teams: it is time to adjust detection and response priorities.
WHY IT MATTERS
As innovation accelerates, the sensitive data being shared with artificial intelligence tools requires organizations to get ahead of new cybersecurity challenges, according to researchers at Trustwave SpiderLabs.
While many are making use of the huge benefits posed by AI and machine learning models for an array of clinical and operational use cases, healthcare organizations also need to be educated about – and vigilant against – the cybersecurity threats AI can pose.
To increase the speed of incident response and enable more efficient threat detection, Trustwave recommends hospitals and health systems transform security operation centers so they are not outgunned by cyber-gangs near and far.
“Failing to do so is like bringing a knife to a gunfight,” said a representative from Avertium, a Phoenix-based cybersecurity vendor, in an email to Healthcare IT News.
In Trustwave’s recent report, the firm provides an analysis of attack flows specific to the healthcare sector.
With more than 28.5 million healthcare records breached in 2022 – a significant increase from 21.1 million in 2019, according to the U.S. Department of Health and Human Services – it behooves cybersecurity teams to stay in the know on AI developments within their healthcare organizations.
The Cybersecurity in the Healthcare Industry report offers information about the firm's health-sector research, along with attack-vector flow diagrams and mitigation measures.
Phishing is still the most commonly exploited method for gaining an initial foothold in an organization, according to the report. It has had a significant assist from the expanded use of LLM technologies, such as GPT-4, which make it easier for bad actors to craft highly personalized, targeted messages that are more compelling and harder to detect.
“Over the last year our team flagged both Emotet and Qakbot as the most common trend amongst phishing attacks targeting healthcare organizations,” the Trustwave researchers said.
“Based on observations to date, Trustwave sees the primary areas of concern are the increased speed and quality that phishing emails can be drafted and exploit code can be enhanced.”
Generative AI can eliminate the grammatical and spelling errors that often help employees recognize phishing scams and malicious emails, making attempts tougher to spot and raising the likelihood that a bad actor can breach a health system's network infrastructure.
Trustwave said a common lure in phishing emails is the impersonation of medical device and equipment quotes and payment-processing communications.
Coupled with a sprawling number of endpoints – there are more third-party vendors and Internet of Things devices in healthcare than ever – the industry's infrastructure is generally far more vulnerable.
The report also addresses other common threat tactics, including exploitation of vulnerabilities and existing tools, web shells, malware and more.
THE LARGER TREND
Organizations need to balance AI and automation innovations with improved cyber readiness.
Recently, 11 million HCA Healthcare patients in the United States had their data exposed in a breach of the provider's network infrastructure.
“This appears to be a theft from an external storage location exclusively used to automate the formatting of email messages,” HCA officials said in a recent announcement about the apparent data theft.
The health system, like so many others, is prioritizing its digital transformation efforts in ways that make it a shinier target for those who see the opportunities AI provides to attack critical infrastructure like healthcare.
AI can help healthcare move faster in its transformation, according to Tom Lawry, author and managing director of Second Century Technology, who hosted a panel on driving healthcare AI at scale at HIMSS23.
Albert Marinez, chief analytics officer at Intermountain Healthcare, said the health system is using AI to better understand patient flows and discharge barriers, an opportunity he said could save the organization $15-$20 million.
ON THE RECORD
“Consider instituting an internal AI Infosec working group across relevant teams – like legal, privacy, IT, etc. – to deal with governance and data sharing guidelines,” said Trustwave researchers in their report.
Andrea Fox is senior editor of Healthcare IT News.
Email: [email protected]
Healthcare IT News is a HIMSS Media publication.