Cybercriminals can use AI for scams, more potent malware: Trend Micro
Art Fuentes,
ABS-CBN News
Published Nov 20, 2023 09:52 AM PHT

MANILA - The rise of generative AI tools like ChatGPT has opened new opportunities for cybercriminals through the faster production of more potent malware, as well as new scams, according to experts from cybersecurity firm Trend Micro.
Vincenzo Ciancaglini, Trend Micro Senior Threat Researcher, said cybercriminals can leverage large language models (LLMs), more commonly known as generative AI, to write polymorphic malware.
These are malware that can change their appearance and behavior to avoid detection.
Ciancaglini pointed out that while LLMs like ChatGPT have security controls and boundaries that make them reject prompts to write malicious code, it was easy to bypass these controls because generative AI can be “naive.”
“It’s really funny because the technique, or the principles behind it, is pretty much what you would use to get your 4-year-old to eat his vegetables,” Ciancaglini said during Decode, a cybersecurity conference in Manila held by Trend Micro.
He said this was demonstrated by researchers who asked ChatGPT to read a CAPTCHA, which websites use to guard against bots. Ciancaglini said researchers tricked ChatGPT into breaking its rule against reading CAPTCHAs by embedding the CAPTCHA test in a photo and then asking ChatGPT to “read” it.
Generative AI has also reduced the entry barrier for cybercrime as less tech-savvy criminals could use ChatGPT to develop malicious tools and gain technical capabilities.
Language is also no longer a barrier for scammers, as LLMs allow easy translation from one language to another.
“Now your Nigerian king that writes you an email saying ‘I’m your friend and I’m looking for someone to give $20 million’ can do it in proper English and it’s harder to spot,” Ciancaglini added.
AI-POWERED FAKE KIDNAP, PIG BUTCHERING
AI tools have also enabled new types of scams through voice cloning and deepfakes.
Robert McArdle, Trend Micro’s Director of Forward Looking Threat Research, noted that there have been cases in the US where scammers used AI voice cloning tools to steal from people by impersonating family members.
The AI tools only need a few audio samples widely available on social media to clone someone’s voice.
In the case cited by McArdle, the scammers used a 15-year-old girl’s cloned voice to try to convince her mother that she had been kidnapped and that a $1-million ransom had to be paid.
The scam failed in this case as the mother was quickly able to contact her daughter.
More recently, scammers have hijacked images of US news anchors in spurious ads.
Ryan Flores, Senior Threat Researcher for Forward Looking Threat Research at Trend Micro, meanwhile noted that cases of scams and fraud are already on the rise in the Philippines even without AI.
He said cybersecurity experts need to upskill to meet the new challenges posed by AI, while cybercriminals have yet to fully leverage LLMs.
“The challenge for cybersecurity professionals is to stay ahead of the curve,” Flores said.