The rapid adoption of artificial intelligence has driven growth but has also opened avenues for cybercriminals to misuse it for sophisticated attacks, Kaspersky said, highlighting the need for businesses to invest in proactive cybersecurity defences to meet the challenges of the new era.
Kaspersky, a global cybersecurity and digital privacy company, said it has been incorporating AI into its products and leveraging AI models to counter threats and protect users by making technologies more resilient to new and evolving forms of cyberattacks.
Cybercriminals are using AI in novel ways, the company warned: using ChatGPT to write malware and automate attacks against multiple users, and misusing AI programs to track users’ smartphone input, potentially capturing messages, passwords and banking codes.
Citing 2023 data, the company said it protected 220,000 businesses worldwide and prevented around 6.1 billion attacks with its solutions and products.
During the same period, its solutions saved 325,000 unique users from potential money theft via banking Trojans, the company added.
On average, the company has detected more than 411,000 malicious samples every day in 2024, up from around 403,000 such samples per day in 2023.
“The number of cyberattacks that are being launched is not possible with human resources alone. They (attackers)… use automation to try to leverage AI,” Vitaly Kamluk, cybersecurity expert at Kaspersky’s Global Research and Analysis Team (GReAT), told PTI.
In a recent investigation into the use of AI to crack passwords, Kaspersky found that most passwords are stored not in plain text but as cryptographic hashes.
A plain-text password can easily be converted into a hash, but reversing the process is computationally hard, he said.
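The one-way property described above can be illustrated with a short Python sketch. SHA-256 is used here purely for illustration (the article does not specify which hash functions the leaked datasets used); production systems should prefer slow, salted schemes such as bcrypt or Argon2.

```python
import hashlib

def hash_password(password: str) -> str:
    # A cryptographic hash is a one-way transformation:
    # easy to compute forward, hard to reverse.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

digest = hash_password("hunter2")
print(digest)  # 64 hex characters; the original password cannot be read back

# Verification never reverses the hash; it re-hashes the candidate
# and compares the digests.
print(hash_password("hunter2") == digest)
```

This is why leaked password databases contain hash "lines" rather than readable passwords, and why attackers must resort to guessing rather than decryption.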
The largest password compilation leaked to date contained around 10 billion lines with 8.2 billion unique passwords, according to Kaspersky data from July 2024.
Alexey Antonov, Principal Data Scientist at Kaspersky, said: “We found that 32 percent of user passwords are not strong enough and can be reversed from encrypted hashed form using a simple brute force algorithm and a modern 4090 GPU in less than 60 minutes.”
According to the company, threat actors can use large language models like ChatGPT-4o to generate fraudulent texts, such as sophisticated phishing messages.
AI-generated phishing can overcome language barriers and create personalized emails based on users’ social media information. It can even mimic the writing styles of specific people, making phishing attacks potentially harder to detect.
Ethan Seow, co-founder of C4AIL, said: “The moment ChatGPT came out, there was a 90-fold increase in spam emails to organizations in terms of phishing.”
Aggressive adoption of GenAI by organizations has also increased the attack surface. At the same time, cyber attackers are becoming more sophisticated with the advent of AI, Seow added.
Another major challenge that has emerged with the advent of AI is deepfakes. There are countless cases of scammers impersonating celebrities to trick unsuspecting users, causing victims significant financial losses.
Criminals are also using deepfakes to hijack user accounts, sending audio requests for money to friends and family in the account owner’s voice.
However, experts said that reliable deepfake detection is not yet technically feasible.
“…this is the future of research. It is just a matter of time before companies come up with solutions that at least try to address this problem (deepfake detection). I guess this is where the future of cybersecurity lies,” Kamluk said.
In today’s environment of increasing threats and attacks, experts suggest that organizations aim for 100 percent uptime to keep their businesses cyber resilient.
In cyberspace, uptime is the duration during which a system is operational, while resilience refers to a company’s ability to respond to a security breach by identifying, addressing and recovering from the incident.
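To give the uptime figures above a concrete scale, the annual downtime permitted by a given availability target can be computed directly (the 365-day year and the sample percentages are illustrative assumptions, not figures from the article):

```python
def annual_downtime_minutes(availability_pct: float) -> float:
    # Minutes per 365-day year that a system may be down
    # while still meeting the stated availability target.
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime allows "
          f"{annual_downtime_minutes(pct):.1f} minutes of downtime/year")
```

Even "three nines" (99.9 percent) still allows nearly nine hours of outage per year, which shows why literal 100 percent uptime is an aspiration rather than a routine engineering target.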
During the annual Asia Pacific Cybersecurity Weekend 2024 held recently in Sri Lanka, Adrian Hia, Managing Director for APAC at Kaspersky, said that a company’s system with 100 percent uptime will translate into business resilience, both on-premises and in the cloud.
Using AI, attackers attempt to reshape, rewrite and reorganize malware to produce more code variations, reducing detection rates by antivirus software.
Igor Kuznetsov, Head of Global Research and Analysis Team at Kaspersky, said: “21 percent of spam attacks are based on AI, which is becoming faster and potentially bigger. Therefore, instead of focusing on offensive AI, we need to improve defensive AI.”
As part of its defensive AI efforts, the company said its automated systems detected more than 99 percent of malware in 2023.
The balance between malware stealth and antivirus detection “still stands and no one is really winning this battle,” Kamluk said.
Given the pace at which attackers and defenders are incorporating these technologies into everyday use, cybersecurity experts believe there is a need to rapidly implement regulations and ethics in the field of AI and GenAI.
“Current regulations will have to catch up,” Seow said, adding that “it’s difficult to regulate it because the movement has already happened.”
Kamluk called ethics the foundation of humanity. “We need to pay more attention to the ethical education of people, especially among the younger generations. And with AI, this becomes even more important, as it has great potential,” he said.
(Only the headline and image of this report may have been reworked by Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)
First published: August 11, 2024 | 14:34 IST