AI-driven cyberattacks are unavoidable: four ways companies can prepare


When Eric Horvitz, Microsoft's chief scientific officer, testified before the Senate Armed Services Committee's Subcommittee on Cybersecurity on May 3, he stressed that organizations will face new challenges as cyberattacks grow more sophisticated.

While AI is improving defenders' ability to detect cybersecurity threats, he explained, threat actors are raising the bar as well.

While there is little information to date on the use of AI in cyberattacks, experts widely agree that AI technologies can be used to automate tasks such as probing targets for vulnerabilities.

However, it is not just the military that has to stay ahead of threat actors deploying AI to intensify their attacks and avoid detection. As enterprise businesses face a wide array of security threats, they need to prepare for increasingly sophisticated AI-driven cybercrimes, according to experts.

Attackers are poised to make a remarkable leap forward with AI.

The 2017 WannaCry ransomware attack used what were then considered novel cyber weapons, and the Ukraine-Russia conflict has featured malware rarely seen before. This type of paradigm-shifting attack is where experts anticipate seeing AI.

So far, at least publicly, the use of AI in the Ukraine-Russia war has been limited to Russian deepfakes and Ukraine's use of Clearview AI's controversial facial recognition software. However, IT security professionals are gearing up for a fight: a survey last year found that a growing number of respondents were concerned about cybercriminals using artificial intelligence, and nearly all (96%) had begun protecting their businesses against AI-based threats, mostly related to email, advanced spear phishing and impersonation.

Very few real-world machine learning or AI attacks have been detected, but the bad guys are definitely already using AI, according to Corey Nachreiner of WatchGuard, which provides enterprise-grade security solutions to midmarket customers.

Threat actors are already using machine learning to assist with social engineering attacks. With large datasets of breached passwords, they can learn patterns in how people construct passwords and use that knowledge to improve their password-cracking attacks.
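To make that concrete, here is a minimal, hypothetical sketch of the kind of statistical modeling involved: a character-level Markov chain trained on a stand-in list of breached passwords, then sampled to produce plausible guesses. The sample list is an illustrative assumption, not anything from the article.

```python
# Hypothetical sketch: a tiny character-level Markov model trained on a
# (stand-in) breached-password list, sampled to generate likely guesses.
import random
from collections import defaultdict

leaked = ["password1", "passw0rd", "sunshine", "letmein1", "dragon123"]

# Count character transitions, with start (^) and end ($) markers.
transitions = defaultdict(list)
for pw in leaked:
    chars = ["^"] + list(pw) + ["$"]
    for a, b in zip(chars, chars[1:]):
        transitions[a].append(b)

def generate_guess(max_len=16):
    """Walk the Markov chain to produce one password-like candidate."""
    out, cur = [], "^"
    while len(out) < max_len:
        cur = random.choice(transitions[cur])
        if cur == "$":
            break
        out.append(cur)
    return "".join(out)

random.seed(7)
print([generate_guess() for _ in range(5)])
```

Real attack tooling is far more sophisticated, but the principle is the same: the model turns a raw breach corpus into a prioritized guessing strategy.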

Machine learning techniques will also increase the number of spear-phishing attacks, or highly targeted, non-generic fraudulent emails, he said, and it is difficult to train users not to click on spear-phishing messages.

What enterprises really need to worry about

According to Seth Siegel, North American head of artificial intelligence consulting at Infosys, security professionals may not see threat actors using AI explicitly, but they are seeing more and faster attacks, and they see increased use of AI on the horizon.

He cautioned that organizations should be concerned about far more than spear phishing. It is really important, he said, to understand how companies can deal with one of the major AI hazards: the introduction of malicious data into their machine learning models.

These attacks will not come from individual attackers, but from sophisticated nation-state hackers and criminal gangs.

The problem, he said, is that these groups have the most advanced, cutting-edge technology, and they can get past the defenses of not just individuals but vast businesses that frankly aren't equipped to deal with this level of attack. Basically, you can't bring a human tool to an AI fight.

Four ways to prepare for the future of AI cyberattacks

Security experts say infosec leaders should take several critical steps to prepare for the future of AI cyberattacks:

According to Nachreiner, the problem with spear phishing is that because the emails are customized to look like genuine business messages, they are much harder to block. You must have security awareness training so users anticipate and are skeptical of these emails, even when they appear to come in a corporate context.
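As a toy illustration of why such messages are hard to block mechanically, here is a hedged sketch that flags sender domains within a small edit distance of a trusted domain, one common impersonation tell. The trusted-domain list and thresholds are hypothetical assumptions, not anyone's product logic.

```python
# Hypothetical sketch: flag sender domains suspiciously close to, but not
# exactly matching, a trusted domain -- a common impersonation indicator.
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

TRUSTED = {"example.com", "examplecorp.com"}  # hypothetical allowlist

def is_lookalike(sender_domain: str, max_dist: int = 2) -> bool:
    if sender_domain in TRUSTED:
        return False
    return any(edit_distance(sender_domain, t) <= max_dist for t in TRUSTED)

print(is_lookalike("examp1e.com"))   # True: one character swapped
print(is_lookalike("example.com"))   # False: exact trusted match
```

Heuristics like this catch crude look-alikes, but a well-crafted spear-phishing email sent from a legitimate-looking address sails past them, which is why Nachreiner emphasizes user training.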

According to Heinenmeyer, infosec organizations should embrace AI as a fundamental security strategy. Defenders should anticipate attackers' use of AI and implement AI themselves, he explained. I don't think they understand how necessary it is at the moment, but once threat actors begin using more ferocious automation and perhaps more destructive attacks are launched against the West, then you really want to have AI.

Companies should shift their attention away from the individual bad actor, according to Siegel. They should focus more on nation-state and criminal-gang hacking, maintain a defensive posture, and understand that it's just something they now need to deal with every day.

Organizations also need to stay on top of their security posture, Siegel said. When patches are deployed, treat them with the level of criticality they deserve, and audit your data and models to ensure malicious information has not been introduced into them.

Siegel added that his organization embeds cybersecurity professionals in data science teams and also helps data scientists develop cybersecurity skills.
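To illustrate the data-audit step Siegel describes, here is a minimal sketch, assuming a purely numeric feature matrix, that flags training rows whose features deviate sharply from the rest before a model is (re)trained. Real poisoning defenses are far more involved; the data here is synthetic and the threshold is an assumption.

```python
# Hypothetical sketch: a cheap screen for potentially poisoned training data,
# flagging rows whose features deviate sharply from the bulk of the dataset.
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(500, 4))    # legitimate samples
poisoned = rng.normal(8.0, 0.5, size=(5, 4))   # injected outliers
X = np.vstack([clean, poisoned])

# Per-feature z-scores; flag any row with a feature beyond 5 sigma.
z = np.abs((X - X.mean(axis=0)) / X.std(axis=0))
suspect_rows = np.where(z.max(axis=1) > 5.0)[0]
print("rows to review:", suspect_rows)  # catches the injected block
```

A screen like this only catches crude injections; subtle poisoning designed to stay within normal ranges requires provenance tracking and model-behavior audits on top of statistical checks.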

The future of offensive AI

According to Nachreiner, more adversarial machine learning is coming down the pike.

Because we use machine learning to defend, he said, attackers are going to use that against us.

One of the ways organizations use AI and machine learning today is to catch malware proactively, since malware now changes so rapidly that signature-based detection no longer catches it reliably. In the future, those ML models will themselves be vulnerable to threat actors.
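As a hedged sketch of that vulnerability, the toy example below perturbs an input against a hypothetical linear "malware" classifier in the direction of its weights, the idea behind gradient-based evasion such as FGSM, flipping the verdict with a small feature change. The weights and sample values are invented for illustration, not any real product's model.

```python
# Hypothetical sketch: FGSM-style evasion of a toy linear malware classifier.
import numpy as np

w = np.array([1.5, -0.7, 2.0, 0.3])   # assumed learned weights
b = -1.0

def score(x):
    """Positive score => classified as malicious."""
    return float(w @ x + b)

x = np.array([1.0, 0.2, 0.8, 0.5])    # a sample the model flags
print("before:", score(x))             # > 0: detected

# For a linear model the gradient of the score w.r.t. x is just w;
# stepping against its sign pushes the sample across the decision boundary.
eps = 0.6
x_adv = x - eps * np.sign(w)
print("after: ", score(x_adv))         # < 0: slips past the model
```

Deep models are not linear, but the same principle applies: small, directed feature changes can move a malicious sample across a learned decision boundary without changing its behavior.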

According to Heinenmeyer, the AI-driven threat landscape will continue to worsen, driven in part by rising geopolitical tensions. A recent study from Georgetown University examined how China interweaves its AI research universities with nation-state-funded hacking, detailing how closely the Chinese government, like other governments, collaborates with academics, universities and AI researchers to develop offensive cyber capabilities.

Looking at this study and other developments, I think my outlook on the threats a year from now will be gloomier than it is today, he admitted. However, he stressed that the defensive outlook will also improve as more organizations adopt AI. Still, he said, you may be stuck in this cat-and-mouse game.
