Suspicious email links, too-good-to-be-true crypto opportunities, and those infamous foreign prince scams promising millions in your inbox. We’ve all encountered these classic cons, and many of us might feel confident in our ability to spot someone trying to pull off a digital heist.
Unfortunately, however, AI is about to turn the world of cybercrime on its head. And this means we’re going to have to learn some new tricks if we don’t want to end up like fish in a barrel.
Cybercriminals are increasingly turning to powerful AI technologies like deepfakes and intelligent network attacks to deceive and disrupt on an unprecedented scale. One report found that 87% of global businesses were targeted by AI-driven cyberattacks in the past year.
So, what steps can we take to ensure we don’t become a victim of this new wave of smart crime?
Let’s explore how we can learn to identify the risks and put safeguards in place, on a personal and organizational level.
How Cybercriminals Are Using AI
As technology has become more advanced, criminals have consistently found new ways to target individuals, businesses and governments. Society’s wide-scale adoption of AI may just be the latest facet of this, but its potential to do harm is unprecedented.
In particular, its capacity to evade the digital and psychological defenses we’ve built up to protect us against common social engineering-based cyberattacks makes it a threat that deserves attention.
Some of the most visible and publicized AI cybercrimes involve deepfakes: highly realistic synthetic images, video, and audio intended to deceive. Criminals have used them to convince employees they are speaking to their boss, who is instructing them to transfer money.
A more basic AI deception involves using tools like ChatGPT to craft convincing, personalized phishing emails. This makes it easier for fraudsters to convincingly mimic corporate communications or official bodies, such as tax departments.
But AI isn’t only used to trick people. It’s also deployed invisibly to carry out technical and network-based attacks. This allows hackers to automate processes like network scans to detect vulnerabilities more quickly and intelligently.
It can also enable new types of viruses and malware that mutate to avoid detection, as well as optimize brute-force techniques for guessing passwords and breaking into access systems.
What’s more, it makes it far easier for anyone to create dangerous software like viruses, as it removes technical barriers for people who might have bad intentions but no ability to write code.
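To make the scanning point concrete: the kind of network probe that attackers now automate at scale takes only a few lines of code, which is also exactly how defenders audit their own machines. Here is a minimal, hypothetical sketch in Python (standard library only) that checks which TCP ports are open on a host you control; the `scan` and `is_open` helper names are illustrative, not from any real tool.

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def is_open(host, port, timeout=0.5):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host, ports):
    """Probe the given ports concurrently and return the open ones, sorted."""
    with ThreadPoolExecutor(max_workers=32) as pool:
        results = pool.map(lambda p: (p, is_open(host, p)), ports)
    return sorted(p for p, open_ in results if open_)
```

The point of the sketch is the asymmetry: what once required patience and skill is now a trivially parallelized loop, and AI tooling lowers the bar further still. Only ever run scans like this against hosts you own or are authorized to test.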
It’s been said that if cybercrime were a country, it would be the third-largest economy in the world, behind only the US and China. This is all money siphoned out of the pockets of individuals and businesses by criminals. So, what are some steps we can take to limit the risk of becoming a victim?
Where To Start?
We’re used to taking precautions against crime in the real world. Not walking down dark alleys late at night, for example, or just believing that if something seems too good to be true, it probably is.
In the online age, we’ve also become aware of the importance of basic steps like being careful what we share online, using two-factor authentication, using strong passwords, and ignoring unsolicited emails mentioning large sums of money.
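Two-factor authentication is worth the small hassle because the codes aren’t stored anywhere a password leak could expose: they are recomputed every 30 seconds from a shared secret and the current time. As an illustration (not a replacement for a real authenticator app), here is a minimal Python sketch of the standard TOTP algorithm from RFC 6238, using only the standard library:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password.

    secret_b32: the base32 secret shared with the server (as shown in a QR setup).
    for_time:   Unix timestamp to compute the code for (defaults to now).
    """
    key = base64.b32decode(secret_b32.upper())
    # The moving factor is the number of 30-second steps since the Unix epoch.
    counter = int((time.time() if for_time is None else for_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Your phone’s authenticator app computes exactly this, which is why a criminal who phishes your password still can’t log in without the device holding the secret.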
AI software developers and cybersecurity experts are playing their part in solving the problem. But as previous threats have shown, it’s vitally important to take personal responsibility for our own safety, too.
The first step is to learn to identify threats. This means staying up-to-date on the dangers posed by the attacks mentioned here and on new threats appearing on the horizon. Sources like The Hacker News and Krebs on Security are good to follow for this.
Next, we should hone our critical thinking skills. Just as we worked out that it was unlikely deposed foreign royals would try to send us large sums of money, we have to learn to look at content that could potentially be fake and ask ourselves: is this really likely to be true?
We will have to learn to use AI to fight AI on a personal level. Tools and apps are available to detect AI content. Common internet security platforms are also introducing features like AI-powered scanning and protection, so they should be kept up-to-date.
But this is an arms race, and it’s possible that AI malware will, at times, outsmart and evade antivirus and other defensive measures. This means we may have to reassess our practices regarding storing sensitive information online and who we give it to.
Are we sure the businesses and services that we give our data to are staying ahead of the bad guys? And are they being compelled to protect it in the right way by legislators and our elected governments?
Addressing these areas is the first step towards building resilience to AI cyberattacks, but it’s just a start.
Looking Ahead
Of course, there’s no way to know how AI will truly impact cybersecurity (or anything else) in the longer term. Deepfakes will undoubtedly get more convincing, and as our lives and identities become increasingly online and digital, the temptation and rewards for criminals will grow.
For individuals and businesses, the best way to prepare is to think of AI cyber defense as a long-term strategy. This means developing awareness and building capabilities to counter threats as they appear on the horizon rather than when they’re hitting us in the face.
Everyone has a role in calling for high standards of ethical transparency and accountability around AI. Opaque black-box systems and a lack of guardrails just make cybercriminals’ lives easier.
Essentially, the focus should be on building resilience, which means developing the capacity to bounce back when things do go wrong. This starts with prioritization. While we may not be able to protect everything, we should make sure we have protection, backup, and recovery plans for what matters most.
But even though the risk posed by AI-enabled cybercriminals to our data, property, and even personal safety is very real, we certainly shouldn’t feel helpless.
Yes, it’s hard to protect ourselves against a threat or enemy when we have no idea how it changes and evolves. But by taking a strategic approach and implementing some simple changes to the way we think about and approach what we see online, we can give ourselves a fighting chance.