
The Rise of AI: Artificial intelligence drives silent arms race in cybersecurity field


Artificial intelligence is opening a new front in the perpetual war between white-hat and black-hat hackers.

Experts say AI has the potential to be a game changer in digital security because of its capability to detect threats.

Thanks to algorithms and machine learning, it can sift through an ocean of data to pinpoint and neutralize threats far faster than any human, perhaps offering an ever-alert, tireless sentinel to safeguard important digital fortresses.

“AI is akin to a double-edged sword. On the one hand, it’s the vigilant guardian of the digital realm,” said Joseph Harisson, CEO of the Dallas-based IT Companies Network. “AI algorithms act like digital bloodhounds, sniffing out anomalies and threats with a precision that human analysts might miss.”

However, it’s that awesome power to quickly analyze large datasets that also makes AI a potent tool for criminals and other malicious actors.

“They use AI to craft more sophisticated cyberattacks, turning the hunter into the hunted,” Harisson said. “These AI-powered threats are like chameleons, constantly evolving to blend into their digital surroundings, making them harder to detect and thwart.

“It’s a perpetual cat-and-mouse game, with both sides leveraging AI to outmaneuver the other.”

Researchers are building artificial neural networks that resemble the structure of the human brain, leading to breakthroughs in AI research.

This research isn’t just used to power cybersecurity; it also enhances security in the physical world.

Biometric technologies, such as fingerprint and facial recognition, help law enforcement secure important sites such as airports and government buildings. Security companies also use these technologies to protect their clients’ property.

It’s even reached the home sector, with companies such as Ring providing home security solutions.

Katerina Goseva-Popstojanova, professor at the Lane Department of Computer Science and Engineering at West Virginia University, said AI has been part of the cybersecurity landscape for a long time.

Machine learning, an integral part of AI, has been used for various purposes in the field.

Take antivirus software, Goseva-Popstojanova said. Products such as Norton or Kaspersky have built-in AI models trained on known viruses so they can detect infections on host machines. Email spam filters work the same way.
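
To make that pattern concrete, here is a minimal sketch of such a classifier built with the open-source scikit-learn library. It illustrates only the train-on-known-examples, detect-on-new-input idea; it is not how any commercial product actually works, and the sample messages are invented.

```python
# Minimal sketch of an ML-based text classifier in the spirit of the
# spam filters described above. Illustrative only; the training
# messages are invented for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy training data: known-bad ("spam") and known-good ("ham") messages.
messages = [
    "You won a free prize, click here now",
    "Urgent: verify your account password immediately",
    "Meeting moved to 3 pm, see agenda attached",
    "Lunch tomorrow? Let me know what works",
]
labels = ["spam", "spam", "ham", "ham"]

# Training: turn text into token-weight features, then fit Naive Bayes.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Detection: classify an unseen message the way a filter would.
print(model.predict(["Claim your free prize: verify your account now"])[0])
# expected: spam
```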

Although ChatGPT has made AI a household name, the technology itself has been in use behind the scenes for a long time.

Tearing down cyber-fortresses

Aleksa Krstic, CEO of Localizely, a Belgrade, Serbia-based software-as-a-service translation platform, said AI-powered cameras can analyze video feeds in real time and identify objects or potential threats.

“AI algorithms can recognize individuals, enabling more effective access control and tracking,” he said. “AI systems can learn what ‘normal’ behavior looks like in a specific environment and raise alerts when deviations occur.”
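
As a rough illustration of that learn-normal, flag-deviations idea, the sketch below trains an Isolation Forest, an anomaly detector from the open-source scikit-learn library, on invented “normal” activity and flags a deviation. Real systems like those Krstic describes operate on far richer data, including live video.

```python
# Minimal sketch of "learn normal behavior, flag deviations" using an
# Isolation Forest. A simplified stand-in for the systems described
# above; the feature values here are invented.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# "Normal" activity, e.g. [events per minute, kilobytes transferred],
# clustered around typical values.
normal = rng.normal(loc=[20.0, 500.0], scale=[5.0, 100.0], size=(200, 2))

# Fit the detector on normal data only.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Score new observations: 1 means normal, -1 means anomalous.
events = np.array([
    [22.0, 480.0],       # routine traffic
    [300.0, 90000.0],    # sudden spike that should raise an alert
])
print(detector.predict(events))  # expected: [ 1 -1]
```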

However, AI can also be used to tear down the cyber-fortresses that governments and companies build. Krstic said AI can automate attacks at scale, generating sophisticated phishing emails or driving botnets. Through “deepfake” videos and mass-produced content, AI can spread misinformation or manipulate public opinion for personal gain.

“The way I look at it these days, everything can be used for good or bad,” Goseva-Popstojanova said. “Take dynamite. You can use dynamite to make tunnels or mines, or you can use dynamite to kill people. It’s the same with AI.”

Goseva-Popstojanova said generative AI tools such as ChatGPT can be used by cybercriminals to scour the internet for publicly available information to quickly build a profile of a person.

That profile can be used in the furtherance of a crime, whether it’s identity theft, scamming or spamming.

The weakest link in cybersecurity is the human element. Social engineering, the use of social skills to manipulate a person into performing a desired action, becomes much easier with AI tools such as deepfakes or voice impersonation.

Katerina Goseva-Popstojanova, professor at the Lane Department of Computer Science and Engineering at West Virginia University.

“There’s something called phishing, or vishing if it’s done by phone, and now it is done by text messages, where somebody pretends to be somebody else and scams the person,” she said. “One of the reasons the MGM Resorts attack happened … wasn’t anything sophisticated – just somebody who used a social engineering attack to get the information necessary to log into their system.”

That cyberattack on MGM Resorts this fall cost the company millions of dollars in lost revenue, exposed the personal information of tens of millions of loyalty rewards customers and disabled some onsite computer systems.

Fooling AI

In the physical world, criminals can resort to tactics such as face spoofing to fool AI.

The technique can involve simple measures, such as holding up a photo of a person to fool facial recognition. Someone who wants to avoid recognition in public can wear a hoodie made from a special material that reflects light differently from skin, breaking the facial recognition algorithm.

More sophisticated AI looks for signs of life to avoid being fooled by a photo, but a video of the person’s face might do the trick, and makeup and masks, including 3D masks, can all be used. Finally, an attacker can hack the database itself, changing its parameters so the system accepts the attacker’s face or fingerprint.

Adversarial machine learning is the field of research that studies how machine learning systems can be attacked and defended. Goseva-Popstojanova said it’s a huge field of research today, looking for ways in which algorithms can be fooled into classifying malicious activity as benign. Finding those weaknesses allows researchers to build more robust ways to secure a system.
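
A toy example conveys the flavor of such an attack. The miniature “detector” below is a hand-wired logistic regression with made-up weights; a small, gradient-guided nudge to the input, the same trick behind the well-known fast gradient sign method (FGSM), flips its verdict from malicious to benign.

```python
# Minimal sketch of an adversarial attack on a classifier: nudge the
# input so a "detector" scores malicious activity as benign. The tiny
# logistic-regression detector and its weights are invented for
# illustration; real attacks such as FGSM target far larger models.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these weights were learned; larger feature values push the
# score toward "malicious" (score > 0.5).
w = np.array([1.5, 2.0, 1.0])
b = -4.0

x = np.array([2.0, 1.5, 1.0])    # a malicious sample
print(sigmoid(w @ x + b))        # ~0.95, flagged as malicious

# The score's gradient with respect to the input points along w, so
# stepping against its sign (the FGSM trick) lowers the score.
epsilon = 0.8
x_adv = x - epsilon * np.sign(w)
print(sigmoid(w @ x_adv + b))    # ~0.35, now classified as benign
```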

A previous version of ChatGPT could be fooled into leaking private information from its training data, such as email and home addresses, by prompting it to repeat specific words endlessly. Researchers deliberately worked on ways to break the AI and then reported the flaw to OpenAI so it could be patched.

One thing is clear: Pandora’s box is open and AI is part of the world now, officials said.

Machine algorithms and code already sit behind the veneer of everyday life, and the invisible war between white-hat and black-hat hackers will shape life for people all around the world.

In October, FBI Director Christopher Wray spoke at a conference with leaders from the Five Eyes, a coalition of the U.S., the United Kingdom, Canada, Australia and New Zealand that emerged in the wake of World War II to share intelligence and collaborate on security. Much of the discussion focused on China, which Wray called the foremost threat to global innovation, accusing the country’s government of stealing AI research to further its own hacking efforts. AI thus extends from the individual level to the level of global policy.

During a fireside chat moderated by former U.S. Secretary of State Dr. Condoleezza Rice and featuring intelligence chiefs from across the Five Eyes coalition, FBI Director Christopher Wray discusses potential misuses of artificial intelligence. The discussion was held at the Hoover Institution at Stanford University on October 17, 2023, as part of the FBI’s Emerging Technology and Securing Innovation Security Summit in California’s Silicon Valley.



“We are interested in the AI space from a security and cybersecurity perspective and thus proactively aligning resources to engage with the intelligence community and our private sector partners to better understand the technology and any potential downstream impacts,” the FBI national press office wrote in an email. “The FBI is particularly focused on anticipating and defending against threats from those who use AI and Machine Learning to power malicious cyber activity, conduct fraud, propagate violent crimes, and threaten our national security. We are working to stop actors who attack or degrade AI/ML systems being used for legitimate, lawful purposes.”

Dhanvin Sriram, founder of Prompt Vibes and an AI expert, said machine learning has more than proved its worth by swiftly analyzing data and finding patterns that might indicate risk. However, caution must be employed when assessing any new paradigm-shifting technology.

“The real challenge is to develop AI systems that not only beef up defenses, but also outsmart malicious AI,” he said. “It’s a constant cat-and-mouse game where staying ahead requires ongoing innovation and a mindful approach to ethical considerations. In this dynamic security landscape, the clash between AI-driven defense and malicious AI underscores the need for continuous advancements to ensure AI remains a force for protection, not exploitation.”


