AI is making it easy for people to commit crimes, says US’ top cybersecurity official


Getting started as a cybercriminal has become much easier than ever before, all because of AI. Image Credit: Pexels

Jen Easterly, the Director of the Cybersecurity and Infrastructure Security Agency (CISA), has highlighted a rather alarming fact: generative AI isn’t just empowering existing cybercriminals, it is also lowering the bar for entry into nefarious activities. In short, it is making it easier for people to engage in criminal activity in the digital realm.

As AI continues to evolve, cybercriminals are more capable than ever before of carrying out a wide array of malicious attacks, ranging from traditional phishing and spamming to more sophisticated acts like blackmail, election interference through misinformation campaigns, and even terrorism, Easterly told the online news portal Axios in an interview.

Easterly highlighted the significant increase in risk posed by the rapid advancement of AI, and how its unpredictable nature and potent capabilities can exacerbate a situation that is already worsening by the day.

The agility and power of AI-powered cyberattacks introduce new layers of uncertainty into the cybersecurity landscape, necessitating proactive measures to mitigate emerging threats.

While CISA lacks regulatory authority over private businesses, Easterly stresses that government agencies need to collaborate with tech companies to develop robust cybersecurity practices.

The recent launch of the “secure by design” pledge, which has been endorsed by major tech firms, is a strong indication of the industry’s collective commitment to making systems more resilient against evolving threats.

Easterly’s extensive background in the US military and global cybersecurity makes her a voice that needs to be taken seriously. Naturally, when she takes a proactive stance on protecting critical infrastructure, including election systems, it would behove world leaders to take note.

While she is confident that most electoral mechanisms are safe against direct AI-fueled attacks, Easterly suggests that we stay vigilant about the potential for generative AI to exacerbate distrust and undermine an election’s integrity. Such attacks also diminish public trust in well-meaning public institutions.

What makes matters more complicated is the fact that there are no global norms to properly govern cyber warfare, which allows threat actors to exploit vulnerabilities in civilian critical infrastructure.

Despite the challenges posed by AI-driven threats, there are reasons to be optimistic about technology’s potential in cybersecurity. As it turns out, AI can be beaten by AI. If managed and harnessed properly, AI can help identify vulnerabilities and fortify old, legacy systems against cyberattacks while we prepare to replace them with more up-to-date systems.

Easterly stresses the importance of proactive cybersecurity practices, including routine patching and robust password protocols, as the basic foundation of defenses against AI-based threats. Equally important is a culture that makes the public acutely aware of cybersecurity concerns. Organisations, too, should be prepared to mitigate the risks posed by AI-infused cyber threats and safeguard critical systems and infrastructure against potential disruptions.
