Navigating the New Frontier of AI-Driven Cybersecurity Threats
A few weeks ago, Best Buy revealed its plans to deploy generative AI to transform its customer service function. It’s betting on the technology to create “new and more convenient ways for customers to get the solutions they need” and to help its customer service reps build more personalized connections with customers. The new experience is expected to be fully rolled out by the end of this summer. Best Buy’s initiative is a harbinger of generative AI deployment in enterprise settings, aimed at increasing productivity and improving efficiency.
Best Buy isn’t alone. A PwC survey reported that 73% of U.S. companies have adopted AI to some extent within their operations, and a whopping 54% of respondents have implemented generative AI specifically in various areas of their business.
Scammers are licking their lips. With the benefits of generative AI come risks, and adversaries are quick to innovate and act. They are already launching attacks on LLM platform providers in the form of prompt scraping and reverse proxy threats. Enterprise websites are also in the crosshairs as adversaries deploy sophisticated generative AI-based scraping attacks. Below, I share detailed, up-to-the-minute insights on the risks we’re observing through our threat research unit, ACTIR, and through our work alongside the most recognizable companies in the world as they trailblaze uses for generative AI and, in lockstep, protections for consumers.
The Evolution of Scraping Technologies
Traditionally, web scraping has been a concern for businesses because of its potential to extract valuable data without permission. With the advent of generative AI, however, scraping techniques have become more advanced and accessible. Today, commercial scraping services and scraper-as-a-service offerings are on the rise, enabling even those with minimal technical skills to deploy scraping bots. This shift has led to an increase in scraping activity, significantly impacting data privacy and security.
Generative AI has transformed scraping from a specialized task requiring significant technical know-how into a straightforward process. AI-driven bots can understand complex instructions in plain language and convert them into executable actions without any detailed knowledge of a website’s underlying structure. This has lowered the barrier to entry for scraping, posing a severe risk to data security.
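To make that concrete, here is a minimal sketch of how such a bot might work. The `ask_llm` helper is hypothetical, standing in for any hosted LLM API that can translate a plain-language request into a CSS selector; the rest uses standard Python libraries.

```python
import requests
from bs4 import BeautifulSoup

def ask_llm(instruction: str, html_sample: str) -> str:
    """Hypothetical LLM call: given a plain-language request and a snippet
    of page HTML, return a CSS selector pointing at the target data."""
    raise NotImplementedError("stand-in for any hosted LLM endpoint")

def scrape(url: str, instruction: str) -> list[str]:
    # Fetch the page like any ordinary visitor would.
    html = requests.get(url, timeout=10).text
    # The LLM infers where the data lives -- the operator needs no
    # knowledge of the site's underlying structure.
    selector = ask_llm(instruction, html[:4000])
    soup = BeautifulSoup(html, "html.parser")
    return [el.get_text(strip=True) for el in soup.select(selector)]

# e.g. scrape("https://example.com/catalog", "grab every product price")
```

The operator supplies only the URL and an instruction in plain English; the model does the structural reasoning that once required a skilled developer.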
Infrastructure for Abuse: Reverse Proxy Services
Another emerging threat vector facilitated by generative AI is the use of illegal reverse proxy services to abuse LLM platforms. These services allow attackers to bypass geo-restrictions and conceal their activities, making it challenging for AI platform providers and regulatory authorities to track and mitigate malicious actions. Such setups are often used to generate phishing emails, create deepfake videos, or perform other harmful activities while leaving little digital trace.
The use of these reverse proxies has become a significant concern as they enable attackers to operate anonymously, complicating efforts to impose accountability and enforce security measures. Cybersecurity professionals are witnessing a sharp increase in traffic from these services, indicating their growing popularity among cybercriminals.
Abuse using reverse proxy services is escalating at such an alarming speed that my upcoming blog will be dedicated entirely to this type of attack. Stay tuned for more details.
Advanced Detection and Mitigation Strategies
To combat these sophisticated threats, our Arkose Bot Manager platform deploys a blend of bot detection capabilities along with workflow anomaly and API instrumentation detection features. The core bot detection capability evaluates traffic for device-, traffic pattern-, network-, and biometric-based anomalies. We also partner with our customers, who as some of the most recognizable brands in the world are also among the most targeted, to define new detection methods and mitigation measures. By analyzing traffic patterns and anomalies, our cybersecurity teams can effectively pinpoint emerging threat vectors and trends.
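As an illustration only (not the actual Arkose Bot Manager implementation, whose signals and weights are proprietary), a layered detector of this kind can be sketched as a weighted score over per-category anomaly signals:

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Per-request anomaly scores in [0, 1], one per detection category."""
    device: float     # e.g. headless-browser markers, spoofed user agent
    traffic: float    # e.g. request rate far beyond human browsing speed
    network: float    # e.g. datacenter ASN, known proxy exit IP
    biometric: float  # e.g. inhumanly uniform mouse/keystroke timing

# Illustrative weights -- a real system would tune these continuously.
WEIGHTS = {"device": 0.3, "traffic": 0.25, "network": 0.25, "biometric": 0.2}

def risk_score(s: Signals) -> float:
    return (WEIGHTS["device"] * s.device
            + WEIGHTS["traffic"] * s.traffic
            + WEIGHTS["network"] * s.network
            + WEIGHTS["biometric"] * s.biometric)

def decide(s: Signals) -> str:
    score = risk_score(s)
    if score > 0.8:
        return "block"      # near-certain automation
    if score > 0.5:
        return "challenge"  # suspicious: present a challenge
    return "allow"
```

Combining independent signal categories this way means a bot that evades one detector (say, by rotating residential IPs) still accumulates risk from the others.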
For example, in dealing with the increased traffic from reverse proxies, Arkose Bot Manager can identify traffic coming from mirror sites or reverse proxies and either challenge it or stop it outright. Our customers’ cybersecurity teams can then use the data to develop long-term strategies that not only stop immediate threats but also dismantle the networks behind these malicious activities. A good example is our ongoing collaboration with one of our largest customers to stop and take down a large reverse proxy operation based out of China.
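One common tell of a mirror site is that the page is served from a domain the legitimate service never uses. A minimal, hypothetical check of that kind (not Arkose Labs’ actual detection logic) might compare the Host and Origin headers a request arrives with against an allowlist:

```python
from urllib.parse import urlparse

# Domains the legitimate service actually serves from (illustrative).
ALLOWED_HOSTS = {"www.example.com", "example.com", "api.example.com"}

def looks_proxied(headers: dict[str, str]) -> bool:
    """Flag requests whose Host/Origin/Referer point at an unknown domain,
    a typical sign the page is being relayed through a mirror site."""
    host = headers.get("Host", "").split(":")[0].lower()
    if host and host not in ALLOWED_HOSTS:
        return True
    for field in ("Origin", "Referer"):
        value = headers.get(field)
        if value:
            domain = (urlparse(value).hostname or "").lower()
            if domain and domain not in ALLOWED_HOSTS:
                return True
    return False
```

Real-world detection layers many more signals on top of this, since sophisticated proxies rewrite headers, but the header mismatch remains a cheap first filter.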
These measures have proven effective in reducing the impact of attacks and maintaining the integrity of LLM platforms.
The Role of Computer Vision Technologies
The emerging role of computer vision technologies in cybersecurity comes up in just about every discussion we’re having with prospective customers. While the use of computer vision to solve CAPTCHAs has historically been limited by accuracy issues, advances in model architectures are beginning to change the landscape. Attackers are increasingly exploiting these technologies to bypass traditional security mechanisms, prompting a need for innovative defenses.
Cybersecurity teams are now re-evaluating their approaches to designing CAPTCHAs and other security tests to counteract the advances in computer vision technologies. By understanding the capabilities of the latest AI models, they can develop more robust systems that are difficult for AI-driven tools to circumvent.
Here at Arkose Labs, we harness AI to secure AI against cyberattacks, including the AI-based threats adversaries are using. We do that by employing multiple techniques that harden our challenges against machine vision and AI solvers. For example, we deploy image perturbation, which uses deep neural network methods to insert pre-designed noise into an image to confuse the models. The result is that a genuine human would see, let’s say, a dog, whereas a malicious bot would “see” a ghost. The same challenge thus looks different depending on whether the action comes from a consumer or a bot.
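The general technique is known in the research literature as adversarial perturbation. The sketch below uses the classic fast gradient sign method (FGSM) against an off-the-shelf classifier to show the idea; it is an illustration under that assumption, not Arkose Labs’ production method, which relies on its own pre-designed noise patterns.

```python
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# An off-the-shelf classifier stands in for an attacker's vision model.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# Normalization is omitted for brevity; the mechanics are unchanged.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # pixel values in [0, 1]
])

def fgsm_perturb(image_path: str, epsilon: float = 0.03) -> torch.Tensor:
    """Return an FGSM-perturbed copy of a challenge image."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)
    logits = model(x)
    label = logits.argmax(dim=1)            # model's current prediction
    loss = F.cross_entropy(logits, label)
    loss.backward()
    # Nudge every pixel along the sign of the gradient: the change is
    # imperceptible to a human but degrades the classifier's prediction.
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()
```

Because the noise is computed from the model’s own gradients, a human still sees the original image while the solver’s confidence collapses or flips to the wrong class.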
We constantly evaluate our suite of challenges against advanced computer vision models, including generative AI platforms, and test the models against various criteria, including cognition, attempt approach, and recall. Those efforts are bolstered by the work leading LLM providers are doing to build protections into their platforms to ensure they aren’t abused to circumvent CAPTCHAs and similar defenses.
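A bare-bones version of such an evaluation loop, with a hypothetical `solver` standing in for whichever vision model or generative AI platform is under test, might track solve rates across repeated attempts:

```python
from collections.abc import Callable, Iterable

def evaluate_solver(
    solver: Callable[[bytes], str],           # hypothetical model under test
    challenges: Iterable[tuple[bytes, str]],  # (image bytes, correct answer)
    attempts: int = 3,
) -> dict[str, float]:
    """Measure how often a vision model solves challenges, and whether
    repeated attempts improve its success rate."""
    first_try = within_budget = total = 0
    for image, answer in challenges:
        total += 1
        for i in range(attempts):
            if solver(image) == answer:
                within_budget += 1
                if i == 0:
                    first_try += 1
                break
    return {
        "first_try_rate": first_try / total,
        "solve_rate": within_budget / total,
    }
```

Running every new challenge design through a harness like this, against the strongest available solvers, is what keeps the defenses ahead of the models.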
Conclusion
A fresh McKinsey analysis found that four specific functions could account for 75% of the total annual value a company can derive from deploying generative AI. Those four functions? Customer operations, marketing and sales, software engineering, and research and development – all mission-critical areas vulnerable to new threat vectors like GPT prompt compromise.
Defending against these threats requires businesses to perform a balancing act: harnessing the power of generative AI while mitigating its vulnerabilities. But the payoff is substantial. Enterprises will be better equipped to protect themselves and their consumers from the growing wave of AI-driven threats before they can make an impact.
Have questions about how your company can achieve that balance and maximize the value of generative AI? Reach out to me here and let’s discuss your situation and the best ways to reduce risks and associated costs.
*** This is a Security Bloggers Network syndicated blog from Arkose Labs authored by Vikas Shetty. Read the original post at: https://www.arkoselabs.com/blog/navigating-frontier-ai-driven-cybersecurity-threats/