Generative AI

Force of Change or Potentially Dangerous Flub? (Chris Wright Commentary)



Where do we turn when we're stumped by a complex question or situation and need guidance?

That’s easy. We pull up our search engine of choice. Most of us make at least a handful of online queries daily.

But are we putting too much trust in these technologies? What if the digital tools we rely on don’t always provide accurate information? Unfortunately, that’s still the case for many generative artificial intelligence (AI) platforms.

We need look no further than the recent generative AI-related inaccuracies, and there are many, for evidence. Should we put glue on our pizza to make the cheese stick? Of course not. But that's what Google's AI Overviews asserted. What about eating rocks for health benefits? That's a scary prospect, yet it's a tip generative AI has offered. While many online tools carry disclaimers, such as Google's statement that the technology is "experimental," consumers are apt to take the content at face value. And that's a slippery slope.

Glaring and somewhat comical errors in generative AI results, like dogs playing in the NBA, are simple to spot. But what about digital content that has seeds of truth in it? Gray areas are increasingly common around sensitive subjects like health care decisions, financial investments or national elections. In worst-case scenarios, criminal actors could leverage generative AI content to manipulate us with social engineering or disinformation campaigns.

Consider, for example, a pro-Russian influence group's latest effort around the 2024 Paris Olympics. The campaign's generative AI components floated the idea of potential terrorist attacks via a fake Netflix documentary narrated by a spoofed Tom Cruise voiceover, a malicious tactic that Microsoft Threat Analysis Center experts said was used to "generate a sense of fear and uncertainty."

Some misleading generative AI content is unintended and innocuous. Other content, like the material around the 2024 Paris Olympics, may be designed with an agenda: to sway public opinion, convince consumers to divulge personally identifiable information or trick them into unknowingly opening entry points into their digital systems. So, how easy is it to tell the difference? It depends. That's why following basic precautions is crucial.

First and foremost, we should bring a heavy dose of skepticism to the generative AI content we find online. If the material seems suspect, we should research and consider the integrity of the original source. Generative AI creates content from a range of inputs: text, video, images and more. It isn't necessarily pulling accurate or current information, and it may ignore the original context. Users should always verify content against trusted, credible sources.

If we are caught by deceptive generative AI content (e.g., by clicking on a suspicious link), having layered security practices in place can prevent or mitigate the impact. Measures like keeping systems up to date, patching software and hardware and using multi-factor authentication are no-brainers. We should also use password managers to assign a different password to each account, preferably a passphrase of 4-7 memorable but unrelated words. For businesses, an often underutilized yet cost-effective step is hiring an experienced cybersecurity firm. These partners can help us develop and implement strategies to address our risks and vulnerabilities, including deploying filtering systems and conducting employee security awareness training.
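To make the passphrase advice concrete, here is a minimal sketch in Python that builds one from randomly chosen, unrelated words. The word-list path and the word count are illustrative assumptions, not a recommendation of any particular tool; most password managers can generate passphrases like this for you.

    import secrets

    # Load a word list; /usr/share/dict/words is a common location on
    # Unix-like systems (an assumption; substitute any word list you trust).
    with open("/usr/share/dict/words") as f:
        words = [w.strip().lower() for w in f if w.strip().isalpha()]

    # Pick unrelated words with a cryptographically secure generator
    # (secrets, not random), within the 4-7 word range suggested above.
    num_words = 5
    passphrase = " ".join(secrets.choice(words) for _ in range(num_words))

    print(passphrase)  # e.g., "copper lantern fjord maple orbit"

Because each word is drawn independently at random, adding words multiplies the number of possible passphrases, which is what makes a handful of unrelated words both memorable and hard to guess.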

Generative AI is a hot topic, and it's tempting to get caught up in the hype about its potentially "revolutionary" uses. However, recent missteps confirm that the technology is still in its infancy. For now, the key is to consistently verify the generative AI content we encounter. By adopting a cautious mindset and cyber hygiene measures like those listed above, we can fortify ourselves against social engineering and disinformation campaigns and their ill effects.


Chris Wright is co-founder and partner at Sullivan Wright Technologies, an Arkansas-based firm that provides tailored cybersecurity, IT and security compliance services. For more information, visit sullivanwright.com.




