Stop AI from normalizing plagiarism
Artificial intelligence. Machine learning. Generative AI. They’re all buzzwords for the new high-tech time saver that can streamline rote tasks and generate content at the speed of light.
But this dynamic content creator is proving to be a crisis in the making. A lawsuit filed on Tuesday in the Southern District of New York on behalf of eight news organizations, including the Sun Sentinel, accuses OpenAI and Microsoft, the makers of ChatGPT and Copilot, of insidious corporate thievery: the illegal harvesting of millions of copyrighted articles to build their “generative” AI products.
It’s the latest volley in an increasingly bitter war over AI’s apparent trampling of copyrights on original content produced by writers, artists, book authors and musicians.
Sadly, there seems to be no end in sight for this morass of mounting legal disputes. AI technology is evolving in real time with no guardrails. It’s a huge problem: AI gets its information by skimming existing content, and no regulations currently require accountability.
And as original creators desperately try to lay claim to the content they produced, big tech is marshaling its assets to fight back hard and reap the profits.
Meanwhile, the number of groups lobbying the federal government on artificial intelligence nearly tripled from 158 to 451 between 2022 and 2023. Data from lobbying watchdog OpenSecrets suggests that “large technology companies” are dominating lobbying efforts to influence legislation that will affect what AI firms can and cannot do in the future.
If left unchecked, AI will turn into a voracious plagiarist that steals content as quickly as it’s created, regurgitating it in a homogenized form that’s consumed by users everywhere with no attribution or compensation.
The stakes could not be higher for creators whose livelihoods depend on maintaining exclusive ownership of their original content.
In this latest lawsuit, the MediaNews Group-owned Mercury News, Denver Post, Orange County Register and St. Paul Pioneer Press joined forces with Tribune Publishing’s Chicago Tribune, Orlando Sentinel, Sun Sentinel and the New York Daily News.
It remains to be seen if the power of many will have any effect on the seemingly unstoppable forward motion of ChatGPT and Copilot.
The excitement about AI technology is both real and warranted. AI can be used to automate mundane tasks, analyze big data, create groundbreaking new medicine, power autonomous vehicles, and so on. It’s an unprecedented tool that opens up myriad opportunities.
But its dark side has also become starkly evident. The magic of AI imparts a new dimension to the term “fake news.”
We’ve already seen how “deepfake AI” can conjure faux images, videos and voice clones, and the trouble that can cause.
Last October, A-lister Tom Hanks abruptly announced that a video of him promoting a dental plan was not him at all but a likeness created by deepfake AI.
Months later, megastar Taylor Swift dominated the news when a slew of digitally altered pornographic images of her likeness were shared online. The deepfakes were viewed millions of times before social media platforms took action and began removing them.
And more crises are on the way, as it becomes increasingly difficult to differentiate fact from fiction and easier to destroy an individual’s reputation or livelihood with a simple click of a mouse.
The newspapers’ lawsuit claims that the two tech firms are “purloining” the papers’ original reporting without compensation “to create products that provide news and information plagiarized and stolen.”
At issue are complex copyright laws that turn on how much of a work is reproduced and how much it is transformed.
As news organizations nationwide struggle to survive, the specter of AI blatantly appropriating their original reporting with no consequences is a devastating blow that merits pushback.
Unfortunately, Congress is dragging its feet on creating new laws to address a technology that is evolving in real time. All the while, the damage is being done, as AI continues to be trained on copyrighted material already in existence.
The consortium of eight publications has the capital to fight back — after all, there is strength in numbers.
But what happens if you aren’t a big corporation, a wealthy celebrity or a political figure, and find yourself in the crosshairs of an insidious AI assault?
Authors, musicians, artists and computer coders have filed lawsuits in the past with no definitive results. You could be next.
Vlad Drazdovich is vice president of performance improvement and analytics at Red Banyan, a strategic communications and crisis management agency. As part of his role, Vlad oversees AI-based technology implementations at the firm. He lives in Fort Lauderdale.