Why Generative AI isn’t necessarily a golden ticket in the world of advertising
In response to this growing concern, the ASA published an article last year on AI advertising, explaining that the CAP Code (which governs advertising in the UK) is largely technology-neutral. The ASA will therefore review any ad that falls within its scope and apply the same regulatory principles regardless of how the ad was created, including ads generated by AI. Advertisers using AI-generated images in their campaigns will accordingly need to ensure that their ads comply with all parts of the CAP Code, just as any traditional ad would. Perhaps most importantly, advertisers must still ensure that their ads do not mislead consumers by misrepresenting their own products, even where the content was generated by AI.
International responses
We are now also starting to see regulatory responses to generative AI at governmental level, with different countries taking a variety of approaches. At the “pro-regulation” end of the spectrum, the EU recently passed the EU AI Act, which imposes transparency and labelling requirements that may oblige certain users of AI-generated works to disclose the nature of that content appropriately. Little detail has been provided on how these disclosure requirements can be satisfied, but they seem likely to affect advertisers who wish to take advantage of these technologies.
The UK, on the other hand, is following a “pro-innovation” approach. Rather than implementing laws that impose direct obligations on the use of AI, a patchwork of new regulatory principles, sector-specific best practice and pre-existing legislation (such as the Consumer Rights Act) will be used to regulate the commercial use of AI. The Government has also presented various other initiatives, which are expected later this year.
Other potential pitfalls – avoid getting your just desserts
Ensuring that AI-generated ads do not mislead and are properly labelled where necessary is not the only concern for advertisers. They must also ensure that AI-generated ads do not infringe anyone’s copyright, trade marks or personality rights, and that the ads do not breach any other element of the CAP Code. Advertisers would also do well to keep implicit bias and stereotyping front of mind when reviewing AI-generated adverts.
AI systems are known to unintentionally produce content that perpetuates harmful stereotypes relating to matters such as race and gender. To avoid breaching advertising rules (such as rules 4.1 and 4.9 of the CAP Code) and suffering the resulting reputational damage, advertisers must consider whether generated content is in fact promoting or perpetuating damaging stereotypes. In addition, some social media platforms already have their own generative AI policies in place which advertisers will need to follow. Many of the major players, for instance, have already put disclosure and labelling rules in place that apply to advertising on their services.
The ASA’s position remains quite simple: continue to comply with the CAP Code when using AI-generated ads, just as you would for traditional ads. Best practice for advertisers at the moment is therefore to maintain human involvement at all stages, sense-checking AI-generated content for anything misleading, stereotypical or infringing. For now, further regulatory clarification is not necessary on the ASA’s part, and could ultimately confuse the picture. The investigation into Willy’s Chocolate Experience, and indeed any other similarly substandard AI-generated adverts, will therefore likely just reinforce the ASA’s current position.