A creature is formed of clay. A puppet becomes a boy. A monster rises in a lab. A computer takes over a spaceship. And all manner of robots serve or control us. For generations we’ve told ourselves stories, using themes of magic and science, about inanimate things that we bring to life or imbue with power beyond human capacity. Is it any wonder that we can be primed to accept what marketers say about new tools and devices that supposedly reflect the abilities and benefits of artificial intelligence (AI)?
And what exactly is “artificial intelligence” anyway? It’s an ambiguous term with many possible definitions. It often refers to a variety of technological tools and techniques that use computation to perform tasks such as predictions, decisions, or recommendations. But one thing is for sure: it’s a marketing term. Right now it’s a hot one. And at the FTC, one thing we know about hot marketing terms is that some advertisers won’t be able to stop themselves from overusing and abusing them.
AI hype is playing out today across many products, from toys to cars to chatbots and a lot of things in between. Breathless media accounts don’t help, but it starts with the companies that do the developing and selling. We’ve already warned businesses to avoid using automated tools that have biased or discriminatory impacts. But the fact is that some products with AI claims might not even work as advertised in the first place. That lack of efficacy is a problem in its own right, regardless of whatever other harm the products might cause. Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.
When you talk about AI in your advertising, the FTC may be wondering, among other things:
Are you exaggerating what your AI product can do? Or even claiming it can do something beyond the current capability of any AI or automated technology? For example, we’re not yet living in the realm of science fiction, where computers can generally make trustworthy predictions of human behavior. Your performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions.
Are you promising that your AI product does something better than a non-AI product? It’s not uncommon for advertisers to say that some newfangled technology makes their product better – perhaps to justify a higher price or to influence labor decisions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.
Are you aware of the risks? You need to know about the reasonably foreseeable risks and impact of your AI product before putting it on the market. If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology. And you can’t say you’re not responsible because that technology is a “black box” you can’t understand or didn’t know how to test.
Does the product actually use AI at all? If you think you can get away with baseless claims that your product is AI-enabled, think again. In an investigation, FTC technologists and others can look under the hood and analyze other materials to see if what’s inside matches up with your claims. Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.
This message is not new. Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.