AI-generated media, whether photos or videos, can draw significant criticism and pushback from your existing audience. While AI reduces reliance on expensive equipment and makes certain creative executions more accessible, brands must realise that using it carries an additional message, one that can distract from the ad’s intended message.
Because of this, using AI in the wrong way can result in public backlash, poor marketing performance, and a tainted brand image.
But why does this happen? First, it’s important to acknowledge that while AI-generated media represents a huge leap in AI capabilities and business efficiency, AI is also viewed negatively by a significant portion of the public.
During an AI imagery webinar, MediaOne CEO Tom Koh and his co-host, Walter Lim, guided marketers through the thoughtful use of AI imagery, showing how businesses can benefit from its generative efficiency without suffering the drawbacks of public backlash.
Key Takeaways
- Public perception of AI remains mixed. While AI improves efficiency, poorly executed or overly artificial visuals can make campaigns feel fake or insincere.
- Brands built on trust or emotion must tread carefully. Industries like finance, healthcare, insurance, and government risk losing credibility with obvious AI-generated imagery.
- AI-generated, nostalgia-driven campaigns can easily backfire. Replacing authentic creative craftsmanship with AI visuals can alienate long-time fans and damage brand sentiment.
- Responsible AI use is most effective for product demos or neutral content, where emotional connection, human faces, or legacy imagery aren’t central to the message.
- Establishing internal AI-use guidelines is essential, as it helps marketing teams determine when AI enhances creativity versus when it risks undermining authenticity.
Public Sentiment Around AI
The following are some ways in which people perceive AI-generated media:
- Fake and inauthentic-looking. In its current form, a good portion of people can still tell what’s AI and what’s not. Plus, even if it’s not immediately apparent that it’s AI, there will always be people looking deeper.
- No heart in the craft. AI is criticised in the art and creative community as taking the heart out of the process. While this is more of a philosophical argument around art, it still contributes significantly to people’s disdain towards AI.
- Inauthentic messaging. Because AI allows for such easy creation of content, some people feel that the message around the AI-generated content lacks authenticity and value. Much like a “Sorry” that was not sincere.
- Cheap. With the sheer amount of AI content flooding the internet, it has become common and ubiquitous. Obvious AI usage suggests a company is cutting corners, unwilling to pay human professionals to execute an idea that AI has rendered sloppily.
- Gets the most flak from the creative community. This community gives the most pushback and resistance to AI-generated imagery. So if your users are mainly creative types, you’d want to be extremely cautious with AI imagery.
If your brand is already dealing with a publicity crisis, experimenting with AI will only stoke the fire that’s already burning. In that situation, starting to use AI-generated images is rarely the best choice. Worse still: using AI-generated imagery for your official statements and apologies.
AI’s Image Realism in Its Current Stage

Source: https://www.youtube.com/watch?v=krnQJIl8tnA
It’s crucial to note that with anything AI-generated, no matter how realistic, there will be sharp-eyed people looking at every square inch of the image, snuffing out signs of AI. It’s an inevitability whenever we’re creating AI imagery.
While the key isn’t to eliminate this nitpicking, we should aim to at least reduce it by using media that is not obviously AI-generated. It takes an excellent AI user to create images that look authentic and indistinguishable from real life; in less skilled hands, the realism of AI imagery remains questionable.
When it comes to videos, these AI-apparent gaps become even more evident: exaggerated facial expressions, mouth movements too wide for their nuanced dialogue, and a polished smoothness never seen in real life.
When Should You Be Cautious with AI Use?
This doesn’t mean you have to steer completely clear of using AI-generated media, however. You may still use it, but you’d want to be extra careful when using it under the following circumstances:
Be Cautious with AI when Establishing Trust and Confidence

Source: Slide 4
When the Ministry of Finance ran some AI-apparent ads, they drew flak from some of the more vocal people of Singapore.
For a government body, a high level of trust is crucial. They need to convey that they are “Official.” However, AI-apparent imagery doesn’t convey that kind of trust. Using AI in this case would go against what the body should strive to establish—confidence and a sense of safety among the people of Singapore.
Another example of AI eroding trust and confidence is through these retirement services shown below:

Source: Slide 12
The ads all feature a downloadable resource, which is good marketing practice for acquiring leads. However, the creatives they’re anchored to are anything but.
Their sloppy AI generation makes them look hastily made, and they fail to convey the trust you’d want from someone handling your CPF. It’s imperative that retirement companies, or any company that needs to earn customer trust (e.g., insurance, banking), take the time and effort to produce their content well, even if that means using non-AI means to do so.
Tom and Walter go into a more detailed teardown of AI imagery, indicating the particular areas that are dead giveaways of generative AI.
See the replay here.
Be Cautious with AI when Appealing to Emotion
An appeal to emotion aims to tug at people’s heartstrings, supposedly highlighting a brand’s caring nature.

Source: Slide 7
Banks are a critical service throughout a person’s life journey. As such, banks often position themselves as a life partner, appealing to emotion when it comes to ads. Ads for the banking industry should ideally celebrate emotion, life, and humanness.
When POSBank attempted to do so using AI, they made a glaring omission.
Using imagery of a woman growing older through life, POSBank attempted to convey emotion and life’s joy. However, since the background didn’t age along with her, the attempt to tug at the heartstrings fell flat, and POSBank’s effort to present itself as the ideal life partner came across as inauthentic.
When trying to appeal to emotion, make sure the execution of the ad idea is perfect. Instead of warming people’s hearts, these AI errors will only dishearten those who see them.
Be Cautious with AI when Nostalgia is Involved
Nostalgia is a powerful marketing tool, especially if it forms the base of your brand loyalty.
Coca-Cola has a long history of creating vibrant caricatures, featuring diverse Christmas characters and a festive art style. They’ve made memorable Christmas marketing campaigns, such as the Coca-Cola Christmas truck and the bright-red Santa Claus.
Coca-Cola’s grip on the Christmas look has made it a nostalgic, reminiscent brand, associated with creating magical memories.

However, Coca-Cola served up controversy when it ran its “Holidays Are Coming” ad with an apparent AI-generated look. Internet users were quick to air their frustrations at a company they once thought valued the art of the craft.
Users noted how “cheap” and “disappointing” Coca-Cola’s new Christmas ad was, in stark contrast to their previous ones, full of character and personality.
So, if your brand has had a rich history of, say, “crafting magical moments,” you should be very careful if you plan to change how you craft said magical moments, especially if there’s nostalgia riding on how you used to do things.
Another example of a brand succumbing to AI controversy is Toys“R”Us.

The company was at the forefront of childhoods and Christmas seasons for over six decades, and Geoffrey the Giraffe, the face of the brand, is a beloved children’s mascot. So, when the toy company turned to AI to tell the story of its founder, Charles Lazarus, some people were shaken up.
To many, Toys”R”Us is a nostalgic brand that embodies fun and memories. So when people saw an inorganic Geoffrey the Giraffe and an even more inorganic Charles Lazarus, they felt that their nostalgia was spat on and their childhood ruined.
AI Is Not a License to be Dishonest Towards Customers
Honesty should remain at the core of your business practices and work culture. You can’t deepfake a celebrity onto your ads just to give your brand more “credibility.” You need to properly partner with influencers in order to use their likeness. You can’t just use AI to steal it.
Otherwise, you’d be painted in the same light as the scammers who routinely use AI to defraud people.
At this point, we’ve discussed at length the need to be cautious with AI. Are there any instances where using AI is perfectly acceptable and unlikely to draw flak?
AI Imagery Works Best if No Nostalgia or Emotion is Involved
If you’re generating AI imagery to showcase how effective a product is, and the AI isn’t exaggerating the product’s actual capabilities or misleading people, it should be fine. A product demonstration involves no nostalgia or emotion; it simply highlights the product’s efficiency in its intended use. There’s nothing there to cry about broken childhoods.
Bosch uses AI beautifully and executes its AI imagery without going into that uncanny valley. They achieved a non-controversial output because:
- Bosch featured no humans or living things in their AI-generated video. These are often the most difficult to reproduce naturally through video.
- Bosch isn’t trying to tug heartstrings with their ad. They’re trying to showcase their cooking appliances.
- Bosch uses a fast-paced video, leaving little opportunity for generative-AI gaps to take up screen time.
Going through the video, you can see the AI portions, which include the kitchen layouts “without Bosch” and “with Bosch,” as well as the cut vegetables. You’ll need to pause the video to even see the cut broccoli, carrots, brussels sprouts, and red peppers that were generated by AI.
It wouldn’t be a wise use of resources either to hire an entire special effects team to produce these flying slices for a less-than-five-second segment. So, using AI was the wise choice.
Using AI to Criticise AI is a Valid Marketing Approach
Dove criticised how AI generates “the most beautiful woman in the world,” which, in turn, also meant they’re challenging existing beauty standards focused on perfection and infallibility.
Through their AI-critique ad, Dove faced AI head-on, prompting it to create “the most beautiful woman in the world according to ‘Real Beauty Ad’”.
The AI program then imagined women with disabilities, plus-size women, women of colour, women with aged skin, Asian women, Southeast Asian women, and women of differing ethnicities, among a vast range of other underrepresented groups.
It wasn’t just a critique of AI, but also of existing cultural beauty standards that leave many women feeling left out and alienated.
Drawing the Line with AI
AI is a technological innovation, and regardless of how we feel about it, it’s here to stay. Now that it has permeated our digital lives, how do we ensure we don’t make crucial AI mistakes when running our marketing campaigns?
For one, we need internal guidelines on where and when AI use is permissible. Perhaps AI can be used for internal training videos but avoided entirely in public-facing outputs.
Or, we could permit AI only when specific criteria are met. For example: 1) the ad does not hark back to nostalgia, 2) the ad will not feature people, and 3) the ad is not attempting to appeal to emotion. If all three boxes are checked, then AI may be used. This checklist will differ for every brand, however.
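A checklist like this can even be encoded so every campaign brief is screened the same way. The sketch below is purely illustrative: the function name and the three criteria are the example conditions above, not an actual MediaOne tool, and your own guidelines would likely add more flags.

```python
def ai_use_permitted(uses_nostalgia: bool,
                     features_people: bool,
                     appeals_to_emotion: bool) -> bool:
    """Hypothetical gate: AI imagery is allowed only when none of the
    example risk flags from the checklist apply."""
    return not (uses_nostalgia or features_people or appeals_to_emotion)

# A fast-paced product demo: no nostalgia, no people, no emotional appeal.
print(ai_use_permitted(False, False, False))  # True — AI use is acceptable

# A Christmas campaign trading on nostalgia and emotion.
print(ai_use_permitted(True, False, True))   # False — avoid AI imagery
```

The point is not the code itself but the discipline: writing the rules down, in whatever form, forces the team to answer the same questions before every AI-assisted campaign.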
By having company guidelines on AI use, we reduce the risk of backlash, shield the brand from further tarnishing, and promote responsible AI use within our workforce.
Catch the replay here.



