Now we come to the racist question.

By: Chris Greenough – CMO & Co-founder of HYPERLAB

Over the past week, my news feed has been dominated by two conversations: Watsons Malaysia’s sexist, blackface Ramadan ad, Legenda Cun Raya, and Apple’s focus on Machine Learning. (View for a laugh)

As a marketer and founder of an AI (Artificial Intelligence) company, I found myself thinking about the intersection of marketing, creativity and artificial intelligence.

Futurist Ray Kurzweil predicts that by 2029, the first business will be founded by an AI. While this AI won’t be able to execute every function of the business, it’s quite possible it will settle into the role of CMO.

Which raises the question: could it conjure up an equally racist, sexist ad?

Can an AI be racist?

To answer that, we have to understand how an AI is built and how it forms an opinion of the world around it. Although many instruments make up the AI symphony, we’ll focus on one of the main components that makes a machine smart: Machine Learning.

In the simplest terms, to teach a machine what “beauty” is, we would feed thousands, if not millions, of photos of beautiful women into it, and it would build a probabilistic model out of the similar patterns it finds.

Once it has learned this, you could give it new images and it could determine whether the person in each one is “attractive” or not. So, where do we get these images?
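To make this concrete, here is a minimal sketch of such a probabilistic model. The attribute names and training examples are entirely hypothetical; each “photo” is reduced to a few hand-labelled features rather than raw pixels:

```python
from collections import defaultdict

# Hypothetical training set: each "photo" is reduced to coarse,
# hand-labelled attributes (illustrative assumptions, not a real pipeline).
training_data = [
    ({"symmetry": "high", "smile": "yes"}, "attractive"),
    ({"symmetry": "high", "smile": "no"}, "attractive"),
    ({"symmetry": "low", "smile": "yes"}, "attractive"),
    ({"symmetry": "low", "smile": "no"}, "not_attractive"),
]

def train(examples):
    """Build a probabilistic model: P(label | attribute pattern)."""
    counts = defaultdict(lambda: defaultdict(int))
    for features, label in examples:
        pattern = tuple(sorted(features.items()))
        counts[pattern][label] += 1
    model = {}
    for pattern, label_counts in counts.items():
        total = sum(label_counts.values())
        model[pattern] = {lab: n / total for lab, n in label_counts.items()}
    return model

def predict(model, features):
    """Classify a new image by the most probable label for its pattern."""
    pattern = tuple(sorted(features.items()))
    probs = model.get(pattern)
    if probs is None:
        return None  # pattern never seen during training
    return max(probs, key=probs.get)

model = train(training_data)
print(predict(model, {"symmetry": "high", "smile": "yes"}))  # attractive
```

The model’s entire “opinion” is just the frequencies in its training data, which is exactly why the choice of input data matters so much.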

Teaching a bot to be racist

This is called supervised learning: a human is responsible for choosing what the machine learns from. For example, if an AI is only shown images of beautiful Caucasian women, it too would likely define beauty by skin colour. But if I feed it an even distribution of beautiful women from around the world, it’s quite probable the machine would find all women beautiful.

The fact is, a machine judges on probability, but humans act on bias; a machine’s capability for bias extends only as far as our own. So we have to teach machines to reflect the values we want to see in the world. Unfortunately, teaching takes effort.
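The skewed-curriculum effect can be sketched in a few lines. The datasets below are toy assumptions, but they show how the same learning rule produces a biased or an unbiased “opinion” depending purely on what it is fed:

```python
def attractiveness_rate(dataset, group):
    """Empirical P(attractive | group): the machine's learned 'opinion'."""
    labels = [label for g, label in dataset if g == group]
    if not labels:
        return 0.0
    return labels.count("attractive") / len(labels)

# Skewed curriculum: positive examples drawn from only one group
skewed = [("caucasian", "attractive")] * 90 + [("asian", "not_attractive")] * 10

# Balanced curriculum: beautiful women from around the world
balanced = [("caucasian", "attractive")] * 50 + [("asian", "attractive")] * 50

print(attractiveness_rate(skewed, "asian"))    # 0.0: a "racist" rule, learned from skewed data
print(attractiveness_rate(balanced, "asian"))  # 1.0: same rule, fairer data, fair outcome
```

Nothing in the learning rule itself changed between the two runs; only the curriculum did.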

In response, a common question we hear is, “Can the bot just learn on its own?” To which we respond: of course it can, but why would you want it to?

Would you let your child learn alone?

I don’t mean literally alone. I mean learning without any curriculum or supervision, using only the media available to them. If so, their altars of educational source material would consist of a TV and a tablet.

Microsoft’s now-infamous chatbot Tay accidentally demonstrated how a perfectly benign bot can quickly turn racist after learning from its netizen peers. Only with supervised learning can we maximise the value of AI while controlling its narrative.

Who’s watching the watchers?

Although we’re still about ten years away from meeting our fictitious Founder/CMO/AI, asking whether it will be racist is a bit like asking whether a pit bull is dangerous: just look to the owner.

For this reason, among others, there have been calls for governments to step in and regulate the industry.

But regulation or not, as technology, marketing and media continue to converge, it’s our responsibility as agents in this evolution to actively build futures that are inclusive for all life, human or otherwise.

Cindy Gallop summed it up best when she spoke to Mumbrella about the responsibility of brands like Watsons: “Our industry should not only reflect the world as it really is, but has the opportunity to change the world to be what we and our consumers would like it to be.”

What’s your take on how artificial intelligence will, for better or worse, evolve the marketing landscape?

I’m the co-founder of HYPERLAB, a leader in Conversational AI, which we use to create Cognitive Customer Experiences for APAC enterprises. Continue the conversation below or email me at [email protected].