The AI Honeytrap

On close examination, it appears most instruction on AI is focused on the user. Simple folks, newbies, wannabe ad youths, GenGPT (you made that up. Ed.) are told, quite rightly, to ask the right question, because the wrong question will give you the wrong answer; ‘rubbish in, rubbish out’, as was the mantra of early computer programming. (Those were the days, a simpler, less complicated setting.)

I’ve tried AI myself, asking ChatGPT a question about a PhD proposal. When I phrased it one way I got a reasonably good response, and when I rephrased the question I got a different but still reasonable answer. Both were indeed usable. So much so, I gave myself a little cuddle.

But dare I use them? Possibly not, because there are a great many plagiarism apps around. The first one on Google is Duplichecker, and very many more follow; Grammarly is the one used by buzzing hives of higher learning. In fact, the risk of having a proposal, or indeed a whole thesis, bounced is terrifyingly and gonad-shrinkingly real.

It’s something the institutions of higher learning take very seriously, so it might mean being exposed and expelled from the hive. But let me drag you, thankfully, back to adverts, where most of my contact with the sweet lure of AI has been.

Here I must relate a strange phenomenon that plagued my days in the making of advertising campaigns. Among the many ludicrous comments we would get from advertisers about an idea, our little pride and joy, painfully birthed and delivered kicking and mewling into the world, was “I can’t relate to it, understand it or approve it because I’ve never seen anything like it before. I am not comfortable.”

Now, because it would do no good, we would fight back the urge to scream, “This is not done for your comfort, you buffoon, it’s original thinking, so of course you haven’t seen anything like it before; that’s why it’s so effing good”.

So should there be some direction as to how to read AI from the recipient’s POV? Here I hesitate to be so preposterous as to suggest that the average advertiser might be quite happy with an unoriginal idea generated by AI, as it poses less strain on their ability to approve or disapprove.

They won’t be going to Grammarly to check; too much like hard work, and it might not make them comfortable. But I think that makes my question moot. Don’t worry about originality, it’s no longer a factor. “More people would be content to be ignorant than would take even a little trouble to acquire knowledge.” (Dr Johnson, I presume. Ed.)

To test my proposition, having written this far, I put the title of my article into ChatGPT. This is what it said; you make up your mind whether I passed or can expect my gonads to shrivel:

The idea of AI as a “honeytrap” is an interesting metaphor. A honeytrap typically refers to a scheme designed to lure someone into a compromising situation, often involving seduction or manipulation.

In the context of AI, it could suggest that while AI systems might provide enticing benefits—like convenience, efficiency, or personalized experiences—there are also potential risks and pitfalls, such as privacy concerns, data security issues, or overreliance on technology.

It’s important to approach AI with both curiosity and caution, recognizing its advantages while being aware of its limitations and the ethical implications of its use. How do you see AI in this light?

I wish I’d said that. (Maybe you did. Ed.)

Paul J Loosley is an English person who has been in Asia 40 years: 12 as executive creative director and regional planner for JWT, and 26 making TVCs. He retired some five years ago, yet still, for some strange reason, he can’t shut up about advertising. Any feedback: mail p.loosley@gmail.com (please keep it buzzing).

