By India Fizer
As AI rapidly evolves, Lifelong Crush and Broken Heart Love Affair are taking proactive steps to ensure its use remains responsible, transparent, and collaborative. Joline Matika reveals how the agency has developed clear guidelines for AI usage, co-created custom AI strategies with clients, and fostered an environment of ongoing learning.
How is your agency approaching internal guidelines around AI use? Are there specific principles or guardrails you’ve implemented to guide decision-making?
We have a clear POV and a policy around how we use AI, and just as importantly, how we won't. Our approach is grounded in three core principles: responsibility, transparency, and collaboration. Policies are no fun to read, so we've built cheat sheets and do's and don'ts for easy reference.
We've taken that one step further by sharing our approach with our clients to open the conversation and, in some cases, co-create custom AI guidelines that fit their own corporate standards. It's about building trust and navigating this space together.
To keep us honest, we've also set up an internal AI task force, affectionately known as 'Artificial Intel', that helps guide our approach, ensures responsible use, and provides support (functional AND emotional) as we continue learning, all while staying positive and not totally freaking out about it.
We’re also fostering a culture of shared learning through a monthly Lunch’n’Learn series, open discussions, and a training program that combines instructor-led workshops, task force sessions, and community-based learning.
How are you navigating the topic of transparency with clients when it comes to AI-generated content?
We never try to pull a fast one on our clients. If AI has had a hand in creating something, we say so. It's built into our protocol. Clients appreciate that openness, and it sparks good conversations about where AI adds value and where human craft is non-negotiable. In turn, we ask that our clients be upfront with us when they use ChatGPT to give feedback on our ideas … we can tell anyway, with those pesky long dashes!
What role do you think agencies should play in advising clients on AI-related brand safety and reputational risk?
Agencies should be the designated thought leaders in the AI room. AI isn't going anywhere, and it's certainly not slowing down. We need to help clients move forward with eyes wide open. That means being transparent about its use, flagging misuse, and educating on best use. It also means translating complex risks into practical, brand-friendly guidelines.
And sometimes, it means having a heart-to-heart with clients and simply saying, "I know you think it will cost less, or maybe that it will be faster, but it won't be good." Long live the triple constraint.
In short, agencies need to keep doing what we've always done: advising and partnering with clients in ways that put their brand reputation and their business objectives first.
Where do you see AI’s strengths moving forward?
Oof. There is no one thing. AI is impacting outcomes for our clients across every department. I could talk about efficiencies and eliminating mundane tasks, but everyone is saying the same thing about this and, to be honest, it's getting boring.
The real excitement lies underneath. AI lets us stop sweating the small stuff and focus on the work we actually love. At the end of the day, it's not about replacing what we do; it's about giving the team more space to create the kind of work that makes people feel something. Collectively. We're all a part of that.
AI will help incredibly talented people reach new creative heights, using technology to amplify their innate talent and passion. And we're lucky, because we're home to some of the most talented and passionate people out there.