For marketers with an interest in research, it’s a good time to start talking about AI-facilitated online surveys. What are those, exactly?
They’re surveys that use machine learning to engage with respondents (think of a chatbot) and to manage much of the back-end data work involved in fielding and reporting (think of pure drudgery).
The good news is that a conversation about AI-facilitated online surveys is well underway. The bad news is that it’s rife with exaggerated claims.
By distinguishing between hype and genuine promise, it’s possible to set some realistic expectations and tap into the technology’s benefits without overinvesting your time and research dollars in false promises (which seems to happen a lot with AI).
One misconception is that AI surveys reduce fatigue because traditional surveys are too long. Not quite. Surveys are only too long if they are poorly crafted, but that has nothing to do with how the instrument is administered.
Where AI does help is in creating an experience that is very comfortable for the respondent because it looks and feels like a chat session.
The informality helps respondents feel more at ease and is well-suited to a mobile screen. The possible downside is that responses are less likely to be detailed because people may be typing with their thumbs.
There are three advantages to how AI treats open-ended questions. First, the platform we used takes that all-important first pass at coding the data for thematic analysis.
When you go through the findings, the machine will have already grouped responses into themes. If you are using grounded theory (i.e., looking for insights as you go), this can be very helpful in building momentum toward developing your insights.
Second, the AI also facilitates the thematic analysis by getting each respondent to help with the coding process, as part of the actual survey.
After the respondent answers “XYZ,” the AI tells the respondent that other people have answered “ABC,” and then asks whether that is similar to what the respondent meant.
This process continues until the respondents have not only given their answers but have weighed in on the answers of the other respondents (or with pre-seeded responses you want to test).
The net result for the researcher is a pre-coded sentiment analysis that you can work with immediately, without having to spend hours coding the responses from scratch.
The downside of this approach is that you will be combining both aided and unaided responses.
This is useful if you need to get group consensus to generate insights, but it’s not going to work if you need completely independent feedback.
A platform like GroupSolver works best in cases where you otherwise might consider open-ended responses, interviews, focus groups, moderated message boards or similar instruments that lead to thematic or grounded theory analyses.
The third advantage of this approach over moderated qualitative methodologies is that the output can give you not only coded themes but also a gauge of their relative importance.
This gives you a dimensional, psychographic view of the data, complete with levels of confidence, that can be helpful when you look for hidden insights and opportunities to drive communication or design interventions.
There are claims out there that AI helps drive speed-to-insight and integration with other data sources in real-time. This is the ultimate goal, but it’s still a long way off.
The obstacle isn’t a lack of data pipelines; it’s that data science and survey research do very different things. Data science tells us what is happening but not necessarily why it’s happening, because it’s not meant to uncover behavioral drivers.
Unless we’re dealing with highly structured data (e.g., Net Promoter Score), we still need human intervention to make sure the two types of data are speaking the same language.
That said, AI can create incredibly fast access to the types of quantitative and qualitative data that surveys often take time to uncover, which does indeed bode very well for increased speed to insight.
There is an idea out there that AI surveys can access ever-greater sources of data for an ever-broader richness of insight.
Yes and no. Yes, we can get the AI to learn from large pools of respondent input. But, once again, without human input on two fronts (from the respondents themselves and from the researcher), the results are not to be trusted, because they risk missing underlying meaning.
The final claim we need to address is that AI surveys can be created nearly instantaneously or even automatically.
Some tools generate survey questions on the fly, based on how the AI interprets responses. That’s a risky proposition.
It’s one thing to let respondents engage with each other’s input, but it’s quite another to let them drive the actual questions you ask. An inexperienced researcher may substitute respondent-driven input for researcher insight.
That said, if AI can take away some drudgery from the development of the instrument, as well as the back-end coding, so much the better. “Trust but verify” is the way to go.
So, this quote attributed to Picasso may still hold true: “Computers are useless. They can only give you answers.” But now they can make finding the questions easier, too.
The good news is that AI can do what it’s meant to do: reduce drudgery. And here’s some more good news (for researchers): there will always be a need for human intervention in surveys, because AI can neither parse meaning from interactions nor substitute for research strategy.
AI approaches that succeed will be the ones that can most effectively facilitate human intervention in the right way, at the right time.