When AI becomes a research partner
Generative AI is no longer just a subject of study. Increasingly, it is becoming an active participant in research itself. In marketing and behavioral sciences, tools like ChatGPT are now used to interview participants, simulate interactions, personalize stimuli, and even co-create content with respondents.
These new possibilities raise an important question for researchers: when does integrating generative AI genuinely improve research, and when does it add little beyond existing approaches?
A recent methodological study explores this question by offering a practical guide for researchers who want to integrate generative AI interactions into their studies – without needing advanced programming skills. Beyond academia, these developments are also relevant for organizations that rely on experiments, user research, and behavioral insights to inform decisions.
From studying AI to studying with AI
Most early research on generative AI treated it as an object of study, examining trust in chatbots, the persuasiveness of AI-generated messages, or how people collaborate with machines. While this line of work remains important, a second approach is gaining momentum.
Here, generative AI is used as a research method. Instead of static questionnaires or pre-written stimuli, participants interact dynamically with an AI system during the study. This shift allows researchers to design experiments that were previously impossible at scale.
Examples already exist. AI systems have been used to conduct hundreds of in-depth qualitative interviews simultaneously, simulate online environments with controlled social dynamics, or personalize messages in real time based on participant input. These approaches blur the traditional boundary between qualitative depth and quantitative scale.
Yet despite this potential, adoption remains limited.
When AI adds value to research
The hesitation is understandable. Generative AI is powerful, but its value depends on how it is used. The key question is not whether AI can be integrated into a study, but when it meaningfully improves the quality of research.
Generative AI is especially valuable in research settings where interaction itself matters. This includes studies that aim to understand how people respond to AI systems, how humans collaborate with machines, or how persuasion and personalization unfold in real time. It is also particularly useful when researchers want to replace static stimuli with dynamic exchanges, or when they seek to scale qualitative methods without losing conversational richness.
In other settings, the added value of generative AI is more limited – for example when absolute experimental control is essential, when the research relies heavily on specialized domain expertise or cultural context, or when non-verbal cues are central to the analysis. In these cases, established methods continue to play an important role.
Seen this way, generative AI is best understood not as a universal upgrade, but as a targeted extension of the methodological toolbox. Its usefulness depends on the research question at hand.
The table below summarizes when generative AI integrations are most likely to create value in research studies.

| Generative AI adds clear value | Established methods remain preferable |
| --- | --- |
| The interaction itself matters: responses to AI, human-machine collaboration, real-time persuasion or personalization | Absolute experimental control is essential |
| Static stimuli can be replaced with dynamic exchanges | The research relies heavily on specialized domain expertise or cultural context |
| Qualitative methods need to scale without losing conversational richness | Non-verbal cues are central to the analysis |
Even when generative AI clearly fits the research goal, many researchers encounter the same obstacle: implementation.
Lowering the technical barrier
Building an AI-driven interaction is often perceived as technically complex, time-consuming, and risky. Researchers worry about programming skills, unstable systems, unpredictable outputs, and data security. As a result, many promising ideas never move beyond the conceptual stage.
Recent developments are changing this dynamic.
Integrating generative AI into a study no longer requires advanced programming expertise. New AI coding assistants can translate plain-language instructions into functional code, allowing researchers to focus on research design rather than software engineering.
In practice, this means a researcher can describe what they want to build – such as an AI interviewer, a search assistant, or a co-creation tool – and receive working code that can be embedded directly into common survey platforms like Qualtrics. Participant inputs are sent to the AI via an application programming interface (API). Responses are generated in real time, and the full interaction is automatically stored alongside survey data.
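The round trip described above can be sketched in a few lines. The following is a minimal, illustrative sketch in Python, not code from the paper or any specific platform: the model call is stubbed out, and in a real study it would be an HTTPS request to the provider's chat API using the researcher's credentials. All names (`ai_reply`, `log_turn`) are hypothetical.

```python
# Sketch of one participant turn: input goes to the AI model via an
# API-style payload, and the exchange is stored with the survey data.
# The model call is a stub; a real study would POST the payload to the
# provider's endpoint and read the generated text from the response.

def ai_reply(system_prompt: str, history: list, user_input: str) -> str:
    """Stand-in for the API call: conversation payload in, text out."""
    payload = {
        "messages": [{"role": "system", "content": system_prompt}]
        + history
        + [{"role": "user", "content": user_input}],
    }
    # A real implementation would send `payload` to the model API here.
    return f"(model response to: {payload['messages'][-1]['content']})"

def log_turn(record: dict, user_input: str, reply: str) -> None:
    """Store the full exchange alongside ordinary survey variables."""
    record.setdefault("transcript", []).append(
        {"participant": user_input, "ai": reply}
    )

# One turn inside the embedded chat window:
record = {"participant_id": "P001", "condition": "ai_interviewer"}
reply = ai_reply("You are a neutral interviewer.", [], "I shop online weekly.")
log_turn(record, "I shop online weekly.", reply)
```

The key design point is that the survey platform only needs to pass text back and forth; all conversational behavior lives in the system prompt and the model, which keeps the survey-side script short.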
At a high level, integrating generative AI into a research study follows a simple logic. Rather than building custom software from scratch, researchers connect three familiar elements: a survey interface, an AI model, and a lightweight technical bridge between the two.
In practice, this can be broken down into a small number of concrete steps:
- The researcher designs the study as usual in a survey platform. A standard text or graphic question serves as the container for the AI interaction. From the participant’s perspective, this appears as a chat window embedded in the survey.
- The researcher selects an AI model and obtains access credentials for its API. The API allows the survey to send participant inputs to the model and receive responses in return.
- Instead of writing code manually, the researcher uses an AI coding assistant. By describing the desired interaction in plain language – such as “send the participant’s input to the AI and display the reply” – the assistant generates the necessary script automatically.
- This script is embedded into the survey. When a participant types a message, the script sends it to the AI model via the API and displays the response in real time. The interaction feels conversational, but the underlying logic remains tightly defined.
- The researcher specifies which elements of the interaction should be stored as data. Participant prompts, AI responses, or full interaction histories can be saved automatically alongside traditional survey variables such as demographics or attitudinal measures.
- The system is tested like any other online study to ensure responses are generated correctly, data is recorded as intended, and potential errors are handled gracefully.
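The steps above can be condensed into a short sketch: a conversation loop that accumulates the interaction history, followed by the row that would be written into the survey dataset. This is an illustrative outline under assumed names (`model_stub`, `run_interaction`, `survey_row`), not the paper's implementation; the model call is again stubbed.

```python
import json

def model_stub(history):
    """Placeholder for the API call; returns a numbered follow-up."""
    turns = sum(1 for m in history if m["role"] == "user")
    return f"Follow-up question {turns}"

def run_interaction(participant_inputs):
    """Loop over participant messages, keeping the full history."""
    history = []
    for text in participant_inputs:
        history.append({"role": "user", "content": text})
        history.append({"role": "assistant", "content": model_stub(history)})
    return history

# Store the full interaction next to ordinary survey variables, so the
# conversation becomes observable, analyzable research material.
history = run_interaction([
    "I mostly buy groceries online.",
    "Delivery speed matters most to me.",
])
survey_row = {
    "participant_id": "P014",
    "age": 34,
    "interaction_json": json.dumps(history),
}
```

Storing the transcript as a single serialized field, as sketched here, keeps it compatible with the flat, one-row-per-participant format that survey platforms export by default.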
- Once the study is live, all AI interactions become part of the dataset. Conversations are no longer ephemeral. They are observable, analyzable research material.
For practitioners, this setup means that interactive, AI-based research can be conducted using familiar survey tools, while producing richer behavioral data than static questionnaires alone.
From possibility to practice
Generative AI is often discussed in abstract terms, either as a disruptive force or as a source of concern. What this work highlights is a more pragmatic perspective: integrating AI into research is no longer a distant or purely technical challenge. It is a methodological choice that researchers and organizations can already make today.
By clarifying when generative AI adds value and by lowering the barriers to implementation, the goal is not to encourage its use everywhere, but to make its use more deliberate. When interaction is central to the research question, AI can expand what is feasible. When it is not, established methods remain highly effective.
Seen this way, generative AI does not replace existing research practices. It extends them. As familiarity with these tools grows, attention can shift away from technical feasibility and back to where it belongs: asking better questions about behavior, decision-making, and interaction – in contexts that matter to both science and practice.
The challenge ahead is therefore not whether researchers can work with generative AI, but how thoughtfully they choose to use it to generate insight.
This article relies on the academic paper:
Joerling, M. (2025). Integrating GenAI interactions in marketing studies: A methodological guide. International Journal of Research in Marketing, forthcoming. DOI: https://doi.org/10.1016/j.ijresmar.2025.12.003.
