A researcher reviewing AI-assisted transcript clusters
Project Overview
  • My Role: UX Researcher
  • Scope: Workflow design & methodology
  • Methods: Mixed-methods, AI-assisted synthesis
  • Context: Cross-functional product teams
  • AI Tools: Claude, Dovetail, Microsoft Copilot, ChatGPT
  • Languages: English & French
Key Insight

The most useful thing AI does in research isn't surface insights. It buys back the time to develop them properly.


AI-Assisted Research

Using AI to do more research, not less thinking.

An hour-long interview produces roughly 10,000 words. A study with eight participants produces 80,000. The bottleneck in qualitative research has rarely been the data itself. It is the time between collecting it and arriving at a finding clear enough to act on.

AI tools compress that gap. First-pass affinity clusters, preliminary code assignment, discussion guide drafts — the mechanical work that used to take days now takes hours. The quality of the research does not come from that speed. It comes from what you do with the time it frees up.

Laptop showing Dovetail affinity clusters alongside handwritten research notes

AI with human oversight. The model handles volume and speed. Every inference, every cluster, every pattern it surfaces gets reviewed and owned by the researcher before it becomes a finding.

AI Accelerates
  • First-pass transcript clustering
  • Affinity grouping at scale
  • Screener and discussion guide drafts
  • Preliminary code assignment
  • Pattern surfacing for review
  • Report formatting and documentation
Researcher Decides
  • Interview rapport and participant trust
  • Ambiguity, hesitation, and contradiction
  • Methodological judgment at every step
  • Which finding changes the recommendation
  • Stakeholder framing and communication
  • Bias review of all AI-generated output

The researcher sets the direction, reviews the output, and owns the final decision.

Writing a good prompt works a lot like writing a good interview question. Be specific about what you need, leave room for what you didn't expect, and know in advance what a bad answer looks like. The same habits that make you careful with research questions make you careful with prompt engineering.

Screenshot of a research synthesis prompt in Claude
1. Research framing: setting the interpretive context before the model reads any data. It serves the same function as a research brief or screener preamble.

2. Verbatim anchoring: requiring a direct quote keeps every finding traceable to its source, so the researcher can verify or reject each one independently.

3. Constraint clause: guards against the model resolving ambiguity that the researcher needs to preserve. Uncertainty in the data is data.
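The three components above can be sketched as a simple prompt template. This is a minimal illustration in Python; the function name, wording, and sample transcript are hypothetical, not taken from any actual study or the screenshot above.

```python
# Illustrative sketch: assembling a synthesis prompt from the three
# components described above. All strings here are made up for demonstration.

def build_synthesis_prompt(transcript: str, research_question: str) -> str:
    """Compose a synthesis prompt from framing, anchoring, and constraint parts."""
    # 1. Research framing: set the interpretive context before any data.
    framing = (
        "You are assisting a UX researcher with a first-pass synthesis.\n"
        f"Research question: {research_question}\n"
    )
    # 2. Verbatim anchoring: every finding must be traceable to a quote.
    anchoring = (
        "For every theme you propose, include at least one verbatim quote "
        "from the transcript so each finding can be verified independently.\n"
    )
    # 3. Constraint clause: preserve ambiguity rather than resolving it.
    constraint = (
        "Do not resolve ambiguity or contradiction in the data; flag it "
        "instead. Uncertainty in the data is data.\n"
    )
    return f"{framing}\n{anchoring}\n{constraint}\nTranscript:\n{transcript}"

prompt = build_synthesis_prompt(
    "P3: I like it, but... I don't know.",
    "How do users feel about onboarding?",
)
```

The point of separating the parts is the same as structuring an interview guide: each component can be reviewed, reused, and tightened on its own before the model ever sees the data.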

A note on confidentiality

The example above is for demonstration purposes only, to show how I integrate AI into my workflow. To protect confidential business information, the task scenarios, participant data, and qualitative results shown here are illustrative: the AI (Claude.ai) was instructed to generate a fictional interview summary. The image represents the kind of output my research typically produces, not actual data from any real study or real participants, and it should not be attributed to any specific organisation or cited as factual research data.

Getting findings into a room is only half the job. Getting them to stick is the other half. The way research is framed, sequenced, and delivered shapes whether it becomes a decision or a document nobody reads again.

Stakeholders don't act on data. They act on clarity. The researcher's job is to translate one into the other — and to know which parts of the evidence need context, and which just need to be said plainly.

Researcher presenting findings to a cross-functional product team

The research that lands is not always the most thorough study or the longest report. It's the research that arrives at the right moment, framed in a way the team can use, and done honestly enough that people trust the conclusions even when they're uncomfortable.

That's what I aim for. If you're working on something where the user experience matters and the decisions are genuinely hard, I'd be glad to be part of the conversation.

Let's connect

Have a research challenge?

Whether you have a project in mind or just want to talk through an idea, I'd like to hear from you.
