A few weeks ago at Spark Together, I found myself in the same conversation I’ve had dozens of times with other consultants and researchers:
“How do you make sure clients actually do something with the research?”
“What do you do when they hire you, you deliver great insights, and then… nothing happens?”
“How do you avoid projects that just sit in a deck somewhere gathering digital dust?”
These questions come up constantly because research projects fail all the time. Not because the research was bad. Not because the insights weren’t valuable. But because somewhere between “here are the findings” and “let’s execute on this,” something breaks down.
After eight years of running research projects—and plenty of hard-earned lessons about what works and what spectacularly doesn’t—I’ve learned that research failure comes in two flavors: delivery failure (you can’t deliver the research at all) and impact failure (you deliver it, but nothing changes afterward).
Delivery failure sucks, but it’s rare. Impact failure? That’s where most of the tension lives. And it’s usually avoidable.
Let me walk you through why research projects fail from both sides—the consultant perspective and the client perspective—and more importantly, how to actually prevent it.
The two types of research failure
Let’s define what we mean by “failure” because it’s not always obvious.
Delivery failure is when you literally can’t deliver the research. Maybe you can’t get people on the phone. Maybe the analysis falls apart. Maybe the data just isn’t there. This is the worst-case scenario, but honestly? It’s not that common.
Impact failure is far more insidious. This is when you conduct the research, deliver beautiful insights, present everything perfectly… and then nothing happens. The research sits there in no man’s land. It doesn’t unlock anything. It doesn’t change any decisions. It just exists.
Impact failure is where most of the pain lives because you’ve spent resources—time, money, energy—for essentially nothing. Or worse, you got value out of it, but that value never transferred to other departments or functions.
And here’s the thing: impact failure is preventable 70% of the time. (The other 30% is likely due to reasons outside your control.)
From the consultant’s perspective: clients aren’t buying research
If you’re a consultant or agency selling research, here’s the first truth bomb you need to internalize:
Clients aren’t hiring research for the sake of research.
They’re hiring research in service of progress in whatever program or function they’re trying to improve. Clients are ultimately focused on outcomes—improving customer acquisition, increasing retention, figuring out pricing, whatever. They’re not hiring you because “knowing your customers” sounds cool.
Research is a means to an end, not the end itself.
The disconnect happens when consultants pitch research like it’s the product, when actually the client is buying the outcome. You’re selling the research, but they’re buying the end result.
Think about it: most teams aren’t conducting research for fun. There’s always a reason. Your job as the consultant is to understand what that reason is, make sure you’re aligned with their goals, and ensure no one feels like it’s a waste because everyone knows what the ultimate objectives were.
Screening for clients who will actually use your work
I’ve heard from plenty of consultant friends about clients who hire them just to check a box. Or because “everyone’s hiring a research consultant, so we should too.” These projects almost never go anywhere.
But here’s the good news: you can screen for this upfront.
During discovery, I ask questions like:
- Have you ever done a project like this before?
- What happened in that project? What did you execute after?
- What types of projects have you executed in the past to get the results you were looking for?
If a team has no history of conducting research and no history of successfully troubleshooting problems and iterating, that’s a yellow flag. It doesn’t mean don’t work with them—it means you need to set expectations differently.
I look for one of two things:
- A history of deploying research insights (even if it’s messy)
- A history of problem-solving and iteration in general
Teams that know how to troubleshoot their own issues and have successfully tested, learned, and adapted—those teams will likely apply your research insights to their problem-solving process, even if they’ve never formally done “research” before.
The person who hires you matters
Who’s actually hiring you? That matters more than you think.
I typically work with executive leaders, CEOs, or founders because those are the people with enough internal influence to actually move things. When someone more junior with less influence hires you, you might deliver great insights, but they may not have the power to implement anything or influence anyone.
There are exceptions, of course—sometimes you’re essentially staff augmentation, working hand-in-hand with someone to create impact together. But if you’re tackling something large that requires buy-in from multiple executives, entering at the wrong level can doom the project from the start.
Ask yourself: does this person have enough influence, power, and leeway to actually facilitate change with what I’m going to provide? And if not them, then who would you need to get in front of?
Taking clients from A to B (not A to Z)
My business coach Charlie once told me something that completely shifted how I approach consulting:
“If your client is at point A and you’re all the way at point Z, your job is to get them from A to B first. Then B to C. Then C to D. They can’t go with you to Mars if you can’t first get them to zero gravity.”
You can’t expect clients to just “get it” if you’re not willing to lead them there.
Some founders have never thought about growth as loops versus funnels. Some teams have only ever focused on marketing and sales, not realizing that exponential growth requires investing in activation, retention, pricing, and operations too.
If you see a massive opportunity—say, in pricing optimization—but the team has their eyes 100% on acquisition, you might need to educate them first. Show them the data. Walk them through case studies. Send them resources. Help them see what you see.
Not every client will care, and that’s okay. But the ones who are open and willing to work with you? They deserve your time and energy in educating them.
Getting buy-in: the make-or-break factor
Let’s talk about buy-in because this is where most research projects fall apart.
There are two levels of buy-in you need:
1. Buy-in on the work itself
Your primary project sponsor needs to see that the research you’re doing is in service of an outcome they care about. And they need to understand that they probably won’t achieve that outcome efficiently (or at all) without doing this work.
But here’s the thing: it’s usually not just one person who needs to be bought in.
It’s that person’s team, their peers, possibly their boss. You have multiple levels of influence to manage.
Most clients won’t think of this on their own. They won’t proactively say, “You should probably talk to my VP and a few members of my team and the head of product.”
You need to pitch this to them.
Say something like: “I’d like to talk to the people this work will eventually touch. Who else should be part of this conversation?”
When you position it that way, your project lead will usually realize, “Oh, actually yes—this work is going to touch these people in these ways.”
If you’re only ever presenting insights to your one project sponsor, and you’re depending on them to distribute the findings to everyone else, you’re setting yourself up for failure. Unless they’re working hand-in-hand with you through the entire process, they’re never going to position your work the same way you would.
2. Buy-in through the research process itself
Here’s something I learned from Bob Moesta, co-architect of Jobs to Be Done: insights are not truly absorbed unless you’re part of the process.
We used to be very anti-client-on-the-call. We thought it would bias the interview. And yes, if you’re Rand Fishkin or some other highly influential person in your space, you absolutely will bias the interview. People will be too nice. They won’t give you critical feedback.
But for most people? The bias doesn’t affect the interviews that much.
I won’t say it has zero impact, but it’s not big enough to change our insights.
Even when we conduct research on behalf of clients, interviewees still kind of assume we’re part of the company anyway. They’ll say “your product” even though we have no idea what they’re talking about. The perceived separation isn’t as strong as we think.
And here’s what we’ve learned after years of doing this both ways:
When people aren’t part of the research process, they simply don’t absorb it the same way.
You can record Zoom calls all you want. Nobody listens to them. It’s the same thing that happens when you’re standing in line at the grocery store—you just dissociate and start doomscrolling. It’s really hard for people to be present with a recording versus being on the actual call.
I had someone push back on this once: “But you can attend a call and ignore it too.”
Yes. That’s why you need a debrief immediately after.
The debrief is where the magic happens
After every interview, we do a 20-30 minute debrief with whoever from the client side attended. We do this for a few reasons, but largely to create buy-in, and to create accountability so everyone is encouraged to actively listen.
We break down:
- What we heard
- What stood out to us
- The four forces (Push, Pull, Anxiety, Habit) that emerged
- What Jobs to Be Done surfaced
- Anything else interesting or surprising
This is where buy-in actually gets created. When the client is part of this debrief process, you’re not debating what you heard—they heard it too. You’re collaborating on what it means and what to do about it.
By the fifth interview, you’ve usually already started mapping out next steps together. And because they were there, you don’t have to “sell” them on the findings. They’re already convinced.
Plus, they’re encouraged not to show up empty-handed to every debrief. It’s embarrassing to attend one and have nothing to say because they weren’t really listening.
A cautionary tale: when clients aren’t part of the process
We once worked with a very large enterprise SaaS company. Huge organization. They hired us to conduct research for a few of their products, specifically to help marketing better understand customers and improve activation, messaging, and conversion rates.
The priorities shifted from when we first talked to them—it started as positioning and messaging work, then became more about conversion rates. Fine, that happens. But here’s where it went wrong:
We couldn’t get anyone from the client side to actually attend the interviews.
This was before we started enforcing client attendance and mandatory debriefs. We thought, “We’ll record everything, they can listen later.”
We conducted over 100 interviews. I remembered almost every single conversation because I was there. I was intimately familiar with the data. I analyzed it myself.
But the team? They didn’t attend. They didn’t listen to recordings (because who has time?). And when we emerged from our research “black hole” with all these insights, we had to convince them of what we heard instead of collaborating on what to do about it.
Despite all our preparation, analysis, and presentations, the findings were met with skepticism. Not because the research was bad—because they weren’t part of the process.
The team leader did their best to listen to recordings, but they didn’t have endless bandwidth. And even they had a hard time really absorbing, “Okay, what did we actually hear in this interview?”
We provided clear insights. We showed them how to apply those insights to their work. But the team still struggled because they just weren’t part of it.
That’s impact failure.
From the client’s perspective: when to invest in research
If you’re in-house and trying to figure out when research makes sense, here’s my framework:
Do research when you need to make big decisions or achieve big outcomes that have real risk or opportunity attached to them.
Research—or really, insights gathering—is critical when:
- The decision dramatically impacts your growth trajectory
- There’s high risk if you get it wrong
- There’s high opportunity cost if you miss it
- You’re troubleshooting a problem and nothing has worked
For example: Can you imagine doing pricing work without conducting a single interview or running a survey? You’d just be guessing. Maybe educated guessing, but still guessing.
Or trying to improve activation rates without understanding what’s actually blocking users? You’d just be throwing darts.
Research is fast if you want it to be
One of the biggest myths I hear: “Research takes too long.”
Slowness is a choice.
We recently kicked off a project and sourced research participants in 24 hours using platforms like Respondent.io and User Interviews. Research can happen fast.
Even if you don’t have budget, you can still be scrappy:
- Reach out on LinkedIn
- Post in relevant communities
- Ask your customer success team to facilitate intros
- Offer non-monetary incentives (donations to charity, swag, access to features)
- Just… ask for free (we did this for years and people still said yes)
Think about it like journalism. Journalists aren’t paying people for interviews—people talk because they want to be heard, because they care about the topic, because it matters to them.
You don’t need hundreds of interviews either. For Jobs to Be Done work, Bob Moesta rarely does more than 10-12 per segment. By the fifth interview, you’re already seeing patterns.
A recent example: competitive intelligence that shifted everything
At my fractional CMO client, we recently did competitive intelligence research that completely changed how we thought about a particular department.
We were trying to decide whether to invest more in a specific function (I’m keeping this vague on purpose). We didn’t know if we weren’t doing it right or if we just needed to pivot entirely.
So we interviewed ex-department leaders from competitors—specifically people who hadn’t worked there in a while (we didn’t want them to feel like they were sharing trade secrets). We asked:
- How did you structure your team?
- What were your goals? Did you hit them?
- What did good performance look like?
- How much did you invest in this versus other areas?
We learned how competitors were structuring their sales functions (we’d been struggling with SDR quotas and team size), how much they were investing in marketing (turns out, not a lot), and what they felt their differentiators were (spoiler: nobody had real differentiators).
This research validated that investing in brand was a huge opportunity, that we needed to build toward a clear product differentiator, and that throwing more money at sales might not be how we “win.”
That’s insights gathering in action. We couldn’t have made that decision well in a vacuum.
When research leads nowhere: another cautionary tale
We once worked with a company where we were hired to improve activation rates. The research was clear: new users were struggling with specific UX issues, and the signup flow made it impossible to understand pipeline quality.
The software was a desktop add-on to a Microsoft product, which made tracking incredibly difficult. We identified clear solutions:
- Create browser-based accounts first, then trigger the desktop download
- Add qualification questions during signup to understand who was coming in
- Improve the UX of the actual download and setup process
We did UX research. We watched users struggle. The team acknowledged the problems—they didn’t disagree that issues existed.
But then? They decided not to prioritize any of it.
They took the smallest possible swings. Did the bare minimum. And ultimately, nothing really changed.
Why? I think improving activation felt too daunting. Even though the scope was clearly defined, the CEO had a hard time mentally committing to making changes to the core experience.
The irony? They hired us specifically for growth.
Sometimes hiring a consultant doesn’t wave a magic wand. If you want to see growth, you have to deploy growth. You need a team ready to execute.
That’s the harsh reality. Research can’t force you to act. It can only illuminate the path.
How to make research actually work
Whether you’re the consultant or the client, here’s what needs to be true for research to create real impact:
For consultants:
- Understand what outcome the client is actually buying. Research is in service of something. Know what that something is.
- Screen for clients who will act. Look for a history of problem-solving, iteration, or previous research deployment. If they’ve never done anything like this before, set expectations accordingly.
- Enter at the right level. Work with people who have influence and power to create change.
- Get buy-in from everyone the work will touch. Don’t rely on your one project sponsor to distribute insights—talk to the team yourself.
- Make clients part of the process. Have them attend interviews. Do debriefs after every single one. This is non-negotiable if you want impact.
- Don’t disappear into a black hole. Fast, iterative research beats three-month deep dives that end with a 100-page report nobody reads.
For clients:
- Do research when the stakes are high. Big decisions, big opportunities, big risks—that’s when you need insights, not just dashboards.
- Participate in the research. Attend interviews. Join debriefs. You won’t absorb it the same way from recordings.
- Be ready to act. Don’t commission research unless you’re actually willing to execute on what you find. Otherwise, you’re just wasting money.
- Use research to make numbers talk. Quantitative data tells you what is happening. Qualitative research tells you why. You need both.
- Move fast. Research doesn’t have to take months. You can source participants in 24-48 hours. You can get meaningful insights from 5-10 interviews.
- Build insights gathering as a muscle. Teams that do this well make better decisions faster. They challenge assumptions. They don’t guess when they can know.
The bottom line
Research projects fail when they’re treated as the end goal instead of a means to an end.
They fail when clients aren’t part of the process.
They fail when there’s no clear outcome the research is in service of.
They fail when teams aren’t ready to actually do anything with the insights.
But when research is done right—when it’s fast, collaborative, and directly tied to outcomes that matter—it becomes one of the most powerful tools in your growth toolkit.
The question isn’t whether you should do research. The question is: what decision are you trying to make, and what insights do you need to make it well?
Answer that, and you’re already halfway there.
Need help designing research that actually drives action? Let’s talk. In just 45 minutes, we can identify the biggest opportunities to explore and what insights you actually need to move forward—book a discovery call.
For more on research methodologies mentioned in this post, check out: