You ship a new feature and 10% of your customer base uses it. You thought it would solve a real problem. Customers said they wanted it. But now it’s sitting there, mostly ignored.

Here’s what happened. You skipped problem space discovery and went straight to building a solution. Most early-stage SaaS companies do this. They take snippets of customer feedback at face value, build what was requested, and wonder why adoption is terrible.

The issue isn’t that you’re building. The issue is you’re building before you understand the actual problem. There’s a difference between idea validation (testing if a solution works) and product discovery (understanding user stories within a problem space before you define solutions). Most teams skip the second part entirely.

Problem space vs. solution space

There’s problem space and there’s solution space. This concept comes from Newell and Simon in 1972, popularized by Eric Ries in The Lean Startup.

The problem space is understanding what problems exist within a particular market. I think about problems as the jobs people are trying to accomplish. In business contexts, problems sound like “my tasks are spread across disparate systems” or “I don’t know what questions I should be asking when I pull reports.”

The more you understand the context of a problem, the better you’ll be at defining solutions. There are many different solutions you could produce, but likely only a handful will be adopted and stick.

What I’m observing right now is how many teams skip right over problem space thinking and jump straight into solution space without really questioning what they’re building.

What’s happening in most early-stage product teams

In most early-stage product functions where the founder is also the head of product, discovery isn’t happening. Teams operate on snippets of customer feedback taken at face value. “I want the report to have this filter.” “I want to be able to send text messages, not just emails.” These requests get built without questioning whether they’ll support growth or retention.

Some of these will be safe bets. But without a product management process, you can’t distinguish between what will move metrics and what will sit unused.

Many teams think discovery means asking “What do you struggle with?” or “What don’t you like about the product?” But customers aren’t product managers. They can’t tell you what to build. If you’re building for developers or product people (like Notion), you might get useful feature requests. For everyone else, you’re putting the burden on the customer to design your product.

Two ways to approach discovery

Observational studies. You observe users accomplishing a task and ask questions along the way about what they’re trying to do and why. When we work on activation, we do UX interviews where we watch people sign up. Watching someone struggle through what you thought was an intuitive flow is humbling.

We’re doing pricing interviews right now where qualified prospects navigate the pricing page. What we thought was clear design turns out to be confusing when you watch real people use it. UX interviews surface gaps between your assumptions and reality faster than anything else.

User story interviews. You’re interviewing people based on actualized behavior, not fictitious or hypothetical behavior. This is the key difference.

A hypothetical question sounds like this. “How often do you go to the grocery store?” “Well, I probably go like, usually it’s two to three times a week.” That’s a generalized hypothetical statement. Useless.

Ask about actual behavior instead. “When was the last time you went to the grocery store?” Now I’m remembering the day. Last Wednesday. I picked up a sub sandwich because I had to play tennis. And some sushi. And milk.

See the difference? You’re listening for real behavior, not what people think they usually do. You’re trying to get to what they actually did.

Then ask contextualized questions. How did you know to do that? What triggered that? Was there anything frustrating about that experience?

A real example: budget vs. cash flow

We’re doing pricing interviews for a client. In one interview, I spent 5 to 10 minutes digging into the prospect’s budgeting process because it’s correlated to reporting in this product.

She kept saying “budget.”

I asked “When was the last time you did it? Why did you do it? How did you know that you needed to do that?” People will start with “Oh, well, generally I…” and I push back: “No, no, no. I want to know exactly what you did.”

The answers were very different from what I would have thought. She has all these different things she pulls up. The budget isn’t a budget as you and I would use the word. It’s a number she comes up with fresh every week.

We learned that “budget” means something very different to her than it does to us. She’s not budgeting; she’s managing cash flow. She’s reacting to her cash position, not setting a budget.

From that one user story, I can see 3 to 4 features the product could build that would fit her context. Imagine if we had done 20 of those interviews. Now the solution space starts to get clearer.

The consequence of skipping discovery

Most early-stage product functions skip all of that. They’re like “I’m just going to add this feature to reporting and see if people use it.”

This to me is no different than “I’m just going to build a whole product and just see if people want it.”

If you skip the problem space, you’re going to ship something, pat yourself on the back, and then nobody uses it. They’ll say “That’s a great idea.” And then never use it.

That’s extremely common.

So you ship something and 10% of your customer base uses it. If you’re small, say under $5 or $10 million in revenue, I would be concerned. I would want to see at least 40 or 50%. But 10% is insane. You shipped a new feature and only 10% of people use it. That means you didn’t do enough discovery or validation.

This doesn’t have to take weeks

Discovery doesn’t have to be hundreds of interviews. The best way to do discovery is 15 to 20 minute interviews. Get 5 to 10 of those. Pack the front half of your week: knock them all out on Monday, and spend the next day ideating around the solution space.

Slowness is a choice. We’ve conducted pricing interviews in a matter of hours. Going from zero to five interviews booked takes a day.

You don’t need a million user stories. You need 10, maybe 20. They don’t have to be long. 15 to 20 minutes is all you need. Ideally focused on a product area.

You already have data. If you have support tickets, you already have data on what areas to focus on. Pump that through LLMs and have them tell you where to put your energy. Validate and test the assumptions behind what customers ask for. Some of it is obvious. Some of it’s feature parity stuff you should just have. But for some of it, there’s a deeper story.
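Running tickets through an LLM can be as simple as bundling them into one prompt. A minimal sketch in Python: the ticket texts are made-up examples, and the actual LLM call is left to whichever provider you use, so the sketch only builds the prompt you’d send.

```python
# Hypothetical example tickets; in practice, export these from your
# support tool (Intercom, Zendesk, etc.).
tickets = [
    "Can't export the weekly report to CSV",
    "Report filters reset every time I reload",
    "How do I send text messages instead of emails?",
    "The report filter doesn't save my selection",
]

def build_discovery_prompt(tickets):
    """Bundle raw tickets into one prompt asking an LLM to surface themes."""
    body = "\n".join(f"- {t}" for t in tickets)
    return (
        "Here are recent support tickets. Group them into problem themes, "
        "rank the themes by frequency, and suggest which product areas "
        "deserve discovery interviews:\n" + body
    )

prompt = build_discovery_prompt(tickets)
# Send `prompt` to whichever LLM you use; the response gives you
# candidate problem areas to focus your interviews on.
```

The point isn’t the tooling; it’s that a first pass over existing tickets tells you which product area your 5 to 10 interviews should target.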

Teams either don’t collect user stories at all, or they collect two or three and call it done. That isn’t enough to surface patterns. You need 10 to 20 to see what’s consistent across users versus what’s individual preference.

Surfacing problem stories requires focus. You have to be sensitive to the difference between generalized hypotheticals (“I usually do X”) and actual behavior (“Last Tuesday I did Y because Z happened”).

The best book I can recommend is Continuous Discovery Habits by Teresa Torres. It’s my favorite resource for understanding what the user stories that surface problem spaces sound and look like.

How to fit this into your process

If you’re using the sprint methodology, part of the sprint should include a few days of discovery within your core target areas. The way you strategically prioritize should be based on data.

Chances are you already know your top three to four areas of opportunity in your product. You probably have great ideas for what to do, but they need to be validated using discovery and user stories.

If you already have the idea of building AI features, spend a week talking to 10 customers, collecting their user stories around the specific business problems you’re hoping to solve using AI. Don’t pitch them the AI solution. Talk to them about how they’re currently using reporting and what challenges exist there. The more you understand the problem space, the more you’ll be able to define what the AI feature should do and where it should live.

When you kick off your sprint, maybe you already know your core areas of focus, but maybe you don’t know exactly what the solutions are until you do user discovery. That’s what informs your solution space.

I do feel like sometimes we commit to sprints when clearly this thing over here is dying. We see this with a lot of our teams. The award-winning, growth-creating part of the product is getting no love, while the new shiny thing, probably AI, is getting all the attention. Meanwhile, all of your customers are saying “I just need the report to do this.”

Founders lack confidence in what customers tell them to build because they’re not collecting user stories. If they did that part, they’d uncover deeper patterns that apply to more people.

You don’t want to run the risk of launching something underbaked: people get disappointed, and then they ignore it even after you’ve fixed it.

Where to start

If you’re going to change one thing about how you build product, start with user story interviews. 15 to 20 minutes each. 5 to 10 interviews total. Focus on a product area where you already have data showing problems (support tickets, low adoption rates, customer requests).

This doesn’t take weeks. You can book and complete these interviews in a few days. The insight you gain from understanding actual behavior will save you from building features that sit unused.

Want help figuring out what your customers need? This is something we help with at DemandMaven. Let’s talk: https://demandmaven.io/contact/