Could this be why your user interviews aren’t giving you real answers?
In our live session, we asked product management coach Tim Herbig a question we hear from product teams all the time:
“How do we balance quantitative metrics with qualitative insights to actually measure progress?”
Tim’s answer hits a major challenge we see often: teams doing constant user interviews, yet still struggling to make confident decisions.
Split testing gives you an objective signal, but if you're only sitting next to customers asking, "What feature should we build?", you're not collecting valid data or making real progress.
The real work is identifying the core question we need answered, and choosing the data source best equipped to get us to informed conviction.
Watch the full session on our events page or YouTube for more on real progress, strategy, OKRs, and discovery.
In Tim's own words: "I think it comes down to something simple like split testing. Split testing is so amazing because it's science. It's not subjective; you can discuss it, but there's an objective answer. You just have to have the prerequisites in place, right? You can run user interviews every week, but if all you do is sit next to customers and ask them what features you should build, you're not making real progress and not getting valid data. So it comes back to this core question: what is the thing I need an answer to, and which data source is best equipped to deliver the data that gets me to informed conviction? That's the way I see it. As Allen said, there's no single answer, unfortunately, but on the bright side, that gives your product teams much more flexibility, and the possibility to be accountable for when to take a decision based on certain pieces of data."
Thank you so much for having me on - love to relive these moments.