How to Use Customer Research for SaaS Feature Prioritization

At a previous company, we faced a problem that plagues most high-growth SaaS businesses: everyone had strong opinions about which features to build, but nobody really knew what customers actually valued. Developers wanted features that would deliver serious value but that customers might not understand or realize they needed. Support pushed for fixes that would reduce tickets. Product had a vision for innovation. Free users demanded parity with competitors. And paying customers? They often stayed quiet until renewal time.

So we built a research framework that changed everything. We surveyed different customer groups (paying customers, free users and non-users in our target market) asking them to rank both current features and potential future features. We segmented respondents into personas: agencies, large businesses, SMBs and hobbyists. That research became our compass for years, guiding decisions about which features should be free versus paid, which developments to prioritize on our roadmap and which new concepts deserved exploration. The framework didn’t just inform decisions. It settled arguments with data.

This approach isn’t unique to one company. The best early-stage, high-growth SaaS companies use systematic feature research to avoid building the wrong things. Here’s how to implement it yourself.

The Feature Paradox: Why SaaS Companies Build the Wrong Things

SaaS product teams waste enormous resources building features nobody wants. The problem isn’t lack of ideas. It’s having too many competing priorities with no objective way to choose. According to research from Mind the Product, 49% of product managers don’t know how to prioritize without customer feedback, and their biggest challenge is “prioritizing the roadmap without market research.”

The consequences compound quickly. Teams prioritize based on gut reactions, chase feature parity with competitors or respond to whoever yells loudest (usually the largest customer or most persistent sales rep). Meanwhile, the silent majority of users either struggles with features that don’t address their needs or churns without explanation.

The paradox: as your product grows, you get more feedback, but the signal-to-noise ratio gets worse. A support ticket about one missing feature might represent one frustrated user or a systemic problem affecting thousands. That enterprise prospect’s feature request might unlock a new market segment or distract you from core customers. Without structured research, you’re flying blind.

Early-stage SaaS companies can’t afford this waste. When Kyle Poyar analyzed 450+ software companies for his PLG benchmarks, he found that freemium companies retain only 19% of signups in month one, dropping to 9% by month three. That means nine out of ten new users leave within 90 days, often because the product doesn’t deliver value they care about. Every feature you build that doesn’t address real user needs accelerates this churn.

Feature research solves this by replacing opinions with evidence. But not all research is created equal, and not all customer voices deserve equal weight in your decisions.

The Three-Audience Problem Most SaaS Companies Ignore

Most SaaS companies make a critical mistake: they treat “customers” as a monolithic group. In reality, you serve at least three distinct audiences with conflicting needs and different value perceptions. Understanding these groups separately is essential for smart feature decisions.

The three audiences are: paying customers, free users (or trial users) and non-users in your target market. Each group has unique behaviors, motivations and relationships with your product. Research from Userpilot shows that companies combining multiple segmentation methods (such as company size plus use case plus lifetime value) gain far more actionable insights than single-variable segmentation.

Paying customers reveal what’s valuable enough to exchange money for. They’ve crossed the psychological barrier of pulling out a credit card, which means they see real ROI. Their feature requests typically focus on depth. They want to do more, go faster or solve adjacent problems. But paying customers can mislead you too. They often request features that would benefit only their specific use case, not your broader market.

Free users demonstrate what attracts people and gets them engaged, but they haven’t found sufficient value to convert. This group is critical for two reasons: they represent your conversion opportunity, and they show you where your free offering fails to demonstrate value. When Intercom redesigned their Messenger product, they conducted eight rounds of usability testing specifically focused on first-time user experiences. The result? They eliminated the reflexive closing behavior where users would dismiss messages on sight, achieving a 200% improvement in positive anecdotes about the App UI.

Non-users in your target market expose blind spots in your positioning and feature set. They chose a competitor or alternative solution, which means something about your product didn’t resonate. This audience is hardest to reach but provides invaluable perspective on competitive positioning and unmet needs in your market.

Beyond these three audiences, behavioral personas provide a second layer of segmentation that reveals what customers actually do versus what they say they want. The most effective approach combines audience type with personas based on company size, role or use case. At my previous company, we found that agencies and large businesses often wanted opposite things: agencies needed white-label options and multi-client management, while large businesses wanted deep integration with enterprise systems. Without segmenting by persona, aggregate data would have hidden these conflicting priorities.

Cognisaas recommends a dynamic 2×2 framework that plots customers by current value versus growth potential, creating four segments: Grow, Protect, Re-evaluate and Manage. For early-stage SaaS companies, the “Grow” segment (recently onboarded customers with high expansion potential) should receive 80% of your attention. These customers are most likely to provide feedback that indicates future mainstream needs.

The key insight: different audiences and personas should influence different types of decisions. Paying customers in your “Grow” segment should heavily weight roadmap priorities. Free users should inform conversion optimization and onboarding improvements. Non-users should guide positioning and competitive feature decisions. When you survey all three audiences with persona segmentation, patterns emerge that clarify what to build and what to skip.

Building the Research Framework

Creating an effective feature research framework doesn’t require expensive consultants or complex statistical models. The core methodology is surprisingly simple: survey different customer groups about both current features and potential future features, then analyze patterns across audience types and personas.

The foundation uses what IBM Design calls a mixed-methods approach, combining qualitative interviews with quantitative surveys. Start with 10-15 customer interviews to surface language, pain points and feature ideas you haven’t considered. These conversations ensure your survey includes features “not in discussion for the product team originally,” as IBM’s research team discovered when working with Watson Media.

For the quantitative survey, the Kano Model provides an elegant structure. Ask two questions about each feature: “How would you feel if you HAVE this feature?” and “How would you feel if you DON’T HAVE this feature?” Response options range from “I like it” to “I dislike it” to “I expect it” to “I’m neutral.” This dual-question format categorizes features into five types: must-be (expected basics), performance (proportional satisfaction), attractive (unexpected delights), indifferent (no impact) and reverse (better without them).
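The dual-question logic can be sketched as a small classifier. This is a minimal sketch in Python, collapsing the standard five-option Kano evaluation table down to the four response options mentioned above; the category names follow the article:

```python
from collections import Counter

def kano_category(have: str, dont_have: str) -> str:
    """Classify one respondent's answer pair.
    `have` / `dont_have` answer "How would you feel if you HAVE / DON'T HAVE
    this feature?" -- one of: like, expect, neutral, dislike.
    (Simplified from the standard 5x5 Kano evaluation table.)"""
    if have == "dislike" and dont_have in ("like", "expect"):
        return "reverse"        # better without it
    if have == "like" and dont_have == "like":
        return "questionable"   # contradictory answers; usually discarded
    if have == "like" and dont_have == "dislike":
        return "performance"    # satisfaction scales with the feature
    if have == "like":
        return "attractive"     # unexpected delight
    if dont_have == "dislike":
        return "must-be"        # expected basic
    return "indifferent"        # no impact either way

def classify_feature(answer_pairs):
    """Take the most common category across all respondents for one feature."""
    counts = Counter(kano_category(h, d) for h, d in answer_pairs)
    return counts.most_common(1)[0][0]
```

For example, a feature where most respondents answer ("like", "dislike") classifies as a performance feature even if a few respondents are indifferent.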

The survey structure should include three sections: current feature rankings, future feature rankings and demographic/persona questions. For current features, ask respondents to rate importance and satisfaction on 1-10 scales. This creates what Product School calls Opportunity Scoring. Features with high importance but low satisfaction represent your biggest opportunities for improvement. For future features, use the Kano dual questions or simpler ranking exercises where respondents distribute points across options like a budget.
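One common way to turn those 1-10 ratings into a ranked opportunity list is Ulwick's opportunity-score formulation (importance plus the unmet-satisfaction gap). A sketch, with hypothetical feature names and survey averages:

```python
def opportunity_score(importance: float, satisfaction: float) -> float:
    """Ulwick-style opportunity score on 1-10 ratings: importance plus the
    satisfaction gap, never rewarding over-satisfaction."""
    return importance + max(importance - satisfaction, 0)

# Hypothetical (importance, satisfaction) survey averages.
features = {
    "advanced reporting": (8.6, 4.1),  # important but underserved -> big gap
    "mobile app":         (6.2, 5.8),
    "onboarding":         (9.0, 8.7),  # important but already satisfying
}
ranked = sorted(features, key=lambda f: opportunity_score(*features[f]), reverse=True)
# "advanced reporting" ranks first even though "onboarding" has higher raw importance
```

The point of the max() clamp is that a feature customers are already delighted by shouldn't score lower than its importance just because satisfaction exceeds it.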

Persona segmentation questions should appear at the end to avoid priming responses. Include company size (or project size for hobbyists), role, industry and use case. For SaaS products, behavioral segmentation often matters more than demographics: How frequently do they use your product? Which features do they use most? What’s their primary job-to-be-done? As Userpilot’s research on behavioral segmentation demonstrates, power users and light users often want completely different things from the same product.

Distribution strategy matters as much as survey design. Send the survey to all three audiences, but expect different response rates. Paying customers typically respond at 15-25%. Free users respond at 5-10%. Non-users (who you’ll need to recruit via ads, communities or lead lists) respond at 2-5%. Plan sample sizes accordingly. You want at least 50 responses per major persona to identify statistically significant patterns.
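Under those assumed response rates, the outreach required per channel is easy to back out. A quick sketch, using the low end of each rough range above (the rates are planning assumptions, not guarantees):

```python
import math

def outreach_needed(target_responses: int, response_rate: float) -> int:
    """How many people you must contact to expect `target_responses` replies."""
    return math.ceil(target_responses / response_rate)

# 50 responses per persona at the low end of each channel's rate:
paying   = outreach_needed(50, 0.15)  # customer emails to send
free     = outreach_needed(50, 0.05)  # free-user messages to send
nonusers = outreach_needed(50, 0.02)  # ad-driven survey visitors to attract
```

Running the numbers this way before launch tells you whether your free-user base or ad budget can realistically hit the 50-per-persona threshold.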

Ideally you should run this survey annually, eventually building a dataset with hundreds of responses across four personas and three audience types. But you don’t need years of data to benefit. Even a single survey with 120 responses (the sample size IBM used) provides enough signal to make confident prioritization decisions.

The final framework component: create a simple scoring model that weights responses by audience and persona importance to your business. If enterprise customers represent 80% of revenue, their responses should carry more weight than hobbyists. If converting free users is your top priority, weight their responses accordingly. This weighting transforms raw survey data into actionable prioritization scores.
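One minimal way to implement that weighting is a weighted average of per-segment mean ratings. The segment names and weights here are hypothetical; in practice you would derive them from revenue share or strategic priority:

```python
# Hypothetical weights reflecting each persona's importance to the business.
WEIGHTS = {"enterprise": 2.0, "agency": 1.5, "smb": 1.0, "hobbyist": 0.5}

def weighted_score(ratings_by_segment: dict[str, float]) -> float:
    """Weighted average of per-segment mean ratings for one feature."""
    total = sum(WEIGHTS[seg] * rating for seg, rating in ratings_by_segment.items())
    weight = sum(WEIGHTS[seg] for seg in ratings_by_segment)
    return total / weight

# A feature enterprises love but hobbyists don't gets pulled up, not down:
score = weighted_score({"enterprise": 8.0, "hobbyist": 2.0})  # (16 + 1) / 2.5 = 6.8
```

Dividing by the sum of weights (rather than the respondent count) keeps scores comparable across features that different segments skipped.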

What the Data Reveals (Using Anonymized Patterns)

When you analyze feature research across different audience types and personas, three patterns consistently emerge that challenge conventional product wisdom. These patterns explain why companies without systematic research so often build the wrong things.

Pattern one: paying customers and free users want fundamentally different feature progressions. Paying customers prioritize depth and efficiency. They want features that help them work faster, integrate with existing tools or unlock advanced capabilities. Free users prioritize breadth and ease. They want basic functions to work flawlessly and appreciate features that demonstrate immediate value. One company might see paying customers rank “API access” and “advanced reporting” in their top five, while free users rank “better onboarding” and “mobile app” highest. Without segmented research, you’ll either over-build for one group and alienate the other, or try to satisfy everyone and satisfy no one.

Pattern two: the features customers claim are most important often aren’t the features that drive conversion or retention. This disconnect appears repeatedly in research. Customers say they need feature parity with competitors, but conversion data shows they choose products based on ease of use and specific workflow fits. Intercom’s Messenger redesign research revealed this perfectly: the team initially focused on adding features, but usability testing showed that people were closing messages “almost on reflex” because the interface was cluttered. The solution wasn’t more features. It was radical simplification that made existing features discoverable.

This is why mixed-methods research matters. Qualitative interviews reveal what customers do and why, while quantitative surveys reveal what they think they want. When these conflict (and they often do), behavioral data should win. As one product leader at Captain Experiences puts it: “If you aren’t using qualitative and quantitative research to inform your product strategy, you will fail.”

Pattern three: different personas need different feature sets, and trying to serve all personas equally dilutes your product. At my previous company, agencies consistently ranked white-label features and multi-site management tools highly, while small businesses barely cared about these capabilities. Large enterprises wanted SSO and advanced permissioning, which hobbyists saw as unnecessary complexity. The data made clear that we couldn’t be everything to everyone. We needed to choose primary personas and accept that secondary personas would find some features irrelevant.

The most valuable insight from research often isn’t which features to build. It’s which features NOT to build. HubSpot co-founder Brian Halligan learned this the hard way: early HubSpot said yes to nearly everything, resulting in “half-baked projects all over the place” that nearly sank the company. Learning to say “no” saved the business. Feature research provides objective criteria for those “no” decisions.

Another revealing pattern: features that are “attractive” (delighters) for one persona are often “indifferent” or even “reverse” (causing dissatisfaction) for another persona. A feature that wows enterprise customers might intimidate small businesses. An automation that helps agencies might confuse hobbyists. Kano analysis, when segmented by persona, reveals these conflicts and helps you decide whether to build persona-specific experiences, provide simplified modes or skip features that help one group but hurt another.

Finally, the data often reveals opportunity gaps that nobody requested explicitly. When customers rate current features low on satisfaction but high on importance, they’re signaling pain without necessarily articulating solutions. These gaps represent your best opportunities for differentiation. Intercom discovered this when they noticed customers were frustrated with business messengers being annoying and interruptive. Nobody asked for “a messenger as familiar as consumer messengers,” but that insight led to their successful redesign.

Applying the Research: Three Strategic Uses

Feature research becomes powerful when you apply it systematically to three distinct decision types: determining which features should be free versus paid, prioritizing roadmap development and deciding which new concepts deserve deeper exploration. Each use case requires different analytical approaches and different weights for audience segments.

Free vs. Paid Feature Decisions

The freemium model’s central question (which features to give away and which to charge for) has extraordinary business consequences. Patrick Campbell, founder of ProfitWell, emphasizes that “freemium is an acquisition model, NOT a revenue model.” Companies implementing freemium see 50% lower customer acquisition costs and nearly double the NPS scores compared to sales-acquired customers, but only if they get the free-paid boundary right.

Feature research clarifies this boundary by revealing three critical data points: which features free users rank highest (showing what attracts people), which features paying customers rank highest (showing what drives conversion) and which features differentiate between high-value and low-value customers. At my previous company, we found that certain security features appeared in free users’ top five but ranked even higher for paying customers. This pattern suggested these features were “performance” features in Kano terms. They scale satisfaction proportionally, making them perfect upgrade triggers.

The framework Kyle Poyar outlines in his PLG research provides specific benchmarks: freemium products typically convert 5% of free users to paid, while free trials convert 17%. But he notes that “multi-player” products (those requiring team collaboration) achieve 80% retention compared to 40-60% for single-player products. This insight transforms feature decisions: if your research shows team collaboration features rank highly across personas, those should be upgrade triggers because they improve retention economics.

The best approach: free features should demonstrate core value but reserve depth, scale and collaboration for paid tiers. Your research will show where these boundaries naturally fall for your specific product. Features that rank as “must-be” (basic expectations) should generally be free. Charging for them creates friction. Features that rank as “attractive” (delighters) can be paid if they target high-value personas, but might be better as free features if they drive viral growth.

Roadmap Prioritization

Roadmap decisions require balancing what customers want with strategic business goals. Feature research provides one input (customer demand), but smart prioritization weights this against reach, implementation effort and confidence levels. The RICE framework from Intercom provides a proven formula: (Reach × Impact × Confidence) ÷ Effort.

Translate your research into RICE scores by mapping survey rankings to impact scores (high-ranking features score 3, medium score 1.5, low score 0.5), estimating reach based on how many customers in each segment use related features and setting confidence based on sample sizes and consistency across respondents. If agencies and large businesses both rank a feature highly but with different use cases, that might warrant 80% confidence. If only one small persona cares, drop confidence to 50%.
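That translation can be sketched in a few lines. The RICE formula is Intercom's as stated above; the impact mapping follows the text, while the feature names, reach figures, confidence values and effort estimates are hypothetical:

```python
# Survey rank tier -> RICE impact score, as described in the text.
IMPACT = {"high": 3.0, "medium": 1.5, "low": 0.5}

def rice(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE score: (Reach x Impact x Confidence) / Effort."""
    return reach * impact * confidence / effort

# (name, users reached per quarter, survey rank tier, confidence, person-weeks)
candidates = [
    ("api access", 400, "high", 0.8, 5),
    ("mobile app", 1200, "medium", 0.5, 12),
]
scored = sorted(
    ((name, rice(r, IMPACT[tier], conf, effort))
     for name, r, tier, conf, effort in candidates),
    key=lambda pair: pair[1],
    reverse=True,
)
# api access: 400 * 3.0 * 0.8 / 5 = 192.0; mobile app: 1200 * 1.5 * 0.5 / 12 = 75.0
```

Note how the smaller-reach feature wins here: a high survey rank and high confidence more than offset reaching a third as many users.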

The frameworks covered by Product School and Productboard offer alternatives when RICE doesn’t fit. Value vs. Complexity plotting works well for early-stage products with limited resources. Anything in the “high value, low complexity” quadrant becomes a quick win. MoSCoW (Must have, Should have, Could have, Won’t have) helps communicate priorities to stakeholders by categorizing features based on research rankings combined with strategic importance.

The critical insight: research shows customer demand, but you must apply strategic filters. Features that rank highly with low-value personas might go in the backlog. Features that rank moderately but align perfectly with your market positioning might jump to the top. At my previous company, certain enterprise features consistently ranked in the middle for overall importance, but because enterprise customers represented our highest lifetime value, we weighted these features more heavily in roadmap decisions.

Hotjar’s research emphasizes using customer insights to decide what NOT to do: “Be ruthless about removing low-priority items” but keep them in a repository. Features that rank low today might become important as your product and market mature. The annual survey cadence helps you track these shifts. A feature that was “indifferent” two years ago might become “must-be” as your market evolves.

Concept Exploration

The third strategic use (deciding which new concepts to explore) requires analyzing research data differently. Instead of looking for high-ranking features, look for patterns, pain points and gaps that suggest new directions. This is where qualitative interviews combined with quantitative validation becomes essential.

When Slack conducted their redesign research, they didn’t ask customers “do you want a simpler interface?” Instead, they observed through benchmarking studies that new users couldn’t complete basic tasks. They tested people who had never used Slack and watched them struggle to find features “buried away in odd places.” This behavioral observation led to their radical simplification concept, which then required multiple rounds of testing to validate.

Feature research guides concept exploration in three ways. First, low satisfaction scores on current features signal opportunities for reimagining rather than incrementing. If customers rank a feature as important but satisfaction is low, don’t just improve it. Question whether the underlying approach is wrong. Second, look for “attractive” features that excite specific personas but haven’t been built yet. These often represent white space in your market. Third, analyze patterns across features to identify systemic needs rather than individual requests.

At my previous company, we noticed that multiple feature requests from agencies all related to client communication and reporting. Rather than building each requested feature separately, we explored a more comprehensive client management concept that addressed the underlying job-to-be-done. The research didn’t tell us to build this. It provided the evidence that agencies struggled with a specific workflow, which justified deeper exploration.

The key is using research to identify promising directions, then conducting follow-up studies to validate specific solutions. Intercom’s approach to maximizing user research across the product lifecycle demonstrates this: they use “lean research” for new concepts (embedding researchers with the team to test hypotheses quickly) and “iterative research” for established products that need refinement. For concept exploration, lean research enables you to put six different variations in front of customers before committing to full development, as they did with Smart Campaigns.

Implementation Guide

Building your own feature research framework requires six steps that can be completed in 30-45 days, even with limited resources. The goal isn’t perfection. It’s creating a repeatable process that provides better information than gut feel and stakeholder politics.

Step one: Define your personas and audience segments (week 1). Start by listing your current customer segments by company size, role or use case. At minimum, distinguish between power users and casual users, but aim for 3-5 meaningful personas. Then identify how you’ll reach all three audience types: paying customers (email your customer list), free users (in-app messages or email) and non-users (paid ads to competitor keywords, industry communities or purchased lists). Don’t overthink segmentation. You can refine it after your first survey reveals which distinctions actually matter.

Step two: Conduct qualitative interviews (weeks 1-2). Schedule 10-15 customer conversations spanning your personas and audience types. Use a consistent interview guide covering their goals, frustrations with current features and wish list for future capabilities. The IBM Design team emphasizes that these interviews help surface features not initially considered by the product team. Record and transcribe these sessions to identify common themes and language patterns.

Step three: Design your survey (week 3). Build three sections: current feature rankings using the Opportunity Scoring approach (rate importance and satisfaction 1-10), future feature rankings using either Kano dual questions or simpler “distribute 100 points across these options” exercises and persona/demographic questions at the end. Include 8-12 current features and 8-12 future feature concepts. More than that creates survey fatigue; fewer than that limits usefulness. Test your survey with 3-5 friendly customers before broad distribution.

Step four: Distribute and collect responses (weeks 3-4). Email paying customers with a personal message from a founder or product leader. This improves response rates significantly. Target 15-25% response rate from paying customers, 5-10% from free users. For free users, consider incentives like extended trials or feature unlocks. Run ads to reach non-users, targeting competitor keywords or industry terms. Budget $500-1000 for 50+ non-user responses if using paid acquisition. Close the survey after 2 weeks or when you hit target sample sizes (50+ per major persona).

Step five: Analyze and visualize results (week 5). Create Opportunity Score charts plotting importance versus satisfaction for current features. Anything in the top right quadrant (high importance, low satisfaction) represents priority improvements. For future features, calculate average scores by persona and audience type. Create heatmaps showing which features each segment values most. Look for patterns: Do all personas agree on must-have features? Where do they diverge? Which features do paying customers rank much higher than free users?
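The heatmap step reduces to computing a per-segment mean for every feature; the cells below are what you would feed into a plotting tool. A minimal sketch without plotting libraries (the segment and feature names are hypothetical):

```python
from collections import defaultdict

def segment_feature_means(responses):
    """responses: iterable of (segment, feature, rating 1-10) tuples.
    Returns {(segment, feature): mean rating} -- the cells of the heatmap."""
    cells = defaultdict(list)
    for segment, feature, rating in responses:
        cells[(segment, feature)].append(rating)
    return {key: sum(vals) / len(vals) for key, vals in cells.items()}

means = segment_feature_means([
    ("agency", "white-label", 9),
    ("agency", "white-label", 7),
    ("smb", "white-label", 3),
])
# agencies average 8.0 on white-label while SMBs average 3.0 -- a divergence
# that aggregate (unsegmented) data would have hidden
```

Scanning rows of this matrix answers the questions above directly: segments that agree produce uniform columns, while conflicting priorities show up as high-variance cells.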

Step six: Build your prioritization framework (week 5). Combine survey data with strategic weights. If enterprise customers are your focus, multiply their scores by 1.5-2x. If converting free users matters most, weight their input heavily for features that enable basic workflows. Use RICE, Value vs. Complexity or Weighted Scoring to create a final prioritization rank. Document your methodology so you can refine it for future surveys.

Common implementation challenges and solutions: Low response rates plague most surveys. Combat this with incentives, personal outreach from leadership and keeping surveys under 10 minutes. If you can’t reach enough non-users, focus on paying customers and free users first. Their perspectives alone provide enormous value. If different personas want conflicting things, that’s a feature, not a bug. The research reveals that you need different experiences or must choose a primary audience.

The Productboard team notes that without research, product teams prioritize based on gut reactions, feature popularity or support requests, all of which lead to suboptimal products. Your framework doesn’t need to be sophisticated to beat these alternatives. Even a simple survey with 100 responses provides more signal than stakeholder opinions alone.

After your first research cycle, commit to annual surveys to track how preferences evolve. At my previous company, we ran this annually for years, building a rich dataset that revealed trends: security features became more important over time as our market matured, while certain advanced features that seemed critical early on became table stakes later. This longitudinal data helped us anticipate market shifts rather than react to them.

Conclusion

Feature research transforms product decisions from political negotiations into data-driven strategy. The framework isn’t complicated: survey different customer groups about current and future features, segment by persona and audience type and apply the results systematically to free-paid boundaries, roadmap prioritization and concept exploration. But the impact is profound. You replace opinions with evidence, build what customers actually value and avoid the waste that kills so many early-stage SaaS companies.

The companies profiled here (Slack, Intercom, IBM and others) all reached the same conclusion: systematic customer research isn’t a nice-to-have for SaaS product teams, it’s essential infrastructure. When Intercom put their Messenger redesign through eight rounds of testing, they weren’t being perfectionist. They were being scientific. When they embedded researchers directly in product teams, they weren’t adding bureaucracy. They were ensuring every feature decision had evidentiary support.

Your implementation doesn’t need to match these companies’ sophistication to deliver value. Start with a single survey of 100+ respondents across your key personas. Use it to make one consequential decision: which feature should be free versus paid, which roadmap item gets built next quarter or which new concept deserves exploration. Then measure the outcome. Did the feature perform as predicted? Did customers respond as the research suggested? Use these results to refine your approach for the next cycle.

The alternative (building features based on who yells loudest, copying competitors or trusting founder intuition alone) might work occasionally. But it fails systematically over time as your product and market grow more complex. Early-stage SaaS companies can’t afford that waste. You’re racing to find product-market fit, achieve profitability and differentiate in crowded markets. Every feature you build that doesn’t deliver value accelerates your demise.

At my previous company, this research framework became our competitive advantage. While competitors debated features in conference rooms, we had data showing exactly which capabilities mattered to which customers and why. Sales conversations became easier because we could confidently explain our feature decisions. Product strategy became clearer because we knew which capabilities drove conversion and retention. And years later, the framework still worked because we updated it annually, tracking how customer preferences evolved with the market.

Your framework will look different because your product, market and customers are unique. But the core principle remains: ask the right customers the right questions, segment their responses intelligently and apply what you learn systematically. Do this well, and you’ll build products that customers actually want, which turns out to be the best path to SaaS growth there is.


Sources and Further Reading

Mixed methods research for more effective feature prioritization – IBM Design

5 product feature prioritization frameworks and strategies – LogRocket

Customer Segmentation: The Ultimate Guide for SaaS – Userpilot

Behavioral Segmentation in SaaS – Userpilot

Product Prioritization Frameworks – Productboard

Designing the future of Slack with customers – Slack

Maximizing user research across a product’s lifecycle – Intercom
