Three product management myths affecting customer satisfaction and user adoption

Recently I’ve had the opportunity to study a few software products from the perspective of customer satisfaction and user adoption.

Here are the three myths I found these disparate products, built by different companies, had in common. Together they help explain why these products were not performing as well as expected in their target markets.

Myth #1: If we listen closely enough, customers will offer all the answers on how to create value through our software products.

The reality: Customers are great at telling us about their habits, problems, and aspirations, but not about how best to address their needs with technology.

Example (fictitious, adapted from a real scenario)

Product: Content management platform used by enterprises to store pieces of content (images, videos, text) used in blog posts and marketing campaigns across social media channels.

Popular request from users: The ability to manually tag content pieces so that it’s easier to tell later when and where each piece of content has been used.

What happened once this request was further investigated? A combination of problem interviews and observation of users in action led to a better solution: automated tagging. Rather than building a capability that would rely on users being diligent about tagging pieces of content themselves, a set of automated rules makes flagging and filtering content by various dimensions far more reliable and valuable for customers.

For example, when a piece of content is published from the platform, the system automatically flags it as published, logging the date/time and publishing channel. Subsequent publishing of the same content increments a counter and adds a new log entry with date/time and channel. This way, content publishers composing a new blog or social media post can tell whether a piece of content has already been used recently, and content creators can use the same information to inform their future creative process. Creators and publishers can trust that the publishing status of each piece is up to date, something that would be impossible to guarantee if the system had to rely on users manually tagging published content, as originally requested by customers.
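To make the automated approach concrete, here is a minimal sketch of what the publish-tracking logic could look like. This is illustrative only: the type names, fields, and channel list are my assumptions, not the actual platform’s data model.

```ts
// Illustrative sketch of automated publish tracking; type names,
// fields, and channels are assumptions, not the real platform's model.

type Channel = "blog" | "twitter" | "facebook" | "newsletter";

interface PublishRecord {
  channel: Channel;
  publishedAt: Date;
}

interface ContentItem {
  id: string;
  publishCount: number;
  history: PublishRecord[];
}

// Called by the platform whenever a piece of content is published.
// No user action is required, so the flag and log stay trustworthy.
function recordPublish(item: ContentItem, channel: Channel): void {
  item.publishCount += 1;
  item.history.push({ channel, publishedAt: new Date() });
}

// Lets a publisher composing a new post ask: was this piece used recently?
function usedWithinDays(item: ContentItem, days: number): boolean {
  const cutoff = Date.now() - days * 24 * 60 * 60 * 1000;
  return item.history.some((r) => r.publishedAt.getTime() > cutoff);
}
```

The key design point is that the flag, counter, and history are updated by the system at the moment of publishing, so their accuracy never depends on user diligence.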

Key takeaway: It’s foolish to expect customers to know the best solution for their problem. Listen to their solution ideas, but don’t take them at face value. Use techniques like problem interviews and observation to study the problem space and come up with alternative solutions that address the essence of the problem before deciding on the most appropriate choice.

Myth #2: Being “data driven” greatly increases the chances of product success.

The reality: Quantitative data often fails to provide all the information we need to design the best product or feature.

Example (fictitious, adapted from a real scenario)

Product: Mobile app used to manage and listen to podcasts using a smartphone.

Popular feature: Auto-delete after the user has finished listening to a podcast.

Unintended consequence of the auto-delete feature: Many podcasts have a portion at the end in which the host asks listeners to write a review or reads a sponsor ad. Many users prefer to skip this last portion of each episode. When these users listen to podcasts in bed while preparing to go to sleep, they may wake up in the morning to a result that is the opposite of what was intended:

  • Podcasts the user listened to entirely are still in their downloaded list because the final portion of the recording was skipped.
  • Podcasts the user didn’t finish (or didn’t even start) have been deleted, because the user fell asleep while the app continued to play each episode to the end.

Unless the product manager belongs to this particular user segment, it’s very unlikely that he or she will detect the issue without some serious customer research. Quantitative data may tell us when the app starts losing users (who may be moving to a competitor that does a better job of predicting which podcasts should be deleted or preserved), but it won’t tell us why.
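For illustration, here is a minimal sketch of a completion heuristic that would serve this user segment better. The 90% threshold and the session fields are assumptions I’m making for the example, not any real app’s logic:

```ts
// Sketch of a smarter "finished listening" heuristic. The 90% threshold
// and the session fields are assumptions for illustration only.

interface PlaybackSession {
  episodeDurationSec: number;
  secondsActuallyListened: number; // excludes audio the user skipped over
  userInteracted: boolean; // did the user press play/skip during this episode?
}

function shouldAutoDelete(session: PlaybackSession): boolean {
  const listenedRatio =
    session.secondsActuallyListened / session.episodeDurationSec;

  // Skipping the outro still counts as "finished" (>= 90% listened),
  // while episodes that merely auto-played overnight, with no user
  // interaction, are preserved instead of deleted.
  return session.userInteracted && listenedRatio >= 0.9;
}
```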

Key takeaway: Once again, tools like problem interviews and observation are valuable sources of data about the value product features actually deliver to different user segments. Interviews with non-customers and former customers can also be a powerful way to uncover issues that are hard to detect without an in-depth understanding of the criteria users apply when judging product value.

Myth #3: Once we’ve solved a problem for our users, we can move on to the next problem to solve.

The reality: As systems thinking tells us, our equations hold only until something changes in the system’s structure.

Example (fictitious, adapted from a real scenario)

Product: Another content management system (this time an internal product used by a large organization to publish knowledge articles to its website).

Enhancement: A capability added to allow content creators to label content items.

What happened? Initially, users loved the new capability. Content creators started using labels to flag content for management or legal review, to indicate that it was work in progress or ready to use, to associate the content with a marketing campaign, etc. But because no conventions governed the creation of new labels, the application was soon cluttered with duplicates (“Valentine’s” and “Valentine’s Day”, “legal” and “legal_review”). Finding the right label to apply, or to filter content by, in a dropdown with hundreds of entries became a nightmare, and within a few months feature usage dropped dramatically.

Key takeaway: We can’t always predict how a new product or feature will affect the future system state. The way out of the trap is to treat changes in user behavior over time as useful feedback, and to take corrective action when performance starts to degrade. (In the labeling example, the solutions included creating a separate workflow for things like management and legal review, and folders for saving content related to a specific event or marketing campaign. With fewer use cases requiring labels, the capability became valued again, and user adoption increased substantially.)

 

How do you know when a feature customers are asking for is worth building?

One thing I like about digital products is that this kind of product is never finished. Not only can they continue to change over time; change is often critical to keeping the product relevant as technology and expectations evolve. For example, a web-based ride-sharing application requiring people to plan their ride one day in advance could be extremely popular before the arrival of smartphones, and quickly become irrelevant once it’s possible to open a mobile app and send a request for a ride that a nearby driver can instantly accept.

Yet, something I hear often from startup founders in the digital product space is how uncertain and “gut feeling based” their process for deciding what to build next is. And even founders of highly successful startups who are still in charge of product decisions, when interviewed by podcasters interested in learning their secrets to growth, typically describe their decision-making process as imprecise and opinion-based rather than systematic and evidence-based.

The problem here is that, when you don’t have a solid framework to answer the question “What is the next most important thing to spend engineering time on?”, sooner or later you’ll fall into one of these traps:

  • Ignoring customer input and building an idea the CEO fell in love with, only to discover that nobody is interested in using or paying for it.
  • Meeting customer requests to the letter and ending up with a “me-too” product, or a bloated product that frustrates users, or a new feature that customers said they would love but now renders their well-established workflows useless.
  • Listening to recommendations of “power users” or “early adopters” and creating a product of limited appeal to your larger audience.

In a recent article for LinkedIn, I gave an example of a time when I almost fell into the trap of listening too closely to customer input (I was saved by Eric Sylvia, a colleague from the customer success team who knew better than to accept at face value the feedback received from multiple customers):

I was working as a product manager for a software product that was receiving a significant number of complaints about response time. Users would often write to customer support to express their dissatisfaction with how long a page took to finish loading the first time they opened the application. Some users would also point out that one of our main competitors (which offered the same capability in a free version of their product, making it relatively easy for them to compare) had a much faster load time.

As a product manager with a technical background, my first reaction was to go talk to the engineers to better understand the constraints on making the page load faster. Not Eric! Despite also having a technical background, he didn’t take anything for granted. Eric went to the trouble of setting up the exact same scenario in both our product and the free version of the competitor’s, and timed how long each page took to load.

Turns out that our product loaded faster. Upon further investigation, it became clear that our software created the perception of being slower because of an animated loading icon that remained on display while the system retrieved the content. The competitor’s product simply showed the static elements of the page, with a blank space where the content was being loaded. This is a fantastic example of sweating the right small stuff. Before jumping to the idea of making the page faster, Eric decided to check whether we had identified the right problem to solve–which in fact we hadn’t. In reality, instead of a high-cost, high-effort attempt to reduce latency on a page that had already been optimized for performance, the solution was trivial: just remove the loading icon to eliminate the perception of slowness.
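If you wanted to reproduce the kind of side-by-side measurement Eric made, a browser automation tool makes it easy to time page loads objectively instead of trusting perception. Below is a sketch using Playwright; the tool choice and the URLs are my assumptions for illustration, not what Eric actually used:

```ts
// Sketch: timing page load in two products side by side. Playwright is
// my tool choice for illustration; the URLs below are placeholders.
import { chromium } from "playwright";

async function timePageLoad(url: string): Promise<number> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  const start = Date.now();
  // "networkidle" approximates "the page has finished loading".
  await page.goto(url, { waitUntil: "networkidle" });
  const elapsedMs = Date.now() - start;
  await browser.close();
  return elapsedMs;
}

async function main(): Promise<void> {
  const ours = await timePageLoad("https://example.com/our-page");
  const theirs = await timePageLoad("https://example.com/competitor-page");
  console.log(`ours: ${ours} ms, competitor: ${theirs} ms`);
}

main();
```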

Anthony Ulwick, in his HBR article “Turn Customer Input into Innovation”, offers another illustrative example of the threats facing companies that don’t know how to interpret customer feedback. His example involves a physical product, but the consequences are equally visible in digital products:

There are several concrete dangers of listening to customers too closely. One of these is the tendency to make incremental, rather than bold, improvements that leave the field open for competitors. Kawasaki learned this lesson when it introduced its Jet Ski. At the time, the company dominated the market for recreational watercraft. When it asked users what could be done to improve the Jet Ski’s ride, customers requested extra padding on the vehicle’s sides to make the standing position more comfortable. It never occurred to them to request a seated watercraft. The company focused on giving customers what they asked for, while other manufacturers began to develop seated models that since have bumped Kawasaki—famed for its motorcycles, which are never ridden standing—from its leading market position.

This type of disappointment can easily be avoided if you use an approach like the one I described in this other article: when customers ask for a feature or product enhancement, instead of taking their requests at face value, ask them to explain the context in which they realized they needed the feature, and what it is that they will be able to do once they get their request that they can’t do now.

I don’t know what answers Kawasaki would have gotten by asking these questions of the customers requesting extra padding on the sides, but from their initial request I can imagine that the “job” customers were hiring the Jet Ski to perform wasn’t a challenging and rewarding workout (which would be well served by a standing watercraft), but probably something like touring and taking people for a ride. After exploring the problem space (“I want a more comfortable ride”), it would be easier to form a robust problem definition and then evaluate the candidate solutions (extra padding, a seated model, etc.) based on their feasibility, cost, and ability to deliver value.

There are different frameworks you can use to avoid wasting time, money, and limited engineering resources on product ideas that turn out not to be valued by customers. The ones I’ve seen work best focus on getting answers to the following questions:

  • Who am I trying to serve?
  • What set of underserved needs from my target customers do I aspire to meet with my product?
  • What criteria do my target customers use to judge how well their needs and expectations are being met?
  • What potential solutions–obvious and non-obvious(*)–exist to meet my customers’ needs and preferences?
  • How will my product be better than the others in the market? What unique value will it deliver?

(*) Non-obvious solutions like “removing the loading icon”–ideas a customer would be unlikely to come up with on their own, but that you can invent after asking customers probing questions to illuminate the problem space.

When your product prioritization process is based on these kinds of questions, it’s much easier to avoid the common traps listed at the top. Instead of talking about features, you’ll be articulating customer needs and desired outcomes. Instead of wondering if a product idea will “fly”, you’ll be describing the value to be delivered to the customer, and objectively measuring the candidate solutions against the benefits they are capable of providing.

It also helps to reframe the question from “What should we build next?” to “What is the next most important thing to spend engineering time on?” Sometimes the best opportunity lies not in building a shiny new feature, but in improving response time, removing unnecessary features that clutter the user experience, or redesigning a user flow to make tasks easier to complete. It bears repeating what I wrote in the article Stop Prioritizing Features:

The fixation on productivity and feature throughput is as likely to lead to “bloatware”, customer aggravation, and quickly losing relevance in the market, as to produce the expected growth.

 

Photo credit: Vimal Kumar (Creative Commons)


Use this simple trick to become a better stakeholder interviewer


In order to excel in their role, business analysts need to be clear on the business goals their project is meant to support, and on how success will be measured. Yet stakeholders often get impatient to kick off a project without first clarifying the strategic direction. The typical consequence is poor requirements definition, and time and money wasted building the wrong solution.

The ability to ask the right questions of the right people is a critical competency for any business analyst. The right question asked at the right time can help you get to the bottom of why a solution is needed, what benefits it will bring compared to a different approach, and any potential issues that, if left unchecked, would create project roadblocks.

There is a simple trick that can drastically improve the type of results you get when you’re trying to elicit information from stakeholders for a software project:

Separate what you need to learn from the questions you ask, and try to get your stakeholder into “storytelling” mode to get to the bottom of the business need.

Let’s say you’ve been assigned to a project that has nebulous objectives. Your first reaction might be to ask your stakeholders, “What are the objectives for this project?” or “What are the desired outcomes?” But in reality, your busy stakeholders likely haven’t spent much time articulating in a few sentences what they are trying to achieve. They’ll be inclined to describe the solution (“the objective is to build an executive dashboard that lets executives monitor our KPIs”) rather than present a well-articulated justification for the project that is clearly defined and aligned with overall business goals.

In order to get better answers to your questions, instead of asking directly, “What is the business motivation for this project?” or “What business problem are we trying to solve?”, focus on getting your stakeholders to tell you their stories. Here are some examples of questions you can use to get your stakeholders into “storytelling mode”:

“OK, let’s imagine the future. Let’s assume for a minute that our solution is live, and let’s talk about a day in the life of someone who uses it. What will they be able to do then that they can’t do now?”

“What challenges are you facing that made you decide to build this capability?”

“What happens if we don’t build this solution?”

Notice how these questions help your audience take a step back from discussing the solution (e.g., “build an executive dashboard”) to focus on the specific people who will use it, and on what they’re trying to accomplish (e.g., “quickly identify performance anomalies without having to wade through lengthy reports”).

Here’s another example. Imagine you’re documenting the process for employees of a company to submit requests for IT purchases (printers, laptops, monitors). You’re trying to figure out what needs to happen before an employee is notified that one of their requests has been denied. In books teaching people how to document a business process, you’ll find advice to ask something like, “What should already have been produced before an employee is notified that their request has been denied?” That’s a perfectly valid way of stating the question for the purpose of understanding what you need to learn. But it’s not the best way to present it to your stakeholders, who most likely aren’t trained to think in terms of process models that transform inputs into outputs.

You can reword the question into a narrative to stimulate your stakeholders to get into “storytelling mode” and give you the information you need without getting lost in the details of business process modeling:

“We talked about how an employee request for new equipment can be denied for various reasons: the company is changing providers and wants employees to use a different brand, the department the employee belongs to has exhausted its budget for the quarter, etc. Let’s look at a specific scenario. Imagine that Mary from the Marketing Department submitted a request for a new monitor right before the vendor she selected was removed from the approved list. Can you tell me what, exactly, would happen to Mary’s request at this point?”

By avoiding process modeling jargon (“What should already have been produced before this step happens?”) and framing your question as storytelling, you make it easier for the stakeholder to connect the dots between what you’re asking and what they know happens in real life. Instead of having to think in generic terms about the process steps that happen prior to the output “employee notification”, they can think through a real situation to explain the process.

The interviewee could answer your question with something like, “Well, Mary would be automatically notified that her request was denied as soon as the vendor is removed from the list. The ‘request denied’ notification would include instructions on how to resubmit the request after choosing a different brand or model of equipment that’s still in the catalog.” You could then continue probing through storytelling until you had all necessary pieces of information to understand the solution requirements.

When you need to ask a question to clarify a business process or system requirement, remember to make things easier for your subject matter expert, not for yourself. That means never asking jargon-laden questions such as, “What are all the steps an item must go through to achieve the outcome of the workflow?”. Take the time to frame your questions in a way that starts a conversation and gets your stakeholders in the mood to tell you their stories about who the solution is for, what they do now, and how things will change for them later. This is guaranteed to get you much closer to finding the answers you need to specify a winning solution.

###

Interested in learning more? Tested Stakeholder Interviewing Methods offers a straightforward blueprint for getting to the bottom of the real business need and demonstrating your value as a business analyst with better requirements.

Stop prioritizing features

By Adriana Beal

What do you think of our process to prioritize new features?

Whenever I hear this question, my first thought is that I don’t even need to look at the process to conclude that it is wrong.


Whether you are working on a small custom software application or a product sold to millions of customers, your primary focus should never be new features, but rather value creation.

We all understand where this obsession with feature prioritization comes from. Managers get nervous when product owners are not writing new requirements and developers are not producing more code. There’s always a pressure to “come up with themes for the next release”, “deliver more user stories per sprint”, and so forth.

Sadly, the fixation on productivity and feature throughput is as likely to lead to “bloatware”, customer aggravation, and quickly losing relevance in the market, as to produce the expected growth.

The way to avoid this trap is to shift the focus away from feature prioritization and toward desired business outcomes.

What are the top business priorities for your company?

As Richard Rumelt writes in Good Strategy Bad Strategy,

A leader’s most important responsibility is identifying the biggest challenges to forward progress and devising a coherent approach to overcoming them.

In practice, it’s common for leaders to fail to acknowledge the most important challenges facing the business. They ignore the power of choice, trying to accommodate all types of conflicting demands and interests across the organization.

But good product owners and business analysts demand more from those who lead. They’ll insist on gaining a clearer picture of what fundamental problems the company is trying to address, and will not rest until there’s an agreement on a focused and coordinated set of actions to tackle those problems.

How do these business priorities affect the priorities of the product or project you’re currently working on?

Let’s say you are a product manager in charge of the requirements for the next release of a SaaS (software-as-a-service) application your company offers on a subscription model. And the high-stakes challenge facing the business is customer churn: the company is able to attract a large number of customers to try the software every week, but the majority of them cancel their subscription a few days after registration.

Now, instead of immediately starting to brainstorm new features that could be added to encourage customers to stick around after trying the software, you would dedicate time to understanding the root cause of the problem. You might learn, from support tickets and feedback from former customers, that the main reason for churn is how difficult the product is to learn and how long it takes to set up.

Armed with this information, you’d be able to prioritize things like user interface redesign and improvements to the onboarding process as the most critical actions to support the goal of reducing customer churn.

Or, imagine you’re a business analyst in charge of an internal application that helps the sales team manage the sales pipeline. And the biggest challenge the sales team is facing is time wasted on unqualified leads, which is slowing sales down. Upon investigation, you may identify the scoring method used to predict a lead’s “likelihood to close” as the top obstacle to qualified leads, and prioritize enhancements to the scoring algorithm to support the goal of making the sales organization more efficient.

Note that in both examples, the deliverables that most contribute to the desired outcomes have nothing to do with building new features. In the first scenario, improvements in usability and the onboarding process could be the best alternatives to reduce customer churn. In the second scenario, improving the lead scoring algorithm might provide the highest impact in helping the sales team spend time only on the leads that are the most sales-ready.

These examples highlight the biggest problem with feature prioritization models: they start from the faulty assumption that the solution to business challenges must reside in new features, when the best opportunities for improvement may be found elsewhere. When you start from the question of where the best opportunities lie, new features become one of many alternatives for achieving the business goals. They’ll rightfully compete with bug fixes, performance enhancements, process improvement activities, user interface redesign, marketing efforts, and many other actions capable of delivering business outcomes.

An effective prioritization process doesn’t revolve around features and release scope. It starts by shedding light on business goals, advances toward a commitment to the outcomes the team plans to support, and only then arrives at the software features it plans to implement.

###

Thoughts or comments? Share them on LinkedIn

Photo Credit: Kelly McCarthy
