Three product management myths affecting customer satisfaction and user adoption

Recently I’ve had the opportunity to study several software products from the perspectives of customer satisfaction and user adoption.

Here are the top three myths these disparate products, built by different companies, had in common. Together, they help explain why the products were not performing as well as expected in their target markets.

Myth #1: If we listen closely enough, customers will offer all the answers on how to create value through our software products.

The reality:  Customers are great at telling us about their habits, problems, and aspirations, but not about how best to address their needs with technology.

Example (fictitious, adapted from a real scenario)

Product: Content management platform used by enterprises to store pieces of content (images, videos, text) used in blog posts and marketing campaigns across social media channels.

Popular request from users: Ability to manually tag content pieces so that it’s easy to tell later when and where each piece of content has been used.

What happened once this request was investigated further? A combination of problem interviews and observation of users in action led to a better solution: automated tagging. Rather than building a capability that relies on users being diligent about tagging pieces of content themselves, a set of automated rules makes flagging and filtering content by various dimensions far more reliable and valuable for customers.

For example, when a piece of content is published from the platform, the system automatically flags it as published, logging the date/time and publishing channel. Subsequent publishing of the same content increments a counter and adds a new log entry with the date/time and channel. This way, content publishers composing a new blog or social media post can tell whether a piece of content has already been used recently, and content creators can use the same information to inform their future creative process. Creators and publishers can trust that the publishing status of each piece is up to date, something that would be impossible to guarantee if the system had to rely on users to manually tag published content, as originally requested by customers.
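The publish-log behavior described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the product’s actual implementation; all names here (`ContentItem`, `record_publish`) are hypothetical.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContentItem:
    """One piece of content tracked by the platform (hypothetical model)."""
    name: str
    publish_count: int = 0
    publish_log: list = field(default_factory=list)  # (timestamp, channel) pairs

    def record_publish(self, channel: str) -> None:
        """Automatically flag the item as published, logging when and where."""
        self.publish_count += 1
        self.publish_log.append((datetime.now(timezone.utc), channel))

    @property
    def published(self) -> bool:
        return self.publish_count > 0

# A publisher checking whether an image has already been used:
banner = ContentItem("spring-banner.png")
banner.record_publish("blog")
banner.record_publish("twitter")
print(banner.published, banner.publish_count)  # True 2
```

Because the system records each publish event itself, the log is trustworthy by construction — no user diligence required.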

Key takeaway: It’s foolish to expect customers to know the best solution for their problem. Listen to their solution ideas, but don’t take them at face value. Use techniques like problem interviews and observation to study the problem space and come up with alternative solutions that address the essence of the problem before deciding on the most appropriate one.

Myth #2: Being “data driven” greatly increases the chances of product success.

The reality: Quantitative data often fails to provide all the information we need to design the best product or feature.

Example (fictitious, adapted from a real scenario)

Product: Mobile app used to manage and listen to podcasts using a smartphone.

Popular feature: Auto-delete after the user has finished listening to a podcast.

Unintended consequence of the auto-delete feature: Many podcasts end with a portion in which the host asks listeners to write a review or reads a sponsor ad. Many users prefer to skip this last portion of each episode. When these users listen to podcasts in bed while falling asleep, they may wake up in the morning to a result that is the opposite of what was intended:

  • Podcasts the user listened to entirely are still in their downloaded list because the final portion of the recording was skipped.
  • Podcasts the user didn’t listen to until the end (or didn’t even start listening) have been deleted because the user went to sleep while the app continued to play each episode to the end.
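To make the failure mode concrete, here is a sketch of an auto-delete rule that would handle both cases above. The 95% threshold and the `user_initiated_playback` signal are assumptions chosen for illustration, not the app’s real logic.

```python
def should_auto_delete(listened_seconds: float, duration_seconds: float,
                       user_initiated_playback: bool,
                       finished_threshold: float = 0.95) -> bool:
    """Decide whether a finished episode can be safely deleted.

    Instead of equating 'finished' with 'playback reached the last second',
    treat an episode as finished once most of it has been heard, and never
    delete episodes that only played because autoplay continued unattended.
    """
    if not user_initiated_playback:
        # Played only because autoplay kept going (e.g. overnight): keep it.
        return False
    return listened_seconds / duration_seconds >= finished_threshold

# Listener skips the 2-minute outro of a 60-minute episode: still deleted.
print(should_auto_delete(58 * 60, 60 * 60, user_initiated_playback=True))   # True
# Episode that merely autoplayed after the listener fell asleep: kept.
print(should_auto_delete(60 * 60, 60 * 60, user_initiated_playback=False))  # False
```

The point is not this particular rule, but that quantitative playback data alone would never have suggested it — only qualitative research reveals why “played to the end” is the wrong proxy for “finished”.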

Unless the product manager belongs to this particular user segment, it’s very unlikely that they will detect the issue without serious customer research. Quantitative data may tell us when the app starts losing users (who may be moving to a competitor that does a better job of predicting which podcasts should be deleted or preserved), but it won’t tell us why.

Key takeaway: Tools like problem interviews and observation are again valuable sources of relevant data about the value delivered by product features to different user segments. Interviews with non-customers and former customers can also be a powerful tool to help uncover issues that are hard to detect without an in-depth understanding of the criteria users use to judge product value.

Myth #3: Once we’ve solved a problem for our users, we can move on to the next problem to solve.

The reality: As systems thinking tells us, our equations hold only until something changes in the system’s structure.

Example (fictitious, adapted from a real scenario)

Product: Another content management system (this time an internal product used by a large organization to publish knowledge articles to its website).

Enhancement: A capability added to allow content creators to label content items.

What happened? Initially, users loved the new capability. Content creators started using labels to flag content for management or legal review, to indicate it was a work in progress or ready to use, to associate content with a marketing campaign, and so on. But because there were no conventions for creating new labels, the application soon became cluttered with duplicates (“Valentine’s” vs. “Valentine’s Day”, “legal” vs. “legal_review”). Finding the right label to apply, or to filter content by, in a dropdown with hundreds of entries became a nightmare, and within a few months feature usage dropped dramatically.

Key takeaway: We can’t always predict how a new product or feature will affect the future state of the system. The way out of the trap is to treat changes in user behavior over time as useful feedback, and to take corrective action when performance starts to degrade. (In the labeling example, the solutions included creating a separate workflow for things like management and legal review, and folders for saving content related to a specific event or marketing campaign. With fewer use cases requiring labels, the capability became valued again, and user adoption increased substantially.)

