Top “product smells”: A primer on assessing product performance

Something isn’t working.

Maybe you’ve launched a new product and not seen much engagement. Perhaps you’ve rolled out a big feature but it’s not driving the adoption or conversion you expected. Or maybe a long-running product you operate has been on a slow decline in terms of usage, speed, security, scalability, or (gasp!) all four.

While the code running your product is a good thing to investigate, we just as often see that issues related to usability, utility, and accessibility are driving factors in under-performance. And so we tend to assess product performance from various perspectives, looking well beyond your underlying code and infrastructure.

Before you can assess performance, you first need to define performance. So, let’s start there.

What constitutes strong performance?

A few years ago, the fine folks at Marvel (not that Marvel, this Marvel) published a UX Pyramid. It’s a great approach to outlining the fundamentals of good user experiences, beginning at the foundational level and continuing all the way up to the delightful level.

We’ve taken inspiration from their hierarchy, adapting and expanding it to focus more broadly on overall product performance. We believe that understanding where your product is strong and where it’s struggling is essential to elevating its performance.

Approaches to consider
There are many ways to critically assess how well your product is (or isn’t) performing. At TXI, we tend to mix and match tactics to paint a comprehensive picture quickly.

Our go-to formats and frameworks include:

  • Customer feedback and support requests to reveal problem areas and misalignment

  • Team input to surface chronic pain points, known issues, and vulnerabilities

  • Product analytics to highlight bottlenecks, drop-off points, and missed opportunities

  • Accessibility reviews to improve how your product is understood and navigated

  • Heuristic analysis to identify usability and UX problems and barriers to engagement

  • User interviews to spot gaps between user motivations/behaviors and product features

  • Technology audits to isolate functionality, readability, and maintainability issues

In most cases, we start with a couple of these techniques, leaning on what we learn in the initial assessment(s) to prioritize what and how we investigate next, or where to begin addressing the issues uncovered.

Deciding which types of analysis make sense as a starting point is as much art as it is science, but we’ve seen enough common patterns to share some best practices.

Focus on the easily observable issues.

Given the list and framework above, you might already know a good analysis to run. If so, great! Start there. But if not, see if any of these scenarios sound familiar…

Low adoption rates

If prospective customers/users are finding and considering your product but not adopting it, that’s an area worth investigating. We’d typically glean what we can from your analytics to understand how well the prospects in your funnel match up with your core customer archetype. You’ve got a core customer archetype, right?
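If you want a feel for what that analytics pass can look like, here’s a minimal sketch in Python, assuming you can export funnel data with a stage and a couple of firmographic fields. Every file name, column, stage label, and matching rule below is hypothetical; your analytics stack will have its own shapes:

```python
import pandas as pd

# Hypothetical funnel export: one row per prospect, recording the
# furthest stage they reached plus attributes you'd use to define
# your core customer archetype.
prospects = pd.read_csv("funnel_export.csv")  # prospect_id, stage, segment, company_size

# An archetype expressed as simple matching rules (illustrative only).
is_core = (prospects["segment"] == "operations") & (
    prospects["company_size"].between(50, 500)
)

stage_order = ["visited", "signed_up", "activated", "adopted"]
funnel = (
    prospects.assign(core=is_core)
    .groupby(["core", "stage"])["prospect_id"]
    .nunique()
    .unstack("stage")
    .reindex(columns=stage_order)
)

# Conversion from the top of the funnel to each later stage, for
# core-archetype prospects vs. everyone else.
print(funnel.div(funnel["visited"], axis=0).round(2))
```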

If there’s a mismatch, it might be time to revisit (or create) your customer acquisition strategy so that you’re garnering attention and consideration from a segment you can win within.

If your prospect pool and core customer mostly match, it’s time to closely examine how your feature set and value proposition compare to your competitors.

Low engagement rates

When customers are signing up but not sticking around, it’s a good idea to dig into a few areas:

  • Reviewing feedback, especially from churned customers, is a great place to start

  • Product analytics may reveal that customers aren’t finding (or “getting”) key features

  • A review of your new user onboarding process may highlight opportunities to improve

  • 1:1 interviews with super-users could help you understand their motivations more broadly; from there, you can get creative about translating what’s working for them to other users

  • A UX review focused on consistency could also reveal whether your users have to parse a jumble of interaction patterns or keep straight confusing language and labels; if so, you can likely simplify things for them.

Bug prioritization meetings
Every product development team has a backlog of bugs; some are worth fixing, and some can be lived with. But it's time to look deeper if your QA team has to set up special meetings to triage and prioritize a long list of bugs. TXI’s technology audit looks for root causes of persistent bugs. Upgrading technology or making simple tweaks to development processes can reduce that bug list to a manageable size and improve overall product quality.
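One lightweight signal worth pulling in an audit like that is change frequency: files that get touched over and over are often where persistent bugs cluster. Here’s a minimal sketch of that idea, assuming a local git checkout; it’s a heuristic for where to look, not a root-cause analysis by itself:

```python
import subprocess
from collections import Counter

# List every file touched by every commit in the past year.
log = subprocess.run(
    ["git", "log", "--since=1.year", "--name-only", "--pretty=format:"],
    capture_output=True, text=True, check=True,
).stdout

# Count how often each file changes; the highest-churn files are
# good candidates for a closer look in a technology audit.
churn = Counter(line for line in log.splitlines() if line.strip())
for path, touches in churn.most_common(10):
    print(f"{touches:4d}  {path}")
```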

Long wait times to release new features
Do minor enhancements and new feature launches take more time and effort than feels reasonable? If so, your product may have incurred significant tech debt over its lifetime. While many factors contribute to tech debt piling up, the result of a significant balance (deficit?) is that it simply takes a lot more engineering time and effort to contribute code and test its performance. A code audit and/or DevOps assessment are worth considering.
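If you want a quick self-check before commissioning a full audit, even a crude proxy like function length can hint at where debt has piled up. Here’s a minimal sketch using only Python’s standard library; the source root is a placeholder, and long functions are a smell, not a verdict:

```python
import ast
from pathlib import Path

THRESHOLD = 50  # lines; an arbitrary, adjustable cutoff

# Flag oversized functions, which correlate (imperfectly) with
# code that's slow and risky to change or test.
for path in Path("src").rglob("*.py"):  # placeholder source root
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > THRESHOLD:
                print(f"{path}:{node.lineno}  {node.name} ({length} lines)")
```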

Poor or low ratings in the app store or customer surveys
An upside to earning low marks from customers in their ratings and reviews is that they’re often pretty vocal about exactly what’s missing, confusing, or frustrating about your product.

And that feedback is often a valuable input when prioritizing your focus. If you’re seeing a chorus of reviews calling out tricky interfaces, inefficient flows, or unexpected outputs, we’d suggest conducting a heuristic analysis, an accessibility review, or both. These tactics are great ways to highlight the exact places giving users issues.

Cart or task abandonment
If users are consistently bailing out of key user flows without completing the tasks they’ve started, it could be that they’re unclear on how or why to complete the task. Leveraging an analytics tool that offers session replay is a straightforward way to see where (and begin to infer how / why) things aren’t working as hoped.

Reviewing footage of exactly what your users are doing in the moments leading up to abandoning a task can help you understand whether the issue is related to usability, your competitive positioning, or a misalignment between the users’ goal and your product.
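Before (or alongside) watching replays, a quick pass over raw event data can tell you which step loses the most people, so you know which sessions to watch first. A minimal sketch follows, with a hypothetical event log and step names:

```python
import pandas as pd

# Hypothetical event log: one row per step a user completed in a
# checkout flow.
events = pd.read_csv("checkout_events.csv")  # user_id, step

steps = ["view_cart", "enter_shipping", "enter_payment", "confirm"]
reached = events.groupby("step")["user_id"].nunique().reindex(steps)

# Share of users who survive each step-to-step transition; the
# biggest drop marks where to point your session-replay review.
print((reached / reached.shift(1)).round(2))
```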

Extensive job aids
Does your product documentation need to be exhaustive because the experience of using your product is exhausting? That’s an issue that’s likely causing your performance to suffer. As the saying goes, a user interface is like a joke… if you need to explain it, it’s probably not very good.

We encourage clients to aim for products and interfaces that can be easily used with minimal required documentation. It’s often the case that first-use tutorials, smart defaults, thoughtful empty-states, and helpful in-context tooltips can obviate the need for hefty software manuals.

If you’re not sure where your users’ biggest knowledge gaps are, some 1:1 interviews with expert and novice users could be a lightweight way to better understand both how they use the product and where you might rethink things.

DIY or outside help?
If any of the hypothetical scenarios outlined above sounds like your under-performing product, and you have the required skills and abilities in-house, we’d encourage you to get started on your own.

And if you’d like some outside expertise, perspective, or simply a sounding-board to talk things through… drop us a line. We’d love to explore how we might help you get things into shape.

Published by Patrick DiMichele, Gayle Silverman, Gilad Shanan, and Katie Wolf in code audit
