Success metrics - which ones do you keep track of?



I came from the world of e-commerce and online marketing, where we measured “success” in page views, click-throughs and money.

I manage a support site for a tech company now, and coming up with key performance indicators is a lot trickier (at least for me). High page views and low bounce rates aren’t necessarily the goal anymore. We’re not selling anything, so I can’t measure revenue.

We do keep track of article votes, but it turns out that a lot of users will say that a poorly written article is helpful as long as it has the information they’re looking for.

User testing shows that the article pages need better navigation and layout. How do I translate that into numbers though so I can justify it to my team? Right now, all I have are user stories.

What success metrics do you keep track of?


This is the classic tech comm problem. The metric that matters is easy to describe: the user successfully completing their task. The problem is:

  • There is no sure way to know if they successfully completed their task or not.
  • There is no sure way to know how much they needed to learn to complete their task, meaning that the number of pages they viewed per visit is not comparable.
  • There is no sure way to know if your content helped or hindered them in completing their task, or if they found their answer in your content or elsewhere.

With e-commerce, where you are trying to guide users to an action of your choosing, and that action happens on your site, you do have a sure way to know these things. In tech comm, your measurements will be much less certain, so it is important to remember that no metric you can find will ever be as precise as what you are used to in e-commerce.

Also, it is important to realize that some content answers questions that are low value but occur frequently, while other content answers questions that are high value but occur seldom. The topic on how to restore a server after a crash may be the most valuable – and hopefully least used – topic in your entire doc set. So unless you can attach a value to the content, no frequency measurement is telling you anything real.
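To make that concrete, here is a minimal sketch in Python (the topic names, view counts, and dollar values are all invented for illustration) of what happens when you weight each topic’s usage by an estimated value per successful use:

```python
# Toy illustration: all topic names and numbers are invented.
# Raw page views alone can mislead; weighting each topic by the
# value of one successful use can flip the ranking entirely.
topics = {
    "change-password":     {"views": 5000, "value_per_use": 1},
    "restore-after-crash": {"views": 12,   "value_per_use": 10000},
}

# frequency x value gives a rough "value at stake" per topic
value_at_stake = {
    name: t["views"] * t["value_per_use"] for name, t in topics.items()
}

for name, total in sorted(value_at_stake.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {topics[name]['views']} views, ${total} at stake")
```

By raw views, the password topic wins by more than two orders of magnitude; weighted by an (admittedly guessed) value per use, the crash-recovery topic comes out far ahead. The hard part, of course, is that the value estimates are exactly the thing tech comm rarely has.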

Anyway, since you can’t measure customers completing their tasks successfully, tech comm has to fall back on proxy variables, and, to be truthful, we really don’t know a lot about how accurate most of them are. Asking users to rate topics is highly dubious. “Did this topic help you?” No, because I was asking a different question. But is that a fault in the topic?

When it is hard to get reliable metrics, it may be better to look at other ways of assessing quality. Stack Overflow provides a massive social proof engine to assess technical support content. Can you find similar content to yours on Stack Overflow and compare the key points that are covered in the highest ranked answers?

Finding successful patterns and emulating them is often a better approach than trying to collect proxy metrics for an activity that is simply outside of what you can measure.


This is the topic of a paper I’m presenting at HCII 2015 in a couple of weeks and here in Seattle shortly afterwards (Seattle WriteTheDocs Meetup - August 2015).

@mbakeranalecta mentions a lot of the challenges of collecting the data, and I can’t say that I have all the answers (as much as I’d like to say that I do). But what I can say (and what I talk about in the paper) is that you need to understand how the readers’ goals align with the content and set your expectations (and metrics) accordingly. The paper presents a taxonomy of these relationships to use as a way to discuss them with stakeholders.

Mark’s example of high-value/low-frequency content is probably the hardest type of content to discuss with people who come from e-commerce-land where pageviews and value correlate positively. My favorite example of important content that breaks that relationship is that of the emergency instructions you find in the seat pocket of an airliner. Like Mark’s server-crash example, that’s content that is hopefully never needed, but if it is ever needed, it’s extremely valuable.

A lot of help topics fall into that category, such that their value is almost inversely proportional to how often they’re viewed (as opposed to directly proportional). So, understanding that relationship helps you shake off the “e-commerce” view of success and look for more suitable success metrics.


I’ll post a link to the paper I’m presenting after the conference.


Thank you, Bob and Mark. I’ve started to think about high-value/low-frequency content. In our case, the high-attrition issues have to do with crashes and data loss. Those articles are rated poorly, so I’m going to compare them with similar content on Stack Overflow and think of ways to improve them.

Bob, I’m going to attend your session here in Seattle. I look forward to it!


@jonisavage I look forward to meeting you!

Another thing to try on the low-rated topics is a quick-and-dirty usability test. If the topic is of reasonably high value (i.e. worth more than a cup of coffee), find someone who is in (or is a reasonable facsimile of) the target audience of the topic to test it for you. Ask them to meet you for half an hour, buy them a coffee, and have them sit down with you and read the topic as you observe them. Ask them to tell you what they are thinking and/or expecting to find, as well as any questions that come up as they read. You can ask them questions as well, e.g. “Where would you expect to find the information about X?” or “After you read this section, tell me what you would do or look for next.”

I’ve used that technique a few times and it has always been eye-opening! (I keep a Starbucks gift card or two handy at all times, just in case I need to do one on short notice.)

Between a real user’s feedback and some competitive analysis on Stack Overflow, you should come up with a pretty clear idea of what to do next.


I’m looking forward to the Seattle Meetup this Thursday! I’m in the process of putting a more practical spin on my presentation to make it a little more relevant and useful to writers. The presentation I gave at the conference had a more academic orientation.

The paper I presented at the HCII 2015 conference is now on my publications page. Over the course of the next week or two, I hope to post the content to my blog and throw in some more practical notes and ideas.


The reader goals are up on my blog at