This is the classic tech comm problem. The metric that matters is easy to describe: the user successfully completing their task. The problem is:
- There is no sure way to know if they successfully completed their task or not.
- There is no sure way to know how much they needed to learn to complete their task, which means that pages viewed per visit is not comparable from one user to the next.
- There is no sure way to know if your content helped or hindered them in completing their task, or if they found their answer in your content or elsewhere.
With e-commerce, where you are trying to guide users to an action of your choosing, and that action happens on your site, you do have a sure way to know these things. In tech comm, your measurements will be much less certain. So it is important to remember that no metric you can find will ever be as precise as what you are used to in e-commerce.
Also, it is important to realize that some content answers questions that are low value but occur frequently, while other content answers questions that are high value but occur seldom. The topic on how to restore a server after a crash may be the most valuable – and hopefully least used – topic in your entire doc set. So unless you can attach a value to the content, no frequency measurement is telling you anything real.
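To make that concrete, here is a toy sketch in Python of how frequency-ranked and value-weighted rankings diverge. Every number in it is invented for illustration; the real difficulty, of course, is assigning the values in the first place.

```python
# Toy comparison of raw-frequency ranking vs. value-weighted ranking.
# All topics, view counts, and values here are invented for illustration.

topics = {
    "change-avatar":       {"views": 12000, "value_per_use": 0.10},
    "reset-password":      {"views": 8000,  "value_per_use": 1.00},
    "restore-after-crash": {"views": 40,    "value_per_use": 5000.00},
}

for t in topics.values():
    t["weighted"] = t["views"] * t["value_per_use"]

# Ranked by raw views, the crash-recovery topic looks negligible...
by_views = sorted(topics, key=lambda n: topics[n]["views"], reverse=True)
# ...ranked by value-weighted score, it dominates the doc set.
by_value = sorted(topics, key=lambda n: topics[n]["weighted"], reverse=True)

print("By raw views:  ", by_views)
print("Value-weighted:", by_value)
```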
Anyway, since you can’t measure customers completing their tasks successfully, tech comm has to fall back on proxy variables, and, to be truthful, we really don’t know a lot about how accurate most of them are. Asking users to rate topics is highly dubious. “Did this topic help you?” No, because I was asking a different question. But is that a fault in the topic?
When it is hard to get reliable metrics, it may be better to look at other ways of assessing quality. Stack Overflow provides a massive social-proof engine for assessing technical support content. Can you find content similar to yours on Stack Overflow and compare the key points covered in the highest-ranked answers?
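As one hedged illustration of how you might run that comparison: the public Stack Exchange API can return the highest-voted questions matching a search phrase, and from there you can read the accepted answers and note which key points they cover. A minimal Python sketch follows; the search phrase is a placeholder you would replace with a task your docs also cover.

```python
# Sketch: pull the top-voted Stack Overflow questions on a topic via the
# public Stack Exchange API, then review their answers by hand.
import requests

resp = requests.get(
    "https://api.stackexchange.com/2.3/search/advanced",
    params={
        "site": "stackoverflow",
        "q": "restore postgres database from backup",  # placeholder query
        "accepted": "True",  # only questions with an accepted answer
        "sort": "votes",     # vote count is the social proof here
        "order": "desc",
        "pagesize": 5,
    },
    timeout=10,
)
resp.raise_for_status()

# The highest-voted questions, with links; the accepted and top-scored
# answers on those pages show which points the community rewarded.
for q in resp.json()["items"]:
    print(f'{q["score"]:>5}  {q["title"]}\n       {q["link"]}')
```

The comparison itself stays manual: read the winning answers, list the points they make, and check your topic against that list.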
Finding successful patterns and emulating them is often a better approach than trying to collect proxy metrics for an activity that is simply outside of what you can measure.