Sunday 16 October 2016

5 Ways to Measure Technical Writers

In the past I've talked about how measuring documentation is one of the hard problems of documentation, and to help with that I wrote a list of things you can measure about technical documentation.  Now it's time to talk about measuring technical writers.

Obviously there are a huge number of contexts and situations a technical writer could be working in, so I'm going to focus on what this blog is about - agile documentation - and therefore look at a writer working in a scrum team.  Hopefully the general concepts and ideas will give you something to think about even if this doesn't describe the situation you're in.

An obvious problem with measuring a writer is that you can't use some of the simple development measures, such as unit test or automated test coverage percentages.  Developers can be measured by whether they have appropriate test coverage for their code, ensuring that, at the very least, their new code doesn't break the existing code.  But testing documentation at anything other than the syntactical level is (currently) too complicated for a computer to perform, by which I mean a computer is not able to ascertain whether new documentation semantically "breaks" the old documentation.  (There is quite a lot to be said about the difference in semantic load between natural and artificial language, but I'll leave that for another time.)  So computer-driven test coverage isn't really going to tell you much, other than that the writer has written something.  We're not really interested in that, because it's a trivial measure only a small degree away from the reviled "how many words has the writer written" mentality.  (If you don't know why being measured by number of words is a bad thing, Tom Johnson has an excellent post on the subject, in which he explains why it leads to bad outcomes.)

This brings us to an important secondary question: If we're not interested in measuring absolute output, what are we interested in measuring?

Generally, we're interested in measuring productivity and quality.

There's an interesting article on measuring productivity in which the authors talk about using an algorithm to calculate a normalized productivity score for each writer.  I really like this approach and there are a lot of useful ideas there that are worth investigating.  However, it works best when writers are working on the same kinds of things in the same kinds of situations (although to an extent you can weight scores to balance this out), and the algorithm is - naturally - specific to their situation.  So let's look at some generic ways you can measure writers that are more useful than "how many words they've written".  Not all of these will be right for your situation, but they should at least give you some ideas.
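To make the idea of a normalized score a bit more concrete, here's a rough Python sketch.  It's only an illustration under my own assumptions - the complexity weight and the figures are invented, and this is not the algorithm from the article.

from statistics import mean

def normalized_score(writer_points, team_average_points, complexity_weight=1.0):
    # Scale a writer's completed story points by a complexity weight,
    # then normalize against the team's historical average.
    if team_average_points == 0:
        return 0.0
    return (writer_points * complexity_weight) / team_average_points

# Example: three sprints of completed documentation work for one writer,
# compared to an assumed team historical average of 20 points per sprint.
sprints = [18, 22, 25]
print(normalized_score(mean(sprints), team_average_points=20))

A score around 1.0 means the writer is tracking the historical average; the interesting part is how the number moves over time, not the number itself.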

1. Lead time

A classic operations measurement, lead time is the time it takes a set of inputs to move through a process.  This can be weighted against the complexity of the task as measured in story points, although that requires documentation tasks to be measured against each other, which might be difficult if each writer works in a scrum team with a significantly different type of work.  An obvious objection is that teams should not be compared by story points: you can't say that a team that does 40 points of work in a sprint is less productive than a team that does 80 points, because estimation is a matter of comparison with other items in the sprint, not an absolute measure.  Nonetheless, if your writers are generally working on the same things - e.g. topics for a help file that covers work from all teams - you should be able to weight appropriately based on historical averages.  This will also help you spot information pipeline problems if a writer regularly has to take additional time to get information that other writers get as standard from their teams.
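Here's a rough Python sketch of lead time per story point, compared against a writer's own historical average.  The task dates, points and baseline are invented for illustration.

from datetime import date

def lead_time_days(started, finished):
    # Calendar days a documentation task spent moving through the process.
    return (finished - started).days

# (started, finished, story points) for a handful of documentation tasks.
tasks = [
    (date(2016, 9, 1), date(2016, 9, 8), 3),
    (date(2016, 9, 5), date(2016, 9, 19), 5),
    (date(2016, 9, 12), date(2016, 9, 16), 2),
]

total_days = sum(lead_time_days(s, f) for s, f, _ in tasks)
total_points = sum(points for _, _, points in tasks)
days_per_point = total_days / total_points

# Compare against the writer's own historical average, so you're tracking
# a trend rather than comparing teams that estimate differently.
historical_days_per_point = 2.5  # assumed baseline
print(f"{days_per_point:.2f} days/point vs baseline {historical_days_per_point}")

Weighting by points like this is what lets you compare a 2-point topic with a 5-point one without falling into the trap of comparing teams by velocity.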

2. Peer review scores

People make mistakes and documentation has a high semantic load, so this isn't a "100% or fail" measurement.  But if a writer regularly fails peer review for things you would expect them to get right, such as proscribed terminology or incorrect formatting styles, that's a sign of a legitimate problem.  More concerning than these kinds of errors (because the writer just needs to learn the rules and follow them) are errors of comprehension or explanation.  If peer reviews reveal that a writer regularly writes unclearly or doesn't appear to understand the topic, you've got a problem that needs to be resolved.
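If you want to keep score rather than rely on gut feel, a simple tally of failure categories over a review period is enough.  Here's a rough Python sketch with invented categories and counts.

from collections import Counter

# Each entry records why a topic failed peer review during the period.
review_failures = [
    "terminology", "formatting", "terminology",
    "comprehension", "formatting",
]

tally = Counter(review_failures)

# Rule-based failures (terminology, formatting) mean the writer needs to
# learn and follow the style rules; comprehension failures are the more
# serious signal discussed above.
for category, count in tally.most_common():
    print(f"{category}: {count}")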

3. Support tickets

Related to peer review scores, the number of support tickets that could have been dealt with by the documentation indicates how well the writer understands the user story they're documenting.  There are myriad reasons you could see increased support tickets around an area a writer was working on, so be very careful here.  Perhaps the subject matter is extremely difficult, perhaps one or two customers haven't read the documentation and are flooding support with tickets they shouldn't raise, perhaps the user story didn't capture the user pain correctly, and so on.  However, if the support tickets simply indicate incorrect, incomplete or poorly written documentation, then there is a de facto problem to be addressed.
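One simple way to track this is to have support triage tickets the documentation could have answered and tag them with a documentation area.  Here's a rough Python sketch; the tags and numbers are invented, and the caveats above still apply.

from collections import Counter

# Tickets already triaged by support as "answerable from the docs",
# tagged with the documentation area they relate to.
doc_answerable_tickets = [
    "installation", "installation", "api-auth",
    "installation", "reporting",
]

per_area = Counter(doc_answerable_tickets)
for area, count in per_area.most_common():
    print(f"{area}: {count} tickets the documentation should have covered")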

4. Technical debt

Take a measure of the existing technical debt, covering known missing, incomplete or out-of-date documentation that the writer is responsible for.  Measure this on a regular schedule that is short enough to catch a problem but long enough to smooth out short-term fluctuations such as holidays or sickness - once a quarter or once a release, say.  You're looking for a trend that, at a minimum, doesn't go up, which means the writer is not making the technical debt worse.  Whenever a writer is working in a topic with known technical debt they should be able to improve it (as a general rule of thumb, not a cast-iron certainty), so the trend should go down over time.  If the trend stays level, the writer is either very busy or adding as much debt as they're paying off; if the trend is upwards, the writer is adding debt and you've got a serious issue to deal with before it gets any worse.
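Here's a rough Python sketch of what watching that trend might look like.  The quarterly counts are invented; the only thing you care about is the direction of travel.

debt_by_quarter = {
    "2015-Q4": 42,  # known missing, incomplete or out-of-date topics
    "2016-Q1": 40,
    "2016-Q2": 41,
    "2016-Q3": 37,
}

counts = list(debt_by_quarter.values())
net_change = counts[-1] - counts[0]

if net_change > 0:
    verdict = "debt is growing - deal with it before it gets worse"
elif net_change == 0:
    verdict = "debt is level - the writer is very busy or adding as much as they pay off"
else:
    verdict = "debt is trending down - the backlog is being paid off"

print(net_change, verdict)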

5. Commitment vs complete

Does the writer complete the documentation they've committed to at the beginning of the sprint?  There's an assumption here that your company's culture is such that the writer feels comfortable refusing to commit to a sprint if there's too much documentation, or not enough information about the documentation requirements - although most writers I know work in a dirty agile scenario, so maybe that's unrealistic.  However, if you can use this measurement, it will tell you about the writer's ability to estimate and their willingness to push through to get things done.  Of course, it might also tell you that the whole team struggles to complete, which is a different issue.  But it's important for a writer to be a "completer", so make sure you know whether your writer is getting things finished.
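Here's a rough Python sketch of a commitment-versus-complete ratio tracked across sprints.  The sprint figures are invented; what matters is the trend over several sprints, not any single number.

sprints = [
    {"sprint": 31, "committed": 8, "completed": 8},
    {"sprint": 32, "committed": 10, "completed": 7},
    {"sprint": 33, "committed": 9, "completed": 9},
]

for s in sprints:
    ratio = s["completed"] / s["committed"]
    print(f"Sprint {s['sprint']}: {ratio:.0%} of committed documentation completed")

overall = sum(s["completed"] for s in sprints) / sum(s["committed"] for s in sprints)
print(f"Overall: {overall:.0%}")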


All 5 of these measures could be expanded on, but every situation is different and these are intended to be generic measurement points for agile writers.  All of them allow you to measure trends over time, which is essential for performance management.  But they're not the only things you can measure, so what other measurements do you find useful?  Share them in the comments.


