Sunday 30 October 2016

Some Notes on Moving Content to Confluence

Confluence is a super-charged wiki from Atlassian, the same company that makes the popular issue tracker, JIRA.  As you'd expect from a normal wiki, its most popular function is as a knowledge base, documentation hub and FAQ location, but due to its tight links to JIRA (and other Atlassian products) it's also used for showing sprint reports, burn downs, burn ups, work in progress, and lots of other reports using data taken straight from JIRA.

In this post we're going to focus on the traditional wiki usage: documentation and knowledge management, and specifically some do's and don'ts when moving existing documentation and knowledge assets into Confluence.  Some of the points below are specific to Confluence, some are best practice whenever you're moving knowledge from one place to another, but they're all born of experience and hopefully they'll help you avoid some of the pitfalls.

The most important points first:

  • If you're a technical person, get a content person in before you move anything.  They'll spot issues you won't. 
  • If you're a content person, get a technical person in before you move anything.  They'll spot issues you won't.
I'm more of a content person, and having some technical people (i.e. developers) around helps immensely.  A developer's first thought is always "How can I do this through code?", which means they're much better at spotting situations where a batch file or a regex or some CSS will make everything a lot quicker and less manual.  When I was looking at moving documentation from some internal wikis, it was developers who helped me find the appropriate export functionality and worked out whether I could take that format and import it into Confluence, potentially saving me weeks of work.

Which brings me neatly on to:

  • Automate as much as you can.
This means using export tools in your current location (e.g. wikis, CMSs, document repositories), and also Confluence's excellent built-in Word import functionality.  There is a Universal Wiki Converter which is not supported by Atlassian because it's a third-party tool, but the fact that the link for it takes you to a place on the Atlassian domain should tell you they think it's useful.  It doesn't work on every wiki, but if it does work for yours it will save you a lot of time.  If instead of, or as well as, wikis you've got lots of Word documents to import, the Confluence Word importer is brilliant.  It's really good at importing formatting and layout, as well as features like tables, images, links, headers, footers and diagrams, and it's really quick to boot.  Oh, and it will create new pages every time there's a heading in your document, if you want it to, even down to letting you set the level at which new pages are created (e.g. it will create new pages every time it finds a Level 1 or Level 2 heading, but ignore any other heading levels).  The Word importer has saved me huge amounts of time.

Before you start importing things:

  • Plan your space structure before you move things in. 
It's pretty annoying having a structure set up and working only to find it doesn't scale to accommodate what you're transferring and you have to move things around again.  This is where a content person is really helpful if you're a technical person.  Content people are good at the structure and layout of large bodies of information, and we'll help you analyse the user needs and get it right.  Confluence's space and page structure is deceptively simple, and that simplicity makes it very easy to create monolithic spaces with one massive list of alphabetically ordered pages.  But people don't connect information alphabetically, so creating a space directory, page trees and a label taxonomy using the guiding principles of good information architecture will make it much easier for people to navigate.
  • If multiple people are bringing stuff in, agree on common naming conventions, page structure, and labels.
There's no getting away from the fact that if even a team of technical communicators will all have slightly different ideas about conventions and structures, then a motley crew of assorted resources will have very different ideas indeed.  Even if all the people importing are technical communicators, and especially if they're not, set agreed standards for page naming conventions, page structure and labels BEFORE anyone imports anything.  Otherwise it'll require a massive remedial exercise later on to standardise everything, and if that never happens, your Confluence instance will be a mess.
  • On the subject of labels, use them to say where a Confluence page came from, e.g. wiki name, shared drive, SharePoint, or wherever.
If you've never gone through this kind of process before this might seem superfluous, but believe me, it's not.  No matter how careful you are when you're importing, you or someone else will want to check the original source because "it doesn't look right" or "I'm sure we used to have more information on this in the old system".
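A minimal sketch of automating this, assuming you're comfortable with a bit of scripting: Confluence exposes a REST endpoint for adding labels to a page, so you can tag every imported page with its source programmatically rather than by hand.  The base URL, page ID and label naming scheme below are all placeholders for illustration, not a definitive implementation.

```python
import json
import urllib.request


def source_label_payload(origin):
    """Build the JSON body for Confluence's add-label endpoint,
    recording where an imported page originally came from."""
    return [{"prefix": "global", "name": f"source-{origin}"}]


def build_label_request(base_url, page_id, origin):
    """Construct a POST to /rest/api/content/{id}/label.
    Authentication headers would need adding for a real instance."""
    body = json.dumps(source_label_payload(origin)).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/rest/api/content/{page_id}/label",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Example (not sent anywhere here):
req = build_label_request("https://confluence.example.com", "12345",
                          "engineering-wiki")
```

Looping a function like this over a list of imported page IDs is exactly the sort of job a developer will spot as automatable, per the first point in this post.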

And while we're talking about it:

  • Keep your old repositories for at least 6 months, just in case. 
Transfer them to a portable hard drive that an admin locks in a secure cupboard if necessary, but don't "move and delete" because you'll regret it (even if no-one needs the backup you'll always be fretting that someone will).  If after 6 months (or whatever time frame you're comfortable with) you haven't needed the backup, get rid of it.  But in the meantime, keep them so that you can answer queries about "it doesn't look right" or "I'm sure we used to have more information on this in the old system" (see above) and so that you can do "idiot checks" to make sure you've got everything.  Pro tip: Every time you import something, move the original to a new location that mirrors the structure of the original location.  That way you can be sure everything's been imported.  If you can't realistically move it, mark it with (something like) an underscore at the beginning of the title.  This is also helpful when multiple people are importing things as it stops people importing the same thing twice.

Despite the fact that you're keeping your old repositories for a while:

  • Bring as much metadata over as possible, especially who last edited [whatever you're importing] and when.
When you create a page in Confluence, your name sits under the title as the creator.  This isn't useful for people who want to ask questions of whoever created the original content.  Pull the metadata from wikis or documents (manually if necessary) and add it to the page, preferably in a consistent location such as just under the title.

Having said that:

  • Consider adding smaller documents as attachments rather than extracting the contents. 
This will greatly reduce your import time and allow you to turn off your old system much more quickly.  You can then turn the attachments into actual pages over time if you want to.  I wouldn't advocate using Confluence as a document library because it's really very poor at that.  But in terms of speed, you can drag and drop multiple documents at once into the Attachments page (or the Attachments macro), and if you're pushed for time to remove things from the old system this will work as a temporary measure.

Finally, a couple of "human" issues:

  • Manually porting things over can be boring and a lot of people won't do it right because of this.  Only get people who really enjoy doing this kind of repetitive, finicky work, otherwise you'll spend huge amounts of time correcting the work of people who got bored 10 minutes after they started.
  • As soon as you can turn off the old systems, do it.  Or at least restrict access to them.  People are creatures of habit and lots of them will keep using the old systems until it's literally not possible.
  • It's going to take longer than you thought.  Take a deep breath, settle in for the long haul and don't get downhearted.  You can and will do this, and it can and will be a success.

Sunday 23 October 2016

The 5 Reasons Developers Don't Like Writing Documentation

A popular gripe amongst technical writers is that a lot of developers don't like writing documentation.  In fact, a small cottage industry of tool makers has sprung up specifically to help companies get developers to provide some form of documentation by making the process as easy and coding-like as possible.  If you've used tools like OpenAPI (formerly Swagger), JavaDocs or Jekyll then you'll be familiar with this kind of tool.  The aim of this cottage industry is to make writing documentation as much like writing code as possible, which means allowing users to focus as much as possible on the syntax and not the semantics.

These tools are all very useful, but whilst they may be efficient ways of creating certain types of documentation, they treat the symptom and not the cause.  A lot of (most?) developers don't like writing documentation.  Why not?

Having worked with my fair share of developers over the years, I've seen and heard many answers to that question, but they all reduce down to one of the 5 following reasons:

  1. It's boring
  2. It's difficult
  3. It's messy
  4. It's dangerous
  5. It's unmanly
Let's unpack these a bit.  I'm going to refer to "developers" below, and by that I mean "the developers who would say that this is one of the reasons they don't like writing documentation".

1. It's boring

Of all the reasons, this is the most obvious and overarching.  Developers find writing documentation to be a boring task.  Perhaps a more accurate word would be uninteresting: if you don't find explaining and teaching interesting, if you have no interest in understanding the mental perspective of a target audience, if you think formatting and writing styles are dull or unimportant, then writing documentation is not going to grab your attention.

2. It's difficult

This is equally obvious, but a lot less readily acknowledged, because people don't like to admit they can't do something.  This is especially true for that sub-set of developers who think that documentation is unmanly (see below) or who generally pity and/or mock non-developers as being somehow less intelligent than developers.  If you've worked in software for any length of time you'll know exactly the type of developer I mean.  The fact remains that many developers simply lack the writing skills to write documentation well, or if they have the basic skills they don't have the confidence to use them for fear of looking stupid.  In fairness that's not an attitude specific to developers; lots of people don't try to do things because they're scared of looking stupid, but if you've ever mocked someone, even gently, for being "just" a technical writer, you're probably not looking forward to a chance to demonstrate that you can't even remotely do their job.  Better to sneer than look dumb.

3. It's messy

A big part of the reason that documentation is difficult is that natural language is organic and messy and often follows illogical rules.  Developers have spent their careers, and often many years before, learning how to use languages that are clean, designed and above all logical.  Developers only have to worry about syntax, but semantics?  No.  No, no no.  Semantics can often seem to have no clear right and wrong, so for a developer who's used to the binary test of "compile/doesn't compile" or "passes unit test/doesn't pass unit test", writing in natural language for others to critique without having any real sense of what's correct and what's incorrect is a deeply terrifying thought.  The fact that only the very best and brightest end up working on natural language processing at the companies that are any good at it just solidifies the notion that writing in natural language is a one-way ticket to looking stupid unless you're brilliant at it.

4. It's dangerous

Not in the sense of physical danger (especially since everything's electronic now and you no longer have to worry about paper cuts or dropping a heavy manual on your foot - shout out to my old-school veterans!), but more in the sense that once you've written documentation people will a) associate you with that documentation and possibly product, and b) possibly want you to write more documentation, maybe even on other bits of code.  Worst case scenario: People see you as someone who can write documentation, so you'll have to do a lot more of it.  There are plenty of developers who would genuinely consider jumping out of the window rather than be the developer that people go to for documentation.  Writing documentation can lead to writing more documentation, and very soon that can lead to having a reputation for writing documentation.  This is not good, so it's best to be very bad at it or very awkward to get any documentation from.

5. It's unmanly

Correctly or incorrectly there is an attitude amongst some developers that writing documentation is not a manly occupation.  For those people who aren't developers this might seem at first sight to be a completely bizarre attitude, but whilst I think it's wrong, it's also not an attitude that comes out of nowhere.  For a start, men are vastly overrepresented in development (I don't need to link to the stats on this; Google will find you a large number very quickly) and women are vastly overrepresented in technical writing (refer to this article as an example of the trends in writing).  Whether this is cause or effect can be debated ad nauseam, but the point is that developers are much more likely to have worked with female technical writers and male developers.  The second reason is that technical writing requires a person to collaborate, think about the needs of others, step into the user's shoes and above all display empathy.  This sounds similar to what are known as the "caring professions" such as nursing, and as such is stereotypically seen as being more appealing to women than men.  It's the equivalent for some people of doctors and nurses; why be a nurse who has to change bandages, empty colostomy bags and give bed baths when you could be a doctor who gets to analyse, diagnose and fix people, maybe even save a life?  And notice that a nurse "has to" whereas a doctor "gets to".  There is an alpha-male competitiveness to many developers, and that doesn't fit well with putting the needs of other people first.

Entirely anecdotally, I've found that the female developers I've worked with over the years generally have no problem writing documentation, and the ones that do only have a problem because other (male) developers sometimes see writing documentation almost as a sign of weakness.  In some development teams writing the documentation can have almost as much of a gender bias as the old school "women make the tea" attitude, so female developers in that scenario would rather not write documentation as it panders to the idea that they're not "technical" enough to be a "real" developer.  Conversely, if I think of the 20 most vociferously anti-documentation developers I've met, they've all been alpha-male types.  Correlation is not causation, and all that, but I do find it a very interesting concurrence.

Every reason I've ever seen or heard for developers not writing documentation can be reduced to one of these 5 reasons.  I don't want to tar all developers with the same brush; many developers write documentation happily, and many of those do a great job.  Equally, I don't want to accuse all developers who don't like writing documentation of thinking that it's unmanly; I suspect this is a minority view that will only become rarer in the years to come.  If you think I've missed a principal reason, or you've got an opinion on which reason is the biggest barrier to developers writing documentation, add something to the comments below.

Sunday 16 October 2016

5 Ways to Measure Technical Writers

In the past I've talked about how measuring documentation is one of the hard problems of documentation and to assist in that I wrote a list of things you can measure about technical documentation, but now it's time to talk about measuring technical writers.

Obviously there are a huge number of contexts and situations a technical writer could be working in, so I'm going to focus on what this blog is about - agile documentation - and therefore look at a writer working in a scrum team.  Hopefully the general concepts and ideas will give you something to think about even if this doesn't describe the situation you're in.

An obvious problem with measuring a writer is that you can't use some of the simple development measures such as percentage of unit test or automated test coverage.  Developers can be measured by whether they've got the appropriate test coverage for their code, thus ensuring that at the very least, their new code doesn't break the existing code.  But the concept of testing documentation at anything other than the syntactical level is (currently) too complicated for a computer to perform, by which I mean a computer is not able to ascertain whether new documentation semantically "breaks" the old documentation.  (There is quite a lot to be said about the difference in semantic load between natural and artificial language, but I'll leave that for another time.)  So computer-driven test coverage isn't really going to tell you much, other than that the writer has written something.  We're not really interested in that, because it's a trivial measure that is only a small degree away from the reviled "how many words has the writer written" measuring mentality.  (If you don't know why being measured by number of words is a bad thing, Tom Johnson has an excellent post on the subject.)

This brings us to an important secondary question: If we're not interested in measuring absolute output, what are we interested in measuring?

Generally, we're interested in measuring productivity and quality.

There's an interesting article on measuring productivity where the authors talk about using an algorithm to calculate a normalized productivity score for each writer.  I really like this approach and there are a lot of useful ideas there that are worth investigating.  However, it's an approach that works best when writers are working on the same kind of things in the same kind of situations (although to an extent you can weight scores to balance this out a bit) and the algorithm is - naturally - specific to their situation.  So let's look at some generic ways you can measure writers that are more useful than "how many words they've written".  Not all of these will be right for your situation, but at least they should give you some ideas.

1. Lead time

A classic operations measurement, lead time is the time it takes a set of inputs to move through a process.  This can be weighted against the complexity of the task as measured in story points, although it does require documentation tasks to be measured against each other which might be difficult if each writer is working in scrum teams with significantly different types of work. An obvious objection to this is that teams should not be compared by story points; you can't say that a team that does 40 points of work in a sprint is less productive than a team that does 80 points, because estimation is a matter of comparison to other items in the sprint, not an absolute measure.  Nonetheless, if your writers are generally working on the same things - e.g. topics for a help file that covers work from all teams - you should be able to weight appropriately based on historical averages.  This will also help you spot information pipeline problems if a writer regularly has to take additional time to get information that other writers get as standard from their teams.
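The weighting described above can be sketched in a few lines.  This is an illustrative calculation, not a prescribed formula: lead time divided by story points gives a size-adjusted figure, and comparing that against a historical average shows whether a writer (or their information pipeline) is drifting.

```python
def weighted_lead_time(lead_time_days, story_points):
    """Lead time per story point: smaller means faster throughput
    once task size is accounted for."""
    if story_points <= 0:
        raise ValueError("story points must be positive")
    return lead_time_days / story_points


def against_historical(sample, history):
    """Ratio of a weighted lead time to the historical average.
    Values persistently above 1.0 suggest a pipeline problem worth
    investigating rather than a verdict on the writer."""
    average = sum(history) / len(history)
    return sample / average
```

So a 6-day task estimated at 3 points scores 2.0 days per point; whether that's good or bad depends entirely on what your own historical averages look like.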

2. Peer review scores

People make mistakes and documentation has a high semantic load, so this isn't a "100% or fail" measurement.  But if a writer regularly fails peer review for things which you would expect them to get right, such as proscribed terminology or incorrect formatting styles, this is a sign that there is a legitimate problem.  More concerning than these kinds of errors (because the writer just needs to learn the rules and follow them) will be errors of comprehension or explanation.  If the peer reviews reveal that a writer regularly writes unclearly or doesn't appear to understand the topic, you've got a problem that needs to be resolved.

3. Support tickets

Related to peer review scores, the number of support tickets that could have been dealt with in the documentation indicates how well the writer understands the user story that they're documenting.  There are myriad reasons that you could see increased support tickets around an area a writer was working on, so be very careful here.  Perhaps the subject matter is extremely difficult, perhaps 1 or 2 customers haven't read the documentation and are flooding support with tickets that they shouldn't, perhaps the user story didn't capture the user pain correctly, and so on.  However, if the support tickets simply indicate incorrect, incomplete or poorly written documentation then there is a de facto problem to be addressed.

4. Technical debt

Take a measure of the existing technical debt, to cover known missing, incomplete or out of date documentation that the writer is responsible for.  Measure this on a regular schedule that is short enough to catch a problem but long enough to allow for short term fluctuations such as holidays or sickness, such as once a quarter or every release.  You're looking for a trend that at a minimum doesn't go up.  This means that the writer is not making the technical debt worse.  Whenever a writer is working in a topic with known technical debt they should be able to improve it (as a general rule of thumb, not a cast-iron certainty) and the trend should go down over time.  If the trend stays level, the writer is either very busy or adding as much debt as they're paying off; if the trend is upwards then the writer is adding debt and you've got a serious issue to deal with before it gets any worse.
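The trend check above is simple enough to automate if you're recording debt counts somewhere.  A minimal sketch, with the function name and the first-versus-last comparison as illustrative choices (a real version might fit a regression line instead):

```python
def debt_trend(counts):
    """Classify a series of periodic technical-debt counts
    (e.g. one per quarter) by comparing first and last readings."""
    if len(counts) < 2:
        raise ValueError("need at least two measurements")
    if counts[-1] < counts[0]:
        return "down"   # debt is being paid off
    if counts[-1] > counts[0]:
        return "up"     # debt is growing: investigate soon
    return "level"      # very busy, or adding as much as is paid off
```

Whatever mechanism you use, the point is the direction of travel over several periods, not any single quarter's number.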

5. Commitment vs complete

Does the writer complete the documentation they've committed to at the beginning of the sprint?  There's an assumption here that your company's culture is such that the writer feels comfortable refusing to commit to a sprint if there's too much documentation, or not enough information about the documentation requirements, to commit, although most writers I know work in a dirty agile scenario, so maybe this is unrealistic.  However, if you can use this measurement, it will tell you about the writer's ability to estimate and their willingness to push through to get things done.  Of course, it might also tell you that the whole team struggles to complete, which is a different issue.  But it's important for a writer to be a "completer", so make sure you know whether your writer is getting things finished.

All 5 of these measures could be expanded on, but every situation is different and these are intended to be generic measurement points for agile writers.  All of them allow you to measure trends over time, which is essential for performance management.  But they're not the only things you can measure, so what other measurements do you find useful?  Share them in the comments.

Sunday 2 October 2016

TCUK16 Awards

The ISTC held their annual Technical Communication UK Conference at the Wyboston Lakes Executive Centre, Cambridgeshire, from September 13 - 15 this year.  I attended the gala dinner on the evening of the 14th, and whilst I was there the UK Technical Communication Awards winners for 2016 were announced.

I'm honoured and (genuinely) humbled to say that the ISTC have seen fit to award me the 2016 Best Procedural Communication award for the How to Scale Documentation series that I recently wrote on this very blog:

The judges' remarks were:

"Really well organised with appropriate and consistent details. The structure aids readability. Great typography choices, used to enhance the content, and to visually chunk information. Great use of space and colour in titles. The entry’s distinct conversational style works extremely well with the chosen subject and audience.

This entry is so effective because it makes everything seem so manageable. The user is guided through the series of articles and can be confident of finding explanations at the right level."

To say that I was surprised when the ISTC told me I'd won would be a huge understatement.  Up until that point my entire trophy collection consisted of a bronze medal in a judo competition when I was 16 and a plastic cup as a member of a league and cup double winning cribbage team when I was 18.  I didn't need a trophy cabinet so much as a trophy shoebox.  Now I've got a very handsome (and heavy) glass trophy that's got pride of place on the bookshelf in my office.  To say I'm happy doesn't begin to describe it.

Congratulations to all of the winners, especially Peter Anghelides as the winner of the Horace Hockley Award for Outstanding Contribution to the Profession.  You'll be able to see a list of all the winners and read the winning entries in December's issue of Communicator.  Also, my personal thanks go, in no particular order, to:
  • Elaine Cole, for guiding me through the evening with courtesy and commendable patience, as well as for putting me at ease before I had to go on stage in front of a room full of more or less sober peers to collect my award;
  • Alison Peck, for making a conference debutant feel so welcome by chatting to me all through dinner, despite everyone in the room wanting to talk to her;
  • Galyna Key, who couldn't attend the conference this year, but does such a wonderful job organising, promoting and running the UKTC awards.
And to everyone else I met at the Conference this year, thank you for making me feel so welcome.  It was a pleasure to meet so many dedicated, passionate and skilled technical communicators.  I hope to see you all next year!