Tuesday, 3 May 2016

How to Scale Documentation

Firstly, what does "scale" mean?

Scaling applies to systems and processes.  It is most frequently associated with computer hardware systems, networks and algorithms, although it applies to anything which can grow to accommodate more work.  As an example, a network is said to be scalable if it can handle an increased number of nodes and increased traffic without requiring anything more than perhaps some additional hardware to be bolted on.

In practice, if there are a large number of things (n) that affect scaling, then resource requirements (for example, algorithmic time-complexity) must grow more slowly than n² as n increases.
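To make that growth criterion concrete, here's a small illustrative sketch (the cost functions are invented for the example, not taken from any real process): a cost that grows linearly with n scales, while a cost that grows with n² does not, because doubling n quadruples the work.

```python
def linear_cost(n):
    # e.g. one review pass per document: cost grows in step with n
    return n

def quadratic_cost(n):
    # e.g. every document cross-checked against every other document:
    # cost explodes as n grows
    return n * n

# Doubling n doubles the linear cost but quadruples the quadratic cost.
for n in (10, 20, 40):
    print(n, linear_cost(n), quadratic_cost(n))
```

The linear process can absorb more work by adding proportionally more resource; the quadratic one cannot, which is exactly the "must grow more slowly than n²" rule above.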

Interesting.  Can you scale documentation?

Yes.  The processes that you use to generate and deliver your documentation can either be scaled - i.e. you can take on the production and delivery of more documentation with no loss of speed and quality - or they can't.  Where a computing process may need more disc space or processor power in order to scale, your documentation process might need more writers in order to scale.  That's fine: a process that can handle more work when you add more people is scalable.  A process that can't handle more work by adding people is not scalable.  (There are other things, like bandwidth, that you might need more of to scale your documentation; we'll talk about those a bit later on.)

Great, ok, like it.  What processes need to scale?

There are 3 documentation processes that need to be scaled:

  • Input (creation and maintenance)
  • Delivery (released to customers)
  • Feedback (bugs, change requests, stakeholder enhancements)
This can be visualised as a cycle: Input → Delivery → Feedback, looping back to Input.

That's it? Just those 3 processes?

Yup.  Those 3 processes cover the whole documentation lifecycle.  Everything that happens to documentation comes under one of those 3 processes.  The end of life of a piece of documentation - e.g. when a product is discontinued - is covered by Delivery: you deliver the last iteration of the documentation, place the documentation into the standard repository and then start work on something else.  In this specific case the cycle ends there as there will be no feedback.

Fair enough, sounds good.  Any high-level principles or rules of thumb to follow?

Only one:

                         You can't scale past a bottleneck.

This means that if there is a bottleneck in one of the 3 processes, your documentation won't scale.  Even if Input and Delivery both scale with perfect efficiency, and 99% of the Feedback scales perfectly, if that 1% causes a bottleneck then your processes don't scale.  It's all or nothing.  That doesn't mean you have to have an entire system set up to be 100% scalable from the start (that's unrealistic), but until all 3 processes scale completely, you haven't scaled your documentation.  Nature abhors a vacuum; scalability abhors a bottleneck.  Both will get filled extremely quickly.
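The "all or nothing" point can be sketched in a few lines.  The stage names come from the 3 processes above, but the rates are invented for the illustration: however fast the other stages run, the pipeline's overall throughput is capped by its slowest stage.

```python
# Hypothetical stage capacities, in documents per week.
stage_rates = {
    "input": 50,      # writers can produce 50 docs/week
    "delivery": 40,   # publishing pipeline can release 40 docs/week
    "feedback": 5,    # only 5 docs/week of feedback can be triaged
}

# The whole cycle moves only as fast as its slowest stage.
throughput = min(stage_rates.values())
print(throughput)  # 5 - feedback is the bottleneck
```

Adding more writers (raising "input" to 500) changes nothing here: the minimum, and therefore the throughput, stays at 5 until the feedback bottleneck itself is fixed.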

Are you quoting an Ancient Greek philosopher just to look clever?

Partly.  But there's a serious point to be made about bottlenecks, which is that they will fill up very quickly as you start to scale and thus prevent you delivering more documentation, even if you have more resources.  I'll say it again: you can't scale past a bottleneck.

Right, bottlenecks are bad, I understand, stop banging on about it...

As long as you're sure you get it. It's critical.

Critical, roger that.  Let's move on.  How do you make your processes scale?

We'll start looking at that in the next part.  For now, have a think about your own processes and try to identify where they sit in the Input > Delivery > Feedback cycle, and what bottlenecks you've got in those processes.  That'll help you apply what we're going to talk about to your own situation.

You're giving me homework?

I am.  Get on with it, I'll see you in part 2.