An occasional series looking at best practice for common documentation tasks and situations.
Writers who document software will invariably have to document a code snippet or two in their careers. This could range from the occasional line of CSS in a configuration guide every few years, right up to the full-on, nothing-but-code explanations of the full-time API documenter.
I'll be honest, this post isn't for the API specialists; it's for those who only occasionally have to format a code snippet, or who are looking for a better way. Writers who document APIs need to follow prescriptive standards and understand the code in order to provide clear, coherent information to their readers. Writers for whom code is something written by people who can't manage a natural language - you know who you are! (and you're very welcome here) - may, however, find formatting code snippets a little daunting.
(In case you've never formatted code before, it's not as simple as just changing it to COURIER NEW. Well, it can be that simple, but not if you want to do a proper job. Code snippets will be read by people who code, so it should look like code.)
Fear not, my friends, for there is a simple way to format code.
Your immediate option, if you have access to them, is to use the same tools as the developers. So for formatting SQL you open up a query tool like SQL Server Management Studio or dbForge Studio for Oracle, drop the SQL into a New Query window, and then copy and paste into your document. You can do the same with code if you have access to an IDE like Eclipse or Visual Studio.
However, the problem with industrial tools like these is two-fold: speed and cost. Industrial tools are not built to open in half a second 30 times a day; they're designed for heavy-duty coding, and that makes them less than rapid at opening. And although Eclipse is free, Visual Studio, SQL Server Management Studio and dbForge Studio are not. As I've discussed elsewhere, people should have the licences they need, and it doesn't seem a good use of your budget to spend large amounts of money on licences just so a writer can format code a bit quicker.
Besides, who wants to have to open a SQL application, and an IDE, and an HTML editor, and an XML editor, and a CSS editor several times a day? If only there was a lightweight, fast, free, easy to use application that would open these files with the odd extensions automatically and quickly, and just format the code for us!
Well, there is. It's called Notepad++ and seriously, it's awesome.
If you're not a particularly technical person, Notepad++ might have passed you by, but it is essentially the Swiss Army knife of coding, and it is free, lightweight, and very quick. It will format with ease whatever code file you give it, as long as the file has a recognised extension. Well, let me rephrase that: the list of programming language file extensions it supports is very large, and the chances of you working on one that isn't supported by Notepad++ are low. All you have to do is right-click a file and choose Open with Notepad++ - or set Notepad++ as the default program for opening the code files you need to format - and it'll do the rest.
Here's an example .xml file opened in standard Notepad:
And exactly the same file in Notepad++:
That is out of the box, default functionality. I created a new text file, wrote some XML in it, saved it, renamed the file to XML.xml, and opened it in Notepad++. All you have to do at this point is cut and paste the code into your document, and you're good to go.
I'll come clean with you on one thing: if you want Notepad++ to auto-indent code that hasn't already been indented, you will need to install a plugin. But that's not too tough an ask, especially if you mostly deal with existing, indented code. By default Notepad++ sets the TAB key to the standard indent for whatever language you're working in, so as long as you haven't got long screeds of unformatted code to work with, indenting it yourself is very easy. There is also a very active user community - Notepad++ is used by millions of people around the world - so if you have a question there's a good chance the answer is already out there, or someone will answer it for you if you ask on a forum.
If you've got another tool which you think is as good as or maybe even better than Notepad++, please share it in the comments.
Sunday 21 June 2015
Improving Documentation ROI: Part 3 - Financial Costs
In Part 1 we looked at what ROI is, and how to calculate it.
In Part 2 we looked at the broad picture of cost areas, specifically money and time.
In Part 3, we're going to look in more detail at reducing financial costs.
You may remember that at the end of Part 2 we talked about the broad areas in which you could reduce costs, and split them into direct and indirect costs:
1 - Reducing department costs (e.g. cost of tools)
2 - Reducing company costs (e.g. lowering the number of support calls)
3 - Improving department efficiency (e.g. single sourcing, homogeneous documentation suites)
4 - Improving company efficiency (e.g. centralised documentation management using taxonomies)
Direct costs (1 and 2) are those that relate to a specific product or team, whereas indirect costs (3 and 4) are spread over a number of products or teams. A cost may be direct in one situation or indirect in another. For example, if you have one product and you need to generate a help file for it, you can purchase a tool that will help you do that. That would be a direct cost. But if you have several products and you'll use the tool to generate help files for all of them, that would be an indirect cost.
You don't need to get too hung up on the difference between direct and indirect costs, but you should keep them in mind because they have the following important differences:
- Direct costs are typically easier to measure, but (sometimes) have less of an impact on ROI because their scope is smaller.
- Indirect costs are typically more difficult to measure, but (sometimes) have more of an impact on ROI because their scope is larger.
With both direct and indirect costs you can save 2 things: Time and money. As discussed in Part 1, every activity has an opportunity cost, which is the same thing as saying time equals money. So in this post we'll look at ways to reduce financial costs and/or improve the ROI of financial costs, and in the next post we'll look at time costs.
The financial costs for documentation - the things your company actually has to pay cold hard cash for - are normally pretty limited, whether they are direct or indirect. There are generic cost-saving measures that a company can employ, such as using video conferencing to save on travel costs, having enforced expense limits, and so on, but these don't apply to documentation any more than they do to other areas of your business. So we won't look at those, except to say that if your company is a bit profligate and you think the Documentation team can take a lead in reducing costs without sacrificing the quality of your output, go right ahead. Anything which improves your reputation and value to the company is a good thing.
The specific expenses for documentation, over and above the generic fixed costs like office space and computers, are normally Technical Writers and tools. As stated in Part 1, I'm not interested in helping companies reduce headcount, so that leaves tools as the one financial cost. Depending on what documentation you need to produce, you might have some or all of the following types of tools:
- Word processing (e.g. Microsoft Word, Adobe FrameMaker)
- Help authoring (e.g. MadCap Flare, Adobe RoboHelp)
- Screen shot capturing (e.g. TechSmith Snagit)
- Image editing (e.g. Adobe Photoshop, Corel PaintShop Pro)
- Technical illustration (e.g. Adobe Illustrator, Autodesk SketchBook Pro)
- Graphing and flowchart (e.g. Microsoft Visio)
- Screen recording (e.g. TechSmith Camtasia, Adobe Captivate)
The cost of these tools comes down to licensing. To reduce this cost, firstly you can make sure that you have all of, and only, the licences you need (you'd be surprised how many people just assume that they need the same number of licences, with the same level of support, every time the renewal comes up), and secondly you can look into free or cheaper alternatives. Now, I'm not saying that you have to take the cheapest option and ignore functional requirements. You'll prise my copy of MadCap Flare out of my cold, dead hands, but Photoshop? Do you really need that? The answer might be "Yes", but there are excellent free alternatives like Paint.NET and GIMP that provide large swathes of the functionality that Photoshop does without the burden of the cost. Similarly, do you really need Snagit when you can press PrtScn to capture whatever is on your screen (or ALT+PrtScn to capture just the application that has focus) and crop it in Paint.NET? Again, the answer might be "Yes", but it might be "No". And if there's one person on your team who legitimately can't do their job without Photoshop, but everyone else can, then you can still reduce your licence count down to 1, which will be a significant saving given the cost of some of these tools.
Reducing your licence costs can be seen not so much as an improvement in the return on investment as a reduction of the investment itself. But if, as Technical Writers, you are doing the same (or more) with less, that can also be seen as an improvement in the ROI. Either way, reducing the financial costs of documentation whilst maintaining productivity and quality is always something worth aiming for. Let's look at an example of this in budget terms (all item costs are examples):
Cost of 1 help authoring licence: £750 per year.
Cost of 1 image editing licence: £750 per year.
Cost of 1 screen recording licence: £400 per year.
Cost of 1 screen shot capturing licence: £100 per year.
You have 4 Technical Writers, all of whom have 1 licence for each of these 4 things.
Total licence cost: (750 + 750 + 400 + 100) x 4 = £8000 per year
Following a review of who uses what, you decide the following:
- 4 writers need the help authoring tool.
- 1 writer needs the paid-for image editing tool for occasional-but-critical advanced image editing. The other 3 writers only do basic image editing, which can be achieved with free tools without any loss of quality or speed.
- 3 writers need a screen recording tool for their documentation deliverables. 1 writer works on a product for which screen recordings are not required.
- No writers need a screen shot capturing tool.
This gives you the following licence requirements:
4 x help authoring tools @£750
1 x image editing tool @£750
3 x screen recording tools @£400
Total cost: (4 x 750) + (1 x 750) + (3 x 400) = £4950
This is a reduction of £3050, or a 38.125% reduction in financial costs. And it gets better: This cost reduction will apply every year until or unless circumstances change, so over the course of 3 years your company will spend £9150 less, simply by you performing a one-time review of your licences.
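If you want to sanity-check the arithmetic, or rerun it with your own numbers, the worked example above can be sketched in a few lines of Python. The prices and licence counts are the hypothetical figures from this example, not real tool costs:

```python
# Hypothetical figures from the worked example above - not real licence prices.
PRICES = {"help_authoring": 750, "image_editing": 750,
          "screen_recording": 400, "screenshot": 100}   # £ per licence per year

before = {tool: 4 for tool in PRICES}                   # every writer has every tool
after = {"help_authoring": 4, "image_editing": 1,       # post-review allocation
         "screen_recording": 3, "screenshot": 0}

def cost(counts):
    """Total yearly licence cost for a given allocation."""
    return sum(PRICES[tool] * n for tool, n in counts.items())

saving = cost(before) - cost(after)
print(cost(before), cost(after), saving)         # 8000 4950 3050
print(f"{saving / cost(before):.3%} reduction")  # 38.125% reduction
```

Swap in your own prices and allocations and the same two lines give you the saving for your team.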
Admittedly, the savings often won't be this dramatic, but this example is used because it is an entirely plausible scenario. In technical divisions there can often be a certain amount of "keeping up with the Joneses", which can lead to a culture where giving one person a licence for a tool is seen as an indication that they are more senior, more important, or working on something cooler. Sometimes this is true, but a lot of the time it's not - one person is given what they need to do the job, and then the whole team is given the tool to prevent grumbling. After a pretty short period of time, most people realise that they don't actually need the tool, so the money spent has essentially been wasted.
(There is an argument that an investment in something that keeps team morale up and unhappiness down is a valid investment, and I have sympathy with that. No-one wants to work at a company that won't provide free coffee because they're too tight to spend £20 a month on a tin of Nescafe. However, there is a difference between providing benefits that make the workplace a pleasant place to be, and wasting money to appease a few grumblers. Paying hundreds or thousands of pounds a year for licences to make sure everyone has access to the new toys, even if they never need them, is not a benefit, it's a waste of money.)
So, regardless of why people have got a licence for something, periodic reviews of your licensing will help you keep costs to the (sensible) minimum.
On top of the licences themselves, there is often a maintenance package attached. In some cases the maintenance package is an additional cost on top of the licence, but never provides anything you actually use. As an example, in many cases the maintenance fee solely provides access to 24-hour telephone support, which sounds great, but have you ever actually used it? How many problems have you had with your image editing tool that you haven't been able to solve through a combination of help files, forums, and Google? Probably not many. So if you're paying for support, have a think about whether you really need it, or whether just one of you could keep it as a backstop in case you ever do need to pick up the phone.
The flip side of this is that the support is often bundled with free upgrades, and that's something you'll want to keep. Also, from my own experience, a member of my team once had a sudden and complete motherboard failure on his computer. MadCap Flare, which we used day in, day out, is linked to the specific machine as part of the licensing, and although you can deactivate the licence on one machine and reactivate it on another, if the motherboard fails that isn't an option. I used the support we got as part of the maintenance package to phone MadCap, explain the situation and get the licence deactivated, and when my colleague's computer was up and running again an hour later after the motherboard had been changed, he just reinstalled Flare, entered his licence code, and he was good to go. That alone was worth paying the maintenance for (and fair play to MadCap, they answered the phone right away and sorted it out within minutes). So choose your path carefully, and don't go the cheap route unless you are legitimately confident that cutting maintenance will have no impact.
All of the above is mainly about reducing the investment. What about increasing the investment to get a better return?
This is going to be a very contextual decision, and one that needs to be made in terms of increasing your ROI. I can evangelise about the benefits of using a good help authoring tool instead of a word processing package all day, but the important question is: Can your investment provide a return that is greater than the cost?
Obviously your investment in a software application is unlikely to return a direct financial benefit, so you need to understand what benefit it will bring. Let's look at an example.
Your help desk gets 200 tickets a week, of which 10 tickets (5%) are questions about how to perform a particularly complex function in your application. It takes the help desk on average 15 minutes to talk the person through the function. 15 minutes x 10 tickets = 150 minutes (2.5 hours). It will cost you £400 to buy the screen recording tool that you can use to make a video that will provide this walk through, and 4 hours to make the recording and embed it in the help file, thus freeing up the help desk to deal with other tickets.
What is the ROI on this investment?
Assume a week to be 40 hours.
Assume the cost per week of a help desk consultant or a technical writer to be £600.
Cost of dealing with these tickets = 2.5 hours = 6.25% of a week = £37.50 per week
= £150 per month (assuming 4-week months)
= £1800 per year
Cost of the screen recording = £400 software cost + (4 hours = 10% of a week = £60) = £460
ROI over one year = (£1800 - £460)/£460 = 2.91 = 291%
So even where the initial investment is quite high relative to the short term cost of just dealing with the tickets, over a period of time the investment pays off well. On the minus side, the screen recording may not be suitable for all of the tickets that come in. But as long as 3 out of the 10 tickets are removed from the help desk workload each week, the ROI over a year will still be positive (I'll leave you to run those figures). On the plus side, you may be able to do this for other complex functions and save the help desk consultants even more time without having to spend any more money as you have already paid for the tool.
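As a sketch, the whole calculation can be parameterised by the number of tickets the video actually removes each week. All figures are the hypothetical ones from the example above, including the 4-week-month approximation:

```python
# Assumptions taken from the worked example above (hypothetical figures).
WEEK_HOURS = 40
WEEK_COST = 600          # £ per week for a help desk consultant or writer
TOOL_COST = 400          # £ for the screen recording tool
PRODUCTION_HOURS = 4     # time to make the recording and embed it in the help

def yearly_roi(tickets_removed_per_week, minutes_per_ticket=15):
    """ROI over one year of replacing help desk walkthroughs with a video."""
    hours_saved = tickets_removed_per_week * minutes_per_ticket / 60
    weekly_saving = hours_saved / WEEK_HOURS * WEEK_COST
    yearly_saving = weekly_saving * 4 * 12     # 4-week months, as above
    investment = TOOL_COST + PRODUCTION_HOURS / WEEK_HOURS * WEEK_COST
    return (yearly_saving - investment) / investment

print(f"{yearly_roi(10):.0%}")   # all 10 tickets removed -> 291%
print(f"{yearly_roi(3):.0%}")    # only 3 removed -> still positive (about 17%)
```

Running it with 3 tickets instead of 10 shows the investment still pays for itself within the year, which is the point made above.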
This is just one example of ways that an appropriate financial investment can help the documentation team improve their ROI.
Next up in this series: Reducing time costs.
Thursday 4 June 2015
Estimation and Inductive Logic
In the previous post we talked about estimating tasks using story points, and how measuring complexity was an exercise in relative estimation (a comparison of 2 things), rather than absolute measurement. Estimating complexity rather than time is the most difficult part for people to get their heads around, and it might help to understand a bit about how we use estimation in everyday life to provide some context.
In order to do this, I need to delve into some old-school logic. Stick with me, I promise this is going somewhere relevant.
There are two main types of inferential logic, and these are from the general to the particular (called deduction) and from the particular to the general (called induction).
Deductive reasoning is the process of creating a valid argument from 2 or more propositions. The classic deductive example is the syllogism:
All Greeks are men
All men are mortal
Therefore, all Greeks are mortal
The conclusion to this syllogism (all Greeks are mortal) is said to be valid, and is rigorously provable using symbolic logic - which we won't go into here for reasons of brevity, but it looks like maths and comes prior to maths; it is the entire basis on which mathematics stands. For ease of remembering: the logic that Sherlock Holmes uses to solve his cases is deductive - he has facts A, B and C, and deduces from them conclusion D. (It's not strictly accurate to say that of Holmes all of the time, but it's true often enough to be a useful way of remembering it.)
Inductive reasoning is the process of drawing general conclusions from specific examples. Unlike deductive reasoning, there is no certainty to inductive reasoning; it infers, or produces a probability, rather than producing a logically valid conclusion. This is the so-called problem of induction, which has been argued over by philosophers for a very long time. An example of inductive reasoning would be:
The sun has risen every day since I was born. Therefore it will rise again tomorrow.
In order to do this, I need to delve into some old-school logic. Stick with me, I promise this is going somewhere relevant.
There are two main types of inferential logic: reasoning from the general to the particular (called deduction), and reasoning from the particular to the general (called induction).
Deductive reasoning is the process of creating a valid argument from 2 or more propositions. The classic deductive example is the syllogism:
All Greeks are men
All men are mortal
Therefore, all Greeks are mortal
The conclusion to this syllogism (all Greeks are mortal) is said to be valid, and can be rigorously proved using symbolic logic. We won’t go into that here for reasons of brevity, but it looks like maths and comes prior to maths; it is the entire basis on which mathematics stands. For ease of remembering, the logic that Sherlock Holmes uses to solve his cases is deductive – he has facts A, B and C, and deduces from them conclusion D. (It’s not strictly accurate to say that about Holmes all of the time, but it’s true often enough to be a useful way of remembering it.)
Inductive reasoning is the process of drawing general conclusions from specific examples. Unlike deductive reasoning, there is no certainty to inductive reasoning; it infers, or produces a probability, rather than producing a logically valid conclusion. This is the so-called problem of induction, which philosophers have argued over for a very long time. An example of inductive reasoning would be:
The sun has risen every day since I was born. Therefore it will rise again tomorrow.
Now, barring some (literally) astronomically unlikely event, this is true, at least for the next 4 billion or so years. But it’s not true in the sense that the syllogism above is true. There is nothing that can change the valid conclusion of the syllogism; it is a matter of logic, and the “real world” can’t alter that validity. Even if there were no Greeks, or a Greek was found who was immortal, the logic of the statement would still produce a valid conclusion.
But the inductive statement about the sun rising could be false, or it could be made false by something happening. As an example of that, in Britain for a long time it was considered that a defining feature of a swan was that it was white. This was taken as just a fact of life – the sun is yellow, grass is green, swans are white. An inductive statement around that might be “I’ve never seen a swan, but if I do, it’ll be a white swan”. Perfectly reasonable, but also completely false, because it turns out there are black swans in other countries. Ultimately, the logic of a “true” inductive statement simply doesn’t provide a valid logical conclusion, even if the statement is factually correct every day of your life.
The important thing about inductive reasoning is that it is how we predict things in order to navigate around our lives. If I was asked how long it takes to get from Swansea to Cardiff, I would say “54 minutes”, because I know from experience that that is how long it takes the train to get from Swansea to Cardiff, or I might say “about an hour”, because I know from experience how long it takes to drive there. That is inductive reasoning.
All very interesting, but how does this relate to estimating story points?
We estimate things - the time it takes to get somewhere, the height of a person, the weight of a bag - based on inductive reasoning. I'll make that point again to be absolutely clear: We estimate using inductive reasoning. That means that estimation is a process of taking our previous experience and drawing logical conclusions from it.
It is therefore entirely natural to think that the process of "estimation" in sprint planning will use the same experience-based reasoning, regardless of whether we are estimating time or story points. After all, no matter what the system of measurement, you're using the same experience to make the estimation. This must mean that complexity and time are really correlates of each other, right?
Wrong.
When you are estimating story points, the process is one of comparison, NOT one of estimation. This is where people seem to get hung up. Even if they don't understand what inductive logic is, something feels wrong to a lot of people if they are asked to estimate something in complexity without correlating that complexity with time. There's a good reason for that: Estimation as an activity requires experience of previous situations from which you use inductive logic to make a prediction about the future. But when you are starting out in agile, you probably don't HAVE any previous experience in estimating complexity, but you DO have experience in estimating time. So it feels much more natural when you try to estimate a task to work out how long it will take you, instead of just how complex it is. Your mind gets pulled into the literal meaning of the word "estimation", but you're not estimating at all, you're comparing. This is a qualitatively different activity. You compare "against" (something else), you estimate "in" (a unit of measurement).
When you go into your next sprint planning, try swapping the word "estimate" with the word "compare". You may find that the process makes a lot more sense for you. And if it doesn't, at least now you know why!
Story Points + Estimation = Confusion
When teams are implementing agile for the first time, a lot of people struggle with the use of story points (sometimes called effort points) to estimate tasks. I've heard the following sorts of questions come up many times:
There are 2 obvious alternative scales - the actual Fibonacci sequence, or a sequence of consecutive numbers like 1 to 20 - but there are problems with both of them.
The actual Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13 and so on. The second "1" is not required when measuring complexity, so it's taken out of the possible values and 0.5 is used instead.
Using a sequential list, such as 1 to 20, means the difference in complexity between a 5 and a 6 is something you can argue about endlessly, because the steps are too fine-grained to distinguish at this stage of the process. The widening gaps of the pseudo-Fibonacci sequence avoid this: at the end of sprint planning there should be an obvious difference in complexity between the items with 5 story points and the items with 8 story points.
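The widening-gap argument is easy to see by printing the step between adjacent values on each scale. This is just an illustrative sketch using the two scales discussed above:

```python
# Gaps between adjacent values: pseudo-Fibonacci vs a sequential scale.
pseudo_fib = [0.5, 1, 2, 3, 5, 8, 13]
sequential = list(range(1, 8))

fib_gaps = [b - a for a, b in zip(pseudo_fib, pseudo_fib[1:])]
seq_gaps = [b - a for a, b in zip(sequential, sequential[1:])]

print(fib_gaps)  # → [0.5, 1, 1, 2, 3, 5] — gaps widen as complexity grows
print(seq_gaps)  # → [1, 1, 1, 1, 1, 1] — every step is equally fine-grained
```

Each step up the pseudo-Fibonacci scale forces a visibly bigger jump in complexity, which is exactly what makes the difference between a 5 and an 8 easy to see and agree on.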
A couple of notes about this:
Inspect and adapt depending on whether your team finds it useful to have all of these values as an option.
The temptation is to say "Our sprint velocity is x hours of work" or "Our sprint velocity is y work items". Waterfall uses time and the number of items left as important markers in the production process, and so it's tempting to keep using them when transitioning to Agile.
Don't do this.
Measuring how much time is left is like measuring how much fluid is left to drain out of a vessel. It's a meaningless measurement, unless you know the rate of flow. A litre of water could flow out of a vessel in seconds, or in days, depending on whether it's gushing or dripping. Likewise, measuring the number of items left is like measuring how much wood a carpenter has left whilst he's making something. Unless you know how quickly the carpenter can use that wood and what he's got to do to it to shape it appropriately, knowing how much he's got left won't tell you when he will finish, it will just tell you that he's not finished yet.
Story points abstract away the type of work that's being done, and just leave the complexity. This helps to remove prejudices and assumptions about how long different types of task take to complete. For example, most bugs might take, on average, 4 hours to investigate, fix, test and document, whereas most enhancements might take on average 8 hours. But "most" is not the same as "all". Anyone involved in software development who hasn't seen what superficially appeared to be a simple bug grow into a behemoth that took several people several days or more to find a viable solution for, hasn't been in software development for that long. The process of grooming will (hopefully) identify most bugs that might fall under this category, and as such they can be given an appropriate complexity, rather than people making the assumption that a bug = 4 hours and estimating it as such.
Story points provide a more accurate measure of how much work can actually get done in a sprint. The more sprints that are done, the more accurate the average measure of "total number of story points done in those sprints" / "number of sprints" becomes.
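That average is just the total story points completed divided by the number of sprints, recalculated as each sprint finishes. A minimal sketch, with invented sprint totals:

```python
# Running average velocity: total points done so far / sprints done so far.
# The per-sprint totals below are invented for illustration.
completed = [24, 35, 28, 32, 30, 31]

running_average = []
total = 0
for n, points in enumerate(completed, start=1):
    total += points
    running_average.append(total / n)

print(running_average)
# Early values swing around; later ones settle near the team's true velocity.
```

The first few averages are noisy, which is why the early sprints of a new team are best treated as calibration rather than commitment.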
A big part of the power of agile is the self-organising team. The team should, at its most fluent, operate almost as a gestalt entity, many minds combined to form something which is greater than the sum of its parts. Admittedly that's more of a theoretical goal than a practical option, but you shouldn't ever forget that the more harmonious and collective the team is, the more efficient and effective it will be. With that in mind, reality dictates that in any group of individuals there will be people who are more vocal than others, people who are more persuasive than others, and people who are more influential than others.
There is nothing wrong with this, but these people shouldn't be allowed to skew the decisions of others, except through the use of relevant facts. "The joint knowledge of many diverse individuals can outperform experts in estimation and decision-making problems", but if an influential person says "This item has a complexity of 5", then some of the group may agree with them simply because they don't want to disagree. Many minds provide many viewpoints, skills and insights, and it's important that these are all heard, otherwise the whole point of team planning is missed. The multiple inputs are the engine that drives accurate estimation, and conferring on the complexity score can prevent that.
It's important to understand that this lack of conferring is as much to do with forcing people who don't want to disagree to give an opinion, as it is to do with making sure that one or two people don't dominate the decision. Both sides need to move from dominant or submissive attitudes to assertive attitudes instead. But people being people, you will likely only get so far down that path before people's hard limits are reached. The talker will always prefer to talk, the silent will always prefer to stay silent, and the estimation process will suffer as a result.
Therefore agile best practice is not to confer on the specific story point number. The complexity can be discussed in relation to previously estimated tasks - "I think this is more complicated than item A because of x, y and z." - but when it comes to people holding up their planning poker cards, they should make their own choice. Your Scrum Master should be aware of this and facilitate the discussion accordingly.
Good question - if you don't know the average number of story points that the team has completed in previous sprints (your velocity), how do you know how many story points to commit to in the first sprint? As discussed above, story points are a measure of relative complexity. It is a fair assumption that your team won't be entirely made up of fresh-from-graduating staff with no real world experience, so the team should decide together how many story points they are willing to commit to in the first sprint. This decision should be based on the team's collective experience of development, testing and documentation (or whatever skills are used in the team). It is important that this decision is not made before planning, because you will be limiting yourself needlessly. Always remember the "Inspect and adapt" maxim; inspect the sprint backlog as you go through the planning meeting and adapt your decision about how many story points you're willing to commit to as you go.
Give your team a chance to work out their velocity by slightly under-committing in the first sprint, because you can always add more work in part-way through the sprint if you haven't added enough at the initial planning. Adding work into a sprint is always preferable to overcommitting and having to decide what to take out. The first sprint should be a calibration sprint and I wouldn't recommend planning a release at the end of it if you can avoid it. You are stepping into the unknown and you don't need that additional pressure. Obviously this advice is contingent on how supportive your internal stakeholders and management are, but if they don't understand why you'd rather not commit to a release at the end of the first sprint, they either don't understand Agile, or they're unrealistic.
Once you've completed your first sprint you can discuss the velocity at the retrospective, and work out if your story point estimation was a fair reflection of each work item's complexity. Take the results of these discussions into the planning session for sprint 2 and use them to determine an achievable story point number for the sprint.
Hopefully that will clear up some of the confusion you might feel about story point estimating. There is more to say about the difference between estimating and comparing, but I'll leave that for another post. If you've got any questions about story points or estimating, feel free to drop them in the comments and I'll do my best to answer them.
- Why are we estimating complexity?
- Why aren't we estimating time?
- Does complexity = time?
- Why are we using a Fibonacci sequence?
- Isn't this needlessly complicated?
Got your coffee? Right, let's crack on.
Overview
Story points are units of measurement used to measure the complexity of a task. The valid values are normally a pseudo-Fibonacci sequence, e.g. 0, 0.5, 1, 2, 3, 5, 8, 13, 25. The complexity of a task is measured during sprint planning using these story points. An initial small, high-priority task is selected from the release backlog by the team and given a story point value of 2. This task acts as the basis for comparison with the next task on the backlog.
The highest-priority task on the backlog is then discussed and, once everyone understands what has to be done, every member of the team decides what the complexity of the task is, relative to the first task with its 2 story points of complexity. This decision is made silently by each individual without conferring, and then at an agreed point (normally the Scrum Master saying "Is everyone ready?", or words to that effect), every member of the team displays their estimate at the same time. Usually this is done with planning poker cards that everyone holds up simultaneously. If everyone holds up the same card, the task is given that number of story points and added to the sprint backlog, and the team moves on to the next task on the release backlog. If team members have held up cards with different estimates on them, the team discusses the reasons for this, and then estimates again. This process continues until either all team members agree, or there is a large majority consensus.
Tasks are pulled off the release backlog in priority order and discussed and estimated (using the tasks that have already been estimated as the basis for comparison) until the sprint backlog is full. Previous sprints are used to provide an average number of story points that get completed in a sprint. Once this average number is reached the sprint backlog is considered full.
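The estimate-discuss-re-estimate loop described above can be sketched in code. This is a minimal illustration, not a tool; the round data and the 70% "large majority" threshold are assumptions for the sake of the example:

```python
# A minimal sketch of the planning-poker loop described above.
# The 70% "large majority" threshold is an illustrative assumption.
from collections import Counter

SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 25]  # pseudo-Fibonacci story points

def consensus(estimates, threshold=0.7):
    """Return the agreed value if a large enough majority picked it."""
    value, count = Counter(estimates).most_common(1)[0]
    return value if count / len(estimates) >= threshold else None

def estimate_item(rounds_of_cards):
    """Each round: every team member silently picks a card, and all cards
    are revealed at once. Repeat (with discussion between rounds) until
    consensus is reached."""
    for cards in rounds_of_cards:
        agreed = consensus(cards)
        if agreed is not None:
            return agreed
    return None  # still no consensus: discuss further, or split the item

# Round 1: estimates differ, so the team discusses and goes again.
# Round 2: four of five agree on 5, which clears the majority threshold.
rounds = [[3, 5, 8, 5, 2], [5, 5, 5, 5, 8]]
print(estimate_item(rounds))  # → 5
```

The silent-then-simultaneous reveal is the important part of the real process; the code only captures the "repeat until (near) agreement" shape of it.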
Why are we estimating complexity instead of time?
The most obvious and most asked question about story points: We know how many people we have, we know how many work hours are in the sprint, why don't we just estimate the time each task will take instead and keep going until the sprint is filled?
In short, it's because of the Planning Fallacy and the Dunning-Kruger effect. Lots can be said about this fallacy and effect, but the important point is that humans are really bad at estimating how long it will take them to do something. This is for all manner of reasons and holds regardless of whether the estimator is an expert or not. However - and this is probably the key to understanding why story points are used - humans are very good at comparing 2 things with each other, a process known as relative estimation. For example, if I ask you to estimate how tall George Clooney is, then you might guess 5 foot 10 inches. That estimate might be right or wrong (I have no idea). But if I show you a picture of George Clooney standing next to Brad Pitt and ask you who is taller, you'll say Brad Pitt. That's relative estimation, a.k.a. comparing 2 things, and it's something you're good at (see the next section for a more detailed explanation of relative estimation).
Aside from the Planning Fallacy and the Dunning-Kruger effect, time comes with its own social, political and company implications. A stakeholder will ask "Why will it take that long?", when the answer, to you as the expert who does this for a living, is obvious: Because it will. But that's not a very useful answer. Likewise, some people have a tendency to give the answer that they think the stakeholder wants to hear:
"When can this be done by?"
"Er....tomorrow?"
That doesn't happen with story points because a measure of complexity cannot be directly equated to a measure of time. Over a number of sprints the velocity of each individual sprint, assuming a settled team, should get closer and closer to the average velocity of all the sprints. If your average velocity is 30 story points, and your sprint length is 2 weeks, you can equate 15 story points of complexity with one week's work by the team. But "equates" is not the same thing as "equals". This is an average, and it is entirely possible that on average 20 points get done in the first week of the sprint and only 10 points in the second week (or vice versa). This could be due to any number of factors. The best you can do, whilst still being accurate, is to say that this specific team can achieve, on average, 30 story points of work in a 2-week sprint. That's your velocity.
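The "equates, not equals" point is simple arithmetic. The sprint totals below are invented, but the averages match the example in the paragraph above:

```python
# Average velocity "equates" 15 points to a week, but the actual weekly
# split inside any one sprint can be lopsided. Sprint totals are invented.
sprints = [28, 33, 30, 29]            # points completed per 2-week sprint
velocity = sum(sprints) / len(sprints)
print(velocity)                        # → 30.0 points per sprint

# 30 points per 2-week sprint *equates* to 15 points a week on average...
print(velocity / 2)                    # → 15.0

# ...but a real sprint might split 20/10 and still hit the same total:
week_1, week_2 = 20, 10
assert week_1 + week_2 == 30
```

The average is a property of the team over many sprints, not a promise about any particular week.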
Abstracting complexity away from time helps the team to be a "black box" - work items are added to the release backlog, and every sprint will slice off the top x number of story points. That is as detailed as stakeholders need to know. They might want to know more, but that's their problem, not the team's problem. Remember, the onus is on the team to be self-organising and responsible. That means that the stakeholders have to trust that the team will achieve their commitment in each sprint, all things being equal, and the team has to take responsibility for achieving that commitment. How the team achieves it is up to them, not up to the stakeholders.
(Occasionally a crisis will occur and you'll need to provide a patch to resolve a system-down or some other equally serious problem. In this kind of circumstance you will probably need to give a timeline for delivery, and, Agile theory aside, welcome to the real world. But this shouldn't happen very often, and the Agile theory should become your "real world" day in, day out. Agile is a way of working, not a luxury that you abandon when the unexpected occurs.)
What are we estimating the complexity against?
There is no objective yardstick against which story point complexity is measured, so a task that has a complexity of 2 story points for one team might have a complexity of 8 story points for another team. This is entirely different to units of measurement that do have an objective yardstick. For example:
A metre is the length of the path travelled by light in vacuum during a time interval of 1/299,792,458 of a second.
Either something is a metre or it isn't, depending on whether it meets that objective definition of what a metre is. But the same task can accurately be measured as 2 story points, 5 story points and 8 story points by 3 different teams. A metre is an absolute measurement. A story point is a relative measurement.
However, even for relative measurement you still need something against which you can measure an item. Otherwise the measurement of the item is not relative to anything. When estimating tasks for a sprint backlog, a bootstrapping process is used to provide the relative measurement. A single item (A) is chosen, given a story point number, and used as the measurement against which the next item (B) on the sprint backlog is measured. A and B are then used as measures of complexity to measure item C against, and so on. So how do you choose item A, and how complex should it be?
The first item chosen should be one of the least complex tasks on the backlog that is in a priority position that means it will be done in this sprint. It shouldn't be the smallest possible task, because that doesn't give you any leeway if you come up against a task that turns out to be even less complex. But it should be simple enough that the team can agree on a complexity of 2 story points for it. Why 2? Because that should be simple enough for there to be a decent amount of certainty about what is entailed and how difficult that will be, but it also leaves you 0.5 and 1 on your pseudo-Fibonacci sequence if you find tasks which are less complex.
As an example, you might pick a work item (let's call it D) that requires you to add a column to a database table, add a text field to a form, and link the column to the field. If the Product Owner has done their job correctly, the acceptance criteria should already include information about field type, length, nullable or not nullable, and so on for the database column, and the location where the field should be added to the software, along with the name, max characters, alpha, numeric, or alphanumeric, etc. The team should have groomed item D and made sure all of this information was available and ready for planning. Because this work is not complex, item D should have a low number of story points, but it is possible to think of an item that is even less complex - say, adding a static value to a database field - for which you'll need the lowest number of story points. Therefore give item D 2 story points, add it to the sprint backlog and start estimating the highest priority item on the release backlog. Item D can then be used as a marker against which the complexity of the next item can be relatively measured.
So isn't complexity just another way of saying how long it will take?
No. There is a loose correlation between complexity and time, in the sense that the Large Hadron Collider took a lot longer to build than a microscope, but it doesn't have to be so. The fastest 147 ever recorded in professional snooker took Ronnie O'Sullivan 5 minutes 20 seconds, whereas the slowest took Cliff Thorburn over 30 minutes. The outcome was the same (a maximum break), and to you or me the complexity would be the same (very, very high). If the task took less time for O'Sullivan, that was because he is the greatest snooker player ever to pick up a cue, whilst Thorburn, though a world champion in his own right, is not. The time in which O'Sullivan compiled his 147 was a function of his ability, experience and working style (O'Sullivan's nickname is "The Rocket", whereas Thorburn's was "The Grinder"). The time was not a function of the complexity of achieving a 147. This is why you can't compare the velocity of 2 different teams, or even of the same team after one or more personnel changes. Different skills, experience levels, personalities and working styles combine in a team to generate a unique speed of working that can't and shouldn't be judged against the speed of working of another team. This is especially true when the teams are not working on exactly the same product and problems. Complexity is a measure of how complex a work item is. Time is a measurement of...well, time. They are not the same thing.
Why do we use a pseudo-Fibonacci sequence?
There are 2 obvious alternative scales - the actual Fibonacci sequence or a sequence of consecutive numbers like 1 to 20 - but there are problems with both of them.
The actual Fibonacci sequence is 0, 1, 1, 2, 3, 5, 8, 13 and so on. The second "1" is not required when measuring complexity, so it's taken out of the possible values and 0.5 is used instead. (Larger values are also commonly rounded - 20, 40 and 100 rather than 21, 34 and 55 - which is why the sequence is only "pseudo" Fibonacci.)
With a sequential list, such as 1 to 20, the difference in complexity between a 5 and a 6 is something you can argue about endlessly, because it's too fine-grained to pin down at this stage of the process. The difference between a 5 and an 8, by contrast, is much easier to see. At the end of sprint planning there should be an obvious difference in complexity between those items with 5 story points and those items with 8 story points, and the gaps in a Fibonacci-style sequence make that difference visible.
A couple of notes about this:
- 0 can be used for non-productive tasks that need to be recorded.
- 0.5 is not always used; some teams find it useful, others don't.
- Some teams use a task with 2 story points as their baseline first task, some use a task with 1 story point. Whether you pick 1 or 2 really depends on how often you expect to need 0.5 and 1 for less complex tasks.
- If an estimate is agreed to be 13 or higher, some teams take this as a sign that the task has not been sufficiently broken down to the level of granularity that is required for a task. The task needs to be groomed into smaller chunks before it can be estimated.
Inspect and adapt depending on whether your team finds it useful to have all of these values as an option.
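To make the notes above concrete, here is a minimal Python sketch of one commonly used planning poker scale. The exact values are an assumption - as noted, teams vary on whether they include 0, 0.5, or values above 13 - and the helper names are illustrative, not part of any standard:

```python
# One commonly used pseudo-Fibonacci planning poker scale (an assumption;
# adapt the values to whatever your team actually finds useful).
SCALE = [0, 0.5, 1, 2, 3, 5, 8, 13, 20, 40, 100]

def snap_to_scale(raw_estimate):
    """Snap a raw complexity estimate to the nearest value on the scale."""
    return min(SCALE, key=lambda v: abs(v - raw_estimate))

def needs_grooming(points, threshold=13):
    """Flag items at or above the threshold as too coarse to estimate;
    they should be broken into smaller chunks before planning."""
    return points >= threshold
```

For example, `snap_to_scale(6)` lands on 5, and `needs_grooming(13)` is true - mirroring the note that a 13-or-higher estimate is a sign the task needs breaking down.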
How do we use story points to calculate velocity?
The temptation is to say "Our sprint velocity is x hours of work", or "Our sprint velocity is y work items". Waterfall uses time and the number of items left as important markers in the production process, and so it's tempting to keep using them when transitioning to Agile.
Don't do this.
Measuring how much time is left is like measuring how much fluid is left to drain out of a vessel. It's a meaningless measurement, unless you know the rate of flow. A litre of water could flow out of a vessel in seconds, or in days, depending on whether it's gushing or dripping. Likewise, measuring the number of items left is like measuring how much wood a carpenter has left whilst he's making something. Unless you know how quickly the carpenter can use that wood and what he's got to do to it to shape it appropriately, knowing how much he's got left won't tell you when he will finish, it will just tell you that he's not finished yet.
Story points abstract away the type of work that's being done, and just leave the complexity. This helps to remove prejudices and assumptions about how long different types of task take to complete. For example, most bugs might take, on average, 4 hours to investigate, fix, test and document, whereas most enhancements might take on average 8 hours. But "most" is not the same as "all". Anyone involved in software development who hasn't seen what superficially appeared to be a simple bug grow into a behemoth that took several people several days or more to find a viable solution for, hasn't been in software development for that long. The process of grooming will (hopefully) identify most bugs that might fall under this category, and as such they can be given an appropriate complexity, rather than people making the assumption that a bug = 4 hours and estimating it as such.
Story points provide a more accurate measure of how much work can actually get done in a sprint. The more sprints that are done, the more accurate the average of "total number of story points done in those sprints" / "number of sprints" becomes.
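That average is simple arithmetic. As a minimal sketch - the sprint totals below are hypothetical, not from any real team - the velocity calculation looks like this:

```python
def average_velocity(points_per_sprint):
    """Average velocity = total story points completed / number of sprints."""
    if not points_per_sprint:
        raise ValueError("need at least one completed sprint")
    return sum(points_per_sprint) / len(points_per_sprint)

# Hypothetical story point totals from four completed sprints.
completed = [21, 18, 24, 19]
print(average_velocity(completed))  # 20.5
```

Each completed sprint adds another data point, so the average becomes a steadily more reliable guide to how much the team can commit to in the next sprint.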
Why can't we confer when we decide on the complexity?
A big part of the power of agile is the self-organising team. The team should, at its most fluent, operate almost as a gestalt entity, many minds combined to form something which is greater than the sum of its parts. Admittedly that's more of a theoretical goal than a practical option, but you shouldn't ever forget that the more harmonious and collective the team is, the more efficient and effective it will be. With that in mind, reality dictates that in any group of individuals there will be people who are more vocal than others, people who are more persuasive than others, and people who are more influential than others.
There is nothing wrong with this, but these people shouldn't be allowed to skew the decisions of others, except through the use of relevant facts. "The joint knowledge of many diverse individuals can outperform experts in estimation and decision-making problems", but if an influential person says "This item has a complexity of 5", then some of the group may agree with them simply because they don't want to disagree. Many minds provide many viewpoints, skills and insights, and it's important that these are all heard, otherwise the whole point of team planning is missed. The multiple inputs are the engine that drives accurate estimation, and conferring on the complexity score can stall that engine.
It's important to understand that this lack of conferring is as much to do with forcing people who don't want to disagree to give an opinion, as it is to do with making sure that one or two people don't dominate the decision. Both sides need to move from dominant or submissive attitudes to assertive attitudes instead. But people being people, you will likely only get so far down that path before people's hard limits are reached. The talker will always prefer to talk, the silent will always prefer to stay silent, and the estimation process will suffer as a result.
Therefore agile best practice is not to confer on the specific story point number. The complexity can be discussed in relation to previously estimated tasks - "I think this is more complicated than item A because of x, y and z." - but when it comes to people holding up their planning poker cards, they should make their own choice. Your Scrum Master should be aware of this and facilitate the discussion accordingly.
How do we work out how many story points we can fit into the first sprint?
Good question - if you don't know the average number of story points that the team has completed in previous sprints (your velocity), how do you know how many story points to commit to in the first sprint? As discussed above, story points are a measure of relative complexity. It is a fair assumption that your team won't be entirely made up of freshly graduated staff with no real world experience, so the team should decide together how many story points they are willing to commit to in the first sprint. This decision should be based on the team's collective experience of development, testing and documentation (or whatever skills are used in the team). It is important that this decision is not made before planning, because you will be limiting yourself needlessly. Always remember the "Inspect and adapt" maxim; inspect the sprint backlog as you go through the planning meeting and adapt your decision about how many story points you're willing to commit to as you go.
Give your team a chance to work out their velocity by slightly under-committing in the first sprint, because you can always add more work part-way through the sprint if you haven't added enough at the initial planning. Adding work into a sprint is always preferable to overcommitting and having to decide what to take out. The first sprint should be a calibration sprint, and I wouldn't recommend planning a release at the end of it if you can avoid it. You are stepping into the unknown and you don't need that additional pressure. Obviously this advice is contingent on how supportive your internal stakeholders and management are, but if they don't understand why you'd rather not commit to a release at the end of the first sprint, they either don't understand Agile, or they're unrealistic.
Once you've completed your first sprint you can discuss the velocity at the retrospective, and work out if your story point estimation was a fair reflection of each work item's complexity. Take the results of these discussions into the planning session for sprint 2 and use them to determine an achievable story point number for the sprint.
Hopefully that will clear up some of the confusion you might feel about story point estimating. There is more to say about the difference between estimating and comparing, but I'll leave that for another post. If you've got any questions about story points or estimating, feel free to drop them in the comments and I'll do my best to answer them.