Code Reviews that might actually work

I've had an opportunity to be part of a team doing a lot of greenfield development on a new codebase at a client recently, and it's been a lot of fun. The client already has a codebase that's grown organically over a decade to meet the changing and complex needs of a highly successful company, and is in surprisingly good shape considering. Still: it's huge, incorporates several competing implementations of The One True Programming Style, the occasional flash of mad genius, and a lot of code that was written by very dedicated developers working very hard to make very tight deadlines.

The new codebase shares none of the constraints of the old one, and the team is keen to keep things as pristine as possible as long as possible. One of the best tools in our arsenal is the enforced code review. New code entering the codebase needs to have been reviewed, no exceptions - and the goal is that most of the team reviews each piece. So far it's working out spectacularly well.

LinkedIn tells me I've been working on teams that have tried to incorporate code reviews with varying degrees of success for almost 6 years now, sometimes in larger teams that were sat in the same office, sometimes in smaller ones that were internationally distributed, and sometimes when I've paid external developers to look at code I've been writing by myself for clients.

So, here's what seems to work:

Formally make time for it

Few people seem to enjoy code reviews. There's the mental effort of understanding what someone was trying to achieve, the cognitive load of understanding how a piece of the system you're not working on is meant to fit together, and it takes time away from the joyful process of actually programming.

Conveniently, people will often stop harassing you to do them if you claim to be too busy. This has the downside that with the best intentions, code reviews stop getting done, or just get cursory glances and rubber stamps.

So we've instituted Review Time: 30 minutes in the morning before the standup when, if someone asks you to do a code review, you're obliged to do it as your top priority. If you have a piece of work outstanding that needs review before you can merge it in, you know that your team mates are available to get that done for you, right then.

As tickets in our issue tracker can't be marked complete until they've been reviewed, developers bug other developers to get the reviews done, and have a time of the day when they know no-one can claim to be too busy...

Have a time of day when code reviews are everyone's top priority

Get code reviewed by as many team members as possible

Code reviews have several collateral benefits that mean the more team members who review a given piece of work, the better.

Firstly, it's a great way to make sure development isn't happening in personal silos. If you have an "ActiveMQ guy" and a "Stats Engine lady", you're going to find yourself in trouble when the AMQ Broker loses its breakfast while the AMQ guy is on holiday, or when there's a lot of Stats Engine code to be written, and the Stats Engine lady is snowed under with other tasks. It also lets other developers identify which parts of the system aren't obvious to them, and thus where and how more documentation is needed.

Secondly, code reviewing is a great opportunity for mentoring within the team. A senior developer reviewing a junior's work is going to be able to suggest avenues for improvement the junior might not have thought of - use of a particular library or feature, or areas where there are subtle bugs or side effects.

Equally, a junior reviewing a senior's work may get exposed to ideas and techniques that are non-obvious to them - whether that's a house style, a useful idiom, or a workaround for a problem they weren't even aware of. They're also more likely to ask disruptive questions that may not have a good answer about why the senior developers are doing things a certain way - it's too easy as a senior developer to assume that other senior developers have good reasons for some of the stranger code they write, and not question them...

When you consider the collateral benefits, getting code reviewed as widely as possible in the team makes a lot of sense

Use a review tool, and add accountability

Use a Code Review tool - it adds accountability, encourages detailed reviews, and archives useful commentary (we're using Review Board).

As we discussed above, the actual process of writing code reviews isn't always that much fun, and there's a temptation for it to turn into rubber stamping - "uh, yeah, that looked fine" isn't usually a useful review. The act of putting your name on a review and saying "Ship It", knowing it'll be saved in the review system for all time, can help focus the mind a little.

If the code breaks and the original developer isn't around, it's easy to look up which developers reviewed the code, and thus should be able to answer questions on it. This tends to make developers much keener to do a thorough job in the code review - both in making sure they understand the code, and in being confident the code is well-tested and of high quality.

My experience has been that code reviewing also becomes a lot easier with a decent tool - you can annotate pieces of code, start discussions with other reviewers and the original author in one place, and easily track improvements to the code as a result of the review.

Finally, it archives useful discussion. While ideally everyone would be producing and keeping up-to-date formal documentation of their architecture, style, and implementation decisions, the real world doesn't always work like this. Being able to drop into git blame to see which commit some changes were made in, and then being able to go and read the discussions about it at the time, can give you much better insight on why certain things were done a certain way, even when the documentation is a little thin.

Use a review tool. Really.

Keep a checklist of things to review

Make sure the team knows what they need to be looking for, at a minimum. Code quality and coding standards need their own article, but some pointers on what people should be looking for not only help inexperienced reviewers, but also help give the original developer an idea for what they should be aiming at.

As a starting point, perhaps consider:

  • Code needs to be tested, and reviewers should be able to get the tests passing on their machine without help from the original developer
  • Enough documentation should be included that the reviewer should be able to explain how the code works, and why it was written in that way
  • House style should be followed: no crazy new indentation style, database table naming conventions, or using $thingy as an identifier...

Decide and document as a team what's important to review

Avoiding drama

Programmers are rarely entirely egoless. Whether it's the senior developer who's been trying to get HR to change his job title to Scientist Guru, or the junior developer who avoids asking questions because they don't want to look ignorant, people can be quite protective of their work and defensive when someone suggests they've done it wrong - especially if that work was particularly arduous to produce.

Decide beforehand as a team how anal you want to be about things like code style (suggestion: dial it up to 11), what a useful level of automated testing and documentation looks like, and where (for example) certain tasks fall in the MVC split. If a reviewer (or a developer) can appeal to the higher authority of the team's best practices document, it can help to defuse situations where one party feels they're being unfairly targeted.

The more of the team involved in a discussion, the more the discussion becomes about how the team wants to progress rather than about individual personalities. Don't allow senior developers to pull rank (and if you are one, try and resist the temptation) - if someone can't explain their decision to the rest of the team - regardless of the technical ability of the other team members - it's often a good sign that they're clinging to a decision they made from an emotional rather than a professional perspective.

A useful way forward with differences of technical opinion is to try and agree on a general principle that the team should be following, codify it, and see if it can be applied to the situation at hand.

Recognize that code reviews can be drama flashpoints, and plan around that as a team

Some final thoughts...

If you're working well together as a team, code reviews should bring up minor issues, rather than major issues - sensible standups help you avoid nasty surprises like big architectural decisions gone awry, and general agreement and team buy-in about what constitutes good work gives everyone a standard to work towards.

Most teams have someone who can help resolve 'tie-breaks' where the "right way" comes down to a judgement call - whether that's the Technical Lead on the project, or, if they're one of the disagreeing parties, a senior dev from another team. Simply the process of explaining it to a third party is often enough to make the right way forward clear to all involved.

Split up code for review into chunks that are logically useful, rather than chunks that necessarily correspond to particular tickets or commits to the source repository. If your code review tool takes git commits, judicious use of temporary branches, cherry-picking, rebasing, and the path separator (eg: git diff master..mybranch -- foo.txt bar/ lib/foo.pm) can be helpful.

In Summary

Code reviews are a great way of keeping code quality high, intra-team mentoring, and making sure everyone in the team is familiar with the wider codebase. Key points:

  • Have a time of day when code reviews are everyone's top priority
  • When you consider the collateral benefits, getting code reviewed as widely as possible in the team makes a lot of sense
  • Use a review tool. Really.
  • Decide and document as a team what's important to review
  • Recognize that code reviews can be drama flashpoints, and plan around that as a team
  • Split code in to the most useful chunks for review, rather than being beholden to your SCM or issue-tracking tool

Scrum, The Good Bits: The Backlog

or Backlogs, accountability & the look on a Product Owner's face when their New Feature pushes another Feature over the deadline

This article is one of a series on Scrum, The Good Bits.

Scrum has this habit of giving new names to old ideas - status meetings become Daily Standups, requirements become Stories, and the list of things you need to do becomes a Backlog. Every so often someone discovers this and decides it means Scrum is bullshit, and loses the baby as the bath water swirls away.

This is a shame, because Scrum has a whole bunch of good ideas. And some of the best ones - from a developer's perspective - come from the Backlog.

The Problem We're Looking to Solve

I had a contract once where we were end-of-lifing a platform while the client redid the whole thing in Java. The platform was in constant use, however, and we had to keep adding new features (that were ultimately going to get thrown away). Not ideal, but not dreadful.

The other developer was not an easy man to work with, but he was a hard worker and a bright and (technically) talented guy. He was very blunt and far from tactful when dealing with people. But when the internal customer wanted something done, he got right on it and gave it his best shot. Despite this, he'd gotten himself a reputation for being lazy and disorganized.

His internal customer was a Creative, and when he needed something done, he needed it done right now. Even if it was a two-week project, it needed to be done NOW. The developer dutifully obliged - he'd drop whatever else he was doing and get to work on it.

Predictably, nothing got delivered. And because there was no good system for tracking where development time was going, he could also never remember, beyond "Well I was working on it, and then you asked me to work on something else". Everyone's the hero of their own story though, and the Internal Customer was sure it hadn't happened quite like that, and he was the boss, so when things didn't get delivered, it was the developer's fault.

Inevitably in software development, items in current development get pushed down in favour of fire fighting. This is especially true in small teams, and also one of the things that Scrum is meant to protect against.

Enter: The Backlog

The backlog is a Todo list with five very very important attributes that differentiate it from a simple list of tasks.

Firstly, every item on there has an estimate in terms of complexity. The estimates don't have to be perfect, but each is roughly sized in relation to the others, so you know Task A is about twice as hard as Task B. This is useful.

Secondly, the backlog has an order. And items on the backlog must be tackled in the order they're in - no jumping around and working on something further down the list, unless the backlog is reordered, and this is recorded. This is quite important.

Thirdly, the backlog is prominently displayed, so everyone can see it, and everyone can see when it changes, and the effect those changes have. This list is usually called the "Task Board". This is important.

Fourthly, the Product Owner or Customer is solely in charge of the order. Not only are they in charge of it, they're responsible and accountable for it. Yes they should be taking advice from the developers and Project Manager about a sensible way of approaching things, but ultimately, the order in which tasks are tackled is their responsibility. This is very important.

Fifthly, the backlog should have a deadline. This is the most important part.

Traditionally Scrum has three-week deadlines called Sprints. Display your backlog with a deadline, and show which items will get done before that deadline and which after, based on their estimates and the development team's capacity. Whenever a new item is added inside the deadline, show the items it displaces - the ones that now fall outside the deadline.
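A sketch of that calculation, in Javascript, with illustrative names - all it needs is the ordered backlog, the estimates, and how much the team can get through before the deadline:

    // A sketch only: the backlog is already in the Product Owner's priority
    // order, and `capacity` is how much the team can deliver before the
    // deadline, in the same units as the estimates. Names are illustrative.
    function splitAtDeadline(backlog, capacity) {
      const inside = [];
      const outside = [];
      let used = 0;

      for (const item of backlog) {
        used += item.estimate;
        (used <= capacity ? inside : outside).push(item);
      }
      return { inside, outside };
    }

    const backlog = [
      { name: 'Checkout rewrite',                 estimate: 8 },
      { name: 'Gift message printing',            estimate: 5 },
      { name: 'New "absolutely crucial" feature', estimate: 8 },
      { name: 'Reporting dashboard',              estimate: 13 },
    ];

    console.log(splitAtDeadline(backlog, 30).outside);
    // => the Reporting dashboard, which was inside the deadline until the
    //    "absolutely crucial" feature was added above it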

The sight of a Product Owner given pause for thought when they see a new feature that's "absolutely crucial" pushing another "absolutely crucial" feature outside of the deadline is one of the most beautiful moments in software development, and this majestic moment is brought to you directly by the five important attributes above.

When someone comes asking for a new feature, a task, or some other drag on the developers' time, the developers can insist it goes on the backlog and is prioritized, where its effects are obvious to everyone.

Feature creep and out-of-band work requests now have immediate, obvious consequences on the delivery date, and these are immediately communicated to everyone (because you displayed the backlog prominently, as per point three, remember?).

Burndown

Burndown is fancy Scrum-speak for "what the developers have been working on, and how long they have left". The backlog makes a great place for this, and the daily standup an ideal time.

Each day, the developers say what they've worked on, and update the estimates on tasks to reflect the percentage they've completed, and if they've finished working on a task, they say which task they'll work on next.

Once everyone's done, the Project Manager (or anyone, really) shows what effect this has on which tasks are inside or outside the deadline.

This keeps the Customer or Product Owner happy, because they can see that people are tackling items in the order that's most important to them.

This keeps the Project Manager happy, because it uncovers unpleasant surprises about what people are working on, how well they're working on it, and what's actually going to be delivered. If something is causing a task to take much longer than the estimate, there's instant (well, within 24 hours) feedback, and a chance to address this.

And it keeps the Developers happy because they can show they're working on the right things, day in and day out, without having to explain at the end of six months why it looks like they're late delivering. Responsibility - and accountability - has been put back in the hands of the business.

Communication is Everything

As I'll never get tired of saying, Scrum is all about aiding communication - it's about stopping unpleasant surprises from becoming nasty surprises, and it's about estimates that work for the business and developers alike.

The backlog is an inspired communication tool for showing the status of a project to the whole team, for showing the developers what the customer (internal or otherwise) really cares about, and for demonstrating the effects of ad-hoc work.

For developers its power is in making their lives easier by allowing them to get on with the work they want to be doing with minimum distraction, and maximum visibility.

Scrum, The Good Bits: Daily Standups

This article is one of a series on Scrum, The Good Bits.

One of the central principles of "Scrum that doesn't suck" is to maximize useful communication. If you load too much process on top of that, you actively inhibit communication. In theory, daily standups are a communication tool. The general process of daily standups is pretty simple:

  • Everyone gathers around the Todo list (Project Backlog) at the beginning of the day
  • Everyone takes turns to say: which task they worked on yesterday; which they'll work on today; how close to completion each task is (Burndown)
  • People also raise risks and impediments that are slowing them down
  • No-one talks for more than a couple of minutes

Developers and Project Managers find almost infinite ways to screw up this simple process, from treating Burndown as a measure of how hard people are working, to allowing deeply technical debate to break out in the middle of the meeting while everyone else stands around getting bored, to squabbling about whether people are attending the meeting as Chickens or Pigs (seriously, people actually do this).

I'm going to talk about what I think are the two most useful types of communication you get from the daily standups, and the minimum viable process needed to accomplish these. Specifically, we're going to talk through:

  • Avoiding nasty surprises
  • Increasing collaboration

Avoiding Nasty Surprises

Software development is full of surprises - often unpleasant ones. What turns an unpleasant surprise in to a truly nasty surprise is when it's too late to fix it or do anything about it.

When a given task is a lot more complicated than anyone thought, that's an unpleasant surprise. When the developer working on it has kept quiet about that, and they're three weeks in - and only halfway through - a task that was meant to take a day, and you're releasing at the end of this week, that's a nasty surprise.

When a junior developer has decided they need to build a whole new framework to solve a relatively simple problem, and they've started building it instead of fixing the problem, that's an unpleasant surprise. When they're a week into it, and have been making difficult-to-undo changes to support it, that's a nasty surprise.

When your remote developer can't access the issue tracker, and so has been ignoring bugs, that's an unpleasant surprise. When you've been assuming they're close to a fix, only to find out a week later that they haven't even started on it, that's a nasty surprise.

When one of your senior developers is making great progress, but has chosen an interesting looking item from the very bottom of the Todo list rather than working on a far more urgent feature you were sure someone was doing, that's a nasty surprise.

Daily standups are a great way to find out about unpleasant surprises before they become nasty surprises. Senior developers should be able to spot unpleasant surprises by listening to what the other developers are working on, and why it's taking them so long. Project Managers should be able to elicit and start fixing process-based impediments ("I can't test this until I can get a power cable for the handheld device ordered") in good time. And Product Owners (or Customers) should be able to flag that everyone's building the wrong thing, a long long time before it's built.

There are teams where the Senior Developer, Project Manager, and Product Owner are the same person, and while that's not ideal, sometimes it's the reality, and that's ok.

Organize daily standups to catch unpleasant surprises early. Don't assume as a Senior Developer or Project Manager or Product Owner that you'll be able to easily spot all forms of unpleasant surprises turning in to nasty surprises - it's a team effort. Perhaps your Junior Developer has already been told by Technical Services that there's no way in hell they'll support a solution based on Wordpress, and the Senior Developer is midway through planning his Wordpress solution.

Don't assume that unpleasant surprises are anyone's fault. They're a constant in development - the goal is to stop them going nasty

Increasing Collaboration

The most difficult developer to add to a project is generally the second one. Whatever a given developer's failings, as long as they have a reasonably coherent picture in their head of how a project should progress, then the project has a reasonable chance of success; the different components are designed to work together (sometimes a little too closely) because the same person is writing them.

Technical debt at this point is inconvenient, and will come back to bite, but not for a while - you're less likely to accidentally break or be confused by a system that is entirely your own work.

The second developer changes all this, largely by virtue of not being psychic. If they're not sure how something works, and it's not well documented (or, more likely, the documentation misses out 'obvious' bits), there's large scope for confusion and misunderstanding. The original developer may not remember off the top of their head why they did things a certain way, may not have the time to answer, probably won't remember all the nuances of the problem, or occasionally may even get defensive when asked about their design decisions by the new developer.

If you have a period of the day where everyone briefly summarizes what they're working on, what they're struggling with, and what the approach they're taking is, then you have a period of the day for people to remember why something works the way it does, and save each other some time.

Can this happen in an ad-hoc manner? Of course. But it's not always obvious who the right person to ask about a given component is, and it's not always the obvious person who has the best insight in to it. Perhaps the original developer doesn't remember why they didn't use ActiveMQ, but the Project Manager remembers the meeting with Technical Services where they explained why they wouldn't support it, or perhaps the best insight in to how to work with a given component doesn't come from the original developer, but from another developer who's interfaced with it recently.

Organize daily standups to capture insights on development tasks from the whole team, rather than finding the developers on the team accidentally duplicating each other's work, or developing multiple and exclusive systems for doing the same thing.

Exegesis

As with all things Scrum and Agile, focus on why you're following certain processes, and if you don't understand why, stop doing them. If your daily standups are no longer functioning as a communication tool, you've lost the plot somewhere along the line.

If you don't currently use daily standups, and you want to improve communication and address the issues above, give them a go. Design your process around the two principles above - whether that's requiring developers to send a daily email at the beginning of the day about what they're working on, and why, and how it's going, or whether you physically meet up, stand up, and follow the traditional Scrum standup structure.

Scrum, The Good Bits: An Introduction

This is a series of articles discussing the best pieces of Scrum. The pieces you - as a developer or Project Manager - can steal and start using independently of a wider Scrum implementation. We'll discuss which bits of Scrum you can rip off without drinking any of the Certified Agile Consultant Kool-Aid - in short, we'll be looking at how to do Scrum for small teams and startups.

Why Developers hate Scrum

Scrum is a business process, and like any business process, it sits between you and the work you need to get done. If it doesn't help you get that work done more effectively it's a big waste of time. Good developers, like everyone else, hate having their time wasted.

Scrum is often implemented by someone who went on a Scrum Master Certification (NOW 90% MORE AGILE™) once, and is attempting to implement it as a set of cargo-cult rituals in the wild hope it'll make everything better, and more Agile or something. Sometimes this'll be a Project Manager, and sometimes this'll be someone billing their consulting time as a Scrum Expert. Planning sessions become elaborate Arts and Crafts sessions with Post-Its and Sharpies, the Product Backlog becomes time sheets by any other name, and Velocity gets used as a personal productivity measure.

In short, in the wrong hands, Scrum becomes a tool for wasting developer time, and neatly tracking, graphing, and reporting the resulting drop-off in productivity.

Why Developers should love Scrum

This is a huge shame, because done right, Scrum protects and empowers developers. It pushes commercial and business considerations back into the hands of the business and the customers, while placing implementation decisions back with the developers, where they belong.

Implemented well, it allows developers to track the progress they're making, helps them to estimate correctly, and to signal upcoming impediments and risks. It gives them a rock-solid answer to "Why isn't this done yet?" when the answer is "Because we've spent the last two weeks fire-fighting" or "Because the CTO came over and told us to redo the UI in Cornflower Blue".

And most importantly, it gives them the confidence and freedom to know what they're working on right now is the most important thing they can be working on, and that any open loops - any hidden requirements - have been accurately and adequately captured.

Steal Early, Steal Often

Scrum isn't all or nothing. It's a series of interrelated concepts and practices that happen to work well in collaboration. Over the next few weeks, I'll be looking at specific aspects of Scrum, and looking at how to get the biggest return for the smallest amount of additional process. I'll be trying to answer the question: What's the Minimum Viable Scrum Implementation? How can you implement the best bits of Scrum, while avoiding the common pitfalls?

Table of Contents

We start with the Introduction, which you've just read, and Daily Standups, linked to below. I'll be trying to update this every few days - the whole thing started as a gigantic monolithic article which was just getting far too long... If you want to make sure you'll get the whole thing, you can subscribe using RSS, or via email, using the side-bar to your right.

How to Remember Everything, Ever and Forever

I have a coworker, Johnny, who is at least a little bit smarter than I am.

But Johnny has an (irritating) habit of always appearing a lot smarter than me, because Johnny remembers everything.

I often have to read something a few times before it sinks in, especially if the content is somewhat technical, whereas Johnny has read it once and remembered it all, and can answer all manner of questions about it. Combine this with my having the attention span of a gnat, and there's a problem worth solving here.

This is of course a mixed blessing for Johnny, as people are quite happy to use him as a technical reference with a really good audio API, which hurts his productivity some. Also: I recently found out he's using the technique we'll be talking about to learn Japanese.

In this brief post, I'm going to introduce you to the concept of the Spacing Effect, give you an overview of how I'm using it to commit massive amounts of useful (to me) information to memory, and then give you a set of interesting links to further your research.

You'll Forget it, but at Least You're Consistent

Psychologists have known since the late 1800s that you forget things in a consistent manner, and over a consistent timeframe. The more times you're exposed to a piece of information, the more that timeframe stretches out - it takes you longer to forget it each time.

This is clearly simplified: some pieces of information you only need to hear once and they'll never leave you, and there are some things of which you need to be constantly reminded.

This is one aspect that makes rote learning such a chore - identification of which pieces of information you need to revise, which you already know, which you're on the verge of forgetting, etc.

If you're thinking "this sounds like the perfect task for a computer", you're right.

And you share that insight with a Polish man called Piotr Wozniak, author of SuperMemo. SuperMemo, and its cross-platform and open-source counterparts are glorified flashcard programs, with a twist: they store your previous learning history for each fact you want to learn, and aim to ask you to recall each fact just as you're about to forget it.

In this way, they optimize and minimize the amount of time you'll need to spend studying pieces of information in order to recall them.
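The scheduling heart of these tools is surprisingly small. Here's a simplified sketch of the SM-2 algorithm that early versions of SuperMemo popularized - real tools refine it, but the shape is the same:

    // Quality is your self-assessed recall, from 0 (blank stare) to 5 (instant).
    // Each card carries its own history: an easiness factor, how many times in a
    // row you've recalled it, and the current interval in days.
    function review(card, quality) {
      let { easiness = 2.5, repetitions = 0, interval = 0 } = card;

      if (quality < 3) {
        // You forgot it: the repetition schedule starts again from the beginning.
        repetitions = 0;
        interval = 1;
      } else {
        if (repetitions === 0)      interval = 1;
        else if (repetitions === 1) interval = 6;
        else                        interval = Math.round(interval * easiness);
        repetitions += 1;

        // The easier you found it, the faster the interval grows next time.
        easiness = Math.max(1.3,
          easiness + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02)));
      }

      return { easiness, repetitions, interval };
    }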

Making Learning a Lifestyle

So far, we've established that there's a set of tools for optimizing anything you need to learn. Obviously mnemonics help, but these tools add in a structure to keep you using what you know, and to only force you to revise the pieces you're struggling with.

If like me, you're in a field where there are always new things to learn, this is a God-send.

Some things, after all, you have to learn. I recently did my Driving Theory test in the UK, and while much of it's common sense, notable parts aren't. Did you know that the average stopping distance at 40 mph is 36 metres? Or how many car lengths you should leave from the car in front when stopped in a tunnel? In these situations, you're able to minimize the amount of time spent on a necessary evil.

Other things, you'll need to learn by doing - like learning a new programming language, or a new framework. But having all the information you'll be using at your fingertips makes this much easier. Learning to wrangle Twitter Bootstrap is quite a bit easier if you can just commit the useful classnames and selectors to memory first, as following proof trees in Z-Notation is a lot less irritating when you've memorized all the funny punctuation first.

By condensing the pieces of information you need or want to know in to a piece of software that's optimizing your learning experience, you can learn more, remember it longer, and remove much of the tedium.

And because it doesn't get bored, or forget to remind you to review it, it keeps the mental effort of remembering things forever to a minimum - you're only asked to review cards you're likely to be on the point of forgetting.

Unsurprisingly, learners of human languages are all over this like a rash, which brings us full circle: if you're trying to learn Japanese characters like Johnny, and you're smart like Johnny, you'll take full advantage of the Spacing Effect.

A Final Thought

My workflow with the tool I use is simple: 5 minutes every morning. There's currently something in the order of 3,000 facts in my DB, ranging from the Highway Code to an exhaustive list of Haskell's syntax and functions.

As a busy guy, 5 minutes is not a lot of time, and the longer I work with it, the fewer review questions it asks me each day - I think I was asked 3 this morning. The exciting part of this, to me, is that it means I can keep loading it with more of the many things I previously didn't think I had the time to learn.

Links, References, etc

Estimating like an Adult - What to Steal from Agile...

In Part 1, we talked about the primary problems with estimating software development accurately:

  • Your task will have hidden complexity you hadn't considered, as a function of it being software
  • You will be given extra or unrelated work to do, and if you fail to track and communicate this, you will look lazy and incompetent

But we ended with the uplifting message that if you plan for the former (by multiplying out your estimates), and make sure you adequately track the latter, then your estimates - when taken in aggregate - will probably be ok. And by ok, I mean better than 90% of other developers.

But who wants to be a 10 percenter, when you can be a 1 percenter?

I also promised Agile was going to solve all your problems ever when it came to estimating. This may not have been 100% accurate. What we are going to do is look at a whole bunch of Good Ideas(TM) that the Agile folk stole, curated, and invented when it comes to estimation.

Humans are Weird About Time

It turns out that we humans get a little weird about time. However much you believe "six hours" is "just an estimate", however much buy-in there is from the team about this, and however many times you've accepted that there's always hidden complexity, if a six hour task takes eighteen hours, everyone gets a little squirrely.

Squirrely is bad, because squirrely is bad for morale, and squirrely is bad for Project Manager / Developer relations. Squirrely leads to developers trying to hide actual progress and injecting "slush fund" tickets in to the work list (Agile: Backlog) that they can use to burn time down against, and leads to them getting demotivated, which leads to them spending more time on Reddit, and it's a vicious cycle.

The first rule of Agile Estimation Club is that we estimate in Story Points, not time, and the second rule of Agile Estimation Club is that no-one ever tries to work out the Story Point to hour conversion rate (a Story is a collection of tasks, or one big task).

Good Idea #1: Stop estimating in time, because everyone has deeply held beliefs about time.

Story Points

Story Points are a measurement of the complexity and tedium associated with a task. Complexity is important, because complexity hides other complexity, and tedium is important because no-one works effectively on boring tasks.

How do we measure Story Points? Simple. Story Points are measured in Story Points.

What this really means is we estimate tasks in relation to each other. Is this task about the same complexity and tediosity as that task? If so, you should give it the same number of Story Points.

It's super-tempting when you get started to estimate things in 'days' or 'hours', and then just drop the units, and look, you have Story Points! Don't do this: everyone will remember that 8 points is 'really' 8 days, and people will get squirrely. The whole point with Story Points is breaking the relationship - at an individual task level - between Story Points and chronological time.

Good Idea #2: Estimate the complexity and tedium of tasks in relation to each other, rather than in relation to time.

A Couple of Refinements

Many teams estimate using numbers from the Fibonacci Sequence. That is, you're only allowed to estimate your tickets using one of:

1, 2, 3, 5, 8, 13, 21, 34, 55

Although, This Is Agile, and so you'll get cretinous Agile Consultants telling you how you only estimate using the Fibonacci Sequence, and then hand out cards with '1, 3, 8, 20, 40, 100' on them, because they're nice round numbers *twitch* - actually this doesn't matter, as we'll see, but still...

The idea here is that as a task's complexity-tedium index grows, you have to decrease the accuracy - that is: the more complex a task is, the less chance you have of accurately estimating it, and you should account for that. When torn between two numbers, go for the larger - you'll usually be right.

Good Idea #3: The more complex and tedious a task, the less accurate your estimates will be.

Velocity

If there was no relationship at all between Story Points and time, then there wouldn't be much point in estimating.

We talked a lot about why we estimate previously, and estimates in complexity and tedium are of no interest to the business as a whole. The business needs Gantt charts, it needs deadlines, and it needs the occasional project to go massively over-budget and over-time so that one-day Presidential hopefuls can swoop in and fix them.

Velocity describes the number of Story Points a team can deliver in an iteration, which is a fixed period of time (three weeks, for example). In mature teams, Velocity should be fairly stable, and should see gradual increases as a team removes impediments.

When you've estimated every task or Story in your task list (ie: you've thought about and estimated everything you need to do), and once you know your Velocity, then you can tell the business how long something is going to take. You know you get 100 points done every 3 weeks, there are 1,200 points, and it's maths that even an Agile Consultant can do: 12 iterations, or 36 weeks... What's more, for the first time, the estimates are likely to be somewhat accurate.

Good Idea #4: Your estimates relate to time only in aggregate, and only based on previous experience.

Mixed Ability Teams

Development teams are rarely homogenous. Some developers are worth three other developers, and some are regularly committing work that needs to be unpicked and undone by other developers. Some developers have an OCD-level attention to detail, and some a stalkerish devotion to Twitter.

This has the potential to screw up time-based estimation systems. Remember: people get squirrely when something quantified in chronological units doesn't match up to elapsed time. If a Senior Dev estimated a ticket as three hours, and a Junior is about to start their tenth hour on it, that's going to be demoralizing. And if a Senior Dev is about to complete their third ten hour task in under an hour, they may get to thinking it's time to see what's new from Horse Ebooks.

When you estimate Stories in terms of each other, this largely goes away - people stop estimating Stories based on how long they think it will take them to do, and instead base them on how relatively difficult they are. This is a fundamental win, which I'm struggling to adequately do justice to here.

Good Idea #5: Estimating stories in Story Points and using Velocity for estimates accounts for mixed-ability teams. This is huge.

Planning Poker

This brings us neatly to our final point, which is to do with the actual estimation process itself. I have the attention span of a gnat when I'm not talking, which makes meetings at which I can't be talking most of the time a nightmare for me (and ones where I can a nightmare for other people). Estimating meetings fit this category, as estimating is meant to be collaborative.

Everyone in the team should get their say when it comes to estimating, especially as they may end up doing the Story or task at hand. What usually happens, though, is that one person dominates, lays down their opinion, and everyone else plays Angry Birds on their phone. This is probably a bad thing.

The process of Planning Poker is: everyone has a deck of cards with the Fibonacci sequence (or some pretty, incrementing numbers labelled as the Fibonacci sequence *twitch*) on them. Each Story is discussed, and then everyone lays a card, face down, on the table with their estimate. Once everyone has selected a card, everyone turns them over at once.

You ask the person with the highest card why they think it's such a complex task, and the person with the lowest card why they think it's relatively simple. Anyone else can air their views too. Rinse and repeat until a consensus is reached.

This makes it very difficult to play Angry Birds.

If the team has been discussing what actual work a Story entails, and you haven't been paying attention, you'll struggle to put a sensible estimate down, and then you'll be asked why you chose such an estimate. If you guessed, rather than estimated, you may struggle with this, and your Project Manager will start to notice. With a persuasive (or sadistic) enough Project Manager, this leads to everyone being engaged.

Good Idea #6: Build shared ownership and engagement in the estimation process by estimating independently, then discussing collaboratively.

Summary

We've covered six good ideas from the Agile Estimation Process:

  • Stop estimating in time, because everyone has deeply held beliefs about time
  • Estimate the complexity and tedium of tasks in relation to each other, rather than in relation to time
  • The more complex and tedious a task, the less accurate your estimates will be
  • Your estimates relate to time only in aggregate, and only based on previous experience
  • Estimating stories in Story Points and using Velocity for estimates accounts for mixed-ability teams. This is huge
  • Build shared ownership and engagement in the estimation process by estimating independently, then discussing collaboratively

We haven't covered all the intricacies of how to get started though. I suggest you start by Googling a list of Agile-related buzzwords, and seeing where that gets you, if you're interested in a step-by-step guide.

Good luck, commander.

 

How to Estimate like an Adult - A Developer's Guide

Part 1: How to Estimate like an Adolescent

Usefully estimating software projects is difficult, but not impossible.

A lack of understanding about why we estimate, what to estimate, and how to estimate leads to a breakdown of trust and communication between developers and managers.

Developers end up feeling guilty that they're not meeting their estimates, and at the same time defensive: they were just estimates after all, right? Managers feel exasperated that everything is taking three times as long as it should. What are they doing all day?

This article is about fixing your estimates, and your estimation process. It's split in to two parts - the part you're reading, titled "How to Estimate like an Adolescent", and the part you're not yet reading, titled "How to Estimate like an Adult - What to Steal from Agile".

As an aside, if you're in a position where someone else is estimating work you're doing, get out. The work will be late, you will be blamed, and you will be miserable. Programming is meant to be fun, and setting yourself up for accusations of professional incompetence and the niggling feeling that maybe you are incompetent is the antithesis of fun. Seriously, get out.

Estimates are not the same as deadlines (which someone else will be setting), largely because deadlines are the business's problem, not yours.

Why We Estimate

What does a business do? It turns money in to more money (usually).

It spends money on your time, a computer for you to use, and a (hopefully) endless supply of second-rate coffee. In exchange, it wants you to write software, which it will use to obtain more money.

Fine, and so far, so obvious. So why does a business need estimates from developers?

Firstly, so they can plan more effectively - so that those funny Gantt chart things line up in the right places. This means the marketing department know exactly when to make sure their diaries are free for wining and dining journalists, and this means the CEO's secretary can make sure he's not off hiking the Appalachian Trail around product launch.

Secondly, so they can estimate resource allocation properly. Remember when I said estimates aren't the same as deadlines? If the estimates bubbling up at the start of the project show there's no way two developers can deliver the electric eggbeater calibration software by the deadline, then at this point, the business can still do something about it; it's still the business's problem to solve.

And of course, if it only becomes apparent towards the end, that's when people start monitoring your Facebook usage. There is a way to solve this, and you should keep reading.

That's Economics 101 out the way.

What a Project Manager Does All Day

Project Managers spend all their days getting Agile Certification Scrum Master Training, but in the brief moments when they're not doing that, their job is to monitor progress on the project, and communicate that upward, updating their Gantt charts so the CEO can rebook his holiday to Argentina.

At the same time, their responsibility is to make sure resources aren't being wasted. That can involve anything from procuring software and arranging meetings with suppliers, to shouting at you for spending all day updating your LinkedIn profile.

The earlier and more accurately the Project Manager can communicate changes in the project status to the people they report to, the easier their life is, and the nicer they are to developers.

The more meetings they go to where the project has slipped, and they can't produce a good reason for that, the harder their life becomes, and the more chance there is of a poisoned donut among the decreasingly common Krispy Kremes.

What to Estimate

Any software you write, ever, will have some degree of hidden complexity. Whether this is the piece of open-source software you were hoping to rely on being riddled with bugs, or whether this is working with your coworker's poor grasp of service architecture, or whether this is that line in the specification that no-one thought about too hard that triples the workload, there are nasty surprises lurking everywhere.

When you're in your first job out of school, you're allowed to be bewildered and surprised by this. You're allowed to go home and cry in to your KFC Famous Bowl while Great Gig in the Sky plays, angry that you spent the 2 days you were meant to be setting up the AMQ Server trying instead to configure Entourage 2008 to talk to Exchange 2011.

But once you're an adult, this should no longer surprise you.

Not only should this no longer surprise you, but you should start getting an eye for which sorts of stories these situations are most likely to occur in. This is complexity. And this is what you, a software engineering adult, should be estimating (and what we'll get on to in Part 2).

There's Still an Easy Way Out

Before we go any further down the Agile Velocity rabbit hole (and we're going all the way down in Part 2), we're going to look at the easy way out here. That's because the easy way out usually works well enough, and it's important to look at why it works.

Here is the easy way out for estimating a development task/story:

Break your task in to smaller components

The more you break your task into components, the more you'll think about how you need to approach it, and the better understanding you'll gain of the issue.

Estimate those components optimistically, and then triple those estimates

As above, every software project has hidden complexity. Over the aggregate of software tasks, the degree of extra complexity (ie: the multiplier effect) tends to be constant. This is hugely important, and this aggregation effect is the key to estimating like an adult.

Why triple? That number was pulled from the air, by which I mean over a decade of software development experience.

Any time someone gives you anything extra to do (extra functionality, something that takes you away from the project), tell your Project Manager to make a note of it, and increase the estimate

Your Project Manager is responsible for your time being used effectively. If they're unaware, or forget (or just as often, you forget), where your time is being diverted, they can't do anything about it, and you end up looking like a slacker for not delivering on time.

This is the single biggest area of communication breakdown. Keep a note of when the scope is being added to, or your time is being diverted, and make sure your Project Manager is informed early, and often.

Give your Project Manager daily updates of the percentage progress you've made through the task

If you've tripled your optimistic estimates, some days you will be zooming through, and some days you will be moving slowly, but the overall completion of the task will probably come in on time. Cut to a shot of Gantt charts and a smiling CEO.

If you stick to this religiously, you will have a mostly easy life. You may initially get pushback from other developers. Any developer dumb enough to push back on such an estimate should have attention drawn to their previous estimation efforts vs elapsed calendar time.

And you know what? You don't have to wait until Part 2. We've covered the four most important points:

  • Plan for the hidden complexity which will definitely show up, or you will consistently underestimate
  • Estimates only have accuracy in aggregate
  • Make a note of any extra features or work you're asked to do, and increase the estimate accordingly
  • Get in to the habit of giving your Project Manager frequent and accurate updates on your progress, or they can't do their job properly. Don't be tempted to hide slow progress and attempt to make it up later - you'll just screw up their Gantt charts, and then everyone loses

What to Look Forward to in Part 2

In Part 2, we'll be looking at how Agile solves all your problems ever. Or, more accurately, how this whole Agile thing has a couple of really good ideas on estimation, how to use them, and how to avoid some gigantic pitfalls associated with them.

Until next week...

Automatic Generation of Cucumber from Code

(All of the code mentioned here exists, and we're using it. Our actual codebase is all in Perl - I've written out examples here in Javascript for clarity, and so there are no copyright issues. The actual implementation code for all this is pretty simple and not very clever, so I'm not planning to jump through the hoops needed to actually release it unless there is some massive unexpected demand...)

Introduction, in which we discover Cucumber

Let's start with Gherkin. Gherkin is basically a constrained application of English meant for specifying test cases. For example, for a calculator you might write:
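What follows is roughly the canonical calculator example; the exact wording is illustrative.

    Feature: Addition
      Scenario: Add two numbers
        Given I have entered 50 into the calculator
        And I have entered 70 into the calculator
        When I press add
        Then the result should be 120 on the screen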

You then set up a number of step parsers that match the steps (called step definitions), and execute code based on it. eg:
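In cucumber-js - keeping the examples in Javascript - the step definitions look roughly like this; the package name and registration API have moved around between versions, so treat this as a sketch.

    const assert = require('assert');
    const { Given, When, Then } = require('@cucumber/cucumber');

    Given('I have entered {int} into the calculator', function (number) {
      // `this` is Cucumber's per-scenario "world" object.
      this.entries = this.entries || [];
      this.entries.push(number);
    });

    When('I press add', function () {
      this.result = this.entries.reduce((sum, n) => sum + n, 0);
    });

    Then('the result should be {int} on the screen', function (expected) {
      assert.strictEqual(this.result, expected);
    });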

Gherkin is supported by a suite called Cucumber, and people tend to use the word Cucumber to describe the whole thing.

As you might be able to imagine, Cucumber gives Agile Consultants and gullible Business Analysts a vision of paradise. Look! Non-programmers can code! This solves all of our problems! We could write the whole test suite like this, and then it's also documentation!

This basically doesn't work in the real world. There are a range of nuanced reasons why it doesn't work, starting with something called Step Explosion, stopping off along the way at the fiddliness of testing exceptions, and featuring the fact that you really really really don't want non-programmers doing programming. This didn't stop me writing a Perl implementation, however, because it's fun, and why else would you program?

So in summary: Cucumber exists, it's All The Rage amongst Agile Types, but it's a curiosity rather than a testing panacea. It's an interesting way of organizing a small number of test cases, but anyone suggesting its full embrace as a replacement for any other testing tool should be taken outside and ... well, at the least, left outside.

The Plot Thickens, in which we set the scene

One of my clients at the moment has several large warehouses in several countries. Every day, they ship a huge number of items under several different brands to customers all around the world.

The business is obsessed with customer service. Fanatical about it. And so every item has to be shipped from the warehouse just right. The right number of bows and ribbons, packaging that cuts no corners, and a rigorous adherence to good taste that's pervasive down to storage containers in the warehouse being in company colours. That's why the customers keep coming back.

As a customer, amongst other things, you can ask for a Gift Message to be included in your order, and our hand-crafted warehousing software - with a pre-millennium pedigree - has to make some decisions about that. Circa 2001, the decisions were pretty simple:
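Probably not much more involved than this (an illustrative sketch - the real code is Perl, and less tidy):

    // Illustrative only: the circa-2001 logic, before the brands got opinions.
    function giftMessageInstructions(order) {
      if (!order.giftMessage) return null;   // no message, nothing to print
      return {
        printAt: 'packing',                  // printed when the order is packed
        paper:   'ivory',                    // on the standard paper
        text:    order.giftMessage,
      };
    }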

But then between 2001 and 2011, things started to get a little more complicated. One of the brands put their foot down and insisted that their Gift Messages needed to be printed on cream-coloured paper, not ivory-coloured. And the physical layout of a new warehouse necessitated that we print out the Gift Messages when we pick orders, rather than when we pack them, like we do in the other warehouses.

And it turns out not all customers are entirely happy to use low-order ASCII for their messages, and that some languages require the message to be type-set by hand. And if the customer spends enough money with us a year, the Gift Message needs to be type-set by hand, and lovingly sprinkled with lavender water, before being signed using one of the pens originally used to sign The Constitution...

If you think the organic implementation of such business rules by a very dedicated but also very, very busy development team over the course of ten years might lead to software with the occasional rough edge, you'd be on the right track.

Not only does this level of complexity make it hard for programmers to extend the code, it also makes it hard for Testers to acceptance test, regression test, or understand it. And it also makes it very difficult for a Business Analyst to learn, document, and strategise how to extend and improve it.

Our Hero Arrives! in which we learn about Contracts

In order to help with some of the issues above, I've introduced a Business Rules library. If you're at a decision point in the code which requires several pieces of information, and can have several possible outputs, you pull the code out, give it a name, and assert the Contract.

The Contract is an idea from Programming by Contract. The Contract your business rule has with the rest of the code asserts that the calling code must specify all inputs at call-time, that these inputs must conform to several custom type constraints, and in return, you will get a response conforming to a specific type constraint, and the business rule will operate statelessly. That is: it will not look outside of its inputs for information, and it will not change any persisting values.

Here is an example:
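The library itself is in-house (and in Perl); a Javascript rendering of the same idea, with illustrative names, looks roughly like this.

    // The names and the shape of the rule object are stand-ins for our library.
    const WarehouseName = ['WH1', 'WH2'];   // a custom, enumerable type constraint
    const Bool          = [true, false];

    const measureProductOnArrival = {
      accepts: {                            // every input must be named and typed...
        warehouse:          WarehouseName,
        product_measurable: Bool,
      },
      returns: Bool,                        // ...as must the single output
      rule({ warehouse, product_measurable }) {
        // Stateless: no reads or writes outside the declared inputs.
        return warehouse === 'WH2' && product_measurable === true;
      },
    };

    // The real library wraps this in an assertion layer: missing or mistyped
    // inputs are rejected at call time, and the return value is checked too.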

The constraints we're working with here have some interesting implications. As we guarantee that there's no access to values outside of the scope, except the values we pass in, we know that given the same inputs when called, we will get the same output.

Also, as we're defining custom type constraints, where those types are enumerable (ie: they're either enumerations or Boolean), then we can actually predict in advance what all the inputs could be.

And the implication of those taken together is that we can execute our rule with all possible inputs it's allowed to have - the Cartesian product of its enumerable inputs. And that means we can build - automatically - a truth table for our code:

warehouse   product.measurable   result
WH1         false                false
WH1         true                 false
WH2         false                false
WH2         true                 true
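Generating that table is entirely mechanical. A sketch, reusing the illustrative rule object from above:

    // Build the Cartesian product of the rule's enumerable inputs, run the rule
    // against every combination, and collect the answers into a truth table.
    function truthTable({ accepts, rule }) {
      let combos = [{}];
      for (const [name, values] of Object.entries(accepts)) {
        combos = combos.flatMap(combo =>
          values.map(value => ({ ...combo, [name]: value })));
      }
      return combos.map(inputs => ({ ...inputs, result: rule(inputs) }));
    }

    console.log(truthTable(measureProductOnArrival));
    // => [ { warehouse: 'WH1', product_measurable: false, result: false }, ... ]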

That's pretty cool, and one of the central reasons it's pretty cool is that you can pass this truth table to your Tester or Business Analyst to check. Or heck, they can even define the truth table for you from the User Story, and you turn it in to an automated test.

And that's where this goes from being quite cool to pretty interesting...

The Plot Thickens (again?)

We can programmatically simplify the truth table above to its implications, by iterating over the inputs and finding the simplest sets of inputs that always lead to the same result. The above table can be simplified (by a computer) into the following implications:

warehouse is 'WH1' => result is false
product.measurable is false => result is false
warehouse is 'WH2' && product.measurable is true => result is true

Astute readers will notice that an implication looks a great deal like a Cucumber scenario. After all, a Cucumber scenario simply states a series of preconditions, and the result that they imply.

If, as part of your type constraints, you added in a few extra fields...

Then you could automatically generate Cucumber scenarios, and the step definitions needed to parse them:
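
Here's a rough sketch of the scenario-generating half of that in Python; the Given/Then phrasing would come from those extra human-readable fields on your type constraints, and is hard-coded (and made up) here:

    # Hypothetical human-readable phrasing, one entry per input field.
    PHRASES = {
        "warehouse": "the order is in warehouse {}",
        "product.measurable": "the product being measurable is {}",
    }

    def implication_to_scenario(conditions, result):
        """Render one implication (inputs => result) as a Cucumber scenario."""
        lines = ["Scenario: generated from the business rule truth table"]
        keyword = "Given"
        for field, value in conditions.items():
            lines.append("  {} {}".format(keyword, PHRASES[field].format(value)))
            keyword = "And"
        lines.append("  Then the result is {}".format(result))
        return "\n".join(lines)

    print(implication_to_scenario(
        {"warehouse": "WH2", "product.measurable": True}, True))
    # Scenario: generated from the business rule truth table
    #   Given the order is in warehouse WH2
    #   And the product being measurable is True
    #   Then the result is True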

These are Cucumber scenarios generated from your existing code. Next time you add a new warehouse, or a new type of packaging, or any other complicated decision, you can hand your Business Analyst an auto-generated Cucumber script that describes the current decision logic, ask them to fix it up and send it back... and it already runs (but fails). You just need to update the code. That's both Business-Driven Development and Test-Driven Development...

The End!

Some Practical Considerations...

So that's the end of the article proper. Here are some considerations for developers thinking of trying to implement this themselves... There's no real structure or conclusion here, just a brain dump...

Enumerating free-form strings

You can't enumerate all variations if one of your incoming types is a string (rather than an enum of strings). Some of our inputs are strings. Philosophically, you shouldn't be making any decisions based on a string - if you know what your string might look like, and are taking a decision on it, you should pass it in as an enumerated type. If you're doing some kind of smart matching on it, your business rule will be simplified by doing that first, and passing in the result of that.

This leaves us with the case where a string provided in the input is used as part of the output. For that, I pass in a canary string which embeds the name of the column that string is passed in as, and then replace it with a marker in the output. For example, a free-form printer name that's only used sometimes (but could be anything) is passed in as `str["printer_name"]`, and then removed from the output, giving an output like, say: "floor3_[printer_name]". This has worked well so far.
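
Roughly, and with the helper names and canary format made up for illustration:

    CANARY_FORMAT = "__canary[{}]__"

    def canary(field_name):
        """A recognisable stand-in for a free-form string input."""
        return CANARY_FORMAT.format(field_name)

    def mark_canaries(output, field_names):
        """Replace any canaries that survive into the output with a marker."""
        for name in field_names:
            output = output.replace(canary(name), "[{}]".format(name))
        return output

    # e.g. the rule is called with printer_name=canary("printer_name"); if it
    # returns "floor3___canary[printer_name]__", mark_canaries() turns that
    # into "floor3_[printer_name]" for the truth table.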

If you really had a string that had to be passed in, was used in decisions, and wasn't going to get passed out again, I'd consider embedding a list of testing strings in the type definition. Not perfect, but probably good enough. I've yet to have a situation where an input is a naked integer, but that's probably going to be my solution when I do...

Reducing a truth table to implications

The algorithm that reduces a truth table to its implications - there's probably a right way to do this, but I just made one up:

For every subset of the input columns, ordered by number of columns used:
    For every combination of values those columns can take (their Cartesian product):
        How many different results do the matching rows give?
        If the answer is just one, you have an implication
            If it catches rows that haven't been `seen` yet:
                Mark those rows as `seen`
                Save the implication
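
Here's a rough Python rendering of that brain dump, assuming the truth table is a list of dicts like the one built earlier; it's a sketch of the idea, not the production code:

    from itertools import combinations

    def reduce_to_implications(rows, input_columns):
        """Reduce a truth table to implications: (conditions dict, result) pairs."""
        implications = []
        seen = set()
        # Smallest subsets of input columns first, so simpler implications win.
        for size in range(1, len(input_columns) + 1):
            for cols in combinations(input_columns, size):
                # Group rows by the values they take in just these columns.
                groups = {}
                for i, row in enumerate(rows):
                    groups.setdefault(tuple(row[c] for c in cols), []).append(i)
                for key, members in groups.items():
                    results = {rows[i]["result"] for i in members}
                    # One answer across all matching rows => an implication...
                    if len(results) == 1 and not set(members) <= seen:
                        # ...but only keep it if it catches unseen rows.
                        seen.update(members)
                        implications.append((dict(zip(cols, key)), results.pop()))
        return implications

Run against the table built earlier, this gives the same three implications listed above (again with `product_measurable` standing in for `product.measurable`).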

Sat, 24 Dec 2011 08:46:00 +0000 http://sett.com/import1391721468/uid/109550
Agile Scrum: Delivering Broken Software Since 1991 http://sett.com/import1391721468/uid/109554 Update: This is quite a long article. If you're looking for a quick read, it breaks down a bit like this:

  • The first third of this article describes Scrum
  • The second third describes how it gets subverted to produce broken software
  • The final third is where the practical advice for avoiding this lives – whether you're an Organizer or an Activist Developer – and you can skip straight there if you'll be bored to tears by the first two sections…

Update 2: There are active and interesting discussions of this article on HackerNews and Reddit

I have a lot of love for Scrum, the software development process. I have my own little box of Index Cards and Sharpies, and I have sized Backlogs for many of my side projects. Scrum has the potential to empower developers more than almost any other set of techniques.

But in almost every implementation of Scrum I've seen in The Real World™, managers are incentivized to help their team deliver broken software to a deadline, and usually end up succeeding in the former and failing in the latter. And when implemented like that, a system that should be an absolutely overwhelming win for developers becomes a tool to beat them around the head with...

Scrum Basics

So here's Scrum, simplified, and as it's meant to work: You have a Backlog of work to complete, broken down into Stories, which are distinct pieces of work that should take a few hours to a few days to complete. These are frequently displayed on a Story Board, real or virtual.

You have some actors:

Role: Product Owner
  Also known as: Producer, Business Analyst, Internal Customer
  Responsibilities:
  • Generating Stories for the Backlog
  • Prioritising Stories already on the Backlog

Role: Scrum Master
  Also known as: Project Manager
  Responsibilities:
  • Managing the Scrum process
  • Removing development impediments
  • "Facilitation"

Role: Team Members
  Also known as: Developer, Designer, Tester, Sys-Admin
  Responsibilities:
  • Breaking Stories down into Tasks
  • Estimating Story complexity
  • Doing the work...

(Any time someone tells me that a Cross-Functional Team means the QAs and Designers should be programming, I try to explain to them that they shouldn’t be anywhere near the development process – it’s better than jabbing them with my Biro. Generally – and a little sadly - you’ll only hear this rubbish from Agile Consultants.)

You have a Sprint, which is a short-ish time period, typically two to three weeks, and you have Story Points assigned to each Story based on complexity, which are specifically not time-based estimates. You have a Capacity, based on your Velocity, which says how many Story Points you have historically completed per Sprint, and thus how many you expect to complete in the next one.

Now here's the awesome part, in theory, for the Developers. The Product Owner prioritizes every Story sequentially, and in this way they get to 'spend' your Capacity – and only up to your Capacity – each Sprint on the Stories they want to get completed. They've got a budget, and they can only spend that budget, and no more.

And any time a piece of work that's super-duper-mooper important comes in, someone writes a Story for it, and the Product Owner places it where they want in the Backlog. If they place it above any of the work you're intending to get done this Sprint, something has to drop off the bottom. As a Developer, you can illustrate this beautifully, in front of them, by removing Story Cards from the list of items you are intending to complete this Sprint.

(Some people say that this should never happen – once the Sprint is planned, it’s sacred, and that any changes to it need full team buy-in – any significant number of changes need a complete replan of the Sprint. I’ve not seen this so far in practice, but it sounds intriguing.)

This empowers Developers tremendously. Stories cannot be ‘snuck in’ or their lack of completion put down to Developer laziness or disorganization. This gives Developers all the right kind of accountability, and removes from them the responsibility for owning a time-machine when Feature Creep starts to bloat a project.

When it comes to crunch-time on a project, the Developers can show that they’ve been working efficiently, effectively, and to deadline, and push back against the idea that they should be morally obligated to start putting in long hours because they “didn’t get the work done”.

But...

If the Organizers of the team (the Scrum Master, and in most teams I've worked on, the Product Owner) are unhappy with how new Stories will affect a deadline, they can - according to the Agile canon - change three things:

  • They can change the scope of what needs to be delivered by de-scoping Stories they think aren't so important;
  • They can add extra Developers with the (often misguided) hope it'll increase capacity;
  • Or they can change the deadline.

Which brings us to the central problem, and what this article is really about: the Organizers are rarely empowered to change any of those things, so they change the one thing they can, but shouldn't: the build quality.

How Under-Empowered Organizers Can – and Do – Screw Up Scrum

The Organizers’ incentives are broken in most organizations. The Organizers are incentivized to get projects out of the door on the original deadline, at the original price (ie: staff levels) – if they deliver this, they’ve done their job well. When bugs, corners cut, and ungodly amounts of Technical Debt start to surface, all too often this is attributed to Developer incompetence…

(In the small number of software projects with sufficient number and quality of dedicated test engineers / QAs, observers will notice how what constitutes a bug of release-blocking severity starts to change as the deadline approaches. But in the unlikely event you have enough testing resource, you've probably also got a fairly enlightened organization supporting your work...)

Taking Feature Creep and Unexpected Events as a given, there will come a point in the life of almost every project where it becomes clear it can't be done to the original estimate, with the original resources. This appears to be an absolute constant in software development.

And this is the point where the Organizers need to find a solution. The most appealing solution is to find a magical wand that will somehow increase perceived Developer productivity or Velocity, and this is where Scrum usually morphs - quite suddenly - from an incredibly useful tool for Developers and Organizers alike into a world of pain.

How do you magically increase Velocity? Here's how: any tickets the Developers have created that represent code quality are history. Code quality is a 'nice-to-have' if this project is going to be delivered on time, sorry. If dropping the automated testing tickets means the project can be 'released', and the alternative is a delay or the Organizers having to ask for more resources, all too often those tickets are toast. Finished a feature, but now it needs proper documentation? Suddenly it can wait until after the release.

Before you know it, the project is leveraged to all hell on Technical Debt, and the result is a shitty product.

This is Scrum's biggest failing - the one knob that many Organizers are empowered and incentivized to twizzle is code quality, and usually only in one direction.

But...

If your Organizers are experienced, talented, and confident, and have buy-in from the rest of the business, the effect of this will be minimized. Developers will be shielded from the heat, and allowed to get on with building an excellent project. Teams like this exist - a client I'm currently doing some work for are really good at this - in no small part due to absolute buy-in in to Doing Scrum Right that goes from the Board level downward. This is rare, though, and even they don't always get it right (but they do offer free pizza, time off in lieu, and heart-felt thanks when Developers work weekends – time off in lieu dwarfing the others in importance).

Rescuing the Scrum

The Right Solution

The right solution to this problem is unambiguous business and customer buy-in and visibility - a situation where the business allows and trusts the Developers to be the guardians of code-quality and estimates, and it deals with "business realities" regarding deadlines - by managing resources and customer expectations properly.

The right solution to this problem also involves incentivizing the Organizers properly by making them accountable for code quality – both via honest feedback from Developers on the level of Technical Debt added, and from a perspective of how much continuing support the project requires after it’s “released”.

The Activist Solution, for Developers

In lieu of being able to do the above, there are some activist options for Developers wishing to take matters into their own hands:

Firstly, stop highlighting time spent maintaining code quality. If your test and documentation tickets get consistently de-scoped, stop displaying them differently, and start padding out your estimates to opaquely contain them. Task estimates belong to Developers, and to Developers alone. If Organizers challenge them, offer to put their estimate on the story too, and "we'll see how long it takes".

This is - obviously - antithetical to the Agile ideal of transparency and clear communication. But so is squeezing code quality to make releases. Be like the Internet, and simply route around the damage - you have a responsibility to your co-Developers and to the customer here as a software professional.

Secondly, if your Backlog doesn’t highlight that adding items to it knocks items out of the Sprint, do something about it. Make sure that views of the Backlog contain a marker that shows where the team capacity for the Sprint is. Move that marker to reflect the work that can actually be achieved every time new Stories are added to the Sprint.

Essentially: start encouraging your Organizers to confront, as early on as possible, the possibility that the work that needs to get done may not fit in the time available. The sooner that crops up as an unavoidable issue, the more time there is to find a real solution. The longer everyone ignores the problem, the more likely it is that the closing Sprints will become Developer Death Marches™.

Thirdly, start keeping a note of areas where you've had to sacrifice on code quality, and where you’ve created Technical Debt. Create Technical Debt stories to represent this, and insist they go on the Backlog, even if “they’ll never get done”. Find the person who is pushing Agile in your organization, and if they're even slightly empowered, enlist their help on this. Find the person who has ultimate ownership of the wider codebase, and make sure they have visibility of the Technical Debt you’re creating, and why.

Some level of Technical Debt is acceptable when you have a tight release schedule. But hiding the creation of Technical Debt is criminal. Make sure it’s out in the open for everyone to see.

Fourthly, start - and relentlessly pursue - a discussion on what Done means to your team. When is a piece of work truly Done? Is an untested feature really Done? Is it really releasable? What are the impacts for the company and customer of releasing code in which you have little confidence? Hammer out a team statement (if you don't have one), with as much Organizer and business buy-in as possible, and print it out and stick it on your Story Board.

The Final Option, for Developers

Leave. The world is crying out for talented developers, and if you care enough about the software development process, you probably fit in that category. There are many places making bona-fide efforts to do this the right way, and a few well-placed questions at interviews ("How does the business deal with it if not all stories have been completed in a Sprint?" - I had a potential client tell me that all work HAD to be done or developers didn't go home... every Sprint) should help you work out which is which.

Wed, 28 Sep 2011 02:21:00 +0000 http://sett.com/import1391721468/uid/109554
Test-Driven Development? Give me a break... http://sett.com/import1391721468/uid/109576 Update: At the bottom of this post, I've linked to two large and quite different discussions of this post, both of which are worth reading...

Update 2: If the contents of this post make you angry, okay. It was written somewhat brashly. But, if the title alone makes you angry, and you decide this is an article about "Why Testing Code Sucks" without having read it, you've missed the point. Or I explained it badly :-)

Some things programmers say can be massive red flags. When I hear someone start advocating Test-Driven Development as the One True Programming Methodology, that's a red flag, and I start to assume you're either a shitty (or inexperienced) programmer, or some kind of Agile Testing Consultant (which normally implies the former). Testing is a tool for helping you, not for engaging in "more pious than thou", my-Cucumber-is-bigger-than-yours dick-swinging idiocy. Testing is about giving you, the developer, useful and quick feedback about whether you're on the right path and whether you've broken something, and about warning the people who come after you if they've broken something. It's not an arcane methodology that somehow has some magical "making your code better" side-effect...

The whole concept of Test-Driven Development is hocus, and embracing it as your philosophy, criminal. Instead: Developer-Driven Testing. Give yourself and your coworkers useful tools for solving problems and supporting yourselves, rather than disappearing into some testing hell where you're doing it a certain way because you're supposed to.

Have I had experience of (and gotten a lot of value from) sometimes writing tests for certain problem classes before writing any code? Yes. Changes to existing functionality are often a good candidate. Small and well-defined pieces of work, or little add-ons to already-tested code, are another.

But the demand that you should always write your tests first? Give me a break.

This is idiocy during a design or hacking or greenfield phase of development. Allowing your tests to dictate your code (rather than influence the design of modular code), and to dictate your design because you wrote over-invasive tests, is a massive fail.

Writing tests before code works pretty well in some situations. Test Driven Development, as handed down to us mortals by Agile Testing Experts and other assorted shills, is hocus.

Labouring under the idea that Tests Must Come First - and everything I've seen, and everything I see now, suggests that that is the central idea in TDD: you write a test, then you write the code to pass it - without pivoting to see that testing is useful only in so much as it helps developers, is the wrong approach.

Even if you write only some tests first, if you want to do it meaningfully, then you either need to zoom down into tiny bits of functionality first in order to be able to write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The former is the right approach in a small number of situations - tests around bugs, or small, very well-defined pieces of functionality.

Making tests a central part of the process because they're useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.

Testing is about helping developers, and about recognizing that automated testing exists for the benefit of developers, rather than cargo-culting a workflow and decreeing that one size fits all.

Writing tests first as a tool to be deployed where it works is "Developer Driven Testing" - focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't - that's not right.

Discussion and thoughts (posted a few hours later)...

I wrote this a few short hours ago, and it's already generated quite the discussion.

On Hacker News, there's a discussion that I think asks a lot of good questions, and there's a real set of well-reasoned opinions. I have been responding on there quite a bit with the username peteretep.

On Reddit, the debate is a little more ... uh ... robust. There are a lot of people defending writing automated tests. As this blog is largely meant to become a testing advocacy and practical advice resource, I've clearly miscommunicated my thoughts, and not made it clear enough that I think software testing is pretty darn awesome - I'm just put off by slavish adherence to a particular methodology!

If you've posted a comment on the blog and it's not there yet, sorry. Some are getting caught in the spam folder. I'm not censoring anyone, and I'm not planning to, so please be patient!

Anyway, the whole thing serves me right for putting together my first blog post by copy-pasting from a bunch of HN comments I'd made. The next article is a walk-through of retro-fitting functional testing to large web-apps that don't already have it, and in such a way as the whole dev team starts using it.

Sat, 24 Sep 2011 02:30:00 +0000 http://sett.com/import1391721468/uid/109576