Update: At the bottom of this post, I've linked to two large and quite different discussions of this post, both of which are worth reading...
Update 2: If the contents of this post make you angry, okay. It was written somewhat brashly. But, if the title alone makes you angry, and you decide this is an article about "Why Testing Code Sucks" without having read it, you've missed the point. Or I explained it badly :-)
Some things programmers say can be massive red flags. When I hear someone start advocating Test-Driven Development as the One True Programming Methodology, that's a red flag, and I start to assume you're either a shitty (or inexperienced) programmer, or some kind of Agile Testing Consultant (which normally implies the former). Testing is a tool for helping you, not for engaging in "more pious than thou", my-Cucumber-is-bigger-than-yours dick-swinging idiocy. Testing is about giving you, the developer, useful and quick feedback about whether you're on the right path and whether you've broken something, and about warning the people who come after you when they've broken something. It's not an arcane methodology that somehow has some magical "making your code better" side-effect...
The whole concept of Test-Driven Development is hocus, and embracing it as your philosophy, criminal. Instead: Developer-Driven Testing. Give yourself and your coworkers useful tools for solving problems and supporting yourselves, rather than disappearing into some testing hell where you're doing it a certain way because you're supposed to.
Have I gotten real value out of sometimes writing tests for certain classes of problem before writing any code? Yes. Changes to existing functionality are often a good candidate. So are small, well-defined pieces of work, or little add-ons to already-tested code.
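As a sketch of what that looks like in practice, here's the test-first-for-a-bug case in Python. The `slugify` function and its bug are hypothetical, purely for illustration: you pin the reported behaviour down in a failing test, then fix the code until it passes.

```python
# Hypothetical bug report: slugify("Hello, World!") returns
# "hello,-world!" instead of "hello-world". Before touching the
# code, capture the expected behaviour in a test that fails.

import re

def slugify(text):
    # Fixed implementation: lowercase, strip punctuation,
    # collapse runs of whitespace/hyphens into single hyphens.
    text = re.sub(r"[^a-z0-9\s-]", "", text.lower())
    return re.sub(r"[\s-]+", "-", text).strip("-")

def test_slugify_strips_punctuation():
    # This assertion failed against the buggy version; once it
    # passes, the fix is done and the regression stays covered.
    assert slugify("Hello, World!") == "hello-world"

test_slugify_strips_punctuation()
```

The test doubles as documentation of the bug, and it stays in the suite to stop the same regression coming back.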
But the demand that you should always write your tests first? Give me a break.
This is idiocy during a design, hacking, or greenfield phase of development. Allowing your tests to dictate your code (rather than influence the design of modular code), or letting over-invasive tests dictate your design, is a massive fail.
Writing tests before code works pretty well in some situations. Test Driven Development, as handed down to us mortals by Agile Testing Experts and other assorted shills, is hocus.
Labouring under the idea that Tests Must Come First (everything I've seen, and everything I see now, suggests that this is the central idea in TDD: you write a test, then you write the code to pass it) without stepping back to see that testing is useful insofar as it helps developers is the wrong approach.
Even if you write only some tests first, doing it meaningfully means one of three things: you zoom down into tiny bits of functionality so that you can write those tests, or you write a test that requires most of the software to be finished, or you cheat and fudge it. The first is the right approach in a small number of situations: tests around bugs, or small, very well-defined pieces of functionality.
Making tests a central part of the process because they're useful to developers? Awesome. Dictating a workflow to developers that works in some cases as the One True Way: ridiculous.
Testing is about helping developers, and recognizing that automated testing is about benefit to developers, rather than cargo-culting a workflow and decreeing that one size fits all.
Writing tests first as a tool to be deployed where it works is "Developer Driven Testing" - focusing on making the developer more productive by choosing the right tool for the job. Generalizing a bunch of testing rules and saying This Is The One True Way Even When It Isn't - that's not right.
Discussion and thoughts (posted a few hours later)...
I wrote this a few short hours ago, and it's already generated quite the discussion.
On Hacker News, there's a discussion that I think asks a lot of good questions, and there's a real set of well-reasoned opinions. I have been responding on there quite a bit with the username peteretep.
On Reddit, the debate is a little more ... uh ... robust. There are a lot of people defending writing automated tests. As this blog is largely meant to be a testing advocacy and practical-advice resource, I've clearly miscommunicated my thoughts, and not made it clear enough that I think software testing is pretty darn awesome, but I'm put off by slavish adherence to a particular methodology!
If you've posted a comment on the blog and it's not there yet, sorry. Some are getting caught in the spam folder. I'm not censoring anyone, and I'm not planning to, so please be patient!
Anyway, the whole thing serves me right for putting together my first blog post by copy-pasting from a bunch of HN comments I'd made. The next article is a walk-through of retro-fitting functional testing to large web apps that don't already have it, in such a way that the whole dev team starts using it.
TDD is predicated on the notion that consistent adherence to a software's contract throughout maintenance and revision will lead to more efficient, higher-quality software. However, what TDD adherents seem to ignore or be unaware of is the fleetingly short lifetime of a typical software application in the industry. One year is ancient, and even if an application performs perfectly, unless it is overhauled or completely replaced with a brand-new implementation, users will simply move on to something else because of that frustrating quality users have: a never-ending lust for the new and the shiny. Therefore, investing 2 months writing non-value-add tests for a product that has one year to live can hardly be considered efficient or a good investment.
"Seek and ye shall find". We form our assumptions and biases and then proceed to go forth and "discover" those things that confirm them. To argue against TDD you must either argue against automated tests altogether or argue _for_ their proper and most beneficial usage. Anyone who is both for testing and against TDD does our trade a huge disservice when they do not present their own testing practices for edification and review.
Why does this come up fourth in my Google results....GoogleFail.
This is nothing more than a subjective, personal, and highly emotive rant, with no examples or logic used to back it up. Using brief and pointless phrases like "fail" shows your unprofessionalism.
You've clearly been lectured by some TDD nut, taken it personally, as though he's making out you're doing things "wrong", and now you have a huge chip on your shoulder.
Here because I googled "I hate test driven development", I wanted to check if I was crazy, not sure yet ...
Thanks for the voice of reason! The only places I see TDD being applicable:
1. Large teams integrating code of different qualities and trying to make it work (trust is lower)
2. Product development, where a break fix has implications across the user base (stakes are higher)
3. Small enhancement to a large messy ball of mud (CYA)
All other times, let's try succeeding before we fail, please?
TDD is something that I recently heard about, touted by the new CTO at a place where I used to work. Before I looked up the definition, I thought it was a good way to develop software. Later, when I really started looking into it, the following became my thoughts on the matter:
- unless TDD is just unit testing, the developer should not be writing the test cases
- from the requirements document, there should be 2 parallel tracks:
1. developer starts writing the code to address the requirements
2. QA starts writing the test cases for ALL of the requirements, keeping as much of it automated as possible
- each version of the code should be subjected to the entire test suite
- at first, 99% of the test cases will fail, and as the development effort matures, the number of failing test cases will tend towards zero
Testing is an offensive operation and coding is a defensive one. This is why QA traditionally did not report to the Project Manager: the conflict of interest. QA's objective is to break the system as much as possible, and QA should come at the software with a sledgehammer. I don't see how a developer writing his or her own test cases can be that aggressive.
Probably the mistake I made was to think that TDD was just developing ALL of the test cases up front and using them to periodically and frequently check the entire software for completeness.
I don't think TDD is good. We are compelled to learn so many things for our development that when we start designing, we should be able to go through the process without the need for unit testing. Remember that when we paint or write as artists (if we are such), we first do the thing as if it were poured from genius, then we adjust; the flow must not be interrupted by ASSERT IS TRUE... No thanks. I prefer pair programming instead, with the right partner.
So thanks for your lines Author!! I don't feel so alone now.
On a large project that spans several international teams, we have found TDD to be a solid practice. We approach it not from a "Build a Test Case" perspective, but more from a Requirements Mapping need: take a Requirement/Task from the Backlog and write a test or tests to satisfy that requirement. The code that then passes those tests has satisfied that requirement. Our coverage isn't 100% (closer to 85-90%), but at least we don't run into a test-later or test-never situation. All our products have to pass FDA scrutiny, and TDD has helped along that road. We cover the code, and SQA covers the BDD and Acceptance. None of us are drones or cult members, but many of us see the wisdom in "Test-First" approaches.
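The requirement-mapping approach the commenter describes can be sketched roughly like this (a hypothetical requirement and function, not the commenter's actual code): each backlog item becomes a named test, and code that passes the test has demonstrably satisfied the requirement.

```python
# Hypothetical requirement "REQ-107: doses above the configured
# maximum must be rejected" mapped directly to a test. The
# requirement ID in the test name makes traceability auditable.

MAX_DOSE_MG = 500  # assumed configured limit, for illustration

def validate_dose(dose_mg):
    """Return True if the dose is within the allowed range."""
    return 0 < dose_mg <= MAX_DOSE_MG

def test_req_107_rejects_excessive_dose():
    assert validate_dose(500) is True   # boundary is allowed
    assert validate_dose(501) is False  # above maximum: rejected
    assert validate_dose(0) is False    # non-positive: rejected

test_req_107_rejects_excessive_dose()
```

For regulated work (the FDA scrutiny mentioned above), naming tests after requirement IDs gives a cheap traceability matrix: grep the suite for `req_107` and you find the evidence for that requirement.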
One more benefit to add, which might also be useful to you as a "code cowboy": the regression testing that a test suite gives you when you go back to modify your code in significant ways. Instead of retesting multiple scenarios by hand after you've modified code that has been in production for a year, you can just hit the "Run" button and, if it shows green, be confident that your change didn't break something that was previously working.
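A minimal sketch of that regression safety net, using a hypothetical pricing function: after modifying `apply_discount` a year from now, rerunning these three tests replaces re-checking each scenario by hand.

```python
# Hypothetical pricing module with a small regression suite.
# Any future edit to apply_discount is checked against all the
# behaviour we previously relied on, in one run.

def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_no_discount():
    assert apply_discount(80.0, 0) == 80.0

def test_half_off():
    assert apply_discount(80.0, 50) == 40.0

def test_invalid_percent_rejected():
    try:
        apply_discount(80.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass  # rejected as expected

for t in (test_no_discount, test_half_off, test_invalid_percent_rejected):
    t()
```

Green means "everything that worked before still works", which is exactly the feedback loop the commenter is describing.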
I've coded both ways, TDD and non-TDD. I find that if I don't use TDD, then I end up manually testing everything anyway (and often in the debugger, so I can make sure I'm hitting the branches of code that I expect). With TDD, I know exactly what code I'm hitting. It is more tedious and slow to develop this way, but the accuracy is very high.
"A survey of 1,027, mainly private sector, IT projects published in the 2001 British Computer Society Review showed that only 130 (12.7 per cent) succeeded. "
Maybe you have always been in that 12.7%, but many across the industry notice that software projects are often failing. TDD was always really an attempt to avoid this. Perhaps the TDD you have seen was not practiced well? That is the main reason I have seen for it not working on projects. When practiced well, I have found it very (not 'extremely'!) useful.