TDD is dead. Long live testing. (heinemeierhansson.com)
351 points by gerjomarty on April 23, 2014 | 161 comments



What DHH describes is called the test pyramid (http://martinfowler.com/bliki/TestPyramid.html). Despite the fact that DHH seems to dislike Uncle Bob, it is something Uncle Bob has demonstrated and advocated multiple times in very sensible ways.

The bigger thing DHH doesn't mention or explain is that Basecamp is designed to be intentionally simple. Much of the complexity you get in client work, where you don't control the requirements, Basecamp is able to avoid simply by not building it.

The idea that if you own your own product and control the requirements, you can have a MUCH cleaner codebase without needing as many tests is 100% true. But David didn't say that, and it's a shame, because it's a much more powerful point. There are cases where clean architectures are valuable and the standard Rails MVC pattern isn't enough. At the same time, there are plenty of times where the basics will get you a very long way. If you control the complexity, simple MVC might be all you ever need.

Everything in software development involves managing requirements and tradeoffs. If you simplify the requirements, you reduce complexity. Reduce complexity, and you reduce the surface area you have to test.

It's like weight reduction in cars: shave a few pounds of overall weight, and you can also drop an extra 5-10 lbs. of components that were only there to support the added weight/complexity.

I just wish people could discuss these ideas with nuance and empathy towards other projects' requirements. Some projects require a lot of tests and have significant inherent complexity; some don't. A one-size-fits-all philosophy might sound appealing, but it doesn't work.

Don't throw the TDD baby out with the bathwater.


I wish this post would stay at the top of _ANY_ discussion about automated testing.

A couple of V.E.R.Y important points:

1) Testing Pyramid

People who agree and people who disagree talk a lot about unit testing, yet they never talk about the other types of automated testing, as if the rest were forgotten.

To make matters worse, people often haven't had the experience of managing a full-blown system test suite. Hint: it's very brittle, slow, and expensive (even with improved infrastructure automation tools such as Chef, Packer, Docker, and Vagrant). And don't forget the magic words "tightly coupled", which will demotivate you hard and quick whenever you want to make changes.

2) Control the requirement, control the codebase

This is something that NO one ever mentions in ANY "clean architecture/codebase" discussion. People often focus on the tools (Rails, Haskell, Clojure) and never mention that they actually control the requirements (in other words: they're opinionated), or that the requirements take a back seat to cleaner code.

For example: if it doesn't fit with Rails, don't do it <= controlling the requirements.


I would prefer this post stay as an example of how not to write a post about the testing pyramid. Martin Fowler's article is better IMHO, and I'm sure there are much better posts still.

There are some good ideas here, mixed with fallacies and disdain/contempt. The latter are not good in any discussion.


Another thing I think DHH has missed here is that his experience is not particularly broad. His achievements are very impressive, but if I recall rightly, he started Rails before finishing his CS degree, and that's basically all he's done since.

One of the core assumptions of Rails is having a database. A lot of it is focused on turning HTTP requests into SQL requests, and turning SQL responses into HTML. Which is fine; that's what a lot of web stuff is, and it's fine for that. But it's not the only thing that happens in the world.

I've TDDed systems that are not locked in intimate embraces with databases, and TDD is much easier and more pleasant there. On one project built that way, we ended up with 40k lines of test code (and a similar amount of production code) that ran in a few seconds. It was the most pleasant development experience of my career, and TDD was no problem there.


It's a lot easier to write unit tests for, you know, units. For example, if you have a date-to-readable-string conversion library, you can test the crap out of it since its input is a date object and its output is a string. It's a unit of code, so all the testing that's required is unit testing.
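To make that concrete, here's the sort of test I mean, sketched in Python (the function and the expected strings are invented for illustration):

    import unittest
    from datetime import date

    def date_to_readable(d):
        # Pure unit: a date goes in, a string comes out -- no I/O, no collaborators.
        return d.strftime("%B %d, %Y").replace(" 0", " ")

    class DateToReadableTest(unittest.TestCase):
        def test_formats_a_typical_date(self):
            self.assertEqual(date_to_readable(date(2014, 4, 23)), "April 23, 2014")

        def test_single_digit_day_has_no_leading_zero(self):
            self.assertEqual(date_to_readable(date(2014, 4, 3)), "April 3, 2014")

    if __name__ == "__main__":
        unittest.main()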

On the other hand, when the "unit" you are testing relies on six external APIs and your own database to work, best of luck to ya. It's not a unit at this point; it's the glue that holds other units together. It is important to get it right, but given how many different combinations of failure modes there are, you are not going to test it all and you are not going to get it right. Mocks/fakes do not make this stuff much better, as timing and latency are important too. What if the UPS address service you are using doesn't return an error but times out? Now your production code has to include enough of a framework for simulating network errors.
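To make the timeout point concrete, here's roughly the machinery you end up writing for even the simplest case (a Python sketch; the service and all names are invented):

    import socket
    from unittest import TestCase
    from unittest.mock import Mock

    def shipping_label(order, address_service):
        # Glue code: degrade gracefully if the address service hangs.
        try:
            verified = address_service.verify(order["address"])
        except socket.timeout:
            verified = order["address"]  # fall back to the unverified address
        return "SHIP TO: " + verified

    class ShippingLabelTest(TestCase):
        def test_falls_back_when_the_service_times_out(self):
            service = Mock()
            service.verify.side_effect = socket.timeout("timed out")
            label = shipping_label({"address": "1 Main St"}, service)
            self.assertEqual(label, "SHIP TO: 1 Main St")

And that's one failure mode of one of the six APIs.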

I do agree with TFA: TDD is a niche thing that has been so widely embraced for all the wrong reasons that it's no longer useful even in the niche. System-wide testing is very important as well, and the tooling for it sucks. My rule of thumb is to be pragmatic about automated testing: test the things that will give you the greatest ROI in terms of time/effort/time-to-market/etc., but don't be religious about it. The tests are not the end product and should not be treated as such.


Agreed. Unit testing requires units.

I also agree that being pragmatic is generally the way to go, but that kind of pragmatism has limits. If I'm too focused on short-term pragmatism, I will let the issues of external systems infect mine. If I'm being longer-term pragmatic, I'll try to keep my code relatively clean and reliable, no matter how kooky the things I'm integrating with.

For example, for the first REST integration I do, I may just say "we'll ignore network errors and see what happens". But if I'm doing a few, I'm going to look hard at extracting a reusable layer of code that handles errors, timeouts, partial responses, and the like. And that I will test properly. That way the rest of my code can assume sanity.
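As a sketch of what I mean by that layer (Python with requests; the retry policy and names are illustrative, not a real library):

    import time
    import requests

    class RestClient:
        # Reusable layer: timeouts, retries, and error translation live here,
        # so the rest of the code can assume sanity.
        def __init__(self, base_url, timeout=5.0, retries=3):
            self.base_url = base_url
            self.timeout = timeout
            self.retries = retries

        def get_json(self, path):
            last_error = None
            for attempt in range(self.retries):
                try:
                    response = requests.get(self.base_url + path, timeout=self.timeout)
                    response.raise_for_status()
                    return response.json()
                except requests.RequestException as error:
                    last_error = error
                    time.sleep(2 ** attempt)  # crude backoff between attempts
            raise last_error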


Short-term pragmatism and long-term pragmatism. Excellent concepts that I've never thought to quite express.

It seems you can always 'win' an argument in software engineering or TDD by talking about taking the 'pragmatic' solution, or the 'proper' solution. It often seems rather arbitrary whether to build something properly now, or introduce some short-term technical debt.

It's easy to accuse people of being cowboy coders, or architecture astronauts, and it unfortunately seems to fall into these dichotomies. Admittedly there are people like that, but mostly through lack of experience. The progression is maybe commonly cowboy coder -> architecture astronaut -> balanced software craftsman.

There are probably a bunch of steps missing on the way, and it's not a clear linear path either, but I do like the idea of having different durations of pragmatism rather than a one-size-fits-all "We'll cut corners to be pragmatic, because we're not architecture astronauts".


That path you describe was definitely mine. I think the cycle driving that change is something like:

Things seem great -> I'm ignoring a problem -> Ok, there really is a problem -> I hate the problem -> Look, a solution! -> The solution is the best thing ever! -> Ok, I've taken it too far -> Things seem great.

Regarding the arbitrariness of building something right versus taking on technical debt: to me that's great grounds for experimentation, as long as I can trust business stakeholders to give me room. At my last startup, we sometimes took on substantial amounts of technical debt, because I knew I could always trust my cofounder to give us the room to clean up our messes when it became necessary.

But in a more pathological business setting, I'd be an absolutist. No technical debt! Never! Because I've seen too many places take that long slide from a little technical debt to an enormous low-productivity snarl. And then they just accept that as normal, generally for the rest of the company's life.

So I think a number of these engineering arguments are framed too narrowly. People often end up cowboy coders or architecture astronauts because that's what's working for them in their circumstances.


> Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow". Like hitting the database. Or file IO. Or going through the browser to test the whole system. It's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse.

This is DHH's central argument: he is once again defending his "there are only three places to put code" application design and the monolithic Rails architecture. We see him, time and time again, sniping at people who outgrow those patterns but still want to use Rails. People who do want fast and isolated unit tests, who want encapsulated, reusable service objects, and people who are perhaps building something more complicated than a TODO list.

He goes as far as to subtly deprecate unit testing, something which is incredibly vital in a dynamic, loosely typed language such as Ruby, where monkey-patching others' code is more the rule than the exception. In Ruby, unit tests stand in place of static compiler checks. I haven't heard a strong argument against them nor a replacement for them. The binary notion of "the whole application works" or "the whole application does not work" does nothing to quell the critics who say that Ruby, and indeed Rails, projects are brittle and difficult to refactor.

I love Rails and think it's a fine product, but I don't understand its leadership strategy, doggedly preserving web application design as it existed in 2004.

> "Rails 5 will be renamed to Basecamp. This will help to end confusion over which types of apps to build using Rails." @markbates


> Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow". Like hitting the database. Or file IO. Or going through the browser to test the whole system. It's given birth to some truly horrendous monstrosities of architecture. A dense jungle of service objects, command patterns, and worse.

Maybe your name is David, and you don't do test-first programming, and neither do I for that matter. But don't think for a second that separation of concerns is an artifact of TDD. And yes, having a service layer is a good thing. Even if you're doing a TODO app.


The thing is... in a Rails app, the Controller is slowly becoming THE service layer.

In the older literature, written mostly for enterprise projects, the Service Layer (http://martinfowler.com/eaaCatalog/serviceLayer.html) answers to multiple interfaces.

Fast forward to today: Rails controllers have become the API which other interfaces can consume via JSON, XML, even HTML, or simply a stream of text if so configured.

If you can dictate "here's an endpoint that you can poke; I will not provide WS-*" (in other words: you can dictate the requirements), then there's no need for a service layer.


Uh. In any complex application, you write services which are responsible for various business needs, and whose responsibilities only loosely overlap the way your webapp is accessed from the outside. You may even have services only used by other services.

And obviously, stuffing your business logic along with your web glue (whatever concrete representation you're sending to the outside, you're still stuffing parameters in a template, essentially) still violates SRP. Not to mention that it makes it utterly impossible to reuse outside of your web application, something which is sometimes desirable.


I don't disagree with you. In fact, I write those services myself, whether for the API component or in the web glue, on the projects I work on.

I'm just pointing out that there's a fraction of the Rails crowd that doesn't think these services are needed, partly because of how things work in Rails.


> We see him, time and time again, sniping at people who outgrow those patterns but still want to use Rails.

To me it seems more like the pattern proponents who are sniping at those who find MVC to be sufficient. Take for example the recent discussion that dhh participated in:

* The post's title: "Rails - The Missing Parts". Good start - establish your position by claiming that Rails is defective in that crucial parts that everyone requires are just missing.

* "We’re solving these problems with 3 concepts we believe should be part of any “Advanced” Rails deployment". Now try to convince people that if they aspire to anything 'advanced' then they simply must be adding in these additional bits, no questions asked, no two ways about it.

dhh's "sniping" response?

"Whatever floats your boat, though. If this is what you prefer, great. But please hold the "beginner's version" crap. Plenty of large apps are built with vanilla Rails."

So all he's saying is "if you want to do that, go ahead. Just don't try to make out that Rails is defective because it doesn't do it by default".


> In Ruby, unit tests stand in place of static compiler checks. I haven't heard a strong argument against them nor a replacement for them.

How about static compiler checks? :)


I don't know a great deal about Ruby, but I'd wager that the way it's designed would make static type checking essentially impossible. And you probably understand this, but still. Dynamic languages allow fundamentally unsound types, for example:

    def foo(x):
        print(x)
        return foo
This function has no computable type (in a standard system anyway); it's "a -> a -> a -> ...". However it's perfectly obvious what it does, and conceivable that it or similar functions might exist and be useful in actual code. Dynamic languages allow behaviors which are impossible in statically typed languages (properties created at runtime are another example).

Of course, one could use gradual typing to get around some of this.
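Indeed -- and amusingly, even Python's gradual typing can give the foo above a (recursive) type via a callback protocol. A sketch, assuming a checker like mypy:

    from typing import Protocol

    class SelfReturning(Protocol):
        # A callable that accepts anything and returns another such callable:
        # the recursive type "a -> a -> a -> ...".
        def __call__(self, x: object) -> "SelfReturning": ...

    def foo(x: object) -> "SelfReturning":
        print(x)
        return foo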


Still not an adequate replacement for unit tests. The compiler can prove that your code is well-typed, but not that your logic is correct.


In the same way that writing a unit test for something proves your logic is correct? This is not intended as snark. I'm just stating the obvious: your unit tests are no silver bullet for a working, correct piece of software.

My 2 cents: combine static(ish) typing with tests and a number of (semi-manual) test scenarios, and you get a few steps closer to a correctly working piece of software.


Manual testing is what really confirms that your code is working properly. Automated testing verifies the conditions necessary for your code to pass manual testing. The real value of automated tests is for when you need (or someone else needs) to come back and change something.

I don't think static typing is necessary in that case, but I understand it has benefits in some situations.


The kind of "architecture" that you are talking about that was often postulated under the banners of "Single Responsibility Principle" (a poor rule of thumb in fact) and "design for testability", that results in one-method classes, or classes that do not have any state and pass everything in via method parameters is in fact contradicting basic tenets of OOP like encapsulation and having a reusable domain model - I find it, like DHH, a horrible abomination, even if tests are faster because of it.

There are lots of other ways of dealing with complexity in Rails that do not involve any of this. Some of the people who go around talking about DCI and services have problems with basic OO modelling, or even with simply writing good methods (it actually takes a fair amount of skill), or with their Ruby and Rails knowledge, and they drown in complexity not because of Rails' default architecture patterns, but because of a general lack of coding skills, lack of developer-PM communication that increases essential complexity, lack of developer-developer communication, NIH, etc. Here is a whole bunch of things you can do without introducing "service objects" and all that:

- Extract non-business-logic-related general components like API wrappers and widgets into a gem, a Rails engine, a jQuery plugin, etc.

- Use existing gems and techniques that concisely encode high-level patterns, like state machines

- Extract common pieces of behaviour into controller or model mixins (concerns)

- Use finer-grain modelling, e.g. introduce value objects (see the documentation of composed_of; a sketch follows this list) or simply split models into smaller ones, e.g. instead of having simply a User model, separate User and Profile. Extract very complicated algorithms into separate classes.

- Promote code reuse by taking extra care to have an ultra-clean API for crucial domain logic operations - the methods that correspond to those should be listed in one place with brief descriptions, documented in detail where they are defined, and should have easy-to-remember names, flexible parameter lists allowing handling of different use cases, clear error signaling, and so forth.

- Group related models in modules.
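For the value-object bullet, a quick illustration (sketched in Python rather than via Ruby's composed_of, purely for brevity; Money is the classic example):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Money:
        # Immutable value object: compared by value, has no identity of its own.
        amount_cents: int
        currency: str

        def __add__(self, other):
            if self.currency != other.currency:
                raise ValueError("currency mismatch")
            return Money(self.amount_cents + other.amount_cents, self.currency)

    assert Money(500, "USD") + Money(250, "USD") == Money(750, "USD")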

I would like to see one codebase that perfects all the basic coding practices of this kind and yet still has issues with the explosion of complexity. Somehow people manage to write, for example, complicated games spanning several hundreds of thousands of lines of code without introducing LightRayHitSpherePredicateFactories all over the place, and to a large extent I think they do so by mastering the basics of the kind listed above. I would also like to see one codebase that uses DCI or Service Objects religiously that isn't a 100 line TODO list.


> The kind of "architecture" that you are talking about that was often postulated under the banners of "Single Responsibility Principle" (a poor rule of thumb in fact) and "design for testability", that results in one-method classes, or classes that do not have any state and pass everything in via method parameters is in fact contradicting basic tenets of OOP like encapsulation and having a reusable domain model - I find it, like DHH, a horrible abomination, even if tests are faster because of it.

Both kinds of design features you describe are generally bad [1] ways of implementing either SRP or design for testability, and they don't seem to be the architectural choices the grandparent post was actually suggesting. What I think the GP is talking about is more like taking external dependencies as constructor parameters and coding to their required API, rather than baking the specific concrete implementation into the class design and coding the class to that. This doesn't require limiting the internal state of the object; it just pulls decisions that belong to the calling environment out of the object and back to the calling environment, which makes the code more reusable (including being "reusable" in a unit-testing environment where the external dependencies are mocks whose behavior is defined by test parameters rather than adaptors to real external resources).

[1] There are very narrow specific cases where they might be the right thing
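A minimal illustration of that constructor-parameter style (Python; every name here is hypothetical):

    class InvoiceMailer:
        # Depends on "something with send_message"; the caller decides what.
        def __init__(self, smtp_client):
            # Real SMTP client in production, a test double in unit tests.
            self.smtp = smtp_client

        def send(self, invoice, to_address):
            self.smtp.send_message(to_address, "Your invoice", invoice.summary())

    # Production wiring: InvoiceMailer(RealSmtpClient("smtp.example.com"))
    # Unit-test wiring:  InvoiceMailer(unittest.mock.Mock()), then assert on the calls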


> doggedly preserving web application design as it existed in 2004

It's actually more ancient if you consider the fact that the ActiveRecord pattern (in spirit, not in name) goes back to VB3.


ActiveRecord also showed up in Martin Fowler's Patterns of Enterprise Application Architecture book (2002): http://martinfowler.com/eaaCatalog/activeRecord.html

I think somewhere in that book (foreword or preface or whatnot) he also mentioned DHH's help/contribution to the book.

As much as the Rails community hates Enterprise, lots of the patterns Rails was built upon showed up in that book as *gulp* "Enterprise" patterns *gulp*.


> In Ruby, unit tests stand in place of static compiler checks

I think you have it backwards. Non-ruby developers often use static compiler checks as a replacement for proper unit tests.


Static compiler checks are free, while unit-tests cost initial effort and later maintenance.


Isn't it simple? Passing unit tests by definition don't guarantee that your software works. Passing system or acceptance tests do.

Unit tests are still nice to have, which DHH doesn't seem to oppose. Along with good documentation, good unit tests may help keep things maintainable for developers themselves. But they are far from essential, so depending on time constraints and project complexity they may just not make sense to spend resources on.


I've been slowly coming to this realization myself lately, I thought our situation was just outside the mainstream of what most people work on, but maybe not.

Our team builds data-intensive biomedical web applications (open source project for it all here: http://harvest.research.chop.edu). Much of our UI is data-driven, so many of the bugs we encounter are at the intersection of code, config, and unique data circumstances. While a lot of the low-level components can be unit tested, individual apps as a whole need functional testing with real (or real enough) data for us to consider them sufficiently tested. The effort required to mock out things is often higher than just cloning production and running new code on top of existing data. This gets complicated in a hurry when you also have to introduce schema migrations before you can test. It's almost like we need to be doing integration testing, far, far earlier than you would normally.

Furthermore, the reality is that what started out as a Django app now has almost as much client-side JavaScript as it does Python code. This complicates the testing picture further, and I suspect many teams pushing things further in the direction of true web applications are starting to bump into this more and more.


> the intersection of code, config, and unique data circumstances

I'm sure I'm way over-simplifying here, but those sound like missed edge cases? I can imagine they'd be difficult to predict, regardless.


Why does moving toward more client-side JS complicate testing further?


Because now you're introducing an interface where previously you had shared objects all on the server in a single codebase. For example, let's say my JavaScript client works with a "User" object delivered via REST API from the server. Let's say I change Django's User model to modify some existing attribute. Previously, all my Python code's tests would use the updated model object, and I could simply run my tests and count on finding bugs wherever things that make use of "User" broke as a result of the change. But now, with lots of client-side code, a whole other JavaScript-based test suite (completely outside Django's) needs to run to make sure the new JSON representation will work.

However, this means I have to test not just the backend Django code, but also the output of the REST service, the interaction between the JavaScript code and the REST API, and the internal JavaScript methods that use the User object on the client. You are now dealing with at least two completely independent testing frameworks (one in Python and one in JavaScript). If you want traditional unit tests, you need to mock the API calls and the JSON payloads in both directions. Now you've got to maintain all those mock objects so that your tests are actually testing real payloads and calls and not outdated versions. Ultimately, the only foolproof way to be sure it all works and you didn't miss anything is to deploy the whole app together and poke at the full stack through a headless browser test.
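(To make the failure mode concrete: one partial mitigation is a fast server-side test that pins the payload shape, so a drifting JSON contract at least fails loudly on one side. A sketch; the endpoint and fields are invented, and the `client` fixture is pytest-django-style:)

    import json

    EXPECTED_USER_FIELDS = {"id", "username", "email", "is_active"}

    def test_user_payload_shape(client):
        # If someone renames or drops a field the client depends on, this fails.
        response = client.get("/api/users/1/")
        payload = json.loads(response.content)
        assert set(payload) == EXPECTED_USER_FIELDS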


1) See the testing pyramid article posted somewhere within this thread.

2) I never have to maintain "mock" objects in my tests; my mocks come for free (I use Mockito; I have fewer Java interfaces, and I mock my real classes and inject them in the right places).

3) Separating Django tests and JS tests shouldn't be too bad and often preferred.

4) You can test the JSON payload from the back-end to the front-end IN the back-end serialization code. Speaking of which, I use JAX-RS (and Jackson/JAXB), so the JSON payload is something I normally don't test, since that would mean testing a framework that is already well-tested. I normally don't test JSON payloads coming from the front-end either: it's JavaScript, it's the browser; I don't test an already-tested thing.

But I'll give you another example of object transformation from one form to another: I use Dozer to convert domain models (Django model, ActiveRecord model) to Data Transfer Objects (plain old Java objects). To test this, I write a unit test that converts one and checks the expected values. (A sketch of the same idea follows this list.)

5) Nobody argues against end-to-end testing :)

Check out PhantomJS, CasperJS, Selenium (especially WebDriver), and also Sauce Labs (we use them all). But end-to-end testing is very expensive, hence the testing pyramid.
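Re: point 4, the same conversion-test idea sketched in Python rather than Dozer (the model and DTO are invented):

    from dataclasses import dataclass

    @dataclass
    class UserDTO:
        id: int
        display_name: str

    def to_dto(user):
        # `user` is anything with .id, .first_name, .last_name (e.g. an ORM model).
        return UserDTO(id=user.id, display_name=f"{user.first_name} {user.last_name}")

    class FakeUser:
        id, first_name, last_name = 7, "Ada", "Lovelace"

    def test_to_dto_maps_the_fields():
        assert to_dto(FakeUser()) == UserDTO(id=7, display_name="Ada Lovelace")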


I'm personally not a big fan of mocking; it introduces a lot of duplication into the system. On the other hand, I'm not a big fan of testing through the fully integrated system unless you've TDD'd everything from scratch, because otherwise people let systems get slow enough that the tests become too slow to maintain a good pace.

If you're already in the "integrating with slow things" problem space, then one solution is to automatically generate the mocks from real responses. E.g., some testing setup code calls your Django layer to create and fetch a User object. You then persist that so that your tests run quickly.
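A sketch of that record-once, replay-afterwards idea (Python; the file layout and names are invented):

    import json
    from pathlib import Path

    def recorded(name, fetch_real, cache_dir=Path("tests/recordings")):
        # Call the slow real layer once, persist the (JSON-serializable) result,
        # and replay it on every subsequent run.
        cache_file = cache_dir / (name + ".json")
        if cache_file.exists():
            return json.loads(cache_file.read_text())
        data = fetch_real()  # e.g. actually hits the Django layer
        cache_dir.mkdir(parents=True, exist_ok=True)
        cache_file.write_text(json.dumps(data))
        return data

    # In a test: user = recorded("user_1", lambda: fetch_user_payload(pk=1))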

And yeah, I'll definitely use end-to-end smoke tests to make sure that the whole thing joins up in the obvious spots. But those approaches are slow and flaky enough that I've never managed to do more than basic testing through them.


Largely because more of the automated integration testing has to be done with a headless browser, e.g. Poltergeist.

If you have no JS in an important area of your site, you can integration test it without a headless browser. Your test process models HTTP requests as simple method calls to the web framework. (E.g. in Rails, you can simulate an HTTP request by sending the appropriate method call to Rack.) The method calls simply return the HTTP response. You can then make assertions against that response, "click links" by sending more method calls based on the links in the response, submit forms in a similar manner, etc.
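Django's version of that in-process trick is its test client; a minimal sketch (the URLs are hypothetical):

    from django.test import TestCase

    class SignupFlowTest(TestCase):
        def test_signup_page_links_to_login(self):
            # No browser, no socket: the request is dispatched in-process.
            response = self.client.get("/signup/")
            self.assertEqual(response.status_code, 200)
            self.assertContains(response, 'href="/login/"')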

But if a given part of your app depends on JS, you pretty much have to integration test in a headless browser. Given the state of the tooling, that's just not as convenient as the former approach. Headless browsers tend to be slow as molasses. There are all kinds of weird edge cases, often related to asynchronous stuff. You spend a lot of time debugging tests instead of using tests to find application bugs.

Worst of all, headless browsers still can't truly test that "the user experience is correct." That's because we haven't yet found a way to define correctness. For example, a bug resulting from the interaction of JS and CSS is definitely a bug, and it can utterly break the app. But how do you assert against that? How do you define the correct visual state of the UI?


Yes, I've known about the headless proposition for a while.

Splitting front-end and back-end tests is desirable.

> Worst of all, headless browsers still can't truly test that "the user experience is correct."

This is the claim from the old Joel Spolsky article about automated tests, but it should not be the ultimate dealbreaker.

Nobody claims you should rely on automated tests 100%. Automated tests test the functionality of your software, not the look-and-feel or user experience. You have separate tests for that.

There shouldn't be that many problems between JS and CSS either (certainly not enough to, again, become a dealbreaker). If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?

I don't test my configurations (in-code configuration, not infrastructure configuration) because configuration is one-time only. You test it manually and forget about it.


> Splitting front-end and back-end tests is desirable.

I don't feel confident without integration tests. An integration test should test as much of the system together as is practical. If I test the client and server sides separately, I can't know whether the client and server will work together properly.

For example, let's say I assert that the server returns a certain JSON object in response to a certain request. Then I assert that the JS does the correct thing upon receiving that JSON object.

But then, a month later, a coworker decides to change the structure of the JSON object. He updates the JS and the tests for the JS. But he forgets to update the server. (Or maybe it's a version control mistake, and he loses those changes.) Anyone running the tests will still see all test passing, yet the app is broken.

Scenarios like that worry me, which is why integration tests are my favorite kind of test.

> Automated tests test the functionality of your software, not the look-and-feel or user experience.

It's not about the difference between a drop shadow or no drop shadow. We're not talking cosmetic stuff. We're talking elements disappearing, being positioned so they cover other important elements, etc. Stuff that breaks the UI.

> There shouldn't be that many problems between JS and CSS either

Maybe there shouldn't be, but there are. I'm not saying I encounter twelve JS-CSS bugs a day. But they do happen. And when they make it into production, clients get upset. There are strong business reasons to catch them before they ship.

> If you have tons of them, then perhaps what's broken is the tools we use? Or perhaps how we use them?

Exactly. I think there's a tooling problem.


> I don't feel confident without integration tests.

Nobody does. Having said that, my unit tests are plentiful, and they test things in isolation.

My integration tests are limited to database interaction with the back-end system only; they do not go near-end-to-end, to avoid overlap with my unit tests.

I have another set of functional tests that use Selenium, but with minimal test cases written only to cover the happy path (can I create a user? can I delete a user?), because the full-blown functional suite is expensive to maintain. There are no corner-case tests there unless we find they're a must.

Corner cases are done at the unit-test or integration-test level.


I think a problem is that we tend to overgeneralise our own experience. When we try out ideas, such as TDD, we (hopefully) get a good feel for how they work (or don't work) for us. Reading articles or getting advice might help us to understand and use those ideas better, but the impression is still based on our own personal experience. That's fine when you're talking about how you like to work, but I think it comes unstuck when you start trying to apply it to other people. Even if you read other people's perspectives, it's often tempting to consider those that disagree to be idiots (although you might not be so blunt), and those that agree with you to be sensible.

I think it's more productive to try out a technique, try out some variations based on others' advice, and then share the experience: I found this technique useful in these situations, but not in these. Here are some tips that I found effective, and here's what didn't work for me. Then, people can try out techniques for themselves, and learn from your experience.

If we want to generalise what's effective and what's not for software development overall, then I think we need a much more rigorous approach than getting a vibe from the community. I've only ever seen a handful of studies that try to be unbiased and somewhat scientific in assessing techniques like TDD.


Yes, but if you don't generalize you can't have a captivating title and can't reach the HN frontpage.


And let's not forget adding drama. It's not just that my generalization is right, it's that everything you have said is fucking wrong.

Because the world doesn't have enough drama.


You also can't be human, or probably any animal for that matter. Pattern matching and generalization are how we survive and function in a world that is bigger than us.


I used to be dogmatic about TDD. Then I joined a team where management doesn't value unit testing that much, and so I got a lot less religious about it.

What I am missing now isn't the test-first mentality. I honestly don't think there's a quality advantage to writing the tests before the code. And the danger of ardent TDD was always investing a ton of time ironing out a unit to perfection, only to realize you missed the forest for the trees and the unit, while perfect, has no place in the overall solution.

Rather, I miss the side benefits that come with it:

- Automated tests actually get written.

- It improves development cadence when you're writing a chunk of code you don't really want to be writing (by setting mini milestones).

So now I occasionally practice TDD for those reasons.


Interesting! Those benefits are also part of it for me.

The quality advantage I have noticed for TDD comes in better coverage. If I'm doing test-after, I already think the code works, so it's harder for me to notice the places where my thinking is wrong. If I'm doing test-first, I start out more skeptical and clear-headed.

I also think I design better, and for similar reasons. TDD starts me focused on how the code appears from the outside, and then I make the implementation conform to that. If I start with the implementation, it's easier for me to get a little slack about the API; because my head is full of how it works internally, more of that can end up in the external interface.


The book Growing Object Oriented Software, Guided By Tests does a good job explaining how to avoid the "missing forests for the trees" problem. The short summary is to use integration and unit tests in tandem, which keeps you focused on the entire feature while also developing carefully tested units. The book explains it better than I do.


I've had great luck writing quality code by writing the tests first. Testing code is in a higher-level -- simpler -- dialect than your main code, and thus easier to understand. Gradually the tests cover more and more code, until I'm confident enough to go ahead and connect the functions into the main line of the system. If the code isn't 100% covered, that's fine, and if code is simple enough not to have tests at first, that's fine too.

Doing testing first helps ensure code is testable. I haven't hit OP's issue of an "overly complex web of intermediary objects" -- the Fudge mocking library helps alleviate that. Example: test code creates a fake urlopen(), which the receiving code uses instead of doing real I/O. No intermediaries.

(This is Python, often with Django)
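For the curious, here's roughly what that looks like with the standard library's unittest.mock instead of Fudge (the temperature function is invented):

    import io
    from unittest.mock import patch
    from urllib.request import urlopen

    def current_temp(city):
        # Hypothetical production code that does real network I/O.
        return float(urlopen("http://api.example.com/temp?q=" + city).read().decode())

    @patch(__name__ + ".urlopen")  # patch the name where it's *used*, not where it's defined
    def test_current_temp(fake_urlopen):
        fake_urlopen.return_value = io.BytesIO(b"21.5")  # stands in for the HTTP response
        assert current_temp("oslo") == 21.5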


I can certainly sympathise with the abstinence-only pride and shame cycle. My projects typically start with a test-first approach, which evaporates when the clock starts ticking.

It seems to me, first of all, that there's a threshold of software complexity, beneath which writing tests is a net loss in time and productivity. If you have (as I often do) a couple of hundred LOC spread across a few files, with one or two models/views, then there is negligible gain to automated testing - by the time you've set up/torn down the test DB, you could have refreshed the page in a browser 5 times and seen all the error messages in there. I accept that this threshold can be hit very rapidly once you get towards a modest website, even; but my day job basically involves writing toy SPAs with a very simple API behind the scenes, built and then left chugging along until they're out of date, so I often sail under it.

A bigger anxiety I have about TDD is that, while there are no end of resources on using tests to drive software design, I haven't come across much about good design of tests (suggestions very much welcome). I rarely have confidence that my tests are any more use than assert_true(true). You test for the constraints you anticipate when you write the tests. I don't trust my code-fu that far, and so my tests inherit their author's impostor syndrome.


> I haven't come across much about good design of tests (suggestions very much welcome)

Growing Object-Oriented Software Guided By Tests is worth a start. The authors use TDD, but that shouldn't put you off from learning the theory of separating concerns enough to write fewer tests.


+1 - that's an outrageously good book.


I started doing TDD in 2000, so I'm pretty comfortable with it, and I totally agree with you on the threshold of complexity. For me, TDD is a way to get a better outcome for a project. It has big benefits, but it also has costs. If the costs are above the benefits, I won't do it.

The obvious case is a single-use command-line shell script. Small code, single author, minimal duration, no reuse: automated testing isn't worth it. But if I'm building a large, multi-programmer, long-lasting project, with lots of changes along the way, I'll do a lot of automated testing.

The tricky thing is that many projects start out small and then unexpectedly grow. You start out saying, "This doesn't need a test." And then it becomes, "Well now testing this is kinda hard and I don't have the time." And gradually you end up with an untestable legacy system that is a nightmare to work on.

So the deal I make now is that I'm happy to do quick and dirty things as long as I get to throw away the code or clean it up properly when I decide the time is right. Basically, I'm willing to take on technical debt as long as I know I can declare bankruptcy when that's the right choice.


http://xunitpatterns.com/

That's another book for good tests design. Warning: very thick since it is a "reference" type of book.


> I haven't come across much about good design of tests (suggestions very much welcome).

I have yet to see a large-scale, popular piece of software written using TDD.


I dislike mocks. I've never seen the point in testing code against an entirely fictional representation of the most complicated and slow part of the system, just because it happens to be more convenient. Of course it's more convenient. The only compelling reason I can see for mocks is when you've got code that hits external live APIs that don't give you any real option for automated testing (e.g., reading from and posting to the Twitter API).

If an app is worth writing, and worth writing tests for, do it justice and test the whole shooting match. Yeah it's hard, but that just makes it all the more worth doing. Automate your tests so that they cover everything that is important, from DOM elements on a dynamically built webpage to your model (and therefore the data that gets written to your SQL database), but don't pretend that stubs are substitutes for this. If your model relies on a database, let it rely on the database during the tests too, otherwise what exactly are you testing?


Mocks (or I guess fakes, really) can be useful for simulating scenarios that are difficult to produce in real life but that your code needs to handle. For example, if in an integration test you want to see how your application deals with high latency or errors in a service dependency, you can write a fake of that service and have it introduce arbitrary delays or errors.
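A sketch of such a fake in Python (all names invented; it wraps whatever client interface the code already depends on):

    import time

    class FlakyAddressService:
        # Test double that injects latency and periodic failures on demand.
        def __init__(self, inner, delay_seconds=0.0, fail_every=0):
            self.inner = inner
            self.delay_seconds = delay_seconds
            self.fail_every = fail_every
            self.calls = 0

        def verify(self, address):
            self.calls += 1
            time.sleep(self.delay_seconds)  # simulate a slow dependency
            if self.fail_every and self.calls % self.fail_every == 0:
                raise ConnectionError("injected failure")
            return self.inner.verify(address)

    # e.g. FlakyAddressService(real_service, delay_seconds=2.0, fail_every=3)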

Of course it helps to test against the real service as well, since it's often prohibitively time-consuming or impossible to model every aspect of it, but I wouldn't discount mocks so easily.


> I dislike mocks. I've never seen the point in testing code against an entirely fictional representation of the most complicated and slow part of the system, just because it happens to be more convenient.

You aren't testing code against an entirely fictional representation of the system, you are testing one piece of code in isolation, and using a fictional representation of another piece of the system to eliminate variables.

Unit testing is about verifying that individual components do their job correctly -- when doing it, you literally don't care what other pieces of code would do, because that's out of scope of the unit test.

Testing that the whole system works together is system/integration testing, and is a different thing. You don't use mocks for system testing, only unit testing.

> If an app is worth writing, and worth writing tests for, do it justice and test the whole shooting match.

Yes, that's what system/integration testing is for.

That doesn't mean you shouldn't have code that is amenable to proper unit testing.


> otherwise what exactly are you testing?

It depends: what do you want to test?

If you are testing a single function, why would you care whether the data comes from the database, the network, or a mock? All you need is to verify that the given function produces the expected output given the right (and wrong) inputs. Being able to test this way also makes it easy to keep the components in your software decoupled.

My philosophy is to use mocks for unit tests and the real thing for integration tests.


It seems to me that if you are writing a lot of mocks, you are probably retrofitting tests onto an existing code base. If your code is written to be testable you usually can avoid mocks.

That being said, if you are writing a test of an algorithm and the data comes from a database, you are not just writing a test of the algorithm - you are writing a test of the database, with all the cruft that comes along with that. Such a test can be hundreds of times slower due to the DB dependency.

So mocks do have their place. The test pyramid is the answer - you need unit tests (sometimes with mocks), integration tests, and system tests.

In my experience most companies are way too integration heavy and very light on the unit side of things. For example for my current project our dev tests take 12 hours to run. This is the result of going too far away from the unit level and being too integration heavy.

There is a balance! It sounds like in the Rails world the balance is too unit-heavy. In the enterprise world I think it's too integration/system-heavy.


I use mocks heavily and I also test the database heavily. I use mocks in unit tests, which are part of our automated build process. We also have fixtures and actual DB-level testing at our second level of testing. Why not test both? Unit testing will often catch a whole class of errors that integration tests will not, or at least will not accurately, catch. I agree that the TDD dogma is silly, but mocks and unit testing have serious value that you seem to be discounting.


Well, speed of testing is not the only consideration. Mocks are also (primarily?) intended to isolate your testing so you're not looking through an entire stack to find a bug. Often, especially in the early phases of a project, they're overkill and a distraction from implementation, however.


I find stubs occasionally useful. But I don't get mocks.

http://martinfowler.com/articles/mocksArentStubs.html


That's the money link. People who save time by using mocks instead of stubs are deferring design problems to later stages, which is really bad. When I'm only responsible for my module and don't give a shit about the system as a whole, then mocking is fine. But in this case, the development process is toxic.


I dislike mocks too; they introduce a lot of duplication that I think is deeply problematic. Automated tests are for me a way to support changing the system easily and safely; lots of mocks often work against that.

But what you're getting is speed. Tests are at their most valuable when they give you quick feedback. As test suite times creep up, people stop running them as much, or at all. I've been on projects where all our tests run in seconds, and it is an enormously different development experience than when they run in minutes.


Mocks to me represent a different approach to testing - think the London school of TDD vs. classic TDD. With classic TDD, it can be difficult to test side effects in your code, since it's more focused on input and output.


TDD on the unit level does seem to lead to high granularity, and complexity in the system as a whole in favor of simplicity in test construction. However, some of the problems described sound like they are coming from a premature optimization mindset rather than TDD itself.

Once I learned about Cucumber and the idea of stating a requirement/test that you can't even parse yet, much less have a test for, much less have the code for, I started liking the idea of TDD as a wish list pyramid.

This allows you to think in terms of the big picture requirements, and then drill down as required to fulfill those requirements. Because you've spec/documented your design on the way down to the unit tests you can always step up a few levels and reconsider, and rewrite architectural "tests".

For example, my first test is "I have a software tool for editing photos". Now I implement this test by checking if there is an executable at a path. Fail. Now I make a hello-world exe for that path. Pass. Now I write a new test: "It opens an OpenGL window.", and later "it uses an MVC pattern", "the edits are represented as a scene graph", etc., all the way down to specific logic.
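In pytest terms, that first wish-list test really is this trivial (the path is hypothetical):

    import os

    def test_i_have_a_photo_editing_tool():
        # Wish-list item #1; fails until the hello-world executable exists.
        assert os.path.exists("build/photo_editor")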

You later realize that a scene graph is not the right way to model your process, so you change that test to a different requirement, the altered requirements now redirect the TDD flow of an entire section of the application instead of just unit by unit.


Hear, hear. Dogmatic adherence to any system is dangerous. It's important for developers to read what others have done, see how others have solved problems, and stand on the shoulders of giants before thinking for themselves, instead of just blindly accepting "best practice" as the best practice.

What I like to do with TDD is use it whenever it's the quickest way to develop something with a level of confidence that is appropriate to the situation.

If I'm creating an HTTP API and someone else is writing the front-end, I'll create unit tests that spin up my API and make requests to it to ensure I get the correct responses.

If I'm in the middle of a codebase somewhere and there's a function which is only ever hit after a bunch of other things, I'll probably write a few tests for that, too.

Just like religious dogma, we are free to pick-and-choose what we do with TDD, when we feel it's most appropriate. Of course, don't overdo it to the point where the code suffers for it.


> If I'm creating an HTTP API and someone else is writing the front-end, I'll create unit tests that spin up my API and make requests to it to ensure I get the correct responses.

That's not a unit test though, but rather an integration or system test.


I don't think TDD actually makes that distinction. As long as you have a test and you write test before you write production code, you are doing TDD.

It's actually very cumbersome to do pure unit testing when you have multiple layers involved, as opposed to plain textbook algorithm design, where unit testing is trivial.


Right you are. I tend to be quite lazy when distinguishing between types of tests, but as @yeukhon said, TDD doesn't distinguish :)


This is a stunning opinion considering the fact that there are many Rails shops that can't turn around a build in less than an hour because you can't test models independently of the database.


Yeah. The hand-wavey reference to advances in parallelisation and cloud runner infrastructure doesn't adequately address the drawbacks of this "post-TDD" approach.


Sometimes I wonder whether the reason he doesn't like TDD is because he inadvertently made it difficult for himself and others in Rails.

This is the engineering form of confirmation bias. If you make something hard to do, it's hard to like it.


I think that is part of it. When I tried to TDD existing PHP code, it was pretty awful and I hated testing.

At the same time, I think when you design away as much complexity as possible at the product requirements level, as 37Signals proudly does with their products, it is hard to appreciate the complexity inherent to the requirements in other codebases.

I've seen this with other developers coming into large codebases with lots of complexity and scale requirements wanting to use a simple/naive ORM solution with no caching or worry about query speed/quantity. That solution falls over and they quickly learn that the requirements are more complex than a simple app with a few dozen users.

Basecamp is obviously at a high level of traffic and scaling complexity, but they still work to reduce requirement complexity which isn't always an option if you aren't the product owner.


Not that I agree with him, but if your project can get away with a local SQLite database for testing (that is, no stored procs or database-specific queries), and you make sure you don't use create when build will suffice, you can have models tested with a database and still be reasonably fast.
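For what it's worth, the Django-flavored version of that trick looks something like this sketch (it assumes your real settings live in settings.py):

    # settings_test.py -- run with e.g. `python manage.py test --settings=settings_test`
    from settings import *  # noqa: F401,F403 -- inherit everything else as-is

    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": ":memory:",  # fast and throwaway, if you avoid DB-specific SQL
        }
    }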


He did mention fixtures, which speed things up dramatically, so I don't think he shares the experiences of the Rails shops you are talking about (who I assume do hit the database, since the industry favors factories). Of course fixtures vs. factories is a different topic, but your analogy doesn't hold.


dhh has spent so much of his career actively leading Rails that it's tough to tell the difference between his personal evolution as a developer and general trends in the industry. I don't think he even attempts to make a distinction in this essay.

I worry about a push away from unit tests. I can't imagine having to refactor a large application with only system-level integration tests to work with, especially in a dynamically-typed language like Ruby. I haven't written a Rails app in a long time though, maybe these days it's easier to just start over than to refactor.

But any bad day for dogma is a good day for the rest of us. Write software using whatever methodology works for you and always question the value of so-called "best-practices".


It's not so much about moving away from unit tests as about picking the best points at which to test your apps. For web apps, the natural points are the model level and the client level. For the Django apps I write, these tend to be the most stable areas, since they are heavily influenced by user requirements. The result is tests that have a long lifetime and provide better system documentation, since they generally exercise the business logic of an app. That neatly avoids one of the major problems I have found with TDD: unless you know in advance exactly what you are doing and going to do, you will end up writing a lot of tests that get junked.


"I have found with TDD that unless you know in advance exactly what you are doing and going to do you will end up writing a lot of tests that get junked".

If you know all that up front, you might as well go with the waterfall methodology.


I generalized DHH's conclusion re: TDD in my blog post, "XDDs: stay healthily skeptical and don't drink the kool aid":

http://www.pixelmonkey.org/2012/02/12/xdds

"I list one of my skills as “thought-driven development”. This is a little tongue-in-cheek; software engineering over the last few years has developed a lot of “XDDs,” such as test-driven development, behavior-driven development, model-driven development, etc. etc.

“Thought-driven development” doesn’t actually exist, but by it, I simply mean: perhaps we should think about what we’re doing, rather than reaching for a nearby methodology du jour..."


Love that post. And I love the acronym - TDD - Thought Driven Development. Now, I can finally start telling everyone that I strictly do TDD. ;)


> Over the years, the test-first rhetoric got louder and angrier, though. More mean-spirited.

Where are those mean TDD zealots? Can you point me at blog posts or mailing list messages displaying such behavior? I've never seen it personally. On the other hand, now and again a blog post like this comes up that's full of disdain towards the practice of testing first. I do TDD because it helps me get my work done. I'm happy to help others write tests if they wish, but I'd never look down on another developer because he or she doesn't use this tool. Some people like drawing diagrams, some people like to use an IDE... Do what works for you.


If you were developing rails apps in the mid-late 2000s then you'd have known too many to count.

There was a period between 2005 and 2008 in Boston where, if you didn't practice TDD, you were persona non grata.

I think interestingly enough, a good number of those people now have 10+ years experience programming and now understand what those of us who had 10 years experience at that point in time were trying to tell them.


The irony is that that attitude is something you see from rails people about anything that's currently fashionable in the rails world. And I think they picked it up from DHH himself. You didn't get the same kind of TDD zealotry among e.g. Django people.


> Where are those mean TDD zealots? Can you point me at blog posts or mailing list messages displaying such behavior?

"My thesis is that it has become infeasible, in light of what's happened over the last 6 years, for a software developer to consider himself 'professional' if he does not practice test driven development."

Robert C. Martin, 2008

http://www.infoq.com/interviews/coplien-martin-tdd


DHH's keynote from the first day of RailsConf, in which he talks a bit about testing and the general perception of writing software as "science", is online here:

Part 1: http://www.justin.tv/confreaks/b/522089408 (starts at 11:00)

Part 2: http://www.justin.tv/confreaks/b/522101045


Are these videos working for you? I can't get them to start.


It took a long long time, but they eventually started.


I don't think people have a great consensus on what "TDD" really means, so there's a lot of "Do you do TDD? You should do TDD!", people taking mixed approaches that they sort of make up or copy from a quick blog post, then deciding that TDD sucks or rules.

For example, this article seems to think TDD is mostly about unit tests (and I've definitely seen the same opinion elsewhere). But elsewhere, specifically in GOOS[0], the TDD cycle always starts with an end-to-end test, with an inner unit test/development cycle to get that test passing. I think this is really important, because if you don't have end-to-end tests, it's a lot harder to refactor your code in significant ways, and it can leave you with extra work maintaining all of your unit tests.
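Schematically (pytest-style, with every name invented), the double loop looks like:

    # Outer loop (schematic): a failing end-to-end test states the feature.
    #
    #     def test_password_reset_end_to_end(live_app, browser):
    #         browser.visit(live_app.url + "/password/reset")
    #         browser.fill("email", "ada@example.com")
    #         browser.click("Send reset link")
    #         assert "Check your email" in browser.page_text()
    #
    # Inner loop: unit tests drive out the pieces needed to make the outer test pass.

    class ResetToken:
        def __init__(self, issued_at, ttl_seconds):
            self.expires_at = issued_at + ttl_seconds

        def expired(self, now):
            return now >= self.expires_at

    def test_reset_token_expiry():
        token = ResetToken(issued_at=0, ttl_seconds=3600)
        assert token.expired(now=3601)
        assert not token.expired(now=3599)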

Another thing is "fast tests" as resulting in "a dense jungle" of objects. At least in GOOS, the assertion is that a larger network of small objects is better than a smaller network of large objects. I'm not going to argue the benefits of one way or another, but what I mean to point out is that (at least some) people use TDD because they think it will help them write a more maintainable code base, and aren't just writing code to make their tests fast for the sake of fast tests.

[0] http://www.growing-object-oriented-software.com/


All the guys crying about the prevalence of TDD are the ones lucky enough to live in a tech hub in a developed country. In my area, I'm the only one who knows about TDD/BDD.

You want to know how testing is done here?

- 80% of projects have no testing whatsoever

- 20% are manually tested by students paid $3/hour

Unlike others, I would love to be surrounded by "TDD zealots or fanatics".


You really don't want to be surrounded by those people. They'll make you hate your life. Unless you are one of them.


You would hate your life if you were surrounded by TDD fanatics?

Would you love your life if you had to change huge applications with 100k+ LOC without any kind of testing and if anything breaks at the client, it's your fault?

TDD didn't appear out of the void because some wankers want to deride everyone else, it's the ONLY WAY to have tests in the business world because if the features are developed first, the business guys will say "let's skip the tests to cut the costs".


There's a balance.

> it's the ONLY WAY to have tests in the business world

Perhaps in some organizations. I feel sorry if you have to work in such an environment. My company recognizes the value of testing; we have 15,000 acceptance and unit tests, and we do TDD perhaps 20% of the time.

I worked at a bank where there were 0 unit tests or acceptance tests. Just some half-assed QA guy going through toy scenarios and pushing the 'approve' button. That's not where I want to be (therefore I quit), but I also don't want to be in an environment where people scream and call me an infidel if I don't do TDD.


I think "fanatic" and "zealot" are the key words.

I'm more of a pragmatist. I think one size fits all is a bad strategy, and different projects have different needs.

There is nothing I hate more than working with people who passionately believe that there is only one true way to do something (like testing), and that the process and "craft" are more important than other factors like, say, delivering business value.

There is a lot between your extremes of TDD and no tests at all. And when I hear a developer say some methodology is the ONLY WAY I find it's best to run, not walk.


I'm not saying TDD is the only way to write code, I'm saying TDD is the only way to make sure that you will have minimal automated testing.

As for your condescending remarks about delivering business value, I'd like to see you deliver business value while working with 10 other people on a tightly coupled 100k+ LOC application with no tests. Do that, and then I might take you seriously when you say that craft and process don't matter.

EDIT: Here's how business is done in some parts of the world. I give you the crash tests for the world's cheapest car: https://www.youtube.com/watch?v=RUdKf0FQgEg . Businessmen will gladly endanger human lives in the name of cutting costs; do you actually think they will allow any kind of software testing?


> I'm not saying TDD is the only way to write code, I'm saying TDD is the only way to make sure that you will have minimal automated testing.

We'll have to disagree there. It's a fine way. There is rarely just one true way however.

> About your condescending remarks about delivering business value, I'd like to see you delivering business value while working with 10 other people on a tightly coupled 100k+ LOC application with no tests.

I, sadly, do this every day. It sucks.

> I might take you seriously when you say that the craft and process don't matter.

I never said they don't matter.

I'm advocating not taking a black and white view of things, which you seem to be doing. There are more options out there than 100% TDD 100% of the time or skipping testing completely...


The pyramid works rather well in my experience.

I still practice TDD though. When I started working on a binary tree space-partitioning algorithm a while ago for my talk at PyCon 2014, I started just writing code and spent a precious couple of nights banging my head against the wall because it was always off and would come up with intermittent errors. I had figured that I knew the data structure and algorithm well enough, and this was one-off code, so who cared? When I got desperate I told a couple of good friends about my problem, and they reminded me: I hadn't written any tests, figuring I'd save myself the time.

TDD has been a part of my process for so many years now that I don't think I can even write good code without it, even in statically type-checked languages. The practice forces me to specify the contracts and behaviors of each piece of my code before I write a single line to implement it. That loop of test, write, pass/fail offloads a tonne of complexity from my mind. It also helps me discover where my assumptions were wrong in the design process, when I notice that some tests require too many mocks/stubs/assumptions about state or are simply brittle. Along with keeping a rigorous development journal, it's one of the most powerful tools in my arsenal.


I think he's conflating the separate concepts of Test Driven Development and Unit Testing. They can be done independently of each other.

I agree that many other people also conflate them, and are zealots for (or against) them together, but they are still separate concepts.

He does make some good arguments against Unit Testing, but he makes none against TDD.


Interesting. I agree with everything you wrote until the last sentence. I think he made some great arguments against TDD, but none against unit testing.


Really? What did you think was an argument against TDD?

> Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow".

This is an argument against unit testing (and is a good example of the conflation). Personally, I do TDD with slow system tests that mainly test business requirements.

He makes a couple of points about some TDD people being rabid, but that is just an argument that some people are jerks. It's not an argument against TDD. (And exactly the same is true of unit testing.)


Yes, let's move towards more coarse-grained tests so we can let the fine-grained subtle bugs slip through. You know, stuff like Heartbleed. There's a good idea.

To be fair, a lot of unit test regimes wouldn't have caught something like Heartbleed, specifically because it's a security issue and not necessarily a functional issue. But the point is that software is damn complex and easy to screw up.

Maybe, rather than ultimately doing fewer tests (system instead of unit), we should be slowing down, doing even more tests at ALL levels (unit, system, GUI, security, performance, whatever), and building something to the best of our abilities. Or does that even matter anymore in the MVP build-and-sell-quick software world?


I think a big part of the TDD phenomenon happened when the "big name" Rails consulting shops insisted on using it.

In consulting, I think this makes all the sense in the world.

a) When you hand off code, you can demonstrably prove that what you handed off works.

b) You can take the business objectives of whoever you are consulting for and translate those into high-level tests, which helps you stay focused.

c) You get to charge double what you'd normally have charged the client.

When running an in-house team at a startup, none of those bullet points translate, and while TDD does offer other benefits, at the early stages of the game every line of code you write really needs to solve the problem you're trying to validate.


And... it depends on what you are consulting on.

I make my living from consulting, and my biggest projects are writing experimental consumer-behavior simulation engines.

It's not that TDD has no value; it's just impractical here.

For one, since they are simulations, they are trying to predict the future, so there is no right answer; there is the "acceptable" answer, or the "defensible" answer. Having a little test on the side telling me that I am still producing the predicted result does not help much, because there is no such thing as the correct predicted result.

Sure, I can write tests to make sure I match the Excel spreadsheet I receive as a spec; they would tell me that I copied the formulas correctly. But then, for two, I bring the results back, they test them with more real data, and they realize that we need to tweak hundreds of different little formulas (which invalidates all the unit tests), change the order of a few of them, and introduce a couple of new ones (as I was told once, "because I don't know what I was smoking that Tuesday, so we should use this new one"). And of course: "can we have these changes by the end of the day?"

So all the unit tests are voided. I make all those changes, which are simple enough to be done in a couple of hours. And I have no way of knowing the correct result, because there is no valid spreadsheet now; I don't know in advance what is expected, hence I cannot write the tests first.

And wait until I try to tell them that the modifications will take 2 hours, but I need another 2 days to write the new set of unit tests, which, since they were written from the results of the modifications instead of before them, will not prove anything.

Rinse and repeat tens of times a month, for the last 4 years...


Where is this Rails train going? It used to be cool, but it looks like it's heading towards the chasm now.

Posts like this just reinforce my opinion that the Merb merge with Rails was a disaster: it helped Rails transform from "PHP in Ruby" into something beautiful to work with, but at the cost of Merb's self-destruction. Rails gained a lot of traction and is now abandoning every good decision Yehuda and the other great developers made.

This post simply sums up to: "I don't know how to test-first Rails applications, and I invented Rails. We're abandoning test-first." Admitting you have a problem is the first step towards fixing it, but this is not going to fix it in the long run.


I've felt the same way for a long time too. The Merb merge had more to do with soothing large egos and indulging some architecture astronauts than with making it a better framework for developers to work with. Now it feels a lot like the over-engineered Java frameworks it was designed to liberate us from.


Couldn't disagree more. The Merb merge gave rails new life, as it made it so you weren't locked into the default Rails stack (AR/ERB/Sprockets/etc).


I've been feeling this way about testing for a while now and I'm glad that DHH wrote about it so I don't have to feel "wrong" or "embarrassed" by it anymore.

I personally have gone a bit further and only write tests after a successful incarnation of a project (that means customers) that I want to keep developing, or if writing and running the test itself costs me less time than F5'ing a page or mucking around in the rails console.

I have noticed that it's much easier to throw away code that I haven't invested a lot of time in and usually the second time I write it (with tests), I have a much clearer idea of how to implement and test something.


Perhaps design by contract is the way forward? Explicit runtime validation of the flow of data through code at each step in the system.
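
To sketch what I mean by runtime validation, in plain Ruby with no contract library (the shipping example and its rules are invented for illustration):

  def require_that(condition, message)
    raise ArgumentError, message unless condition
  end

  def shipping_cost(weight_kg, distance_km)
    # Preconditions: validate the data flowing in.
    require_that(weight_kg > 0, "weight must be positive")
    require_that(distance_km > 0, "distance must be positive")

    cost = 2.5 * weight_kg + 0.1 * distance_km

    # Postcondition: validate the result flowing out.
    require_that(cost > 0, "cost must be positive")
    cost
  end

  puts shipping_cost(3, 120) # => 19.5

The checks run in production too, so a violated assumption fails at the step that broke it rather than three layers later.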

As a frontend developer, I have been moving towards a greater emphasis on functional & integration testing. We're also using scripter here, a test tool that actually compares screen caps with the actual on-screen state of our applications. So a purely visual approach.

There's a wonderful world of testing tools out there. I can imagine it's incredibly daunting to a newcomer trying to decide which to pick up and run with, especially with so many job specs specifically asking for "TDD", and now with DHH saying this.


"so many job specs specifically asking for "TDD"" - yes, but do you really want to work with fanatics anyway?


Job specs ask for a lot of things, but in my experience it's often as much about what the company aspires to as about what they're really doing. More often than not, that will be dabbling more than fanaticism.

cf. Agile, Kanban, BDD


I fully agree that TDD has turned into a cargo-cult obsession for many developer shops. The benefits of unit tests accrue mostly to the developer, but in overabundance they can lead to inflexibility and debt when your system needs to change.


Part of the problem is developers are often too hesitant to delete tests. Tests, like any code, should be deleted when their value is lower than their cost. Tests have maintenance cost like any other code. Plus they add to the time it takes to produce a build.

When making significant changes to the implementation of something, it's often worth leaving the acceptance tests in place for regression, deleting the unit tests, and TDDing your new implementation.


TDD has turned into a way for devs to one-up other devs. When testing becomes a discipline of its own, requiring deep knowledge of testing frameworks, something has gone wrong. Tests shouldn't need much more than assert statements, and maybe a little extra.


> in over abundance can lead to inflexibility and debt when your system needs to change.

I hear this argument a lot, and I think it is absolutely incorrect. If tests are hindering your system then you're just doing it wrong. It's like saying "climbing rope is a hindrance for the rock climber, since I have tied gordian knots around my feet and now cannot move."

TL;DR: doing something badly doesn't mean the thing you have botched is itself bad.


Yes, this is really what I mean by "over abundance": excessive application of unit-test "fundamentalism" can lead to a variety of problems, a particular manifestation of what you describe as "doing something badly".


As rhetorical gambits go, the “what other people do is religion” is getting pretty old.

It’s meant to reflect on the speaker as the voice of reason among the thoughtless. Who wouldn’t want to think of themselves that way? I’m the smart one who sees through the dogma!

But it doesn't have that effect anymore; mostly it makes the speaker look like someone who disrespects his readers.


> As rhetorical gambits go, the "what other people do is religion" is getting pretty old.

And a bit rich coming from a high priest. He wrote an entire pretentious rant (Rails is Omakase) when someone questioned his own dogma.


I agree with the sentiment but, despite the proviso that we should not go anti-TDD, I still think it's too strongly worded.

If you are rapidly prototyping something, then writing tests first is a hindrance. But if you have well-defined requirements, then writing tests first will save you time.

What's best will differ depending on whether you're designing a web app or building an interface to an SAP system or an external gateway. (My guess is that DHH does very little of the well-defined-interface-building type of work.)


I fully agree with the sentiment. I've always been against TDD being pushed. Obviously, as developers, we love the stress-free appeal of unlimited time: test first, achieve 100% confidence in the code. Wow, so glamorous. In practice this almost never works, since we are time- and budget-constrained. Halfway through, we find ourselves dumping tests in favor of writing even more hacky-tacky code just to meet a deadline. This code will never be refactored, because the client is satisfied with the results and does not appreciate all the edge cases, because you said you would take care of them.

No, I'd rather produce code that is well written, can be deployed, and can be taken over by other devs if needed, not several levels of testing paradigms that need to be satisfied before code can be migrated.

Yes, I agree that there is obviously a place for tests, and IMHO that's once certain business logic is considered implemented; tests then need to catch all the edge cases, to make sure it keeps delivering in the future and doesn't break under modification.


This article is hilarious. If you want to see why TDD, unit testing, isolation, etc. are important, a Rails app written the "Rails Way" is the first place you should look.

They all look the same. They start out with great tests, verifying that the simplest CRUD operations are covered. Then, as the changes come, the tests fade away. Their big browser-driven tests are so slow, brittle, and difficult that the question of "should we test this?" becomes paramount as they try to convince themselves that their code doesn't need tests. And even the most ardent TDD proponents and green-band wearers are suffocated as they have to admit that, yes, for the business's sake we cannot spend days building a test harness to test if a new field is required.

I hope this article is the myth-buster to the fantasy world that the Rails community takes testing seriously.


My biggest problem with TDD is that it tends to take people away from whiteboarding and thinking about architecture, and into writing everything as if it were a recursive problem: write a test for the base case, code the base case, write a test for the next base case, code the next base case, write some significant test case, code the general case. But often this is completely the wrong pattern for solving a problem. This may be an example of doing it wrong, but it seems to be the practice experts promote when using TDD on a problem (it's very common in both blog posts and books).

There are great blog posts illustrating this, including Ron Jeffries getting horrendously bogged down writing a Sudoku solver in a situation that could have been well handled by going to a whiteboard and thinking about the top-level behaviours more than the code.


Exactly. The simplistic view that code can be accreted through a "red/green/refactor" cycle of adding a test, coding till it passes, and so on, misses the meta-game it makes the developer play: if you are going to stick within the rules, you have to decide which test to add that will force you to write the correct additional code, so that it enables the refactoring you know your code actually needs. If you actually know what the code should look like, it's pointless trying to get there by making the right sequence of legal test-first moves; it turns coding into a kind of chess puzzle.

TDD zealots will say "but you're overthinking it! YAGNI!" but that is to deny that sometimes, a smart developer really is capable of holding more than a small piece of the system in her head at a time and can actually see elegant, flexible solutions before the tests force her to.


I recommend you take a look at the book "Growing Object-Oriented Software, Guided by Tests".

The first assumption you made is that TDD is only unit tests. What if it isn't? TDD can be used at many levels, so when you write a higher-level test, like an end-to-end/functional/system test, there has to be at least an idea of the system to start with. It doesn't appear suddenly.

The book above applies TDD in that way: you work out what the system has to do, prepare an idea/design of how the system should do it, and validate it through code. Of course, that code is written test-first.

So experts do not promote TDD only on units, nor "no design at all". You will find "no BIG design up front", though, but that is a different matter.


I've always thought these two things were independent. I use whiteboarding for designing interactions and high-level architecture, then (sometimes) use TDD when I have a relatively good grasp of what a unit is supposed to do. Neither precludes the other.


While the idea of testing is good, TDD is so full of stupid quirks and BS disguised as "best practices" that it's not even funny.

These often result in slow (and sometimes useless) tests. If you want to use TDD to have more time at the foosball table, great.

Tests that only test one thing? So 30% (or less) of my test code is actual condition checks and the rest is boilerplate?

"and I do not write software test-first"

Me neither. It's idiotic.

Also, TDD fanatics have a tendency to build software that has great coverage and thousands of tests, but fails the simplest of smoke tests.

The world went without TDD for a long time, and software still got delivered. (And sometimes it was much more stable and durable than today's "one update per week" software.)


> The world went without TDD for a long time, and software still got delivered.

That's an argument against any kind of improvement; you can replace "TDD" with anything introduced since people started writing software.


The issue I have with TDD fans is that they preach that TDD is absolutely necessary for development.

I'm not against improving things, I'm against saying it's "the only true way" and ignoring the shortcomings.

Sure, do your RoR project using TDD; but doing TDD in C is a whole different problem.


Even though I admire DHH's contributions to software, he can be very extreme sometimes. Follow him on Twitter and you'll see him taking extreme positions on every other subject, with a lot of cursing.


Meant positively: I miss some examples of "hurting my designs" and "what that approach is doing to the integrity of your system design". Why is TDD doing that? Could you show me where you have problems? That would be a nice piece of feedback to learn from.

The post feels like a rant mixed with fallacies and seasoned with contempt. I'd expect it from Zed Shaw, and I'm sure Zed could do it much better.

"TDD is dead", and the third word in the post is "fundamentalism". Yes, nobody can say that fundamentalism about tools is good; every tool has its use cases. But that doesn't justify the "TDD is dead" motto. Correct me if I'm wrong, but it looks like a straw-man fallacy: half of the post is dedicated to building a straw man of fundamentalism that nobody would defend.

The second half is focused on unit tests. While he talks about "test-first units", all unit tests (written before or after) have the aforementioned issues, unless they're not unit tests at all. Thus we're introduced to other test types that are not unit tests.

@programminggeek talked about this in a previous comment: it's the test pyramid (http://martinfowler.com/bliki/TestPyramid.html). At this point the conversation is outside the scope of TDD, and is instead about how good or bad different kinds of tests are. Again, nobody can deny the benefits of having different test types, not only unit tests. IMHO it's another red herring fallacy.

So what did I get from it?

  - Fundamentalisms in tools/paradigms are bad.
  - DHH has some problems in his designs he cannot unit test.
  - Using only unit tests is bad, you need more high level tests.
  - Try capybara.


I've often found that ideals don't quite translate into practical application. Alongside ideals in economics, political theory, etc., I think TDD deserves a place as a "wonderful ideal" that fails simply because it does not take into account the process implementing it: human action. Not to mention that the way many programmers learn and code today seems at odds with the structured nature of creating tests.

Maybe there is a better approach to writing maintainable code than TDD? I imagine a world where all code is understandable, readable, and instantly recognizable, but once again we arrive at the fork where the 'ideal world' deviates from the real one in which we all reside. Shame.


"I proclaim that X is Y and as of now, it's a truth that holds itself. We need to correct every broken mind to acknowledge this new self evident truth."

   But first of all take a deep breath. We're herding some sacred 
   cows to the slaughter right now. That's painful and bloody. TDD
   has been so successful that it's interwoven in a lot of 
   programmer identities. TDD is not just what they do, it's who
   they are. We have some serious deprogramming ahead of us as a 
   community to get out from under that, and it's going to take some 
   time.
Not a comment on the author, or on the general idea of tests (which I mostly agree with).


>Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that's "slow". Like hitting the database. Or file IO. Or going through the browser to test the whole system. It's given birth to some truly horrendous monstrosities of architecture.

Oh yes - I agree completely with this.

I'm not sure I agree with testing things with Capybara, though. How are you sure your backend controller actions are doing what they're supposed to be doing? Sure, your browser might render what you need, but what if a model attribute is set to false instead of true? How do you account for that using a frontend test suite?
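
The closest answer I've seen is to drop a model-level assertion into the feature spec itself. A sketch, assuming a hypothetical Rails app with RSpec and Capybara already configured, and an invented User model with an active flag:

  require "rails_helper"

  RSpec.describe "deactivating an account", type: :feature do
    it "flips the model attribute, not just the page" do
      user = User.create!(email: "a@example.com", active: true)

      # Drive the app through the browser, Capybara-style.
      visit "/users/#{user.id}/edit"
      uncheck "Active"
      click_button "Save"

      expect(page).to have_content("Saved")    # what the browser renders
      expect(user.reload.active).to eq(false)  # what actually got persisted
    end
  end

It's no longer a purely frontend test at that point, which is arguably the answer to the question: you can't verify persisted state from the page alone.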


> The current fanatical TDD experience leads to a primary focus on the unit tests, because those are the tests capable of driving the code design (the original justification for test-first).

I don't agree with this blanket statement. For web apps at least, doing outside-in TDD helps create a nice ratio of integration tests to unit tests. Writing some high-level happy-path integration tests to drive functionality, and then filling in with smaller functional and unit tests, is I think a better way of going about testing than starting with the units. TDD is a hard thing to get right, and unfortunately when it goes wrong it can be very painful.


My JavaScript coding habits and environment are now finally reaching a point where I am consistently writing tests where I find it necessary, and oh boy is it nice! Until now, the simplest refactoring would scare the shit out of me.

Thus, I am now doing more TDD than I did before. I just find that it's much easier to write testable code if you write the test first, and I like the satisfaction of turning a red lamp green. I don't do it all the time, certainly not, but I do it quite a bit with parts where my intuition tells me that I am likely to destroy something in the future. TDD is nowhere near dead to me.


I mostly agree with this post, but I do wonder whether the difficulty of unit testing that the OP describes is partially related to their use of RoR and the associated (complex) object-oriented design.


Whilst I'm generally pro-TDD (and personally haven't seen or experienced any snobbery against folks who are not), I can see how test-first can seem grating to people. However, I would say TDD, along with a more decoupled architecture, has a primary advantage over system-tests-only: division of labour. By not relying on databases or web services being implemented, we can work more in parallel, as long as the interfaces to these external systems are stable. If you only have system tests, you have to work in sequence.
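
A minimal sketch of that division of labour, in plain Ruby with minitest (the InvoiceSender/gateway names are invented): as long as the #deliver contract stays stable, one team can test against a fake while another builds the real HTTP gateway in parallel.

  require "minitest/autorun"

  class InvoiceSender
    def initialize(gateway) # anything that responds to #deliver(invoice)
      @gateway = gateway
    end

    def send_invoice(invoice)
      @gateway.deliver(invoice) ? :sent : :failed
    end
  end

  # Stands in for the web service another team is still implementing.
  class FakeGateway
    def deliver(_invoice)
      true
    end
  end

  class InvoiceSenderTest < Minitest::Test
    def test_reports_sent_on_success
      sender = InvoiceSender.new(FakeGateway.new)
      assert_equal :sent, sender.send_invoice(:any_invoice)
    end
  end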


I've written software test first, and I've written it cowboy. Writing the tests first leaves me with better architecture and fewer defects.

Maybe you do better with the cowboy hat on, I don't.


IMHO, it all depends on how complex your task is and how experienced you are in solving that particular task class. Even the most ardent proponents of TDD suggest that you do spikes (write some throwaway code) in order to see what design is applicable to your problem.

On the other hand, if your task is relatively straightforward, you can anticipate the required API and start with tests first.


Ah! An un-brainwashing from none other than the Rails brainwasher-in-chief!

Rails has always had this fundamentalist/cult/group-think perspective on almost everything. No wonder they go overboard.


The efficacy of TDD depends on language, tooling, and experience. You want to spend most of your time writing the solution, AFTER you have thought about the problem. Developers complain about TDD when they can't think through their problem clearly and use it as a tool to validate bad thinking; TDD in that situation will naturally feel like running in heavy lead boots, since progress will be slow. Very surprised that the test pyramid is not being discussed as much as it should be.


As someone who wrote automated compiler tests many years ago, I believe the right balance is for a developer to write an example test or two for her new features, which QA can then work from when writing the full unit tests.

IMHO, developers should not be in the practice of exhaustive automated unit testing, but should write sanity checks for continuous integration and after-deployment smoke tests, to make sure nothing fundamental broke in the build. It's easy to take TDD too far.
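
For instance, an after-deployment smoke check can be as small as this plain-Ruby sketch (the base URL env var and the paths are placeholders, not a real convention):

  require "net/http"
  require "uri"

  base = URI(ENV.fetch("SMOKE_BASE_URL", "http://localhost:3000"))

  # Hit a handful of key pages; fail loudly if anything fundamental broke.
  %w[/health /login /].each do |path|
    response = Net::HTTP.get_response(base.merge(path))
    unless response.is_a?(Net::HTTPSuccess)
      abort "SMOKE FAILURE: #{path} returned #{response.code}"
    end
    puts "ok #{path} (#{response.code})"
  end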


"Test-first fundamentalism is like abstinence-only sex ed: An unrealistic, ineffective morality campaign for self-loathing and shaming."

Amen for a perfect analogy.


This post is amazing. I can't stand the TDD zealots I've worked with in the past. They act as if they aren't even programmers; they are test-framework gurus first and foremost. Product is #1 in my book. Tests are needed, but these guys would drag a task out for weeks with the excuse "gotta write tests..." This post is a breath of fresh air.


All this talk of gurus, senseis, ninjas, zealots, rockstars, and superstars and shit has to stop. It's seriously making me annoyed I work in this industry.


Thank you for this comment! Story of my life really.


TDD is but one example of the apparently inevitable fate of good ideas in software development: it becomes the One True Way, universally applicable, and the new litmus test for distinguishing Real Developers from troglodytes. Whenever a developer gets a new tool, she is supposed to throw out the old one (there will never, of course, be more than one in the toolbox at any given time.)


The response is as religious as the subject, and I don't think the AA analogy is very respectful.


Off-topic, but I like seeing more and more programming related articles hitting the frontpage lately.


I use TDD extensively and it helps me a lot. It works like version control for progress. It splits problem into small parts which can be solved separately. I can work on very hard problems, without fully understanding them.

But I agree: for simple CRUD web apps TDD is overkill.


If you take this view, what is your approach to testing processes that use external services? External services tend to fail randomly, especially if you are in a corporate setting.

Is some amount of decoupling desirable? Or is it just like databases, file IO, browsers, etc.?


Fully agree. I never saw TDD working for GUI code or algorithm design.

Just plain testing of the inputs/outputs of functions/methods that can work as independent black boxes, without any dependencies on outside systems or closed-source binary libraries.


The problem is that people think test-driven design leads to good design. It does not. It does not even lead to design at all. All it does is create testable code. That code might have a good design, or it might not.


Most of the examples I see seem to test the framework rather than the new code.


Finally some common sense. I never liked this TDD madness.


TDD is so prevalent in the Rails community, it will be interesting to see what happens now that their leader has discredited all the benefits of it.


Nothing. DHH is famous for being opinionated. Things DHH dislikes include RSpec, factories, Haml, and service-object layers.


Automated testing is good, but I doubt the benefits of TDD; at least it doesn't work well for me. Maybe it's good for others.


Finally... acknowledgement from someone famous.

TDD is wonderful in theory; it just has no detection of missing edge cases, the stuff you easily spot when using a "hybrid continuous testing methodology".

I'd say: Code away! Just make sure you have tests for the important parts when you "check in" your stuff.

Offtopic: I should become famous/respected in "the community" so I can push this kinda stuff forward faster internally.


"Blank is dead" is the new "blank is the blank for blank".


All guilt I've been feeling about not always writing tests first... Gone! Phew.

I think the point of this is: TDD is still good, but it's not ALWAYS good. It's a great way to frame thoughts, but writing tests last isn't "wrong" either.


IT REALLY DEPENDS ON WHAT YOU ARE DOING


The most accurate comment in the thread, and you get downvoted!


Me too!


People like talking about the reality of implementing design patterns but leave out huge variables, like what is being built.

I think his article highlights something else about fad design patterns: hype and pressure. The number of GUI developers I've met running around yelling TDD was amusing, and it's refreshing to meet people who do what is right instead of implementing hype-patterns... new word? :¬)


Them bait titles. Make it stop.



