> I appreciate your answer, but the destructiveness of the reward loop is directly addressed in the PDF.
I've gone back through most of the PDF, and I can't figure out what section you're referring to. There's a bunch of discussion of perverse incentives (mostly involving incompetent managers or painfully sloppy developers, who will fail with any methodology), but I don't see where the author addresses what I'm talking about.
Specifically, I'm talking about the hour-to-hour process of implementing complex features, and how tests can "lead" you through the implementation process. It's possible to ride that red-green-refactor loop for hours, deep in the zone. If a colleague interrupts me, no problem, I have a red test case waiting on my monitor as soon as I look back.
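To make that concrete, here's a minimal sketch of one turn of the loop (the `slugify` function and its tests are hypothetical, just for illustration):

```python
# Red: write a small failing test for the next bit of behavior you want.
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_strips_whitespace():
    assert slugify("  Hello World  ") == "hello-world"

# Green: write the simplest implementation that makes the tests pass.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

# Refactor: with the tests green, clean up names and structure without
# changing behavior, rerunning the tests after each change. Then write
# the next red test and repeat.
test_slugify_basic()
test_slugify_strips_whitespace()
```

The point is the cadence, not the code: each red test is a tiny, concrete next step, which is exactly why an interruption costs so little.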
This "loop" is hard to teach, and it requires both skill and judgment. I've had mixed results teaching it to junior developers—some of them suddenly become massively productive, and others get lost writing reams of lousy tests. It certainly won't turn a terrible programmer into a good one.
If anything, the biggest drawback of this process is that it can suck me in for endless productive hours, keeping me going long after I should have taken a break. It's too much of a good thing. Sure, I've written some of the most beautiful and best-designed code in my life inside that loop. But I've also let it push me deep into "brain fry".
> No amount of TDD will solve you getting half-way through a design and then finding you needed many-many because you misunderstood requirements.
I've occasionally been lucky enough to work on projects where all the requirements could be known in advance. These tend to be either very short consulting projects, or things like "implement a compiler for language X."
But usually I work with startups and smaller companies. There's a constant process of discovery—nobody knows the right answer up front, because we have to invent it, in cooperation with paying customers. The idea that I can go ask a bunch of people what I need to build in 6 months, then spend weeks designing it, and months implementing it, is totally alien. We'd be out of business if it took us that long to try an idea for our customers.
It's possible to build good software under these conditions, with fairly clean code. But it takes skilled programmers with taste, competent management and a good process.
Testing (unit, integration, etc.) is a potentially useful part of that process, allowing you to adapt to changing requirements while minimizing regressions.