If you've got a slow test suite, that means you've got a bad test suite. Removing tests to make it faster takes away from its purpose, which is to aid you in refactoring.
Or it means you are testing something that's computationally expensive. Not everything is just web model input validation; some people are doing real work. :P
Not necessarily. I've worked on math-heavy programs where single calculations could take seconds to run. Given how rarely they came up in actual use, this was not a problem, but our tests needed to run these calculations more times than any single execution of the program likely would.
More specifically, consider an SSE2-based function 'float32 floor(float32)'. There's only about 4 billion inputs, so why not test them all? That only takes a minute or so.
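To sketch what that exhaustive sweep looks like (in Python rather than SSE2, with `my_floor` as a deliberately simple scalar stand-in for the real routine and `reference_floor` standing in for libc's floorf), you can enumerate float32 bit patterns and compare against the oracle. The `stride` parameter is only there to keep this pure-Python demo quick; `stride=1` is the full 2^32 sweep, which is what you'd run in the real (compiled) test:

```python
import math
import struct

def my_floor(x: float) -> float:
    # Candidate implementation: truncate toward zero, then correct negatives.
    # NaN, infinities, and |x| >= 2**23 (already integral in float32) pass
    # through; signed zeros are returned unchanged to preserve their sign.
    if not abs(x) < 2.0 ** 23:
        return x
    if x == 0.0:
        return x
    t = float(int(x))          # truncation toward zero
    return t - 1.0 if t > x else t

def reference_floor(x: float) -> float:
    # Oracle standing in for libc's floorf: every float32 value and its
    # floor are exactly representable as a Python float (binary64).
    if not math.isfinite(x) or x == 0.0:
        return x
    return float(math.floor(x))

def same_float(a: float, b: float) -> bool:
    # Bitwise-style equality: NaNs match NaNs, and -0.0 differs from +0.0.
    if math.isnan(a) and math.isnan(b):
        return True
    return a == b and math.copysign(1.0, a) == math.copysign(1.0, b)

def check_floor(stride: int = 65537) -> int:
    # Enumerate float32 bit patterns and count disagreements with the oracle.
    # stride=1 is the full 2**32 sweep; the default samples ~65k patterns so
    # this demo stays fast (the full sweep belongs in compiled code).
    mismatches = 0
    for bits in range(0, 2 ** 32, stride):
        x = struct.unpack("<f", struct.pack("<I", bits))[0]
        if not same_float(my_floor(x), reference_floor(x)):
            mismatches += 1
    return mismatches
```

Enumerating bit patterns rather than hand-picking values means NaNs, infinities, signed zeros, and denormals are covered automatically, which is exactly where hand-written test cases tend to have gaps.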
How is testing 100 inputs a unit test and testing 4 billion inputs, through exactly the same API, an integration test?
As the author points out, many people wrote libraries which are supposed to handle the entire range, but ended up making errors under various conditions, and even gave wrong answers for over 20% of the possible input range.
Is 90 seconds to test a function "slow"? What about 4.5 minutes to test three functions?
If you say it's slow, then it's a bad test suite and/or it includes integration tests. I believe that is the logic, yes?
There is no smaller unit to test, so this must be a unit test.
The linked-to page shows that testing all possibilities identifies flaws that normal manual test construction did not find, with several examples of poorly tested implementations. Therefore, it must be a better test suite than one built from manually selected test cases.
(Note: an exact test against the equivalent libc results is easier to write than a manually selected set of test cases, and it's easier for someone else to verify that the code tests all possibilities than to verify that a selected set of corner cases is complete.)
Therefore, logic says that it is not a bad test suite.
Since it contains unit tests and is not a bad test suite, it must not be slow.
Therefore, 4.5 minutes to unit test these three functions is not "slow".
Therefore, acceptable unit tests may take several minutes to run.
That is what the logic says. Do you agree? If not, where is the flaw in my logic?
How can you have a good test suite without integration tests? That's not a full test suite. That's a cop-out.
A good test suite has two qualities: how comprehensive it is and how fast it runs. If either is lacking, it is no longer a good test suite.
It's quite easy to have slow tests that aren't integration tests. For instance, there are tests in Sympy that are only a few lines of code but run very slowly because the calculation is difficult. Sometimes (but not always) it's trying to compute a very difficult integral (which is a test of integration, but not an integration test).
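To illustrate the shape of such a test (this is a stdlib-only stand-in, not an actual Sympy test, and `slow_integral` is a hypothetical name): the test body is a couple of lines, but the runtime is dominated by the calculation itself.

```python
import math
import unittest

def slow_integral() -> float:
    # Midpoint-rule approximation of the integral of sin(x) over [0, pi].
    # Two million evaluations of math.sin dominate the runtime; the slowness
    # is the calculation, not I/O or external systems, so by any API-based
    # definition this is still a unit test.
    n = 2_000_000
    h = math.pi / n
    return h * math.fsum(math.sin((k + 0.5) * h) for k in range(n))

class TestIntegral(unittest.TestCase):
    def test_sin_integral(self):
        # The exact value of the integral of sin(x) over [0, pi] is 2.
        self.assertAlmostEqual(slow_integral(), 2.0, places=9)
```

No amount of test restructuring makes the sum cheaper without changing what is being verified; the only way to speed it up is to verify less.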
Or it just means you have tests which could be better optimized for speed but in fact are optimized for something else.
We had a series of tests (more towards integration tests, I guess) at one point in LedgerSMB that did things like check database permissions for some semblance of sanity. These took about 10 minutes to run on a decent system. The reason was that we stuck with functionality we could guarantee wouldn't change (the information schema), which did not perform well in this case. Eventually we got tired of this and rewrote the tests against the system tables, cutting the runtime down to something quite manageable.
We had these tests mixed in with the db logic unit tests because they provided information we could use to track down other test failures (i.e., "the database is sanely set up" is a prerequisite for the db unit tests).
Computation-heavy algorithms. My main focus is geospatial analysis, and to test certain things, you are going to end up with some 1000ms+ tests. Get 10 or 20 of those, and you have a problem.