Generative testing for JavaScript (github.com/graue)
52 points by luu on Aug 16, 2015 | 11 comments


I'm a bit torn here. On the one hand, this sounds like it has a high probability of doing exactly what unit tests shouldn't do: pass on one run and fail on another.

On the other hand, the fact that it would fail at all would help you see that you have a bug. Something you might not have caught before.


These aren't unit tests, they're generative tests. They're used for different things. Unit tests make assertions about how a program responds to a specific input. Generative tests make assertions about invariants in a program over a wide range of inputs.
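The difference shows up even in a toy example. A minimal sketch in plain JavaScript (the `sort` function and the inline loop are illustrative, not any particular library's API):

```javascript
// Function under test (hypothetical).
function sort(xs) { return [...xs].sort((a, b) => a - b); }

// Unit test: one specific input, one specific expected output.
console.assert(JSON.stringify(sort([3, 1, 2])) === JSON.stringify([1, 2, 3]));

// Generative test: an invariant checked over many randomly generated inputs.
function isSorted(xs) { return xs.every((x, i) => i === 0 || xs[i - 1] <= x); }
for (let i = 0; i < 1000; i++) {
  const xs = Array.from({ length: Math.floor(Math.random() * 20) },
                        () => Math.floor(Math.random() * 100));
  console.assert(isSorted(sort(xs)), `not sorted: ${xs}`);
}
```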

The workflow for using them is very different. A unit test suite contains a finite number of assertions and, assuming no bugs, should run in a relatively small (or at least bounded) amount of time. A generative test, however, usually can run forever _by design_.

A typical workflow is to start a run overnight and see if it caught anything in the morning. If the generative suite finds any bugs, you turn the specific cases that caused the failures into unit tests and commit them.




So is generative testing the same thing as fuzzing?


Fuzzing: "The program is then monitored for exceptions such as crashes, failing built-in code assertions, or potential memory leaks."

Fuzzing is more crude in that it doesn't really know what happens after providing the random value; it just tries to trigger crashes or failed asserts.

Generative testing, by contrast, involves writing an invariant, in the test itself, that must hold over the random values.

Here are some examples of properties and invariants: http://fsharpforfunandprofit.com/posts/property-based-testin...
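A few of the classic property shapes, sketched as plain JavaScript predicates (the helper names are made up for illustration):

```javascript
// Round trip: decoding an encoded value gives the original back.
const roundTrip = s => decodeURIComponent(encodeURIComponent(s)) === s;

// Idempotence: sorting twice gives the same result as sorting once.
const sortOnce = xs => [...xs].sort((a, b) => a - b);
const idempotent = xs =>
  JSON.stringify(sortOnce(sortOnce(xs))) === JSON.stringify(sortOnce(xs));

// Invariance: reversing a list preserves its length.
const lengthPreserved = xs => [...xs].reverse().length === xs.length;
```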


One feature I've added to my own generative testing tool [1] to mitigate this problem is to save the random value that made the test fail and add it to a set of deterministic test cases that always need to pass before the tool will try any new randomized cases.
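The replay-failures-first idea might look roughly like this, sketched in JavaScript (hypothetical names, not check-it's actual API):

```javascript
// Inputs that previously made a property fail, replayed on every run.
const regressions = [];

function runProperty(prop, genInput, runs = 100) {
  // Replay every saved failing case before trying anything new.
  for (const saved of regressions) {
    if (!prop(saved)) return { ok: false, input: saved, replayed: true };
  }
  for (let i = 0; i < runs; i++) {
    const input = genInput();
    if (!prop(input)) {
      regressions.push(input);   // persist for future runs
      return { ok: false, input, replayed: false };
    }
  }
  return { ok: true };
}
```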

Of course, if you discover a test failure on a very low-probability corner case, other people working on their own branches probably won't trigger it. But with the right workflow, the situation shouldn't be all that different from adding a regular test case, which could always fail once you merge it with someone else's code that wasn't written against it.

[1] https://github.com/DalekBaldwin/check-it


This is a reasonable, common quibble when starting property-based testing.

A failing test should be straightforward to turn into a simple checked example - most good libraries have extensive simplification steps that try to boil failing examples down to the minimal example that still causes the test to fail.
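For integer inputs, a crude version of that simplification step might look like this (an assumed strategy for illustration, not any specific library's shrinker):

```javascript
// Candidate simplifications of a failing integer: zero, half, one step toward zero.
function shrinkInt(n) {
  if (n === 0) return [];
  return [0, Math.trunc(n / 2), n - Math.sign(n)];
}

// Greedily replace the failing input with a simpler one that still fails.
function minimize(failing, prop) {
  let current = failing;
  let progress = true;
  while (progress) {
    progress = false;
    for (const candidate of shrinkInt(current)) {
      if (!prop(candidate)) {     // candidate still fails: keep shrinking it
        current = candidate;
        progress = true;
        break;
      }
    }
  }
  return current;                 // a locally minimal failing example
}
```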

But this is only necessary if repeated runs of the property-based tests don't reproduce the failure. In my experience, they have.


The generators in this project are deterministic, and the tests thus repeatable, according to this extract from the README:

Avoid calling Math.random in your [custom generator] functions, since if you do so, test runs won't be repeatable. All randomness should come from the built-in generators.


I'm not sure about this library, but you could also use a PRNG with a fixed seed set before test time, so that the tests are repeatable. One of the key points to me is the removal of much of the author bias in test case generation.
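Such a seeded PRNG is small to build; mulberry32 is one common example (this sketch is independent of the library under discussion - the point is only that one seed yields one reproducible sequence, which can be logged when a test fails):

```javascript
// mulberry32: a tiny 32-bit seeded PRNG returning floats in [0, 1).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), a | 1);
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Replaying the same seed reproduces the exact same "random" inputs.
const run = mulberry32(42);
const replay = mulberry32(42);
```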


This doesn't seem to have been updated in a year. How does this compare to other projects such as https://github.com/jsverify/jsverify which are currently more active?



