
Still not really a statement about accuracy...


Experimenting with more models increases the likelihood you'll find a good model. A good model is, by definition, a more accurate representation of your domain than a bad model. It will also tend to generate more accurate predictions, if that's what you care about.

As a secondary point, re-implementing inference code for each new model makes it almost certain that there are bugs in said code. So even without changing the model, automatically generated inference code is likely to have fewer bugs, and thus give more accurate inferences, than hand-written code. (Assuming it runs to convergence; naturally there are lots of scenarios in which naively generated code will be slower to converge than something hand-tuned.)
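A minimal sketch of that idea (the model and sampler here are illustrative, not from the thread): instead of re-deriving and re-implementing update equations for every model, you write one generic sampler that only consumes a log-density function. The sampler is debugged once; each new model is just a new log-density.

```python
import math
import random

def metropolis(log_post, init, n_steps=20000, step=0.5, seed=0):
    """Generic 1-D random-walk Metropolis sampler.

    Written and debugged once; it works for ANY model that can
    supply an (unnormalized) log-posterior, which is the sense in
    which 'generated' inference code avoids per-model bugs.
    """
    rng = random.Random(seed)
    x = init
    lp = log_post(x)
    samples = []
    for _ in range(n_steps):
        prop = x + rng.gauss(0.0, step)          # propose a move
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        samples.append(x)
    return samples

# Toy model: unknown mean mu, N(0, 10^2) prior, unit-variance likelihood.
data = [1.2, 0.8, 1.1, 0.9, 1.0]

def log_post(mu):
    log_prior = -mu * mu / (2 * 10.0 ** 2)
    log_lik = -sum((y - mu) ** 2 for y in data) / 2
    return log_prior + log_lik

samples = metropolis(log_post, init=0.0)
post_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

Swapping in a different model means changing only `log_post`; the hand-written-per-model alternative would mean rewriting the sampling loop itself each time, which is where the bugs creep in.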


I don't consider bugs in code a matter of the model's accuracy. And while you can compare the accuracy levels of various models, whether a compiler performs the inference or you do it by hand doesn't change the accuracy. I also don't subscribe to the idea that finding the right model for a scientific or statistical phenomenon is a random event.



