> Then why do we so consistently fuck it up horribly?
This is the crux of it. Coding these days "ain't that hard" because the current generation of programmers sneer at formal methods. They believe providing even semi-rigorous proofs that their code works is not worth their effort.
The result? Testing code with arbitrary inputs is considered best practice; even semi-rigorous proofs are a "luxury". Would your high school geometry teacher accept "well, here are three triangles whose angles sum to 180 degrees. QED."? Then why are we so content, as professionals, to provide similar "proofs" of correctness?
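The geometry analogy can be made concrete in a few lines. A minimal sketch (the function and the sample values are mine, purely for illustration):

```python
# "Three triangles, QED": checking a few examples is not a proof.

def angles_sum_to_180(a, b, c):
    """Check one candidate 'triangle' given as three angles in degrees."""
    return a + b + c == 180

# Example-based 'testing': three hand-picked cases all pass...
samples = [(60, 60, 60), (90, 45, 45), (30, 60, 90)]
assert all(angles_sum_to_180(*t) for t in samples)

# ...but passing samples say nothing about inputs they never covered.
# The same check happily accepts something that is not a triangle at all:
assert angles_sum_to_180(200, -50, 30)
```

Three passing cases establish nothing about the infinitely many inputs the samples never touched, which is exactly the gap between testing and proof.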
Unfortunately, there's a strong feedback loop here. Formal methods receive little treatment because there is little enthusiasm for their application, and there is little enthusiasm for their application because the tools aren't advanced enough. The core problem is still the same, though: programmers don't take math seriously.
I'd say blaming programmers is a bit misplaced; blame the business. Most businesses don't pay programmers to write good code; they pay for fast code that generally works. They don't want to pay much, because how hard can it be to put database data on a screen? Most programming simply doesn't require math beyond elementary arithmetic. Do you think most programmers have time for formal proofs?
Every business wants cheap and fast. And responsible professionals insist on doing things correctly anyways. You don't blame the business when someone builds a bridge that collapses when it gets windy; you blame the person who designed it wrong. Same thing with code: nobody is forcing us to do it wrong, and doing it right doesn't take significantly longer. Programmers just need to be educated and learn how to write code.
> And responsible professionals insist on doing things correctly anyways.
There is more than one "correct". Cheap fast prototypes can and generally should skimp on "correct" scaling things because that extra work isn't necessary unless the idea takes off; otherwise you're saving a ton of labor by skipping things that a final product might need but a failed idea won't.
> You don't blame the business when someone builds a bridge that collapses
Be serious; lives aren't generally on the line for most business apps, and that kind of engineering is time-consuming, expensive, and totally unnecessary most of the time for application developers.
> do it wrong, and doing it right doesn't take significantly longer
I completely disagree.
>Programmers just need to be educated and learn how to write code.
Sounds like you need to spend some time running a business and paying for those programmers; you'll change your mind fast.
>Cheap fast prototypes can and generally should skimp on "correct" scaling things
Prototypes don't go into production, so it is moot. No, first versions should not skimp on correctness. It saves virtually no time at all. For example, I've done absolutely nothing special to make our web apps scalable; we just use scalable tech to build it in the first place instead of garbage.
>Be serious; lives aren't generally on the line
How is that relevant?
>I completely disagree.
Maybe you should try it sometime then.
>Sounds like you need to spend some time running a business and paying for those programmers
Almost six years so far; how much more time do I need to spend before reality flips on its head and doing things wrong suddenly becomes awesome?
> Prototypes don't go into production, so it is moot.
Yes they do.
> No, first versions should not skimp on correctness.
Yes they should.
> It saves virtually no time at all.
It saves plenty of time.
> we just use scalable tech to build it in the first place instead of garbage.
We aren't talking about tech, but technique. There are plenty of times you can do things in a less-than-optimal fashion that is much quicker to code than the optimal version, which isn't worth building unless the idea gets some traction.
If it goes into production, then it is not a prototype by definition. Since you are making the claim that doing things right is so much more time and effort, could you give an example of that happening? As I've gotten more experienced over the years, and built up a greater mathematical knowledge, I've found that writing correct code has saved me time, not cost it.
> If it goes into production, then it is not a prototype by definition.
That's not what prototype means to me. A prototype is a first cut that's enough to try an idea out with real users, but is built by optimizing for developer time only, not performance or scalability. If an idea is validated, you can come back and optimize things: add caches, trim db queries to pull only the required fields, add paging, or any of the dozens of little things you can skip to save time that don't add functionality but do add scalability, whether to traffic or to a full database. This has nothing at all to do with mathematical knowledge.
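As a concrete illustration of the kind of deferred work described above, here is a hedged sketch; the table name, column names, and page size are invented for the example, and sqlite3 merely stands in for whatever database the app actually uses:

```python
import sqlite3

# Toy schema and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO items (name) VALUES (?)",
                 [(f"item-{i}",) for i in range(1000)])

def list_items_quick(conn):
    # Prototype version: fetch every row and every column.
    # Fine while the table is small; collapses under a full database.
    return conn.execute("SELECT * FROM items").fetchall()

def list_items_paged(conn, page, page_size=50):
    # "Optimization pass" version: minimal fields plus paging.
    return conn.execute(
        "SELECT id, name FROM items ORDER BY id LIMIT ? OFFSET ?",
        (page_size, page * page_size),
    ).fetchall()

assert len(list_items_quick(conn)) == 1000
assert len(list_items_paged(conn, page=0)) == 50
```

The quick version is one line and fine for a prototype; the paged version is the sort of optimization-pass work that only pays off once traffic or data volume justifies it.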
It's not about correct code vs. wrong code; it's about the right code for the situation. Bad ideas don't need scalable code, and scalable, performant code is not as fast to write as quick-and-dirty code. Quick and dirty doesn't mean wrong; it means working but not optimal.
Sometimes you write these quick-and-dirty things to let the business guy try his idea out, and if it fails, as it often does, then you haven't wasted time or money making something scalable and performant that doesn't need to be.
Production code is code that has had the optimization pass after you've decided an idea is worth the effort.
Imperfect code that's done cheaper and faster can be more acceptable to your client, with no moral or legal ramifications for you (i.e. as long as you make the client aware of the drawbacks and nothing bad, like people dying, could occur). An exception would be programming in the medical field. Outside those cases, comparing most programming to bridge building is silly.
The next difficulty is defining correct code. It's going to be hard to find consensus on anything beyond the basics.