> I have conducted extensive experiments with CS faculty members from all over the world, and the vast majority of them —I mean about 95% of them— cannot program a Binary Search.
Knuth’s The Art of Computer Programming, Volume 3, has some more on this. Off the top of my head, he writes that it took something like a decade and a half from the first published binary search (1946) to the first version that was correct for all inputs (1962).
This is a bar that you would expect a professor to be able to jump over, but it goes to show that a simple-sounding algorithm can sometimes be exceedingly tricky to get right. In my experience, some kind of Hoare-triple-derived methodology with invariants is the only reliable way to code a correct binary search from scratch, as opposed to simply having memorized a solution. Here[1] is what at a glance appears to be a readable and correct example of how one might do that.
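For illustration, here is a minimal sketch in that spirit (my own, not the code from [1]): the loop maintains an explicit invariant, and each branch is justified against it rather than guessed at.

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent.

    Loop invariant: every element of a[:lo] is < target, and every
    element of a[hi:] is >= target.  It holds trivially at entry
    (both slices are empty) and each branch below preserves it.
    """
    lo, hi = 0, len(a)
    while lo < hi:
        mid = lo + (hi - lo) // 2   # written to avoid overflow in fixed-width-int languages
        if a[mid] < target:
            lo = mid + 1            # now all of a[:lo] < target, invariant kept
        else:
            hi = mid                # now all of a[hi:] >= target, invariant kept
    # lo == hi, so a[:lo] < target <= a[lo:]; if target occurs, its first
    # occurrence is exactly at index lo.
    if lo < len(a) and a[lo] == target:
        return lo
    return -1
```

Because the invariant pins down what `lo` and `hi` mean at every step, the usual off-by-one traps (whether `hi` is inclusive, whether to write `mid` or `mid + 1`) answer themselves.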
For Dijkstra, programming meant to think about a problem, then sit for a few minutes at your desk and write with pen on paper "in one go" the perfect algorithm/program: no bugs, no errors, no corrections, from the first try, and also in beautiful handwriting. An idealized approach, akin to the way Mozart would allegedly write his compositions. Whereas "Beethoven was a doubter and a struggler who started writing before he finished the composition and then glued corrections on the paper. In one place he did this nine times. When they peeled them, the last version proved identical to the first one. This iterative method of programming is somehow a very Anglo-Saxon custom. British education is pervaded by it. People learn, when they write, not to try to get it right the first time. Just write what's on your mind and then rewrite repeatedly to get the product you want. That's partly why word processors are marketed so aggressively and partly why they have been so successful there." [1]
> For Dijkstra programming meant to think about a problem, then sit for a few minutes at your desk and write with pen on paper "in one go" the perfect algorithm/program, no bugs, no errors, no corrections, from the first try, and also, in beautiful handwriting.
An approach that works best if you get to pick your own, narrowly defined problems, and do not deal with externally imposed, let alone changing, requirements. It utterly does not scale.
I'm not entirely sure when the last time was that Dijkstra actually typed a line of code into a computer (or punched it into a card, as the case may have been), but it appears to have been around the early 1970s. So, like the Pope's pronouncements on sex, Dijkstra's pronouncements on programming derive a lot of their clear-cut purity from coming from a non-practitioner.
Interestingly enough, Dijkstra anticipated your response and, for me at least, adequately refuted it in some of his other EWDs.
Are you aware that among the “narrowly defined problems, and do not deal with externally imposed, let alone changing, requirements” that Dijkstra solved are such doozies as distributed consensus, synchronization, and OS level hardware interrupt handling?
And so these are now no longer problems? Or do they remain thorny matters in practice, precisely because the versions that Dijkstra solved are narrowly defined and without awkward external requirements?
Good software engineering largely rests on taking whatever "awkward external requirements" you're dealing with and decomposing them by concerns until you have a set of "narrowly defined" problems that you can effectively reason about, formally or informally. Dijkstra was a master of that kind of decomposition and that's how he got the interesting "theoretical" problems he solved: they came up in the course of breaking down practical problems!
The point then is that with the concerns having been adequately separated, rigorous, disciplined, and mathematical reasoning is a better approach to thorny "real-world" problems than spitballing and hoping it works.
> Good software engineering largely rests on taking whatever "awkward external requirements" you're dealing with and decomposing them by concerns until you have a set of "narrowly defined" problems that you can effectively reason about, formally or informally. Dijkstra was a master of that kind of decomposition and that's how he got the interesting "theoretical" problems he solved: they came up in the course of breaking down practical problems!
Every good software engineer breaks down the problems they have to solve, and usually you find that they're, say, 20% interesting theoretical problem and 80% messy practical drudgery. Unfortunately most of us don't have the luxury of choosing to only solve the nice 20%.
> The point then is that with the concerns having been adequately separated, rigorous, disciplined, and mathematical reasoning is a better approach to thorny "real-world" problems than spitballing and hoping it works.
What's the evidence for this? Because if Dijkstra published a paper "solving" the problem, but the problem was not actually solved in the real world, that to me suggests that his approach is decidedly ineffective.
> What's the evidence for this? Because if Dijkstra published a paper "solving" the problem, but the problem was not actually solved in the real world, that to me suggests that his approach is decidedly ineffective.
Another possibility is that the field is dominated by coders who are mathematically illiterate and unwilling to remedy that deficiency while hiding behind rhetorical hand waving about the “real world.” You yourself appear to stand as evidence for that explanation.
Fortunately, we are seeing some real progress in the field. For example, tools like TLA+ are slowly but surely seeing wider and wider usage. Lamport is very much in the same school of thought as Dijkstra, by the way. Even TDD represents a major step toward a more disciplined approach to programming.
> Another possibility is that the field is dominated by coders who are mathematically illiterate and unwilling to remedy that deficiency while hiding behind rhetorical hand waving about the “real world.” You yourself appear to stand as evidence for that explanation.
I'll take the gratuitous personal attack as a sign that you have no real argument. "coders who are mathematically illiterate" doesn't explain anything. Even in the research mathematics community it's acknowledged that if you haven't communicated your proof to others then you haven't proved the theorem (see all the fuss around the ABC conjecture).
I'm all for mathematical rigour in programming (and, not that it matters, I'm as experienced/qualified as a non-research mathematician can get). But judging how effective something is by how rigorous it is is putting the cart before the horse.
> Even TDD represents a major step toward a more disciplined approach to programming.
A step that Dijkstra would have abhorred; he used variations of "Program testing can be used to show the presence of bugs, but never to show their absence!" in at least a dozen different EWD notes.
Yes, Dijkstra invented the semaphore, but that "solved" synchronization about to the extent that John McCarthy inventing the linked list "solved" data structures (or FAANG hiring, for that matter).
While I am not one to condone Hagiography, your comment is pure Hubris.
If you think solving problems by constant trial-and-error is the norm and that Dijkstra's idealized, mathematically disciplined, systematic methods can well be ignored, then I fear you have not understood the difference between Science, Engineering and Craftsmanship/Tradesmanship (i.e. practice without formal training).
>I'm not entirely sure when the last time was that Dijkstra actually typed a line of code into a computer ... coming from a non-practitioner.
This is laughable. By this logic, every code monkey today (including myself) is better than Dijkstra because we have simply written a larger volume of code, and in many different languages at that!
One needs to practice Humility when approaching the Words/Works of Masters and try to understand them instead of being dismissive. I keep the following quote by Arthur Conan Doyle from the novel The Valley of Fear in mind when studying the Greats:
Mediocrity knows nothing higher than itself, but Talent instantly recognizes Genius.
You might find the following two papers relevant:
1) This paper gives a retrospective on an earlier seminal paper by the author himself (what he got wrong and what he got right): Retrospective: An Axiomatic Basis for Computer Programming by C.A.R. Hoare - https://cacm.acm.org/magazines/2009/10/42360-retrospective-a...
This is what impresses me the most about Dijkstra's accomplishments. It's not that he was extremely smart, although he was. Or that he discovered many fundamentally important algorithms, although he did. It's that he chose to develop and share how he did what he did. It's that you can read through one of his papers and it makes you feel like you could have applied his methods and made the same discovery. Of course that's not necessarily true, but that it's even sometimes true is remarkable.
And speaking of his papers, the EWD archive the submission is from is an absolute treasure-trove. While I'd recommend one of his books for someone interested in learning the methods he and his colleagues used, the more technical EWDs are very interesting. I personally enjoy the more polemical ones too, but I understand why many people are put off by them. Whenever someone gets around to writing the comprehensive multi-volume history of the development of Comput(er|ing) Science they will be an invaluable resource. Here's just one that I find notable[1].
>It's that he chose to develop and share how he did what he did. It's that you can read through one of his papers and it makes you feel like you could have applied his methods and made the same discovery. Of course that's not necessarily true, but that it's even sometimes true is remarkable.
Exactly Right!
This is the reason I worship at the "Altar of Dijkstra :-)": he shows by example how to apply mathematical logic to develop a step-by-step solution to seemingly complicated problems. I am more interested in understanding the methodology of thinking involved in the problem-solving process (the essence of his genius, which cannot be taught directly but can only be demonstrated via examples) than in the details of the actual examples themselves.
To borrow Dr. Watson's words about Sherlock Holmes (from the story "The Red-Headed League"):
I trust that I am not more dense than my neighbors, but I was always oppressed with a sense of my own stupidity in my dealings with [Edsger Dijkstra]. Here I had heard what he had heard, I had seen what he had seen, and yet from his words it was evident that he saw clearly not only what had happened but what was about to happen, while to me the whole business was still confused and grotesque.
> If you think solving problems by constant trial-and-error is the norm
News flash: It is.
> Dijkstra's idealized, mathematically disciplined, systematic methods can well be ignored
I never said that. I think they have their place, and I use them myself in some situations. It's Dijkstra and his acolytes who think iterative approaches can be ignored.
> One needs to practice Humility when approaching the Words/Works of Masters and try to understand them instead of being dismissive.
I've read a lot of Dijkstra's writings. They are not without merit. But he himself could have done with practicing some humility when engaging with computing as it is actually practiced in the physical world, as opposed to in his mathematical models.
Knuth is not any less theoretically sound than Dijkstra. But he engaged with practical computing issues and wrote software that is still in wide use. Wirth is not any less disdainful of commercial computing hype than Dijkstra. But he built actual hardware and software systems. Those are the masters I look up to.
I don't think you have understood the nature of Dijkstra's work, nor the genius behind it. It is different from the genius of Niklaus Wirth (simplicity of OS and programming-language implementations) or Donald Knuth (efficiency of algorithm implementations) in that it operates at a more "meta" level, bringing together the fields of Computation, Languages and Mathematics.
I have already pointed out the ridiculousness of measuring Dijkstra's contributions by the amount of code written. He pioneered, defined and established a great many aspects of the Computing Science field when it was still in its infancy. Here is a non-exhaustive list (from https://en.wikipedia.org/wiki/Edsger_W._Dijkstra and other sources):
1) He designed/described an ISA/Assembly Language to interface with one of the first computers ever built (his PhD thesis, titled "Communication with an Automatic Computer").
3) He designed/co-wrote one of the very first ALGOL 60 compilers, the "Dijkstra-Zonneveld ALGOL 60 compiler" - https://en.wikipedia.org/wiki/ALGOL_60
4) Instrumental in establishing "Structured Programming" constructs in language design itself which we all take for granted today.
5) Instrumental in establishing Mathematical Rigour in Computer Programming via Mathematical Logic/Predicate Calculus, akin to the writing of mathematical proofs, to show "Correctness by Design and Construction". The importance of this cannot be overstated.
6) Instrumental in establishing aspects of Concurrent Programming (e.g. Cooperating Sequential Processes), Synchronization (e.g. Mutex, Semaphores), etc.
7) Instrumental in devising several non-trivial algorithms for both sequential and concurrent programs, e.g. Shortest-Path, Dining Philosophers, etc.
As you should now be able to appreciate, Dijkstra did write a great many "Programs" (in addition to his other writings), all of which were path-breaking, something few can claim about their own life's work.
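To make item 7 above concrete, here is a minimal sketch of the shortest-path algorithm (my own modern rendering using a binary heap, not Dijkstra's original 1959 presentation, which predates such priority queues):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative edge weights.

    graph: dict mapping node -> list of (neighbor, weight) pairs.
    Returns a dict of shortest distances from source to each reachable node.
    """
    dist = {source: 0}
    heap = [(0, source)]                  # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd              # relax the edge u -> v
                heapq.heappush(heap, (nd, v))
    return dist
```

The key property, which Dijkstra proved, is that with non-negative weights a node's distance is final the first time it is popped with its current best distance.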
1-3 are from his time as a practitioner in the field. There was an exchange in Knuth's interview in Peter Seibel's "Coders at Work" that I thought was really insightful:
> Seibel: It seems a lot of the people I've talked to had direct access to a machine when they were starting out. Yet Dijkstra has a paper I'm sure you're familiar with, where he basically says we shouldn't let computer-science students touch a machine for the first few years of their training; they should spend all their time manipulating symbols.
> Knuth: But that's not the way he learned either. He said a lot of really great things and inspirational things, but he's not always right. Neither am I, but my take on it is this: Take a scientist in any field. The scientist gets older and says, "Oh, yes, some of the things that I've been doing have a really great payoff and other things, I'm not using anymore. I'm not going to have my students waste time on the stuff that doesn't make giant steps. I'm not going to talk about low-level stuff at all. These theoretical concepts are really so powerful—that's the whole story. Forget about how I got to this point."
> I think that's a fundamental error made by scientists in every field. They don't realize that when you're learning something you've got to see something at all levels. You've got to see the floor before you build the ceiling. That all goes into the brain and gets shoved down to the point where the older people forget that they needed it.
(4) He certainly promoted them. I don't think he invented them; the construct he was most involved in was guarded commands, which he promoted as an alternative to conventional if statements. For better or worse, those never caught on.
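For reference, the guarded-command conditional from Dijkstra's "A Discipline of Programming" reads like `if x >= y -> m := x [] y >= x -> m := y fi`: any clause whose guard holds may be chosen, and if no guard holds the construct aborts. A rough Python rendering of that semantics (my own illustration; the function names are hypothetical, not any standard API):

```python
import random

def guarded_if(*clauses):
    """Evaluate Dijkstra's 'if g1 -> s1 [] g2 -> s2 ... fi'.

    clauses: (guard, action) pairs, both given as zero-argument callables.
    Chooses nondeterministically among clauses whose guard is true;
    aborts if no guard holds, exactly as guarded commands prescribe.
    """
    enabled = [action for guard, action in clauses if guard()]
    if not enabled:
        raise RuntimeError("abort: no guard is true")
    return random.choice(enabled)()

def maximum(x, y):
    # Dijkstra's classic example: when x == y both guards are true and
    # either branch is correct -- the symmetry is the point of the notation.
    return guarded_if(
        (lambda: x >= y, lambda: x),
        (lambda: y >= x, lambda: y),
    )
```

The deliberate nondeterminism forces the programmer to make each branch correct on its own terms instead of relying on the order of tests.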
(5) I think it is, in fact, vastly overstated. Mathematical rigor has its uses in some areas (progress conditions in loops, for example, and getting binary searches right is vastly easier with some mathematical thinking). But the kind of "end-to-end" correctness reasoning he promoted has found only a few adopters (avionics, etc.).
If you disagree: how many of your last 10 projects have you proven correct? How many of them have been found to be defect-free so far?
6-7: yes, sure. I'm not denying that he made many important contributions to algorithmic thinking. But that does not translate into expertise in how programming ought to be done. Notably, none of his accomplishments in 1-3 were attained according to the methodology he advocated later in life as the only possible one, nor did he try to reproduce them using that methodology.
Whenever somebody says something, one has to consider their Competency, Context and Time, to read between the lines and try to get at the gist of what is intended rather than the mere words themselves.
This is orders of magnitude more important when we try to understand "The Greats" (in any field), because they have a certain insight into things, a huge knowledge base to draw upon, and an awareness of the myriad nuances involved in their domain of expertise, all of which we poor students have to struggle to grasp.
It is with the above points in mind that one should interpret Knuth's nuanced quote above. He had the highest regard and respect for Dijkstra (see his detailed review (pdf) of Dijkstra's book "Structured Programming" at http://infolab.stanford.edu/pub/cstr/reports/cs/tr/73/371/CS...). Given that Donald Knuth is the "Patron Saint of Yak Shaves" (https://yakshav.es/the-patron-saint-of-yakshaves/), his perspective did not allow a fully mathematical abstraction without any reference to a machine (real or virtual). Note that he did not deny Dijkstra's approach but merely disagreed with Dijkstra's strident promotion of it. That is a viewpoint and not a value judgement.
Coming back to the points above;
[1-3] - One has to remember that when Dijkstra did these, almost nothing that we take for granted today existed. Everything had to be built from the ground up. You can imagine the difficulty when you have nothing and nobody to learn from and are basically inventing things as you go along. You only had Mathematics as a stable bedrock.
[4] No, he pioneered it. Wikipedia actually credits him with coining the very term "Structured Programming" (https://en.wikipedia.org/wiki/Structured_programming#cite_no...) Regardless, he wrote EWD249 "Notes on Structured Programming" in 1969. Also see EWD1308 "What led to Notes on Structured Programming".
[5] Most people misunderstand what is meant by "Correctness" in a computer program. Perhaps Bertrand Meyer's synonym of a "Contract" is better suited to explain that what is meant is "Correctness w.r.t. Specification". Obviously this is fundamental to Programming itself. To get the benefit of this approach you don't have to go for 100% rigour (rather difficult) but can train yourself to think in these terms, using appropriate pre/post conditions. A good example can be seen in the use of Design-by-Contract syntax in Eiffel. Here is Dijkstra, from EWD288 "Concern for Correctness as a Guiding Principle for Program Composition":
My thesis is, that a helpful programming methodology should be closely tied to correctness concerns. I am perfectly willing to admit that I myself may be the most complete victim of my own propaganda, but that will not prevent me from preaching my gospel, which is as follows. When correctness concerns come as an afterthought and correctness proofs have to be given once the program is already completed, the programmer can indeed expect severe troubles. If, however, he adheres to the discipline to produce the correctness proofs as he programs along, he will produce program and proof with less effort than programming alone would have taken.
Finally, a word or two about a wide-spread superstition, viz. that correctness proofs can only be given if you know exactly what your program has to do, that in real life it is often not completely known what the program has to do and that, therefore, in real life correctness proofs are impractical. The fallacy in this argument is to be found in the confusion between "exact" and "complete": although the program requirements may still be "incomplete", a certain number of broad characteristics will be "exactly" known. The abstract program can see to it that these broad specifications are exactly met, while more detailed aspects of the problem specification are catered for in the lower levels.
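The pre/post-condition style mentioned above under [5] can be approximated even without Eiffel's require/ensure syntax. A rough sketch of the Design-by-Contract idea using plain assertions (my own hypothetical example, not code from Meyer or Dijkstra):

```python
def withdraw(balance, amount):
    """Withdraw amount from balance, in a Design-by-Contract style.

    The leading assertions play the role of Eiffel's 'require' (the
    client's obligations); the trailing ones play 'ensure' (the
    supplier's guarantee).  The contract is checked, not hoped for.
    """
    # require: the caller must present a valid request
    assert amount > 0, "precondition: amount must be positive"
    assert amount <= balance, "precondition: insufficient funds"

    new_balance = balance - amount

    # ensure: the routine guarantees its result
    assert new_balance == balance - amount, "postcondition violated"
    assert new_balance >= 0, "postcondition: balance stays non-negative"
    return new_balance
```

This is far short of a proof, but it makes the specification explicit and executable, which is exactly the "broad characteristics exactly known" idea in the quote above.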
[6-7] Your comment "Notably, none of his accomplishments in 1-3 were attained according to the methodology he advocated later in life as the only possible one" draws exactly the wrong inference! It is precisely because of this formative experience that he became such a strident proponent of the use of Mathematics in Programming. As pointed out above, he had to invent a lot of this as he went along and had nothing other than mathematical logic to aid him in his endeavour. Dijkstra actually said he was "slow" to understand the work of Floyd/Hoare/others on program-correctness techniques. But of course, once he got involved, he took it to the next level.
In summary, read Dijkstra, try and get at what he intended to convey rather than the tone/words (which are irrelevant in the broader scheme of things) and put forth effort to understand what has been said/written down. The rewards are great.
> [...] using appropriate pre/post conditions. A good example can be seen in the use of Design-by-Contract syntax in Eiffel.
I agree that these have some promise, but there is a rather wide gap between those and correctness proofs, and I'm not convinced that's a gap generally worth bridging.
> One has to remember that when Dijkstra did these almost nothing that we take for granted today existed
And yet at that time, he was capable of writing an ALGOL 60 compiler in assembly language within a few months. Two decades later, armed with his complete arsenal of program-correctness techniques, he spent weeks writing a toy expression parser in a high-level language, abandoned his original objective along the way, and ended up with an absolute dog's breakfast of impenetrable prose and code: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/E...
You have now shifted goalposts and are arguing about things different from the ones you started with. There doesn't seem to be any more to discuss, and so I will end this with the following notes.
Your original contention that Dijkstra had not written much code (thus insinuating that he was mostly "talk") I have proven false. Incidentally, he wrote some 1318 EWDs totaling over 7700 pages, in addition to his other books and papers (most of them containing some program pseudo-code)! This is indeed impressive output.
I had also pointed out that much of his work is at a "meta-level", trying to get at the nature of programming and to establish that discipline on a sound mathematical basis. Thus it cannot be compared directly to conventional programming; one has to understand the intent behind his writings and then try to practice them honestly in one's own work. He did not claim that his methods were "easy", only that they were a necessity for writing "correct" programs. The degree to which they are necessary and applicable by "ordinary programmers" (i.e. without too much mathematical maturity) can be tuned (hence my example of DbC from Eiffel).

From Mathematics we all know the difficulty of writing and reading proofs. So your expectation that Dijkstra's usage of it should be easy to grasp is fundamentally wrong. It is the nature of the subject itself that demands effort, and there is no getting around it.

Incidentally, while I am somewhat familiar with Wirth's compiler book (written for students), I have not read EWD550 (it seems to have been written as a challenge response and to be quite intricate/non-trivial). They appear to have very different purposes and audiences in mind, and so a comparison between them is not fair.
> Your original contention that Dijkstra had not written much code (thus insinuating that he was mostly "talk")
I could have been clearer about this, yes. I know that he was quite skilled at writing code in his younger years. However, he seems to have largely given up on the practice by the 1970s, and the less he was coding, the more certain he seemed to become about how it ought to be done and taught.
> From Mathematics we all know the difficulty of writing and reading proofs. So your expectation that Dijkstra's usage of it should be easy to grasp is fundamentally wrong.
So the supposedly superior methodology is neither easier to use, nor does it produce easier to maintain programs. It's almost like its superiority consists purely of gatekeeping the programming profession to those conversant with the methodology's shibboleths.
> I have not read EWD550
I should have led with that. As far as I know, it's the longest published program to which Dijkstra tried to apply his methodology, so, to me, the clearest evidence that it does not scale.
You now seem to be trying to argue the inarguable and also trying to play "gotcha" games.
To take the latter first: because EWD550 is intricate and unfinished, is it your contention that Dijkstra was wrong about the methodology he had already demonstrated in other papers and books? This is not an argument; it only calls into question one's own competency and the effort one has put forth to understand what has been written down. I am sure that if one sits down with it for concentrated study for some time, one can understand it. People lose interest in many things that they take up. This is even more true of geniuses, since they have so much going on that once they have understood the gist of something they move on and leave the rest as "an exercise for the reader". They are not obligated to spell out everything for "the reader"; whether "The Reader" likes it or not is quite another matter. So to me your inference about EWD550 seems wrong.
>So the supposedly superior methodology is neither easier to use, nor does it produce easier to maintain programs. It's almost like its superiority consists purely of gatekeeping the programming profession to those conversant with the methodology's shibboleths.
Again your inference is wrong. The "superiority" lies in "proving correctness w.r.t. specification", not in "ease of comprehension". There is no gatekeeping or any other malicious intent involved, merely insistence on a methodology. Mathematical proofs are by nature intricate, tedious, and demanding of concentrated study. That is the nature of mathematical logic, and there are no shortcuts (the famous example of Russell and Whitehead taking more than 300 pages to prove 1+1=2 comes to mind here). If one wants to follow Dijkstra's method to the letter, then one has to "up one's game". If one doesn't want to do that but still wants the same benefits (to a certain degree), then use a simpler approach like DbC, based on the same principles but much more "user-friendly".
To paraphrase a famous quote: "There is no Royal Road to understanding Dijkstra".
Well, don't tell me, tell Dijkstra. I commit after every few lines; on that basis alone he would think I am an absolute moron, not to be allowed near keyboards.
[1] https://zhu45.org/posts/2018/Jan/12/how-to-write-binary-sear...