> I think perhaps some alterations would really be necessary to make this analysis tractable.
In this post, I presented the idea as if it were "easy", but Knuth seemed to be proposing it as a rather large undertaking. I skipped some parts of his original prompt for brevity, but since you bring this up, I can summarize a bit more here. I also found a copy of the address in PDF form online [0], if you want to read the whole thing. This is from the last few pages.
He compared this task to researchers who documented every square meter of a large tract of land to see how different factors affected plant growth. He also mentioned a study of 250,000 individual trees in a rain forest. It's not supposed to be easy.
Yes, we've doubled many times since then, but our power to analyze large piles of data has also improved dramatically.
> I'm not really sure that this exercise would be worth it today
I think it really depends on what kind of system you are going to analyze. He was probably thinking of big systems running a school or business back then. These days there are just so many more types of machines. Most are probably not interesting at all. Maybe some kind of life-or-death devices, though?
> correctness is more important than performance
One neat thing about this kind of lowest-level analysis is that you can probably check on both at the same time.
In Carl De Marcken's "Inside Orbitz" email [1], he has the following item:
> 10. ... We disassemble most every Lisp function looking for inefficiencies and have had both CMUCL and Franz enhanced to compile our code better.
In 2001 there was a series of three panel discussions on dynamic languages [2] that are an absolute goldmine: about six hours' worth of listening, with various luminaries discussing deep ideas and fielding questions from the audience. Knuth is cited several times on different topics. This is also where I learned about the idea of stepping through every line of code you can get to (Scott McKay brought this up in the panel on runtime [3]). You ought to be able to find the other two panels (compilation and language design) from that one. Anyway, they discuss a lot of ideas about performance, for example:
a) code that is locally bad but globally good
b) optimizing for locality and predictability of memory access (David Moon, in the Compilation panel, I think)
c) speculation that a performance improvement could be gained by having an efficient interpreter resident in cache, rather than optimized compiled code (Scott McKay again, in the panel on runtime). Incidentally, I think Kdb+ bears this idea out - at least, I understand that is one of their secrets to performance.
[0] http://www.sciencedirect.com/science/article/pii/03043975919...