The research community was "an arguing community", and knew how to argue "in good ways" (i.e. no personal attacks, only trying to illuminate the issues, etc.) This worked pretty well almost always ... (The weekly meetings were not for being creative, but for discussion ...)
Let me respectfully disagree with your contention. The point that is missed is not what the hardware could do, but what can you do without optimizing. It is very very hard to put on the optimization hat without removing the design hat, and once you've removed the latter you are lost.
The key to the Parc approach was to be able to do many experiments in the future without having to optimize. (There was a second part to this "key" but I'll omit it here)
I should have assumed, given the results that PARC had, that it must have been that way. It's fantastic that I just got the answer to that question from... you.
For my second point -- I definitely didn't mean optimization that way. I guess the word I meant to use was "targeted", as in "targeted for multi-core" but I see your point and respect it fully.
I'm thrilled and deeply, deeply honored to have had this time with you, Dr. Kay. Thank you so much again. If you're ever in the Bay Area on the first Saturday of the month and feel inclined to stop briefly by the 2020 group meeting (which is an attempt at a replacement for the Future of Programming Workshops that used to take place at Strange Loop), we would be over the moon. You can reach out to Jonathan Edwards if you need the details.
The basic principle of both points above is that "problem finding" is the hardest thing in "real research", so a lot of things need to be done to help this ... (this is a very tough sell and even "explain" today -- almost everyone is brought up -- especially in schools -- to solve problems, rather than to actually find good ones) -- and virtually all funders today want to know what problems their fundees are going to solve - so they underfund (to the point of 0!) the finding processes ...
The ARPA/Parc process "funded people, not projects" -- and today this seems quite outré to most.
As you probably know, Jonathan is doing some work with us ...
Yes, I know about Jonathan -- that's why I mentioned him. I didn't want to publish the group's details here, and he's been something like an adviser to the group.
And that group is probably also the reason for my views. I would never qualify to be in HARC, and yet I don't want to be content merely to scorn the state of the industry and the state of the art. I see that there is also interesting research coming from garages and weekend projects. I feel that "problem finding" is somewhat interchangeable with "point of view", and that can come from surprising sources. Funding might be needed for the hardware, though my own experience, for what it's worth, hasn't borne that out. The main component, then, is time, and while weekends aren't much, they'll have to suffice. The last step is to see if we can't get further as a community, giving one another the kind of feedback that was so critical at PARC -- and that is why the group was created.