It reminded me of two unrelated things: Erlang and FFTW.
Erlang due to its error handling philosophy and FFTW due to the following:
Question 4.1. How does FFTW work?
The innovation (if it can be so called) in FFTW consists in having a variety of composable solvers, representing different FFT algorithms and implementation strategies, whose combination into a particular plan for a given size can be determined at runtime according to the characteristics of your machine/compiler. This peculiar software architecture allows FFTW to adapt itself to almost any machine.
For more details (albeit somewhat outdated), see the paper "FFTW: An Adaptive Software Architecture for the FFT", by M. Frigo and S. G. Johnson, Proc. ICASSP 3, 1381 (1998), also available at the FFTW web page. [1]
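The planner idea described in the quote can be sketched in a few lines. This is a toy illustration, not FFTW's actual code or API: each "solver" is one strategy for the same problem, and a hypothetical planner times the candidates on the current machine and keeps the fastest, mimicking FFTW's runtime plan selection.

```python
import time

# Toy stand-in problem: two interchangeable "solvers" for summing a list.
# In FFTW the solvers would be different FFT algorithms/implementations.

def sum_builtin(xs):
    return sum(xs)

def sum_loop(xs):
    total = 0
    for x in xs:
        total += x
    return total

SOLVERS = [sum_builtin, sum_loop]

def make_plan(sample, trials=100):
    """Benchmark every solver on a sample input; return the fastest one."""
    best, best_time = None, float("inf")
    for solver in SOLVERS:
        start = time.perf_counter()
        for _ in range(trials):
            solver(sample)
        elapsed = time.perf_counter() - start
        if elapsed < best_time:
            best, best_time = solver, elapsed
    return best

# "Planning" happens once, at runtime, on this machine's characteristics;
# the chosen plan is then reused for all subsequent inputs of that shape.
plan = make_plan(list(range(1024)))
print(plan(list(range(10))))  # 45, whichever solver won the benchmark
```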
FYI: FFTS [1] in many (most?) cases outperforms FFTW, without runtime calibration. It gets there by using a smarter algorithm. It doesn't have the bells and whistles of more complete libraries (it's restricted to power-of-two transforms for now), but hopefully this will change.
I was reminded of consensus reads for a replicated database value. The independence of the implementations would be at the hardware level rather than the algorithm level. As is typical, Sussman's ideas inspire thought.
Good paper. This type of research lives on under the banner of safety/security through automatic diversification or diversity. Another phrase used in many papers is "moving target." I called it Security through Diversity in my own work. A quick Google shows most of us were similarly [not] creative. ;)
I used diversity extensively in my previous security work, combined with obfuscation and strong interface protection. Best protection against nation-state attacks on software. The simplest example was running requests, a group at a time, on three separate systems with different ISAs and toolchains while checking results. Another form of that I derived was having mutually suspicious parties do different implementations of the same behavior. The approach I used most was just constantly changing implementation details of key services, from the app to the code serving the interface. The interface attempts to hide all details of the implementation plus sanitize the input. The result is indeed a moving target that they rarely hit or expand on.
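The "three separate systems while checking results" scheme can be sketched as result voting across diverse replicas. This is a hedged sketch with hypothetical names, using in-process lambdas as stand-ins for separate machines with different ISAs and toolchains; the point is only the comparison-and-vote step.

```python
from collections import Counter

def run_with_voting(request, implementations):
    """Run `request` through every implementation and take the majority answer."""
    answers = [impl(request) for impl in implementations]
    winner, votes = Counter(answers).most_common(1)[0]
    if votes <= len(implementations) // 2:
        raise RuntimeError("no majority among answers: %r" % answers)
    return winner

# Three "diverse" implementations of the same spec; one is faulty,
# simulating a compromised or buggy replica.
impls = [
    lambda x: x * x,
    lambda x: x ** 2,
    lambda x: x * x + 1,  # faulty replica
]
print(run_with_voting(7, impls))  # 49: the faulty replica is outvoted
```

A single subverted or failing replica changes nothing as long as the others agree, which is the property the diversity is buying.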
Modern IT is expected to be much more efficient in terms of labor and productivity. Unreasonably so, I'll add. The best way to do diversification now is (a) portable software + interfaces, (b) containerized implementations for easy swapping, and (c) automated methods being developed in academia to diversify code without human intervention. Two of these can piggy-back on existing work in OSS and cloud tech. Below is a paper studying some approaches to the third:
In a discussion I had recently, we started talking about metaprogramming, which in practice is only the tiniest and most minimal execution of the implied idea: writing programs that operate on programs. The modern convention is to treat programs as black boxes that only the developers of that specific program can hope to reason about. And so metaprogramming is when a programmer writes, or very carefully uses, a component that operates on the program, but the result is just the program – you can't run the program without the applied metaprogramming, nor can you apply metaprogramming to an arbitrary program and expect it to work. The exception is debugging.
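A minimal concrete instance of "a program that operates on a program" is a source-to-source transform. The sketch below (purely illustrative, using Python's standard `ast` module) parses some source, rewrites it by renaming a function, and runs the result:

```python
import ast

# The "program" we will operate on, held as data.
source = """
def greet():
    return "hello"
"""

tree = ast.parse(source)

# The metaprogram: walk the syntax tree and rename one function.
for node in ast.walk(tree):
    if isinstance(node, ast.FunctionDef) and node.name == "greet":
        node.name = "salute"

# Compile and execute the transformed program.
namespace = {}
exec(compile(tree, "<transformed>", "exec"), namespace)
print(namespace["salute"]())  # hello
```

The transformed program only exists after the metaprogram runs, which is exactly the coupling described above: the program and the applied metaprogramming are not separable artifacts.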
Underneath the suggestion in the paper I see something similar. "Programming" might be an overly specific term. But a call to apply reasoning to systems and computation. Not applying reasoning to specific systems, as a programmer might analyze or test a specific program (what we're already doing), but to systems as a whole. I can see this in other computing developments. Operating systems are the result of reasoning about hardware. The result looks very different from coding directly to the hardware, especially in our mature state. But programs still are binaries with machine code. Before the OS of course was the program itself. And so what would it look like if we applied this kind of reasoning to programs?
But oh, we already apply so much reasoning to programming! But in a certain way. We apply reasoning to compilers, which programmers use as they will. We build libraries and hope they are used well. And then we pull these together to some actual purpose in the program. And we assemble programs into systems. Each step empowers the next layer, but we don't apply (much) software to explore, verify, or extend these layers. Our tests are typically written as binary pass/fail systems, with no generalized attention to WHY something fails – and we can't apply generalized attention because the program is too opaque.
I agree and have proposed the same thing. I was trying to build on my old toolkit, which aimed at RAD with C/C++ compatibility, BASIC safety, and LISP flexibility. I essentially implemented a BASIC-like language, compatible with C/C++ functions, in LISP. It let me have LISP macros and interactive development, with the result auto-generating C/C++ code that reused their compilers. Worked great while I had it. I know the BASIC angle is funny to many. However, it's how I started, and it let me iterate fast with readable, no-blue-screen code. I'd have used a Wirth language if I'd known of them then.
So, the next question was "How to make an environment that allows easier development of high-assurance, portable systems?" My platform was to be a 4GL without the 4GL's problems: a 4GL for highly secure or reliable software. I noticed that portability by itself usually requires a bunch of variations on the same basic thing. LISP macros handled that for me before. So, my concept was to use something like Racket Scheme for the meta layer while building a mockup of the constructs of an imperative language (Wirth's, Go, or C). Transformations for specific systems are encoded as meta transforms that operate on the base program.
Initially, the thing just generates code it feeds through existing compilers. From there, you model the basic constructs (e.g., control flow, stacks) in both a generic form and one optimized for certain situations (or ISAs). Language abstractions are modeled as compile-time transforms on these. Constraints, pre/post conditions, invariants, and so on are added as needed for the next phase: type/interface checking plus automatic production of verification conditions, tests, static/dynamic checks, covert channel analysis, and so on based on the annotated code. Each refinement uses the data in the step before it for a correspondence check (e.g., trace-based). It ends at typed assembly (or LLVM, maybe) modeled in the same system, subject to the same correspondence checks and extra analysis at that level. Each run reuses whatever is already done and unchanged, and supports multicore to reduce run time. Critical checks run in the background as the developer modifies the software.
Written and specified at a high level, with metaprogramming transforming, analyzing, and optimizing it down to the lowest levels. The highest assurance levels, using tools such as Coq or HOL, would be able to leverage the HLL code for easier spec-to-module/code mapping while trusting certified transforms to do the rest. Your thoughts, aside from how much work this might be? ;)
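The generic-form/optimized-form pairing plus the trace-based correspondence check can be sketched in miniature. This is a toy (Python standing in for a Racket-style meta layer, with hypothetical names): a construct has a reference semantics and a target-"optimized" form, and a replay check confirms the transform preserved behavior before it is trusted.

```python
def generic_sum(xs):
    """Reference semantics: the generic, obviously-correct form."""
    total = 0
    for x in xs:
        total += x
    return total

def optimized_sum(xs):
    """The form a transform might emit for a particular target."""
    return sum(xs)

def correspondence_check(ref, candidate, traces):
    """Trace-based check: replay recorded inputs through both forms;
    the candidate is accepted only if every output agrees."""
    for inputs in traces:
        if ref(inputs) != candidate(inputs):
            return False
    return True

traces = [[], [1, 2, 3], list(range(100))]
assert correspondence_check(generic_sum, optimized_sum, traces)
print("transform verified on recorded traces")
```

Each refinement stage would repeat this pattern against the stage before it, so trust accumulates stepwise rather than resting on any single transform.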
There's an important difference between a formally verified system and redundant systems, however. At least, I think so!
A formally verified system is verified "software". Different parts of a redundant system may rely on different parts of the hardware. If some hardware fails, a redundant system can still succeed, while a formally verified system is likely to fail.
Given some model of the way circuits work, it's possible to formally verify hardware as well. Maybe your model doesn't quite match reality and your system will still fail. But redundant systems can also experience simultaneous failures of their components. It would be interesting to know which approach is more effective.
Formal verification is incredibly brittle with respect to requirement changes (one of the sources of problems mentioned in the article). There are compromises, of course; e.g., a small, verified answer-checker, which we can use to select correct answers from a collection of unverified implementations.
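The verified-checker compromise can be made concrete. A sketch, with hypothetical names, using integer square root as the problem: the checker is small enough to verify once, while the implementations it guards can be fast, unverified, and even buggy.

```python
def checker(n, r):
    """Small enough to verify: r is the integer sqrt of n iff this holds."""
    return r >= 0 and r * r <= n < (r + 1) * (r + 1)

def fast_isqrt(n):
    return int(n ** 0.5)  # unverified; float rounding can bite for large n

def slow_isqrt(n):
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def trusted_isqrt(n):
    """Try the unverified implementations; accept the first checked answer."""
    for impl in (fast_isqrt, slow_isqrt):
        r = impl(n)
        if checker(n, r):
            return r
    raise RuntimeError("no implementation produced a checkable answer")

print(trusted_isqrt(100))  # 10
```

When requirements change, only the small checker needs re-verification; the pool of implementations can churn freely, which is exactly the brittleness trade-off being described.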
[1] http://www.fftw.org/faq/section4.html#howworks
Edited for one more comment:
The last paragraph also reminds me of Alan Kay with its focus on the interaction between systems and not their specific logic.