A lot of the terminology here is different just for the sake of being different, I think. That, along with the feeling the author believes himself to be some sort of genius, kinda remind me of Urbit and TempleOS (google at your own peril).
Interestingly enough, all three projects seem to get some level of respect from HN, for reasons that are beyond me.
It wasn't clear to me at first whether choiceless computation is present as a mere design decision or whether it is necessary to the Escher paradigm. I suppose it plays a role in load distribution across different instances of the same computation unit --- by "computation unit" I mean reflexes, gates and circuits. Am I right?
It reminded me of languages like Lucid[1] and Quil[2], which treat non-von Neumann models of computation; Escher, though, seems to focus on the IPC level.
Choiceless Computation is how the "outside" world looks to an Escher program. This is easier to understand if you study gocircuit.org, because it is a concrete product, not just a semantics. There:
A program starts and sees nothingness. Then a host emerges out of nowhere. (A human provisioning engineer must have turned it on in the data center.) Then the program can do something with it (like start a database) or it can wait (indefinitely) for another emergence of a host (before it sets up an elastic DB, say). The point is that objects emerge in your "sight" and they are nameless. The namelessness is the choicelessness. And this might seem like a small difference, but it is huge.
Chomsky tells linguists: try to imagine the world from the newborn baby's point of view, and trust me that the baby is born knowing nothing. The only difference is that the baby sees a "blooming buzzing confusion" (i.e. many hosts are online already). But the connection is that everything is nameless (at first). The baby sees many visual pixels. They have no meaning (i.e. no linguistic names). Later the baby sorts out the confusion and assigns names to all the phenomena in its sight. The same goes for circuit programs. They see a nameless army of live hosts. They are all equally good, hence nameless. Then the program starts purposing them differently (some become DBs, some HTTPS servers, etc.). This is the same as the baby assigning names to the pixels in its sight, until it wakes up one day at age five thinking it understands the world. Ha :)
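The host-emergence story above can be sketched in ordinary code. This is an illustrative toy, not Escher's or the circuit's actual API: hosts arrive on a queue as anonymous objects, and the program purposes each one purely in order of emergence, never by name.

```python
from queue import Queue

emerged = Queue()          # stands in for the provisioning system
emerged.put(object())      # a host appears; it has no name
emerged.put(object())      # another one appears

roles = []
host = emerged.get()       # the program cannot ask *which* host this is
roles.append(("database", host))
host = emerged.get()       # it can only wait for the next emergence
roles.append(("https", host))
print([r for r, _ in roles])  # ['database', 'https']
```

The "choicelessness" is that `get()` offers no way to pick a particular host; any two orderings of equally good hosts are indistinguishable to the program.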
So is this a joke? The readme reads like satire, but it seems somewhat useful and interesting. Is this some guy's toy project where he just makes fun of academia in its documentation? I'm confused.
I am the guy. And this is not a joke. And the fact that I am making fun of academia is not coincidental. The point is that I am using academia's own insights to bridge a major gap. Academics (say, theoretical computer scientists) don't take their next-door neighbors (the linguists) seriously. This is why they can never invent what they want: the mind. If they ever collaborated, the singularity would be long behind us.
Please don't take this the wrong way, but you are coming across as a bit of a crank. Exaggerating the significance of your work is a marker for crankiness, second only to incomprehensibility.
I apologize if it's a little over the top. But being over the top is the only way to get people to really think: out of anger, to prove you wrong. It's just true. But I have no intention of offending anyone.
It looks curious. Perhaps if you published a longish commented program in Escher it would become more clear to me. Now it simply appears as a curiosity.
It doesn't look like a joke. The use of unconventional terms seems to be justified by the vast coverage seen on the references: arts, linguistics, cognitive sciences, just to name a few. It even looks a bit non-academic, in the best sense of the term: it departs from the classical aggregation of buzzwords backed by a properly chosen set of bibliographical references.
That's correct. For instance, I discovered the link to Choiceless Computation AFTER I invented Escher. It is a real mathematical connection, and I only bothered with it because academics will not look at my work unless I shove some terms of their own into it. So I do, and it is real, because you can go and verify that Shelah's paper exactly matches the semantics of Escher. And the conclusion is that Shelah's paper wasn't necessary for my invention. It was necessary to convince an audience of a specific kind. (Not that this is accomplished yet. But it will be. With time.)
Academics might look at your work, but you'll have to make a lot of effort at explanation and at providing context via comparisons with existing and previous work. It may or may not be worth it to you, but we aren't as unapproachable as you think; we even have a conference, called Onward!, for these kinds of ideas.
You never actually explain the semantics of Escher. I think this is why reading the README feels like a great setup with no payoff. Or maybe it's there, just not explained well enough.
For example, there's no comprehensible relationship to me between the "inputs" and "outputs" (if that's even what they are) in the diagrams labeled "project" and "Generalize." It looks a little like a spoof, like saying we combine {animal: cat} and {tail: orange} to get {animal: orange} and have a syllogism. Well, what is mechanically going on?
Also, you should be able to situate the programming paradigm of Escher within the vast universe of programming paradigms that have been explored. Otherwise, programmers are lost. For example, if there's no directionality to the "circle" gates, is that because they are relations, and the connections between the circles are joins? Or are they perhaps declarative constraints? There are many existing programming languages that you could be describing, most of the time, and drawing things as nested circles or giving them wacky names doesn't explain how Escher differs from other programming languages.
So it reads like Wolfram's "A New Kind of Science" -- this is "A New Kind of Programming," but reading it doesn't give me a new way to look at programming, the way it promises to.
Time is my only limited resource, which is why it is poorly presented. Don't forget that alongside Escher (which took 2 years to invent) I am maintaining
https://github.com/gocircuit/circuit
which is a major piece of software that earns me my salary. If you kickstart what I want to be a non-profit software research foundation, gocircuit.org and escher.io, then the doc would be brilliant and interactive in no time flat :)
This seems very interesting - it's like a graph-based computation language! OP - it seems to me that executions of programs in this context are a kind of orientation of the graph, drawing all the arrows in one direction or the other...
So can this run programs in both directions? For example, orienting the NAND gate from "X NAND Y" back to X and Y (i.e. providing X NAND Y as, say, TRUE) would generate the X and Y values that could lead to that output. We could add that functionality to the basic gates (i.e. allow running OR in reverse, so that given X OR Y = 1 it returns {(X=1, Y=0), (X=0, Y=1), (X=1, Y=1)}) and accumulate the possibilities at each step as a tree. Of course, it could take very long to run.
Actually, maybe it would even be possible to simply fill in certain nodes and propagate out to all connected nodes, keeping track of possible inputs. It could then complete the whole graph (or connected component, to be precise) with possible values, given any random subset of nodes filled in. So it could give you all possible executions subject to restrictions on any node's values. That would be awesome!
OP - Does that make any sense? If it does, I may try to implement it and see what kinds of cool things we can do with it.
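The reverse-execution idea above is easy to prototype for small boolean gates by brute force. A minimal Python sketch (the gate definitions and the `reverse` helper are mine for illustration, not part of Escher):

```python
from itertools import product

def reverse(gate, output, arity=2):
    """Enumerate all input tuples that the gate maps to the given output."""
    return [ins for ins in product([0, 1], repeat=arity) if gate(*ins) == output]

OR = lambda x, y: x | y
NAND = lambda x, y: 1 - (x & y)

print(reverse(OR, 1))    # [(0, 1), (1, 0), (1, 1)]
print(reverse(NAND, 0))  # [(1, 1)]
```

Note that OR run in reverse on 1 yields three candidate inputs, not two, since (1, 1) also satisfies it. Accumulating these candidate sets node by node is exactly where the exponential blow-up the comment above worries about comes from.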
To me this seems to be exactly what constraint programming is about. Think Prolog: you define the relationships (the edges in this computation graph), instantiate some values, and then ask the solver to fill in the rest.
Is there anything fundamentally different in your idea?
The language for expressing constraints is what is different, not the meaning. People are confused because they correctly observe that you can do with this language what you can do with any other. But only up to a point of scale. Then the bugs that other languages make you introduce catch up with you, and you can't produce more software because you have to spend too much time fixing bugs in old software. In Escher, every circuit without valves is forever closed as a design: for the same reason, electrical circuits are rarely recalled. Did you ever wonder why that is?
Yes. All the functional programming languages had the right idea (as you point out). But not the right grammar.
Escher has 3 grammar rules (reflex, circuit and valve). All these other languages have much much more. That's the point.
The whole point is that the gate designer decides whether their gate works in various directions (there are usually many more than 2 or 3). And this simply means that if they get a stream of events coming in in the wrong order, they can choose how to "complain": they can stay silent and ignore the broken language sent to them, or they can throw a panic and halt the entire program. You decide. The NAND gate makes little sense in anything but one direction. But a PLUS gate or the REASON gate makes sense in multiple directions. You can read more about this here:
http://www.maymounkov.org/memex/abstract
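A multi-directional PLUS gate can be sketched as a tiny propagator-style cell: whichever two of x, y, and sum arrive first determine the third. This is my own illustrative Python, not the design from the linked page:

```python
def plus_cell():
    """A PLUS 'gate' with no fixed direction: fires once any two valves are known."""
    known = {}
    def put(name, value):
        known[name] = value
        if len(known) < 2:
            return None          # not enough information yet; stay silent
        if 'sum' not in known:
            return ('sum', known['x'] + known['y'])   # forward direction
        if 'x' not in known:
            return ('x', known['sum'] - known['y'])   # one reverse direction
        return ('y', known['sum'] - known['x'])       # the other reverse direction
    return put

put = plus_cell()
put('sum', 10)        # first event: no output yet
print(put('x', 3))    # ('y', 7) -- the gate ran "backwards"
```

A gate designer could equally make `put` panic, or ignore the event, when values arrive in an order the gate doesn't support; that choice is the "complaint" policy described above.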
Yes. Be patient a few more weeks, as I have other obligations too. I will have demos of how you can make your own gesture-controlled robotics at home, using the Escher bindings for the gobot.io library.
You can see a pretty thorough treatment of these ideas in Ptolemy II, at http://ptolemy.eecs.berkeley.edu/ptolemyII/. Ptolemy II offers a variety of underlying computation models to execute the "program graph".
I am glad. The truth is: eventually every mind, in trying to save itself from "repetitive" work, reaches the same conclusion: recursive metaphorical programming. It comes out under different buzzwords ("flow programming", "metaphor mechanics", etc.). They are all fuzzy but roughly correct. The only way to propose an unambiguous theory of first-person consciousness (the theory as communicated to others) is to give a programming language. This supersedes a philosophy paper that no one reads and that is predicated on the author's "knowledge". And if the author is an academic, they can just assume "people don't know math", and people assume "we don't know math", so no knowledge is transferred at all.
How does this compare? What makes it especially useful for reasoning about newer ideas? I've read some of the introduction about it using the PAC model, but can you give an example of how it goes about using that?
Say you compile 2 machine-learning algorithm implementations. If the reverse compiler gives you an interface with a choice between, say, those 2 algorithms, along with a % accuracy rate and a visual representation, this definitely sounds like a working instance of global workspace theory, in that it abstracts the complicated parts into simple, selectable choices, doable in linear time with some statistics.
If gates are functions and their inputs are like function inputs and composing gates is like function composition, then how is this different from a normal programming language ?
It is not intended to be solely a general-purpose programming language. It looks more like a DSL for inter-process communication. By "process" I mean not ordinary OS processes, but computation units written either in Escher or in other languages (at the moment, Go).
It's mostly systems programming research (IMO, at its best, given the poor state of the art nowadays), but at that intersection area with programming languages.
It looks familiar to every language you know, and yet like none in particular. That's the point: it unifies them all as a common denominator, and you have to break out of your conventional thinking to see the differences. Alternatively, you have to try to write many programs in Escher, and then you will gradually start "getting it".
"Escher presents the world in a model called Choiceless Computation (...) Understanding the difference between Turing Machines and Choiceless Computation, while not entirely necessary, sheds much light on the profound difference between Escher and other languages.
(...)
Why you should be excited:
It may seem that Escher is not more than a new semantic to do an old job. But something nearly magical happens when transition to using the Escher semantic—various compiler intelligence improvements that used to be NP-hard become simple and tractable."
Just so there is no misunderstanding: any polynomially bounded choiceless computation (CPT+C) can be carried out by a polynomial-time Turing machine, so choiceless computation does not buy you any extra power, strictly speaking (CPT+C ⊆ P), and it might even be strictly less powerful than P [1].
That is not to say there might not be low-hanging benefits in average-case applications, due to better parallelism or easier use of heuristics, which seems to be what the author is pointing at. So be excited, but not unreasonably so.
"Four beer caps are placed on the corners of a square table with arbitrary orientations...."
Please can you tell us the solution to this puzzle? I cannot find the answer anywhere online.
Suppose two caps (AB) face the same direction, if they're diagonal, then you can flip the diagonal. Regardless of which caps flip, all caps will then have the same orientation.
If the two caps are along a side, then you perform a side flip. If you get the other two caps, or you flip AB, then all caps have the same orientation. If you flip one of the AB caps and one other, you now have a diagonal setup, which we have already solved.
If you have one cap, then flip a cap, you will either have two caps in the same orientation (which we know how to solve) or all caps in the same orientation.
Three caps is the same as one cap in the same orientation.
There are other answers here that suggest it's enough merely to pass through a correct state, without having executed all of the operations. If that's valid, then the wording of the puzzle should state it more forcefully. Even then, the last sentence can't be satisfied: imagine you were competing against the machine; it could simply always select moves that leave one corner in the down position.
From my reading of this puzzle, you can't sample the board to capture the orientation of the caps. Hence, I don't believe there's a way you could guarantee an outcome.
Also, the base case doesn't work. Imagine there was one cap. You can choose to flip it or not flip it. Can you get everything in the up position without sampling the board?
"Can you devise a sequence that ensures they all face up? Down?"
No. Garbage in, garbage out.
The only thing I can think of that might be useful at the moment: repeatedly hammering some combination of sides and diagonals, causing some statistical pattern to emerge. But without knowing the origin states, I'm not sure how to use that.
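Under the "merely pass through a correct state" reading discussed above, blind hammering does work, provided single-cap flips are allowed (as other answers here assume). A Python brute-force check (the corner labeling and this particular 7-move sequence are my own):

```python
# Corners are bits 0..3 of a 4-bit state (0 = up, 1 = down).
# Labeling assumption: bit 0 = top-left, 1 = top-right, 2 = bottom-left, 3 = bottom-right.
SINGLE_TL, SIDE_TOP, DIAG, SIDE_BOTTOM = 0b0001, 0b0011, 0b1001, 0b1100
moves = [SINGLE_TL, SIDE_TOP, SINGLE_TL, DIAG, SIDE_TOP, SINGLE_TL, SIDE_BOTTOM]

def passes_through_uniform(start):
    """True if this fixed, blind sequence visits all-up or all-down at some point."""
    state = start
    if state in (0b0000, 0b1111):
        return True
    for m in moves:
        state ^= m                      # flipping caps is XOR with the move mask
        if state in (0b0000, 0b1111):
            return True
    return False

assert all(passes_through_uniform(s) for s in range(16))
print("every start passes through a uniform state")
```

The guarantee is only that a uniform state is visited at some point; no blind sequence can make the final state all-up for every start, which is the "garbage in, garbage out" point below.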
Start from the diagonal state:
AB
BA
From here, flipping two opposite corners will result in all 4 being the same way. One step before:
AA
BB
Flipping any two adjacent ones will either solve it or get you to the previous step.
So from here, we can see that any state with two up and two down is solvable. The state with all four one way or the other is already solved. The only thing left is the state with just one out of line:
AB
BB
No matter which individual one is flipped, you are in a solvable state.
Yes: at a semantic, linguistic level. Not as a physical substance. At which point you ask: what is a physical substance, formally? Well, that depends on your dictionary of the world. Cognition is relative; people don't get that :) There is no absolute knowledge. There are only the interpretations of individual participants. And our programming languages have to reflect that.
It's true that the writing style here is a little grandiose. However, the vociferous rejection without understanding of new ideas exemplifies the crushing small-mindedness of HN. The author has schizophrenia? Really? It reads like time cube? Honestly? Apparently it's Boilerstrap or get out!
Thanks for the link! It's time to meet my peers. I will say this to all the people who find similarities between my work and that of cognitive scientists or neuroscientists: I have never read a line of text on either of these subjects, nor have I ever read anything written by Chomsky (other than email exchanges). I've only listened to Chomsky's YouTube videos and skimmed the titles of his books. This was suggestive enough.
Not incidentally. The Voynich manuscript simply demonstrates that if you mix concepts at all scales of visual perception (color, texture, page organization, etc.), the document looks unintelligible yet familiar. Incidentally, so do Escher's paintings, and so does my documentation (to you). Now think about why. Think about Gödel, Escher, Bach. Think, think :)
The author seems to have a thing against academia. This confuses me: his work is related to the Choiceless Computation work (which is academic), and though the author never cites it, the same ideas were explored by Sussman and Radul in The Art of the Propagator. Why the hatred of academia? Why the frustration with computer scientists? It's a cool set of ideas that, admittedly, is not in the mainstream of PL research. But there's no reason to hate the mainstream for that: they are working on cool ideas too, and are making good progress. There's value, yes, in working on things off the beaten track, and there are academics who do just that. If the goal is the advancement of human knowledge, we're all in this together, and the author's bile is unnecessary.
I don't really know enough about the maths here to interpret the background to this project, but going on the way it's written, I'm concerned the author may be suffering from schizophrenia. I'd recommend seeking advice from a mental health professional.