9.3(d) was a real pain until I thought about how I'd write a function with type `(Either a b, Either a c) -> Either a (b, c)`. Before that, I kept wanting a negation operator.
(EDIT: Looks like some of the exercises are getting shuffled between versions -- 9.3(d) is the one I meant as of v1.2, but that might change over time. At least the type signature is a clue about which one I mean.)
I'm totally stuck. I would write an exhaustive pattern match to implement that function, but I have no clue how to do that in logic, without using negation.
```haskell
-- Curried form of (Either a b, Either a c) -> Either a (b, c)
f :: Either a b -> Either a c -> Either a (b, c)
f x y =
  case x of
    Left a -> Left a
    Right b ->
      case y of
        Left a -> Left a
        Right c -> Right (b, c)
```
Each `case` introduces a pair of hypothetical contexts, applying case analysis to the `Either` (the OR). You're going to have to duplicate some work, since the `Left a` branch appears twice and the `a` comes from different places.
You know A|B and A|C. Prove A|(B&C) by assuming A and separately by assuming B, then use case analysis (by dragging the first conclusion onto the second).
To prove A|(B&C) assuming B, you have to use A|C and yet another (nested) case analysis.
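The two nested case analyses described above can be sketched in Lean (a hedged sketch; the game's own rule set differs, but the logical shape is the same):

```lean
example {A B C : Prop} (h1 : A ∨ B) (h2 : A ∨ C) : A ∨ (B ∧ C) := by
  cases h1 with
  | inl a => exact Or.inl a      -- first Left branch
  | inr b =>
    cases h2 with                -- the nested case analysis
    | inl a => exact Or.inl a    -- second Left branch: same conclusion, different a
    | inr c => exact Or.inr ⟨b, c⟩
```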
Great work! We are making a VR logic game combining Alice in Wonderland and The Game of Logic, both written by Lewis Carroll. Our preview poster: https://sciencevr.com/img/carrollsriddles_s.png
I'm glad you mentioned LoF because I was thinking about it too, from an old article many years ago. I always get tangled up in the philosophy aspects, but whatever happened to it? Is it simply a different and consistent notation, or is there more to it?
> Is it simply a different and consistent notation, or is there more to it?
I've found that you can use that question to partition people (who have an opinion on the question) into two major groups: Those who dismiss LoF as just another notation, and those who do find something deeper in it or at least claim to.
Pragmatically it's strictly superior to conventional notation: it's more parsimonious and it admits the rule (due to W. Bricken, I believe):
A(AB) = A(B)
Which is unknown in other notations, and permits proofs in LoF notation to be extraordinarily concise and elegant (compared to proofs worked in conventional notation.) It also permits a concise implementation of a SAT solver that has the nice property that you don't have to put your problem into normal form.
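Under the standard Boolean reading of LoF (juxtaposition as OR, the cross/enclosure as NOT), Bricken's rule can be checked by brute force over the truth table; a minimal sketch:

```haskell
-- Boolean reading of LoF: juxtaposition = OR, cross (enclosure) = NOT.
-- Bricken's rule A(AB) = A(B) then reads: A || not (A || B) == A || not B.
lhs, rhs :: Bool -> Bool -> Bool
lhs a b = a || not (a || b)   -- A(AB)
rhs a b = a || not b          -- A(B)

main :: IO ()
main = print (and [lhs a b == rhs a b | a <- [False, True], b <- [False, True]])
```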
Recently, George Burnett-Stuart (not Spencer-Brown), the author of the "Markable Mark" site, has worked out how to encode Predicate Calculus: http://www.markability.net/prospectus.htm
The last time I tried to learn mathematical logic, I realized that logic texts could benefit from syntax coloring of the various languages involved (e.g., metalanguage vs. object language, logic symbols vs. non-logic math symbols, etc.).
I'm not sure whether this is considered pedagogically beneficial, and/or feasible without interactivity (at a cursory glance, it doesn't look like Terence Tao has given any importance to coloring in this project).
What is the difference between "B [assuming A]" and "A implies B"? Is the distinction something that was introduced for the purposes of this game, or is it made otherwise as well?
Conditional assertion (that is, deducibility of B under the hypothesis A) is distinct from assertion of a conditional (deducibility of the statement that A implies B). This is a fundamental distinction in formal logic. See [1].
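In Lean, for instance, the distinction shows up as the difference between having A as a hypothesis in the context and proving the implication as a single proposition (an illustrative sketch):

```lean
variable (A B : Prop)

-- Conditional assertion: B is derived in a context containing the hypothesis A.
example (hA : A) (h : A → B) : B := h hA

-- Assertion of a conditional: the implication A → B is itself the goal;
-- `intro` is the deduction-theorem step that moves A into the context.
example (h : A → B) : A → B := by
  intro hA
  exact h hA
```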
I'd love to see some additional features:
- I'm currently at exercise 9.1, and so far PUSH is always the only operation you can do when dragging from an outer scope to an inner one. Seems pretty fundamental. (To the extent that "pushing" is not even normally mentioned explicitly in textual proofs?) Could this be streamlined somehow, for example by applying PUSH automatically with the drag-and-drop operation from outer to inner scope?
- Instead of a linear "achievement history", keep records of all produced proofs for each lemma. Highlight the one with the smallest number of "rules".
- It seems to allow circular proofs by default. Surely this could be detected using a dependency graph?
- Speaking of dependency graphs, I'd love to explore reverse mathematics using this kind of interactive tool. Maybe add some challenge by counting not only the number of rules, but also the number of axioms/basic rules of inference used, and the size of the whole dependency graph of each proof. Maybe there could be separate record-keeping for different sets of axioms.
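The circular-proof check suggested above amounts to cycle detection on the lemma-dependency graph; a minimal sketch (the graph representation and lemma names are hypothetical):

```haskell
import qualified Data.Map as M
import qualified Data.Set as S

-- Hypothetical lemma-dependency graph: each lemma maps to the lemmas its proof uses.
type Graph = M.Map String [String]

-- True if the graph contains a cycle, i.e., some proof is (transitively) circular.
hasCycle :: Graph -> Bool
hasCycle g = any (go S.empty) (M.keys g)
  where
    go visiting n
      | n `S.member` visiting = True   -- revisited a lemma on the current path
      | otherwise = any (go (S.insert n visiting)) (M.findWithDefault [] n g)

main :: IO ()
main = do
  print (hasCycle (M.fromList [("A", ["B"]), ("B", ["A"])]))  -- circular
  print (hasCycle (M.fromList [("A", ["B"]), ("B", [])]))     -- acyclic
```

(A real implementation would memoize visited nodes; this naive DFS is just to show the idea.)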