Functional Programming, Abstraction, and Names (stephendiehl.com)
122 points by CarolineW on May 20, 2016 | hide | past | favorite | 72 comments


This is a bit defeatist. Parsing the definition in your head is only the first level of understanding you can have about a mathematical structure. You don't really understand something until you can reinvent it (and in particular give a plausible answer to "why these axioms and not others?")

For example, to motivate groups, you could introduce the concept of a symmetry as a mapping from an object to itself that doesn't change its properties, introduce the idea of an isomorphism as a mapping with an inverse (where f and g being inverse means they compose to identity maps), put them together and postulate that a reasonable formalization of a symmetry is an automorphism (isomorphism from an object to itself), note that isomorphisms are closed under composition, and arrive at the definition of a group by considering sets of symmetries closed under finite composition (thinking of identity maps as the composition of 0 morphisms).
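To make the closure step concrete, here's a tiny JavaScript sketch (the names are mine) representing the rotational symmetries of a triangle as permutations; composing any two of them lands back in the same set:

```javascript
// Each symmetry is a permutation: an array sending position i to p[i].
const id = [0, 1, 2];  // rotate by 0°   (the identity map)
const r1 = [1, 2, 0];  // rotate by 120°
const r2 = [2, 0, 1];  // rotate by 240°

// Composition: apply g first, then f.
const compose = (f, g) => g.map(i => f[i]);

// Closure: composing rotations yields another rotation in the set.
console.log(compose(r1, r1)); // [2, 0, 1], i.e. r2
console.log(compose(r1, r2)); // [0, 1, 2], i.e. id — so r2 is r1's inverse
```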

I'm sure there's a similarly conceptual way to motivate monads in functional programming. Hyland and Power have papers on algebraic theories of "effectful" operations and how they give rise to (finitary) monads, as one starting point.


Indeed, the typical wisdom in math is that the best way to understand a group (and the reason we usually care about one) is through its actions.

There's some quote to the effect of: "Just as with men, a group is known by its actions." This was the only reference I could find off the top of my head https://gowers.wordpress.com/2011/11/06/group-actions-i/, but I've definitely heard the sentiment repeated in a couple different texts.

The sentiment expressed in this article might be useful to hear if you're solely interested in learning how to use monad transformers or whatever in Haskell. E.g., I know a fair amount of math, but very little category theory, and manage to use monads just fine without any deep intuition as to why they're an important formalism. But if you want to develop your problem-solving skills, for example, I think "quiet contemplation" of axioms is pretty much always worse than working through several examples and absorbing some good exposition.

Here's a recent article that I found very thought-provoking: http://cognitivemedium.com/invisible_visible/invisible_visib.... They quote some mathematicians discussing their visual intuition for one property of groups.


If I remember correctly, you could forget the "inverses" bit and define a group action with only the monoid corresponding to the group.

So not all the "groupiness" is captured by actions.

Also, Bill Thurston's beautiful "On Proof and Progress in Mathematics" [1] contains several ways to understand the derivative, in addition to it being a look into what a very distinguished mathematician thought of the nature of the mathematical pursuit. Pretty similar ideas, explored in depth.

Edit: Apparently this is linked to in TFA :)

[1]: https://arxiv.org/abs/math/9404236


Even the brightest brain will have been stuck on something which, after the penny has dropped, will seem trivial.

I agree with the author that there is a danger of "dumbing down" concepts which risks casting a lower ceiling on our capacity for lofty thoughts. There is another risk, though, that as someone becomes overfamiliar with a subject, they lose the ability to empathise with people who cannot immediately grasp the same concept. See [0] for a mathematician's reflections on revisiting his own DPhil thesis five years on.

I would counter that making abstract concepts accessible broadens the reach of those concepts, which allows for the greater potential of new concepts being synthesised. Put another way, the more people that have some grasp of an idea, the more humanity will progress.

Using Group Theory (strictly Point Groups) as an example, there are plenty of Chemists out there doing great work building catalytic organometallic compounds who use Group Theory. I'm sure plenty of them wouldn't be able to tell you what a Group was, nor even express what the general basis of a Group is.

So yes, while there is a need to push our capacity to think increasingly abstractly, we should also not be afraid to ruthlessly simplify, as the world is big enough that individuals will derive benefit from diluted concepts for generations to come. What's more, as concepts get simplified, the next generation can short-circuit their learning to achieve greater understanding at an earlier stage in their lives.

As I've always thought: if every human is born roughly equal, thank god we haven't had to "re-derive" making fire every generation otherwise how would we ever have time in our lifespans to learn about quantum mechanics and more.

[0]https://medium.com/@fjmubeen/ai-no-longer-understand-my-phd-...


Indeed. I think the point about allowing people to understand the subject at various stages in their lives is very apt. We apply pedagogical simplification to almost everything we teach children anyways, but it's quite possible that we could be much more far-reaching in our attempts. It would be interesting to see a world where children are taught basic abstractions of group theory.

Mathematics is one of those fields where it is difficult to simplify. Period. But in addition, the practitioners often actively resist this simplification. Mathematics is firmly rooted in proof, but not all mathematics education needs to be firmly rooted in proofs, and for that matter, not all mathematics education must be rooted in mechanistic repetition of solution techniques. There is perhaps a way to endow people with these concepts when they couldn't otherwise arrive at them through rigorous proving.


>> There is perhaps a way to endow people with these concepts when they couldn't otherwise arrive at them through rigorous proving.

Indeed, it would be great if there were such a way. But sadly, without proofs, these concepts are half-baked and are thus harmful, as they give a false sense of understanding and thence the more harmful overconfidence.

I remember the great thought that applies to entire math education (and also to education in general) - "There is no royal road to geometry." [1]

[1] https://en.wikipedia.org/wiki/Royal_Road#A_metaphorical_.E2....


> not all mathematics education must be rooted in mechanistic repetition of solution techniques.

Actually, coming up with proofs often requires creativity. If anything, the emphasis on memorizing formulas and solution procedures comes from an aversion to teaching proofs.


Maybe. Maybe not.

I am coming to believe that computational thinking != mathematical thinking.

I'm able to reason logically in code because I'm able to reason logically about things in the real world and natural language supports that. I was able to code (early teens) and think computationally long before I ever knew there was such a thing as formal logic and truth tables and all that.

And let's consider names. In Ruby I knew what the module Enumerable did before I ever read the docs. Monoid? I don't think so. And I have some math background. What about all the potentially very talented programmers that have a non-math background? I don't buy that we need the beginnings of 3rd-level math to be decent programmers. In fact, I'll go further: I think the programming landscape is too math-oriented. I know someone who had a curiosity about recursion so I started explaining the concept via factorials, as one does. Said person gets a little panic attack. Our default explanations oughtn't involve math. Our names should avoid high-level math concepts divorced from everyday speech and concepts. By all means use aliases if you want, but give the rest of us something to hang our conceptual hats on.


> What about all the potentially very talented programmers that have a non-math background?

If they don't know what something means, they can educate themselves. I learned about monoids in my first course in linear algebra at the university in the first month. My math background at that point was "finished highschool". Just because it has a name that does not appear in everyday language (unlike enumerable) does not mean the concept is difficult to grasp.


> I learned about monoids in my first course in linear algebra at the university in the first month.

You just proved my point.


I think you missed his, though -- which was that any person with a high-school math background can easily grasp the concept of monoids with just a tiny bit of effort.


I see what you mean, but what I meant is that either we get much better at explaining math or we exclude a good portion of our population from being able to program. If you respond with "all you need is high-school math plus more math in college" you're missing my point!

I'm not in any way anti-math.† Imagine that monads were a part of Scratch (the children's programming environment), and I was trying to explain them to my young daughter. If she couldn't get it, should I say to her, "this is only high-school math plus a bit more, come on, don't use the fact that you're 11 as an excuse"? The other day somebody pointed out that at Tufts they have shown that children as young as five can think computationally. This is the level I'm talking about. I'm talking about being able to teach programming to people who are normally (for whatever reason) math-phobic.

(† I took three years of 3rd-level math.)


Could you explain GoF's design patterns to your 11-year-old daughter? Reflection and metaprogramming? Heck, would she be able to understand why the concept of object identity matters for some types (say, any user-defined type in Java) but not others (say, integers)? Are you proposing that we ban those things too?


Re: GoF. Interesting. And GoF got their ideas from the influential philosophical works by Christopher Alexander on architecture, didn't they? (Which I haven't read; maybe I should. Christopher Alexander I mean, not GoF.) So I could talk to her about patterns _in general_, but could I then make the leap to object-oriented patterns? I don't know. Tricky. I remember several light-bulb aha moments going off when I read that work. It'd be amazing to try to instil that type of wonder in her.

Re: Reflection and metaprogramming. I guess you're suggesting teaching her some variant of Lisp? In the Coder Dojo I brought her to, they/we were doing either HTML/CSS/JS or Scratch or Python. And all simple examples. I guess a version of reflection and metaprogramming exists in Python (I don't know off the cuff (I'm a Rubyist) but I presume so).

I had to Google object identity. Is this [http://www.cs.sfu.ca/CourseCentral/354/zaiane/material/notes...] what you mean?


> Re: Reflection and metaprogramming. I guess you're suggesting teaching her some variant of Lisp?

I wasn't talking about Lisp specifically. Many libraries and frameworks written in “mundane” languages like Python and Ruby make use of metaprogramming facilities to great effect. Now, you can't say with a straight face that monoids are scary but metaprogramming is not.

> I had to Google object identity.

In most object-oriented programming languages, every object has a unique identity, which is assigned when the object is constructed, and from then onwards can't be mutated. The uniqueness of these object identities can get in the way when you want to program with compound values, for example, the Java expression `new Point2D(2,3) == new Point2D(2,3)` evaluates to `false`, simply because the objects have different identities.
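The same point in JavaScript (the Point2D class here is made up for illustration): structural equality and object identity come apart.

```javascript
class Point2D {
  constructor(x, y) { this.x = x; this.y = y; }
}

// Two structurally identical objects still have distinct identities:
console.log(new Point2D(2, 3) === new Point2D(2, 3)); // false

// An object is only identical to itself:
const p = new Point2D(2, 3);
console.log(p === p); // true
```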

> Is this [http://www.cs.sfu.ca/CourseCentral/354/zaiane/material/notes...] what you mean?

Yes.


Or even without effort, since all programmers have used monoids all their lives! This situation is very much like knowing about Fido, Spikes and Bones, but lacking a word for “dog”, and thus being unable to talk about a dog without referring to a specific one.


> I am coming to believe that computational thinking != mathematical thinking.

How exactly are you defining “computational thinking”? Is it “running the calculation yourself, as if you were the computer”? “Putting yourself in the computer's shoes”? “Empathizing with the computer”?

> And let's consider names. In Ruby I knew what the module Enumerable did before I ever read the docs. Monoid? I don't think so.

Every technical discipline has a jargon, precisely because their practitioners need to express ideas that can't be conveyed using only everyday language. Programming is not an exception.

And you picked a particularly bad example, because, while there exist mathematical objects whose definitions are very obscure and hard to get an intuition for, monoids aren't among them.

> I know someone who had a curiosity about recursion so I started explaining the concept via factorials as one does.

Recursion is just self-reference. For example, you could've told said person “your ancestors are your parents and their ancestors” - a perfectly good example of a recursive definition.


I'm not against all jargon, I'm advocating for less mathematical jargon and more ordinary language jargon.

If I didn't know about functions your explanation of “your ancestors are your parents and their ancestors” would give me some insight into the nature of recursion but not a whole lot.


> and more ordinary language jargon.

This is a contradiction in terms. By definition, jargon isn't ordinary language. It consists of words that either don't exist in ordinary language (e.g., “metaobject”), or whose technical meaning is different from the ordinary one (e.g., “kind”).

> If I didn't know about functions

Are you talking about mathematical functions or procedures in computer programming? If the latter, I insist: recursion is not limited to procedures, it can be any form of self-reference. For example:

(0) A list is either empty, or has a first element and a tail list.

(1) A binomial tree of rank r has a root element and, for every 0 <= q < r, a child tree of rank q.

(2) The sum of a list of numbers is 0 if the list is empty, otherwise it's the first element plus the sum of the rest.

In fact, one of the major lessons of functional programming is that recursive procedures are naturally defined on recursive data structures - they always go hand in hand.
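For instance, (0) and (2) above transcribe almost literally into JavaScript (a sketch; `cons` and `sum` are my names), and the procedure's shape mirrors the data's shape:

```javascript
// (0) A list is either null (empty), or has a head and a tail list.
const cons = (head, tail) => ({ head, tail });

// (2) The sum of a list is 0 if empty, otherwise head plus sum of the tail.
const sum = list => list === null ? 0 : list.head + sum(list.tail);

const xs = cons(1, cons(2, cons(3, null)));
console.log(sum(xs)); // 6
```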

> your explanation of “your ancestors are your parents and their ancestors” would give me some insight into the nature of recursion but not a whole lot.

It wasn't an “explanation”, just an example. But if you think a transitive closure (namely, of the parenthood relation) isn't a good example of recursion, arguably you don't understand recursion.


I meant jargon drawn from ordinary language as opposed to jargon drawn from technical language.

I don't need your pointless definitions, nor do I need your corrections, your insults neither.


I'm not sure what exactly you found insulting.


Hi. I've been coding for over 20 years. Do you think it likely that "arguably you don't understand recursion"? (I suppose it's possible, there are some pretty awful programmers out there who should never have become programmers but did because their parents made them because that's where the jobs and money are -- I've always seen /proper/† programmers like the craftsmen of old with guilds and folk-lore and accumulated wisdom and who treat the craft like a vocation.)

Perhaps where we were at cross-purposes is that when I said recursion I meant recursive procedure but you were thinking of recursive data structures (and possibly also of recursive procedures). Why I contradicted you was that I didn't think your parent/ancestor example was a good explanation for recursive procedures but, yes, I now see that it is a good explanation of a recursive data structure. In fact, as you well know, in a tree data structure we call the node upstream from the current node the parent, and the node(s) downstream the child(ren). Your dismissal of my knowledge felt like an insult. I probably shouldn't be so thin-skinned.

† I know, I know, who am I that I get to decide who is proper or not. But I once gave grinds to this guy who was forced to do computing by his folks and he had no interest and I couldn't get him interested and he didn't seem to be able to wrap his head around simple programming constructs. Maybe more a reflection on my ability to convey knowledge than his ability, sure.


> Hi. I've been coding for over 20 years. Do you think it likely that "arguably you don't understand recursion"?

If you thought the only recursive thing in existence is recursive procedures, yes, arguably you didn't understand recursion. (Which is fine, by the way - you can be a very successful programmer without understanding recursion.)

> Perhaps where we were at cross-purposes is that when I said recursion I meant recursive procedure but you were thinking of recursive data structures (and possibly also of recursive procedures).

When anyone says recursion, I understand recursion. Limiting your idea of “recursion” to recursive procedures is like limiting your idea of “black” to black T-shirts.

> Why I contradicted you was that I didn't think your parent/ancestor example was a good explanation for recursive procedures but, yes, I now see that it is a good explanation of a recursive data structure.

My parent/ancestor example was an example of a recursively defined relation, not a data structure. (I deliberately picked an example that most programmers wouldn't relate to code.) In Prolog it could be written:

    ancestor(X,Y) :- parent(X,Y).
    ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
Where `parent(X,Y)` means “Y is a parent of X”, and `ancestor(X,Y)` means “Y is an ancestor of X”.


Without formal methods and math, what does programming do to reason about the mounting complexity of modern systems?


Better building blocks. Better programming tools. Understanding the craft of software engineering better.

To use a lever do I have to understand physics?


If you are building a bridge or a skyscraper, yes, you absolutely do. If you are building a house for your dog, then no.


TDD obviously. :-p


Yes and no to this article. Yes, abstraction is important, and yes, the point of mathematical abstraction is not to be vague, but to create a level of expression where one can be precise.

No, the point of abstraction can be to be vague, and to delimit the boundary of a concept that is not quite clear (yet). It is not a mathematical abstraction then, but something you are experimenting with to find the right kind of laws that are supposed to govern that concept. Most real-world programs contain many such concepts for which either no good mathematical abstraction exists, or for which nobody has yet had the time and money to find a good mathematical abstraction.


If you are interested in learning about Algebraic structures and group theory from a very practical standpoint I highly recommend

https://github.com/fantasyland/fantasy-land

Also, can anyone show me a practical use case of the reader monad? Can't you just use partial functions?

If I am not mistaken, isn't the reader monad just:

  foobar.config({'name': 'hello'});
  foobar.run({'age': '34'});


The reader monad is like read-only thread-local storage (State or ST would be read/write thread-local storage). It provides a command to get the "current state":

  function f() {
    var m = getState().chain(state => {
      // ...
    })
    return m;
  }
or with generators acting as a do-syntax replacement:

  var f = wrapped(function* () {
    var state = yield getState();
    // ...
    return;
  });
At the end you get an object that contains an unexecuted sequencing of computations that you can run with any state. For example you could run it from a HTTP route handler with the current user contained in the "state":

  var m = f();
  m.runWith({currentUser: request.user})
and now you have access to the user without passing it around everywhere.

All the other analogies in this thread are wrong in terms of what the real API looks like. The real API looks almost exactly like Promises, with `chain` instead of `then`.

This in turn also explains why monad transformers are so important. Having access to just the current user w/o passing it around isn't very useful if e.g. you can't do IO, but having both - now we're talking!


I do not understand how the reader monad is different from a promise.

It seems the reader monad just delays execution of some code until you call 'runWith', and you can define state variables throughout your code before calling runWith.


That's precisely what it does, yes. It's different from a promise because the implementation is different (everything is done synchronously, and getState() returns an action object which the implementation then works on). The result of doing runWith() is also immediately available (not possible with async IO):

  var result = m.runWith(...)
Other than that it's almost exactly the same. And so are other monads. The Array monad (that is, if JS arrays actually had that method, which they don't):

  var m = [1,2,3]
    .chain(x => [x, x * 10])
    .chain(y => y < 2 ? [] : [y])
result: [10,2,20,3,30]

Same API (monad), totally different functionality.
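Since JS arrays don't actually have the method, here's a standalone `chain` sketch that does what's described — map, then flatten one level:

```javascript
// chain = map the function over the array, then flatten one level.
const chain = (xs, f) => [].concat(...xs.map(f));

const m = chain(chain([1, 2, 3], x => [x, x * 10]),
                y => y < 2 ? [] : [y]);
console.log(m); // [10, 2, 20, 3, 30]
```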


So monads are just a way to define an API?

So is anything like this a monad:

  values = ['coffee', 'water']
  afterWebSocket = values.sendToWebSocket()
  afterWebSocket.compute(eachValue => eachValue * 2)

Is that it?


It has to be the chain method and it has to act in such a way that you pass a function that returns an object of the same type.

For array chain takes a function that returns an array. For promises, chain takes a function that returns a promise object. For Reader, chain takes a function that returns another reader object.

Here is a complete class Reader:

  class Reader {
    constructor(computation) {
        this.computation = computation;
    }
    runWith(data) {
        return this.computation(data)
    }
    chain(readerReturningFn) {
        return new Reader(data => readerReturningFn(this.computation(data))
                                  .runWith(data))
    }
    static of(val) {
        return new Reader(data => val)
    }
    static getState() {
        return new Reader(data => data)
    }
  }

Here is how you can use it:

  var r = Reader.of(5).chain(x => Reader.getState().chain(y => Reader.of(x + y)))
  r.runWith(2)
  > 7
For it to be useful, you want ReaderT, a monad transformer which takes another monad (such as Future) and creates a new monad that provides functionality from both. This gets fairly complicated fairly quickly :D But the result is also a monad, which means it will have the same familiar `chain` based API.

And yes, monads in programming are this simple. Mathematical monads are a wider generalisation of computing monads (just like Iterable would be something more general in the real world, not "things with a next() method that acts like this"), but in terms of programming it's a shared API (chain) that acts in a certain way (chains to another object of the same type and flattens one level, whatever that means for the concrete implementation).


So it's all just about types?

More like a spec that, if people followed it, would make the world a better place?

What is a monad transformer?

I understand your code, but fail to see why it's so useful :(


Yeah, it's about the type / interface / protocol. But they really are very, very useful. There are just so many problems that can be solved with it.

For example there can be a GraphQL monad. Instead of Promise.all you would use GraphQL.parallel(otherMonads), and the implementation can transform that to a single request to the server instead of multiple ones.

Similarly there can be an ORM monad that avoids the n+1 queries problem. parallel(Users.get(id)).chain(users => ...) will transform the parallel query to a single SELECT ... IN query.

Also, have you tried passing the current user around in server-side code, everywhere? It's really tedious to do, and Reader solves that!

A monad transformer is (more or less) a function that takes a monad class and returns a new monad class (yes, that's the class object, not an instance). So if for example Future were a monad, and you wanted to add Reader functionality, you would create a new class FutureWithReader:

  var FutureWithReader = ReaderT(Future)
Now you can use this monad to do both getState() and async IO operations within its chained functions.

  function getUserPosts() {
    return getState()
    .chain(state => Posts.getForUserId(state.user.id)) // cheating a bit here
  }
in the route handler:

  getUserPosts().runWith({user: req.currentUser})
  .fork(posts => res.json(posts))
The cool part is you can use getState() at any time to get the state. This will work no matter how deep your functions are: the example above is just with a single function, but that function can call another, which calls another, which calls getState() to get the user. But there is still no need to pass that user through all those calls. And if you use something like a State monad, you'll even be able to modify the contents of the state.

p.s. Future implementation, based on robotlolita's minimal Future class: https://gist.github.com/spion/b433a665ca51d6c38e9f


hey spion !

I read your post and sent you some messages on gitter, it seems you do not use gitter a lot :(


Sorry about that.

Here is an implementation of ReaderT, how it can transform a Future to provide "local readonly state", and a usage example:

https://gist.github.com/spion/b433a665ca51d6c38e9f

Since it's JS, I also added automatic lifting, just to annoy Haskell programmers ;)


Reader fulfills a bunch of OO patterns. I like to think of it as "Environment", "Capability", "Configuration", etc., and is related in a way to Dependency Injection.

Basically, you have some computation/object that isn't complete without some "setup info" which, from the point of view of the code being written, is "just there". Like if you had a program that needed access to a particular interface, but you didn't want to manually pass it to every function that cares about it; instead you'd make an object with that interface as its state and write your program as methods on that object. That's just Reader in disguise.

In fact, C++ methods are basically in the Reader monad; they all have an implicit "this" argument which the compiler plumbs around for you.
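A sketch of that "interface as object state" shape (the class and method names are invented): every method reads the setup info through `this`, which the language threads around implicitly, just as Reader threads its environment.

```javascript
class ReportPrinter {
  constructor(config) { this.config = config; }     // the "environment"
  header() { return `== ${this.config.title} ==`; } // read via implicit `this`
  print(body) { return this.header() + "\n" + body; }
}

// The config is supplied once; no method needs it as an explicit argument.
const printer = new ReportPrinter({ title: "Sales" });
console.log(printer.print("all good"));
// == Sales ==
// all good
```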


Interestingly enough "Environment" is another name for the Reader Monad.


Well, a 'reader' monad is a monad over a Function[A, B]. So yes, when you say 'the reader monad', what you (probably?) mean is the instance it operates on, which would be a function.

You use the reader monad to thread the same input to multiple functions that may also take additional input, and also have their own outputs.

You have to keep in mind that since Monads are so common and useful, many languages have syntactic support for them. C#'s LINQ query syntax, scala for expressions and haskell 'do' are all 'monadic comprehension'.


So a Reader monad is similar to class inheritance, where you provide a dependency (like an inner class).

The resulting class can also have other variables and inputs.

Is that a correct analogy?


I think a better analogy with OO languages would be a Function object for which you've provided one of the arguments in its constructor.

Inheritance is more similar to monad transformers.


Yes, you can just use partial functions.

But partial functions are not always as clean as you think they are. When composing with pure functions for example, you'll save a lot of boilerplate lambdas if you use the monad.


I wrote a simple post showing the 3 main ways to pass state. Check it out! http://deliberate-software.com/haskell-state/


The Reader Monad lets you abstract away local state. You're on the right track with partial functions, but using a monad is cleaner (at least in Haskell...and I'd guess anywhere else). Essentially using the monad instead will let you remove all of the redundant function application you'd have to do otherwise. If you've managed to abstract this step away into a library, take a look at your abstraction. It's probably equivalent to Reader.


And check out ramda-fantasy while you're there!


I don't think this is on the right track. Obviously, learning is a matter of personal preference and experience, but trying to teach someone about groups by explaining the definition of a group defeats the whole purpose of even studying groups in the first place. The power of Group theory, and of mathematical abstraction in general, is that it allows us to generalize and extend existing concepts. I agree, that it would be ridiculous to rename all "groups" to "clocks", just as it would be ridiculous to rename all rectangles to squares. But for a student who has never seen a rectangle that is not a square before, it would be unfair and counterproductive for you to ask that student to put aside the example of a square when studying rectangles. For teaching purposes, it is rarely wrong to draw analogies between the generalized object and its everyday analogue.


I like using moves on Rubik's Cube to convey an example of a group. The idea of an identity becomes one of cancellation, there are obvious subgroups, etc.



I think the question is why explicitly represent these mathematical concepts in a programming language. Since they're difficult to explain, the cost of including them is high, and they don't seem to have compensating benefits that are worth the cost.


Interesting take but I disagree with one point:

> This style of designing abstractions is often quite foreign in programming in the large, and other schools of thought (see Gang of Four) actively encourage weaving cryptic metaphors and anthropomorphising code as a means to convey structure.

The GoF didn't try to anthropomorphize, all they wanted to do is put names on things. Sometimes, these names have some meaning (e.g. Factory pattern) and other times, they have none (e.g. the Flyweight pattern).

The important concept that the GoF book introduced was to name things. What word was used for these names matters little.

In other words, similar to what the author of this piece is trying to achieve (and he does so pretty convincingly).


GoF didn't try to anthropomorphize, but they ended up doing it anyway. The things they sought to name were simply tools for humans. It's the programmer's equivalent of a book about hammers and screwdrivers. It's valuable for learning how to use tools to drive fasteners, but without the existence of such things the information becomes about as actionable as any other anthropological piece.


> other times, they have none (e.g. the Flyweight pattern).

They could've called it “handrolled hash consing”, which is exactly what it is.


I don't think that's much better than Flyweight in terms of intrinsic meaning. A little better, since it breaks down into hash + consing, but having not heard of either before, I wasn't able to easily guess what it was despite knowing what hashes and cons cells are. Of course, like everything else in Lisp it predates GoF, so I suppose it would have been nice to reuse existing jargon... But maybe a better name would be "memoized construction".


“Memoized construction” is a very good name, indeed. It's actually better than “hash consing”, because it isn't tied to a particular implementation strategy. But I think the “handrolled” part is also important, to stress that it isn't something the language implementation gives you for free.


This reminds me of a related question about naming things in programming which I wonder about: should the names of functions and types be intrinsic to the object itself (its inherent properties), or should they be extrinsic to it, describing the domain of use?

The first option (intrinsic names) has the advantage of reusability; the other option (extrinsic names) has the advantage of readability for a particular application.


I don't know how to read the monoid equations. <> to me means not equal. Is that what they mean in the article?


<> is a placeholder for an operation (a function, really) taking two inputs, mempty is a placeholder for a special value, and x and y are placeholders for arbitrary values. If you can find something that fits the pattern, then it is a monoid. Such as where <> is addition, mempty is 0, and x and y are numbers. Or where <> is string concatenation, mempty is the empty string, etc.
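The two instantiations of the pattern mentioned above can be written out directly in Haskell. A small sketch (using the standard `Sum` wrapper for numbers under addition, since Haskell won't pick between + and * for plain numbers):

```haskell
import Data.Monoid (Sum(..), getSum)

-- Numbers under addition: <> is +, mempty is 0 (via the Sum wrapper).
addExample :: Sum Int
addExample = Sum 2 <> Sum 3 <> mempty   -- Sum 5

-- Strings under concatenation: <> is ++, mempty is "".
concatExample :: String
concatExample = "foo" <> mempty <> "bar"  -- "foobar"
```

Both instances satisfy the same equations from the article; only the meaning of `<>` and `mempty` changes.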


Thank you.


> At this point, I didn’t have much else to say.

Maybe because I started studying pure math later in life, and had to be more conscious about it, I have thought a lot about what it means to understand and think well about mathematical concepts. I think the author misses the point of the question, "What is a group, actually?" after giving the formal definition of a group. The student was asking, "How am I to understand groups?" And more generally, understand mathematical concepts. Here are some tools I learned for understanding groups:

- The informal definition. What's a group trying to model?

- The formal definition.

- Standard examples. What's the simplest? What are the "canonical examples"? Mathematicians love this question. See, for example (more links inside):

http://mathoverflow.net/questions/3242/canonical-examples-of...

http://mathoverflow.net/questions/4994/fundamental-examples

- What happens if we remove parts of the definition? Inverses? You get a monoid. Identity, too? A semigroup. What if you add additional properties? You get things like commutative groups, rings, etc. You'll want examples of these other structures, too.
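This tower of structures (semigroup, monoid, group) maps directly onto Haskell typeclasses. A sketch, using primed names to avoid clashing with the Prelude's `Semigroup`/`Monoid` (note that a `Group` class is not in base; packages such as `groups` define something similar):

```haskell
-- Associativity only: a semigroup.
class Semigroup' a where
  (<+>) :: a -> a -> a

-- Add an identity element: a monoid.
class Semigroup' a => Monoid' a where
  mempty' :: a

-- Add inverses: a group.
class Monoid' a => Group' a where
  invert :: a -> a

-- Integers under addition satisfy all three.
instance Semigroup' Integer where (<+>) = (+)
instance Monoid'    Integer where mempty' = 0
instance Group'     Integer where invert = negate
```

Each class adds exactly one piece of the group definition, so removing a law from the math corresponds to moving one level up the class hierarchy.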

- The motivating history of groups. Who defined them, and why?

- When mathematicians encounter a new object, their first instinct is to classify it. Can we classify all groups? (No.) Are there subclasses of groups that can be classified? Yes! Finite simple groups, finitely generated commutative groups, and Lie groups are all hugely important examples.

- Related to classification, what are the general properties of groups? What can you prove about them? This helps develop an intuition for "how they behave".

- The goal of math is to prove theorems. What are groups good for? What can you prove with them? This sets group theory in relation to other mathematical subfields.

- As mathematical objects go, groups are relatively concrete, but for many objects, it can be a challenge to describe actual examples. How do you describe examples of groups? Tables, presentations (generators and relations). When do two presentations describe the same group? (Hard.)

- What constructions exist on groups? Products, quotients, semi-direct (twisted) products, free products, fibered products, etc. Category theory suggests we should study not only groups, but relations between them. Groups form a category. What kind of category? What universal constructions hold in the category of groups?

- What are the kinds of groups? Commutative, finite, finitely generated, Lie, matrix, simple, etc. All the above questions apply again! Canonical examples, history, informal definition, constructions, classifications, categories, etc.


I can't understand the difference between the definitions of Identity and Invertibility - can someone explain?

I.e.:

Identity For any element a in G there is an element e where e⋆a = a⋆e = a.

Invertibility For each element a in G there is an element b where a⋆b = b⋆a = e, where e is the identity.


Identity: there exists one unique element e such that for any a it holds that e⋆a = a⋆e = a. This e is the same no matter what a you pick.

Invertibility: given any a, there is b such that a⋆b = b⋆a = e. For two different a's you get two different b's.

As an example, let G be the set of all strings and let the operation be appending. There is an element e == "". For any string a, if you append the empty string on either side, you get the original string back. However, there are no inverses: you cannot append two strings to form the empty string. So strings with appending do not form a group.

Group example: let G be a set of all integers and the operation is addition. Then e is 0, as for all a it holds that a + 0 = 0 + a = a, and for each a you can find -a such that a + (-a) = (-a) + a = 0.
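The integer example above can be spot-checked mechanically. A small sketch that verifies the identity and invertibility laws over a sample of integers (a sanity check over finitely many values, not a proof):

```haskell
-- A small sample of integers to test against.
sample :: [Integer]
sample = [-5 .. 5]

-- Identity law: a + 0 == 0 + a == a for every sampled a.
identityHolds :: Bool
identityHolds = all (\a -> a + 0 == a && 0 + a == a) sample

-- Invertibility law: a + (-a) == (-a) + a == 0 for every sampled a.
invertibilityHolds :: Bool
invertibilityHolds = all (\a -> a + negate a == 0 && negate a + a == 0) sample
```

The analogous invertibility check for strings under appending fails for every non-empty string, which is exactly why strings form a monoid but not a group.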


The integers >= 0 have 0 as the identity: 0 + n = n + 0 = n. But they don't have inverses, because e.g. there's no integer n >= 0 such that n + 1 = 0.


I think you mean the naturals.


People disagree about whether 0 is natural though. Better not to use the word at all :D


I wanted to avoid an argument about whether 0 is a natural number or not.


OP said "integers >= 0". Not much to "think" here.


It's about where "e", the identity element, is placed - in the "identity" rule it is one of the terms of the operation (e * a, a * e) while in the "invertibility" rule it's the result of the operation.

If you want an example, think Z (integers) and "+": there is an element of Z, namely zero (0), where the identity rule tells us that x + 0 = 0 + x = x, and the invertibility rule says that for any x in Z, there is a y in Z such that x + y = 0.


Think of identity as neutral (not good, not bad). Add neutral to good or bad and they remain the same.

Invertibility is existence of Yin for a Yang, superhero for a supervillain, north pole for a south and combining them creates a neutral.


On my Android (Chrome) content looks like this: "For all \(a\), \(b\) and \(c\) in \(G\), \((a \star b) \star c = a \star (b \star c)\)." Maybe a javascript issue?


Sometimes there's no need for overly general definitions. Every group can be described as a group of transformations (Cayley's theorem), and likewise for monoids.
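The monoid half of this claim can be sketched in a few lines of Haskell: embed each element as the function "append me on the left", so that composition of those functions mirrors the original `<>`. This uses the standard `Endo` wrapper, whose monoid is function composition; `rep` and `rec` are names chosen here for illustration.

```haskell
import Data.Monoid (Endo(..))

-- Cayley-style representation: an element becomes the transformation
-- "append me on the left". Composing two such transformations mirrors
-- <> in the original monoid.
rep :: Monoid m => m -> Endo m
rep x = Endo (x <>)

-- Recover the original element by applying the transformation to mempty.
rec :: Monoid m => Endo m -> m
rec (Endo f) = f mempty
```

For example, `rec (rep "ab" <> rep "cd")` recovers `"abcd"`, the same result as `"ab" <> "cd"` in the original string monoid.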



