Confessions of a Math Idiot (funcall.blogspot.com)
72 points by blasdel on July 30, 2009 | hide | past | favorite | 44 comments


Learning math on the internet is hard. Different people use different notations and assume different backgrounds.

I encourage the poster to read and learn more about exterior algebras! Once you've seen Stokes' theorem, the Hodge decomposition, Maxwell's equations, etc. written using them, you'll never go back, and those old-school t-shirts with "and God said..."[1] will make you cringe.

I hope/expect to see them taught in basic calculus and physics, but it takes generations for things to trickle down.

[1] http://images6.cafepress.com/product/96599546v11_350x350_Fro...


see them taught

I think it was done that way once, but somewhere around the seventies people dropped this or moved it to advanced courses. You can't immediately solve a transformer in differential forms the way you can by mindless symbolic manipulation in the standard formalism.


I was chagrined to find myself in this very same position. Sadly, I have not yet had the time to track down the resources necessary to "re-educate" myself. Computer science has given me a new love for math that I didn't even possess during my education, but I know of very few ways to express that enthusiasm.


When I first saw this, it was the beginning of the beginning:

6

28

496

8128

33550336

8589869056

137438691328

2305843008139952128

2658455991569831744654692615953842176

191561942608236107294793378084303638130997321548169216

13164036458569648337239753460458722910223472318386943117783728128

14474011154664524427946373126085988481573677491474835889066354349131199152128

23562723457267347065789548996709904988477547858392600710143027597506337283178622239730365539602600561360255566462503270175052892578043215543382498428777152427010394496918664028644534128033831439790236838624033171435922356643219703101720713163527487298747400647801939587165936401087419375649057918549492160555646976

141053783706712069063207958086063189881486743514715667838838675999954867742652380114104193329037690251561950568709829327164087724366370087116731268159313652487450652439805877296207297446723295166658228846926807786652870188920867879451478364569313922060370695064736073572378695176473055266826253284886383715072974324463835300053138429460296575143368065570759537328128

54162526284365847412654465374391316140856490539031695784603920818387206994158534859198999921056719921919057390080263646159280013827605439746262788903057303445505827028395139475207769044924431494861729435113126280837904930462740681717960465867348720992572190569465545299629919823431031092624244463547789635441481391719816441605586788092147886677321398756661624714551726964302217554281784254817319611951659855553573937788923405146222324506715979193757372820860878214322052227584537552897476256179395176624426314480313446935085203657584798247536021172880403783048602873621259313789994900336673941503747224966984028240806042108690077670395259231894666273615212775603535764707952250173858305171028603021234896647851363949928904973292145107505979911456221519899345764984291328

Perhaps OP should have studied number theory. No number theorist has ever complained about the quality of his teacher. It's just you and all that ever was and all that ever will be. Now go discover.
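For anyone wondering, those are the perfect numbers. Euclid and Euler showed that every even perfect number has the form 2^(p-1) * (2^p - 1) where 2^p - 1 is a Mersenne prime, so the list above can be regenerated mechanically. A minimal Python sketch (the function names are my own):

```python
def is_prime(n: int) -> bool:
    """Trial division; fine for the small Mersenne candidates used here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def perfect_numbers(count: int):
    """Yield the first `count` even perfect numbers via the Euclid-Euler theorem."""
    p, found = 2, 0
    while found < count:
        mersenne = 2**p - 1
        if is_prime(mersenne):
            yield 2**(p - 1) * mersenne
            found += 1
        p += 1

print(list(perfect_numbers(5)))  # [6, 28, 496, 8128, 33550336]
```

Naive trial division only reaches the first several entries in reasonable time; the 314-digit number above already corresponds to p = 521.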


See also Sussman's talk on how terrible math notation is:

http://video.google.com/videoplay?docid=-2726904509434151616...


Sussman's critique of mathematical notation is hollow. The point of good notation is to hide the details while conveying the high level compactly and succinctly.

Lagrange's equations written the way he derides are a perfect example. His need to think about what spaces the symbols live in based on their context in the expression is typical when translating math to code. He should be talking about what kind of type inference engine one needs to accept the original notation and end up with his positional notation internally.


Actually, the notation is confusing even to professional mathematicians. The notation used in differential geometry is the archetype of this phenomenon; it's sometimes incredibly hard to work out what the simple-looking notation means.

In fact, differential geometry has been described as the study of objects "invariant under a change of notation" playing on the lack of clarity in this field, along with the standard description of a tensor as an object "invariant under a change of coordinates".

See, for example, the introduction to this piece: http://www.math.jhu.edu/~yrubinst/RiemGeom/Thesis-pp-23-30-a...


Einstein joked about this http://books.google.com/books?id=U2mO4nUunuwC&q=%22I%20h... and Penrose has made an elaborate prank out of the situation too: http://abstrusegoose.com/130.

That said, the notations here really correspond to different ways of thinking; they are important and shouldn't be gotten rid of.


Maybe, but a notation arrived at by an accretion of historical accident is not the way to introduce this stuff to beginners, inertia be damned.

Edit: Notation


"At school Feynman approached mathematics in a highly unconventional way. Basically he enjoyed recreational mathematics from which he derived a large amount of pleasure. He studied a lot of mathematics in his own time including trigonometry, differential and integral calculus, and complex numbers long before he met these topics in his formal education. Realizing the importance of mathematical notation, he invented his own notation for sin, cos, tan, f (x) etc. which he thought was much better than the standard notation. However, when a friend asked him to explain a piece of mathematics, he suddenly realized that notation could not be a personal matter since one needed it to communicate."

The story is either in "Surely You're Joking, Mr. Feynman" or "What Do You Care What Other People Think?". I forget which stories are in which book. Both are great though.


I thought my symbols were just as good, if not better, than the regular symbols -- it doesn't make any difference what symbols you use -- but I discovered later that it does make a difference. Once when I was explaining something to another kid in high school, without thinking I started to make these symbols, and he said, "What the hell are those?" I realized then that if I'm going to talk to anybody else, I'll have to use the standard symbols, so I eventually gave up my own symbols.

-- Surely You're Joking, Mr Feynman (http://www.gorgorat.com/)

That said, I'm inclined to say that at some point it makes sense to break backwards compatibility in favour of a better system.


Good teachers do try to use consistent notation, but they also understand the need to communicate with the rest of the world and so end up having to teach historical accidents.

Here's a fun one: pi is wrong! http://www.math.utah.edu/~palais/pi.pdf


So raise your kids in Esperanto? Or does it really matter that much in the end?


I thought someone might try that tack. Math is completely decoupled from its representation, so you can use whatever notation you want and not lose anything. I'm not gonna guess to what extent linguistic communication is bound up with its representation, but it's certainly more than math is, meaning that meaning can be lost. So your analogy doesn't work. Also, since mathematical notation is a tool primarily to facilitate mechanical transformations, and only secondarily for communication, you should use whatever lets you think biggest.


An excellent point as long as you do not need to actually communicate your results to anyone. As soon as you do, it becomes precisely as bound as language is.


So raise your kids in Esperanto?

I hope not.

http://www.xibalba.demon.co.uk/jbr/ranto/

A Nobel laureate in physics who grew up in a non-English-speaking country recommends learning English as a key to physics research.

http://www.phys.uu.nl/~thooft/theorist.html


The author of that article is well known in the Esperanto world for being the author of that article... and nothing more ;)

A true expert in Esperanto, the linguist and psychologist Claude Piron, debunked that article years ago:

http://claudepiron.free.fr/articlesenanglais/why.htm


Sussman's critique of mathematical notation is hollow.

I don't do enough math to have a useful opinion, but it strikes me that something as simple as 1 + 1 = 2 is ambiguous (does f = x+x or x+1?) and using unambiguous notation would be tedious and useless.

Unless you're trying to turn it into a program. I sometimes think of programming as the opposite of math.


type inference

Guy Steele's ‘Fortress’ language is headed in that direction.


Thanks for the link; it's a shame Google Video makes text unreadable though.


Try putting it on Full Screen and hit Pause as necessary. I was able to read most of the text, though it wasn't always easy.


Why does he say that the arity is different for derivatives and integrals? Isn't there an arity 1 symbolic integration? If you were numerically computing an integral over a range, shouldn't you be evaluating the derivative at a point? These don't have the same arity, but at least the ideas are parallel.


I'm a math idiot, too, but I know that he's confusing indefinite and definite integration. Definite integration returns a number; indefinite integration returns a function. Differentiation does undo indefinite integration.


> ... I know that he's confusing indefinite and definite integration.

Yes he is. The real solution is to banish the term "indefinite integral" from the language, and substitute "antiderivative". We should also not use the integral sign for antiderivatives, but unfortunately there is no well known alternative. :-(

Using the same terminology and symbols for antiderivatives as for (definite) integrals is confusing, as here. It also makes the Fundamental Theorem of Calculus -- a very profound fact if there ever was one -- appear to be something trivial about getting rid of limits on an integral sign. But consider: one can compute a definite integral using an antiderivative ... whodathunkit?
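The arity point becomes concrete if you write the types down: a definite integral takes a function plus two endpoints and returns a number, while an antiderivative takes a function and returns a function, with the Fundamental Theorem tying them together. A numerical sketch in Python (midpoint rule; all names are my own, not from the talk):

```python
import math

def definite_integral(f, a, b, n=10_000):
    """Definite integral: (function, endpoint, endpoint) -> number (midpoint rule)."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def antiderivative(f, base=0.0):
    """Antiderivative: function -> function. F(x) is the integral of f from base to x."""
    return lambda x: definite_integral(f, base, x)

F = antiderivative(math.cos)  # numerically, F(x) ~ sin(x)

# The Fundamental Theorem of Calculus connects the two:
a, b = 0.0, math.pi / 2
print(definite_integral(math.cos, a, b))  # ~1.0
print(F(b) - F(a))                        # same number, via the antiderivative
```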


Isn't it weird how awful math terminology is? Even the word "derivative" is an awful choice. The special and wonderful thing about a derivative is that it's the rate at which another function changes, not that it's derived somehow.


And why is the verb for "take the derivative" "differentiate"? Both words make sense individually to describe things -- df/dx is derived from f by taking differences -- but it doesn't make sense that we use both words.

I had an abstract algebra professor, in my first year of grad school in math, who referred to operators D that satisfy the product rule D(fg) = D(f) g + f D(g) , which are not necessarily defined in terms of a limit, as "derivations". This is a natural generalization of the derivative, and apparently "derivation" is the correct technical term, but for a week or so I thought he was just not speaking English properly.

I'm not an algebraist now.
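The generalization is easy to play with concretely: a derivation only has to satisfy the Leibniz rule, with no limits in sight, and the formal derivative on polynomials is the standard example. A toy Python check, using coefficient lists as my own throwaway polynomial representation:

```python
def poly_mul(f, g):
    """Multiply polynomials given as coefficient lists (index = power of x)."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] += a * b
    return out

def poly_add(f, g):
    n = max(len(f), len(g))
    f = f + [0] * (n - len(f))
    g = g + [0] * (n - len(g))
    return [a + b for a, b in zip(f, g)]

def D(f):
    """Formal derivative: purely symbolic, no limit involved."""
    return [i * a for i, a in enumerate(f)][1:] or [0]

f = [1, 2, 3]     # 1 + 2x + 3x^2
g = [0, 1, 0, 4]  # x + 4x^3
lhs = D(poly_mul(f, g))                                  # D(fg)
rhs = poly_add(poly_mul(D(f), g), poly_mul(f, D(g)))     # D(f)g + fD(g)
print(lhs == rhs)  # True: the product rule holds
```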


Sort of, for well-defined functions (for some definition of well-defined). I remember seeing an example in analysis which showed that it was possible to get some nonintuitive behavior in functions.


You might enjoy "Counterexamples in Analysis" [1]; it shows, for example, a two-argument function for which the order of integration matters, while the geometric interpretation would suggest otherwise. A must-have!

[1] www.amazon.com/dp/0486428753
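If I remember right, the standard such counterexample is f(x, y) = (x^2 - y^2)/(x^2 + y^2)^2 on the unit square, whose two iterated integrals come out +pi/4 and -pi/4. A Python sketch, using the closed-form inner integral to sidestep the singularity at the origin (helper names are mine):

```python
import math

# f(x, y) = (x^2 - y^2) / (x^2 + y^2)^2 on (0,1] x (0,1].
# An antiderivative in y is y / (x^2 + y^2), so the inner integral over [0,1]
# is 1 / (x^2 + 1); by the antisymmetry f(x, y) = -f(y, x), swapping the
# order of integration flips the sign.

def midpoint(g, a, b, n=10_000):
    """Composite midpoint rule for a one-dimensional integral."""
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

dy_then_dx = midpoint(lambda x: 1 / (x * x + 1), 0.0, 1.0)   # inner dy first
dx_then_dy = midpoint(lambda y: -1 / (y * y + 1), 0.0, 1.0)  # inner dx first

print(dy_then_dx, dx_then_dy)  # ~ +pi/4 and -pi/4: order matters
```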


Yeah, that brings up another of those peculiar things about math, and especially, dealing with math culture. It's hard to talk about a main idea without getting mired in weird exceptions.

There are indeed precise conditions that guarantee that a function can (a) be integrated and (b) the integral can be differentiated, and there are subtly different definitions of "integral" that lead to different sets of extremely weird functions being integrable or not. I can't remember any of the details despite having come across them many times.

In software culture, it seems much easier to talk in broad strokes when appropriate and in precise details when appropriate. You might get an argument, but it's usually about something relevant.


Duh. It just hit me. Math is mostly edge cases. Consequently, the urge to have perfect logical validity (the same as the urge to write a bugless program) requires you to spend most of your attention on the edge cases.

That urge is stronger in mathematicians than in software engineers.


I have a similar, but slightly different, problem. While I know that CS and programming in general can be translated into useful math knowledge, I always mistrust my math. I suspect that I don't understand the math correctly because I understand it, if that makes any sense. The weird terminologies and the reputation for being way hard always erode my confidence in my understanding.


I've found that mostly it's just that mathematics is usually taught extremely poorly. Mathematicians and mathematics educators typically gloss over definitions and use terrible notation. Further compounding the problem, the mechanical aspects of mathematical computation are rarely formally defined -- talking about elementary operators and objects of a mathematical subfield is rare.

I've recently discovered abstract algebra and I love it. We need to find and celebrate good explanations of mathematics both to do better work and to give future generations the mathematical education we didn't get.


What do you love about abstract algebra?

I'm asking because I found myself deeply disappointed with it when I took it in school. It seemed to me just a haphazard collection of definitions and puzzles. Why, for example, does anyone care how many non-abelian groups of order 18 there are? I found myself unable to retain much of anything, due to the lack of any coherent explanation of "What is the point?"

In computers, by contrast, no one hides the point. Searching, sorting, bits and bytes, languages that make it easier to express things without making mistakes—there's no mystery about the purpose or importance of anything.

I wonder, if someone showed me what is attractive about abstract algebra, maybe I would find myself just as addicted to math as to programming.


I love that it provides a systematic way of thinking about the structures I am working with in code, and of proving properties of those structures. It's amazingly powerful to say "type t defines a monoid with operation k: t -> t -> t and identity Nil" because it comes with a whole slew of properties and theorems. It meshes very well with strongly typed functional languages that support algebraic data types (e.g. Haskell and OCaml).

I don't much care for theoretical abstract algebra (e.g. non-abelian groups of order 18) and I am by no means a master (or even an amateur!) but I find abstract algebra's concepts useful and I hope to learn more!
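Those monoid laws are also cheap to spot-check mechanically. A toy Python checker (the helper name is mine), shown passing for list concatenation and failing for subtraction, which has neither a two-sided identity nor associativity:

```python
def is_monoid(op, identity, samples):
    """Spot-check the monoid laws (associativity, two-sided identity) on samples."""
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a in samples for b in samples for c in samples)
    ident = all(op(identity, a) == a == op(a, identity) for a in samples)
    return assoc and ident

print(is_monoid(lambda a, b: a + b, [], [[], [1], [1, 2], [3]]))  # True
print(is_monoid(lambda a, b: a - b, 0, [0, 1, 2]))                # False
```

A passing check is of course only evidence, not a proof; that's what the theorems are for.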


I know nothing of abstract algebra. In fact I don't even know what it is :). I do, however, view mathematicians the same way I see the programming-languages crowd: they like to play with these ideas and extend them for their own sake. They solve these puzzles and show that this principle is true, or that some group of formulae can be generalized like so. It's up to the rest of us to notice when a particular problem fits some solution they have provided. Another part of this is: math is like alpha, and computer programming is Blub, at least according to the math guys.


My high school math teacher refused to teach calculus. We spent year after year on algebra, trig, and geometry, and did proofs. Lots and lots of proofs. He told me not to worry: when I went to college and studied calculus, I would be much better at it than the students who had studied calc in high school.

Smart man, he was right. The first two weeks in my first college calc class I felt a little uneasy and understood that I was the only person in class that had not studied this stuff in high school. After those first two weeks though, it was not a problem and I swept through 6 calc classes without a sweat.

20 years later of course, I'm a math idiot again...use it or lose it ;)


I enjoyed this post a lot because I traveled the opposite path. I started out in computer science with little interest whatsoever in math all through high school.

Then when I got to college they introduced the notion of "program correctness" where you tried to prove, mathematically, that your computer programs were correct. This convinced me that computer science was simply a knock-off approximation of mathematics and I drifted away from it.

From your derivative example above I’d say it is this inexact nature of computer science that you especially like. Different strokes for different folks, I suppose.


It's not "different strokes," it's a change of perspective causing massive confusion all 'round. Computer science isn't a knock-off approximation of mathematics, it's a subfield of mathematics. Programming, which is what the post is talking about, is a third thing. The derivative is the inverse of the indefinite interval (and has the same arity), but the function he refers to computes a definite integral which is not the same thing. Etc.

Confusion about this stuff is a sign that you didn't really understand it the first time. Believe me, I've been (am still) there...


There is even more confusion out there. Think of abstract algebraic structures.

Who needs groups? No one. Until one realizes that certain simple equation-transformation rules are based not on natural numbers or real numbers but on groups. Once something like that clicks in your brain, you can suddenly solve weird equations involving sets and symmetric differences, or bitstrings and xors, or other pretty awkward things which looked really scary earlier.
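The bitstrings-and-xors case fits in a few lines: a sketch (in Python, my choice) of solving an equation purely by group-theoretic manipulation:

```python
# Bitstrings under XOR form a group: XOR is associative, 0 is the identity,
# and every element is its own inverse. So you "solve equations" with the
# same transformation rules you'd use over the reals.

a, b = 0b1011, 0b0110
# Solve a ^ x == b for x by XOR-ing a's inverse (a itself) onto both sides:
x = a ^ b
assert a ^ x == b
print(bin(x))  # 0b1101
```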

Who needs rings and semirings? No one, until one realizes that a certain algorithm requires a structure... and this structure is a semiring! Thus, if one can prove that two operations and a set form a semiring, one can apply this algorithm without any effort, because it will just work! :)
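A classic instance is shortest paths, where generic matrix "multiplication" works over the (min, +) "tropical" semiring: "addition" is min (identity: infinity) and "multiplication" is + (identity: 0). A small Python sketch (names are mine):

```python
import math

INF = math.inf  # semiring "zero": min's identity, absorbing for +

def semiring_matmul(A, B):
    """Matrix product with (min, +) in place of the usual (+, *)."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Edge-weight matrix of a small directed graph (INF = no edge).
W = [[0,   5,   INF],
     [INF, 0,   2],
     [1,   INF, 0]]

paths = semiring_matmul(W, W)  # shortest paths using at most two edges
print(paths[0][2])  # 7: the cheapest 0 -> 2 route goes through vertex 1
```

Repeated squaring of W over this semiring gives all-pairs shortest paths, with no change to the algorithm itself.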

Or, even more: who needs the theory of catamorphisms, anamorphisms and such? No one. Until one realizes how beautiful recursive data structures are and how easy it is to program them once you understand the idea behind them (check http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.41.1... for a nice paper about this).
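A catamorphism is just a structured fold: collapse a recursive structure by replacing each constructor with a function. A toy Python sketch, using nested tuples as my own ad-hoc tree encoding:

```python
def cata(leaf, node, tree):
    """Fold a binary tree: `tree` is either a value (leaf) or a (left, right) pair."""
    if isinstance(tree, tuple):
        return node(cata(leaf, node, tree[0]), cata(leaf, node, tree[1]))
    return leaf(tree)

t = ((1, 2), (3, (4, 5)))
print(cata(lambda x: x, lambda l, r: l + r, t))          # 15: sum of the leaves
print(cata(lambda x: 1, lambda l, r: 1 + max(l, r), t))  # 4: depth of the tree
```

One recursion scheme, many algorithms: only the two plugged-in functions change.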

Who needs integration (besides those numerical people)? No one, until one realizes that any simulation, and most iterations, are discrete integrations in very, very awkward and convoluted algebraic structures.

Ach, by now I somehow wish I had studied math before studying computer science.


Regarding where CS stands in relation to math: http://news.ycombinator.com/item?id=690798


Yeah, I tend to say "computer science" when I mean "theoretical computer science." To me there is a clear split between the stuff that's math/logic and the rest of it which I'm happy to lump under the catch-all of engineering (and don't feel comfortable labeling as any sort of science).


I'm not comfortable labeling the concepts behind operating systems, networks, computer architecture and programming languages as either math or engineering.


I think what he's getting at is that the math taught in high school and applied college courses isn't fully formalized, often isn't fully explained, and even (as taught) contains ambiguities. I was a math major (and took graduate level analysis, topology, and set theory as an undergrad,) but the applied math classes I took felt really rushed and confusing. I actually felt kind of lost and dumb in sophomore DiffyQ, and what's worse, I was surrounded by engineering students who were having no problems. I sometimes felt dumb studying Lebesgue integration and transfinite induction, but I never felt lost like I did in DiffyQ. It didn't feel like math at all; I loved math but hated sophomore DiffyQ.


My experience with math in school is that it was taught as if it were modern art: a bunch of meaningless procedures (up through calculus) and meaningless puzzles (beyond calculus).

If it were taught as providing some insight or serving some purpose, it might be a lot easier to learn and retain.



