Caml trading (janestreet.com)
33 points by apr on Aug 28, 2009 | hide | past | favorite | 12 comments


I was confused when he mentioned the importance of code correctness. Wouldn't it be easy to write more software to monitor how your trading software is doing and intercede if it seems to be losing you a lot of money?


Did you really watch it? He explains that at the speed and volume with which they handle transactions, an error could cause them to disintegrate in a handful of milliseconds.

Besides – why write more code to check if your original code is correct, when you can just write correct code? And if you write software to check your software, then do you have to write software to check the checking software? Police police police police...


>He explains that at the speed and volume with which they handle transactions, an error could cause them to disintegrate in a handful of milliseconds.

Right. That's why you should have software monitoring things instead of a human.

>Besides – why write more code to check if your original code is correct, when you can just write correct code? And if you write software to check your software, then do you have to write software to check the checking software? Police police police police...

Same reason it makes sense to have proof checking software like Metamath. It's fairly easy to write software that has a 90% probability of being correct. It's maybe three times harder to write software that has a 99% probability of being correct, so it's cheaper to write 90% likely to be correct software and then another piece of 90% likely to be correct software that checks it. That's what my intuition says.
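The arithmetic behind that intuition can be made explicit. The hidden assumption is independence: the combined figure only holds if the checker's failures are uncorrelated with the trading code's failures. A quick sketch, using the comment's own numbers (not measurements):

```python
# Back-of-the-envelope version of the "two 90% systems" argument.
# Crucial assumption: the two systems fail INDEPENDENTLY. If both
# encode the same misunderstanding of the spec, this math does not apply.

p_fail_trader = 0.10   # trading code wrong 10% of the time
p_fail_checker = 0.10  # checker wrong 10% of the time

# An error slips through only when the trader is wrong AND the
# checker fails to catch it.
p_undetected = p_fail_trader * p_fail_checker  # ~0.01

combined_correctness = 1 - p_undetected        # ~0.99
print(f"undetected error rate: {p_undetected:.2f}")
print(f"effective correctness: {combined_correctness:.2f}")
```

So two independent 90% systems do get you to roughly 99%, but the independence assumption is exactly what the replies below dispute.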

Anyway, I think monitoring software would be much easier to write than actual trading software.

if bank_account < 0.9 * prev_bank_account: stop_trading()


Try to think it through.

First of all, they cannot increase the latency of their software; in fact, as is explained in the presentation, they work hard to keep it as low as possible and to continue to lower it. An extra layer which must monitor everything would increase latency.

Secondly, what are the algorithms used for determining sanity for this monitoring software? The same ones used to determine behavior of the original system? Then what is the point of writing the second, monitoring system? If it's different, then one of them is incorrect.

if bank_account < 0.9 * prev_bank_account: stop_trading()

What does this mean? As a demonstration it's so general you might as well have written "if (should_stop) stop;", which would have added just as little to the discussion.

It's fairly easy to write software that has a 90% probability of being correct. It's maybe three times harder to write software that has a 99% probability of being correct, so ...

Their software needs to be as close to 100% correct as can be reasoned. The speaker notes that the static type checking in OCaml helps a great deal with both reasoning about specifications (what do I think this should actually do?) and preventing errors. Easy factorization in functional programming style helps break the problems into smaller components with little state, as well.

it's cheaper to write 90% likely to be correct software and then another piece of 90% likely to be correct software that checks it.

Assuming those statements are true, and it were even possible for two discrete systems to function in that manner, that still leaves you with a substantial known probability of error.

You should watch the video; I think you will find especially interesting the part where he describes how they were unable to pay enough money to get people to even carefully review certain types of code.

Additionally, I think you may be conflating static checking (which OCaml does) with live monitoring of a working system:

Same reason it makes sense to have proof checking software like Metamath.

That is a static check. Metamath does not fork a process and monitor your proof, assuming that were somehow possible.

That's what my intuition says.

Intuition is often correct when dealing with things which are intuitive.


>First of all, they cannot increase the latency of their software; in fact, as is explained in the presentation, they work hard to keep it as low as possible and to continue to lower it. An extra layer which must monitor everything would increase latency.

Right. So there is a sweet spot: you want to monitor often enough that you aren't bankrupt in a millisecond, but not so often that your profit becomes 50% of what it could be. I'm not familiar with the algorithms used in trading, but assuming they're complex, the sanity check that I'm proposing would not use a very large portion of your cycles.
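A minimal sketch of that sweet-spot idea: run the cheap sanity check only every Nth trade so monitoring overhead stays bounded. Everything here (the `Monitor` class, the names, the thresholds) is hypothetical illustration, not anything described in the talk:

```python
# Hypothetical periodic-drawdown monitor. Checking every trade would
# add latency on the hot path; checking every Nth trade bounds the
# overhead at the cost of a window in which losses go unnoticed.

class Monitor:
    def __init__(self, check_every=1000, max_drawdown=0.10):
        self.check_every = check_every    # trades between checks
        self.max_drawdown = max_drawdown  # e.g. a 10% loss triggers a halt
        self.trades = 0
        self.baseline = None              # balance at the last check
        self.halted = False

    def on_trade(self, balance):
        self.trades += 1
        if self.baseline is None:
            self.baseline = balance
        if self.trades % self.check_every == 0:
            if balance < (1 - self.max_drawdown) * self.baseline:
                self.halted = True        # stand-in for stop_trading()
            self.baseline = balance
```

The obvious cost, which the parent comment is pointing at, is the window between checks: at high transaction volume, `check_every` trades can go wrong before the monitor ever looks.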

>If it's different, then one of them is incorrect.

The presenter spoke of humans shutting off the system when it was causing them to lose a lot of money (in older days when their system was slower?) Presumably the humans were using some algorithm other than the one they implemented to decide whether to shut the system off. Which algorithm is incorrect?

How complicated do you think the algorithm the humans use would be to implement?

>You should watch the video, I think you will find especially interesting the part where he describes how they were unable to pay enough money to get people to even carefully review certain types of code.

I watched the first half. I stopped watching when it became apparent that I would get more out of the video if I knew OCaml, and I plan to learn it at some point.

>That is a static check. Metamath does not fork a process and monitor your proof, assuming that were somehow possible.

I'm talking about the effectiveness of using one system to confirm the correctness of another. For some reason, someone put a lot of hours into Metamath when they could have put the same hours into checking their own proofs. Why do you think they did that?

A final point: He mentioned that their software is in constant need of updating. Every update is an opportunity to introduce a critical error. It seems likely to me that monitoring software would not be in need of constant updating, since the algorithm that the humans use to determine when to turn off the machine probably does not change along with the things he mentioned.


I would reply again, but you have missed points made by the speaker, especially those in the context of OCaml. Please watch and make an effort at understanding before any offhanded dismissals.


I watched the other half of the video. It was pretty good, but I didn't see anything relevant to our discussion. I'd appreciate it if you replied.


If a single wrong transaction can make you lose a lot of money, monitoring the outputs will always be too late, right?


My guess is that the value of the assets traded by Jane Street is much smaller than their bankroll. If the assets are something like 80% of the size of their bankroll, I'd suggest doing a sanity check after every trade.


p.s. to everyone else reading this thread, please do not down-arrow him, as his question was worth replying to and I think contributed to discussion.


I think it would help if he understood how these trading systems work in the first place. You can't even compare writing correct code with monitoring the profitability of the system. It's like comparing football with driving.


Well, that's all the better in that case. You've got two radically different computer programs, and you only blow up if both of them go wrong. I'd feel much safer with that scheme than a scheme where the computer programs are similar because then they might both share the same flaw.



