Hacker News
Rust web framework, Iron (ironframework.io)
137 points by lding43 on Aug 16, 2015 | 111 comments


The first thought that should pop into your head is probably: why use this instead of language XYZ, where XYZ is Ruby/Java/Python/Go? It's actually a pretty interesting question, because Rust is very much unlike those languages in the sense that it's a true systems programming language, like C/C++ are.

No one would (should!) ever consider writing a webservice in C++, simply because it's unsafe to do so, it's much easier in other languages, and the performance downsides of those languages don't matter anyway (i.e. Ruby powers a bunch of high throughput websites yet it's notoriously 200x slower than any other language).

But here comes Rust: it takes away the unsafety (arguably being even safer than the managed languages in some respects), is in most respects easier to use than C++ (coming close to the managed languages in ease of use), yet has performance characteristics similar (theoretically identical) to C/C++.

There's a web gateway in our cluster that receives binary blobs over REST and puts them onto the message queue. In total not much over 100 lines of Ruby. We've thought about reducing the server load a bit by porting it to Go or some other more performant language. We probably wouldn't go for Rust, since Go is simply easier to learn (should a new person ever have to perform maintenance). Just the possibility of implementing it in Rust in roughly the same timespan and code complexity while having a theoretically optimal performance is very interesting.


> No one would (should!) ever consider writing a webservice in C++, simply because it's unsafe to do so, it's much easier in other languages

Some years ago I wrote https://github.com/allan-simon/cppcms-skeleton

which I used as an MVC C++ framework to create webapps, including a quite full-featured wiki (multi-language, image uploading, markdown, history, conflict resolution with diff, etc.): https://github.com/allan-simon/tatowiki

which has been used in production at http://wiki.tatoeba.org for several years now (without crashes, of course with the occasional error 500 from time to time because of some bug)

And it's not that hard to read and write, I think: https://github.com/allan-simon/tatowiki/blob/master/app/src/...

Of course it was missing a lot of tooling (ORM, database migrations, etc.), but I don't think the language per se was the problem. And counting the full stack of dependencies, my solution certainly had fewer lines of C/C++ involved than your typical PHP application :)

For this, if I were to redo it in 2015, I would however consider Rust without any hesitation, especially as I think it has one major advantage compared to C/C++ regarding web services:

=> the community wants to go that way and make libraries/frameworks for creating webservices in Rust first-class citizens of the ecosystem


I can write web services in Haskell, OCaml, .NET, Java, Ada, Delphi, D, Go.

All compile to native code and I don't have to fight with the borrow checker.

Rust needs a good story to be sold as alternative for web development.

We don't need system programming features in a web application. Unless maybe when offering a web interface to an embedded device, instead of coding it in C.

EDIT: Before the downvotes continue, I am not attacking Rust and feel it is good that it also gets web frameworks.

The point being that for web development there are already better alternatives and unless one needs Rust special language features, there isn't a compelling reason to use it instead of the more mature alternatives which also make use of native code.


The borrow checker also eliminates data races (and a variety of other bugs), something that most of the other languages on your list do not. While avoiding GC is probably the most prominent reason for the borrow checker's existence, it is not the only one. You shouldn't have to fight with it.

(Haskell in particular has a much more restrictive borrow checker than that of Rust: purity.)


As a Haskell programmer I feel this isn't exactly true. Haskell's garbage collector and referential transparency make immutability not really an issue. Impure code simply has a different type, so it's easy to avoid. What makes dealing with the borrow checker hard is that it triggers on code that intuitively is correct; it can be hard to envision why it's not. I feel Haskell does a better job of diverting you from incorrect code.

That said, dealing with the borrow checker is just a skill. You take out a few hours to learn to deal with it, and then you just can. As you said, it's super powerful and definitely your ally, so saying it's a downside of Rust is silliness.


> What makes dealing with the borrow checker hard is that it triggers on code that intuitively is correct; it can be hard to envision why it's not.

Because it's more expressive than just forbidding mutability. You could program in Rust the same way you program in Haskell, by encapsulating mutability in functions with different types (&mut). Then the flow sensitivity and other issues would never bother you.

Rust's flow-based control of mutability is strictly more expressive than the monad system of Haskell. If you don't like that expressivity, don't use it.
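As an illustration of that point (my minimal sketch, not from the thread): mutation in Rust shows up in function signatures the same way impurity shows up in Haskell types. Functions taking `&` cannot mutate, and only functions taking `&mut` can.

```rust
// Read-only access: any number of `&` borrows may coexist.
fn total(scores: &[i32]) -> i32 {
    scores.iter().sum()
}

// Mutation is confined to functions that take `&mut`, and that fact is
// visible in the signature, much like an impure type in Haskell.
fn record(scores: &mut Vec<i32>, s: i32) {
    scores.push(s);
}

fn main() {
    let mut scores = vec![1, 2, 3];
    record(&mut scores, 4); // the mutable borrow ends here
    let a = &scores;        // multiple immutable borrows are fine
    let b = &scores;
    println!("{}", total(a) + total(b)); // prints 20
}
```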


I disagree. I find immutability with GC much easier than mutability with the borrow checker for most tasks. The tradeoff is that you have to trust the compiler a lot to perform well, and that trust is not always warranted.


You can use immutability in Rust and the borrow checker will never bother you. The borrow checker allows you to take all the immutable references you want. What I think you are talking about is the lifetime system, which is not something that pjmlp mentioned.


I think you overestimate how willing most developers are to fight with the borrow checker instead of using actor libraries or task libraries (TPL, TBB, PPL, Fork-Join, Akka) instead.

So far the Rust tricks with unsafe for certain data structures (e.g. doubly linked lists/DAGs) seem like fighting with the borrow checker. Or the issues to be solved with the upcoming 1.3 changes for pattern matching.


> I think you overestimate how willing most developers are to fight with the borrow checker instead of using actor libraries or task libraries (TPL, TBB, PPL, Fork-Join, Akka) instead.

That seems like a non-sequitur. What does the borrow checker have to do with actor libraries?

Are you implying that these actor libraries give you the data race freedom guarantees that Rust's mutability rules give you? If so, that's trivially false. Those libraries guide you toward patterns that eliminate data races in simple cases if you follow them, but nothing enforces the rules. Moreover, being forced into actors or message passing for concurrency severely restricts the expressiveness of your program. Rust's approach lets you use shared memory if you like, or message passing if you like, or a combination of the two, with enforcement of the rules no matter which you pick.
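A minimal sketch of that claim (my code, not the commenter's): the same program can use both styles, and the type system enforces the rules either way. `Arc<Mutex<..>>` guards the shared-memory side, while `mpsc` channels transfer ownership for message passing.

```rust
use std::sync::mpsc;
use std::sync::{Arc, Mutex};
use std::thread;

// Shared memory: the Mutex is the only path to the counter, so the
// increments cannot race; this is enforced at compile time.
fn shared_count(n_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..n_threads)
        .map(|_| {
            let c = counter.clone();
            thread::spawn(move || *c.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let n = *counter.lock().unwrap();
    n
}

// Message passing: ownership of each value moves through the channel,
// so the sender can no longer touch what it has sent.
fn pipeline_sum(values: Vec<i32>) -> i32 {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        for v in values {
            tx.send(v).unwrap();
        }
    });
    rx.iter().sum()
}

fn main() {
    println!("{}", shared_count(4));             // prints 4
    println!("{}", pipeline_sum(vec![1, 2, 3])); // prints 6
}
```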

> So far the Rust tricks with unsafe for certain data structures (e.g. doubly linked lists/DAGs) seem like fighting with the borrow checker. Or the issues to be solved with the upcoming 1.3 changes for pattern matching.

Yes, they're tough. This is a downside of manual memory management (though one that can be improved in future iterations). But doubly linked lists/tree-like data structures are hard in Haskell too. "Tying the knot" is not exactly easy or intuitive.


> Are you implying that these actor libraries give you the data race freedom guarantees that Rust's mutability rules give you? If so, that's trivially false.

No, they don't give those guarantees, but they give comfort to average developers.

I deal a lot with 9-5 enterprise developers that don't even know that HN and Reddit exist, or whatever languages exist out there besides what they use on the job.

Yet they do write multi-threaded code with help of such libraries, happy that someone gives them training wheels.

> But doubly linked lists/tree-like data structures are hard in Haskell too.

True, but laziness helps. However Haskell is hardly a mainstream language.


> Yet they do write multi-threaded code with help of such libraries, happy that someone gives them training wheels.

And they create a lot of bugs in the process. Empirically, a message passing framework does not eliminate data races as a very common class of bug.

> However Haskell is hardly a mainstream language.

Well, you brought up Haskell earlier. I agree of course that both Rust and Haskell make it hard to write doubly-linked lists and trees, and that Haskell is not a common industry language.


Haskell was just one of the languages on my list. I was basically referring to all languages with available compilers and better web frameworks than what Rust currently offers. And I did miss a few ones.

As for data race elimination, I don't see people really moving away from those libraries, but I might be wrong.


> Rust needs a good story to be sold as alternative for web development.

I don't think anyone is advocating Rust as an alternative for web development. At least not if any of the managed languages are viable alternatives. Of course you should pick C# if it's viable for you, working in it is 10 times more comfortable than working in Rust.


Rust is faster than most of the languages you listed and has some nice language features, e.g. generics.
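For what it's worth, a tiny illustration of the generics point (my sketch, not from the thread): a Rust generic function is monomorphized per concrete type, so the abstraction involves no boxing or runtime dispatch.

```rust
// One generic definition; the compiler emits a specialized copy for each
// concrete T used, with no runtime overhead. Panics on an empty slice,
// which is fine for a demo.
fn largest<T: PartialOrd + Copy>(items: &[T]) -> T {
    let mut best = items[0];
    for &x in &items[1..] {
        if x > best {
            best = x;
        }
    }
    best
}

fn main() {
    println!("{}", largest(&[3, 7, 2]));      // prints 7
    println!("{}", largest(&[1.5, 0.5_f64])); // prints 1.5
}
```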

Why should we use Go when more mature alternatives exist?


Except for Go, all the languages I listed have support for generics.

I doubt Rust is faster than the languages I listed. Assuming a comparison with all available compilers, not just open source ones.

I only had Go in the list, because it also compiles to native code and has support for web on its standard library. And it is a good alternative to C for anything that touches the network.


Why do you think other languages will optimize as well as Rust? Specifically, how do you propose to get the same level of memory alias analysis with unrestricted aliasing and mutation? How do you propose to deal with escape analysis for higher-order functions without a region/lifetime system?


Years of effort put into their compiler optimizers.

For example, without any benchmarks I wonder how Rust's current code generation compares to optimizing Ada compilers, especially the commercial ones like Atego.

It is hard to believe a language with an 8 year old compiler already generates better code than a 20 year old compiler battle tested in many production systems, even with the help of LLVM as backend.


Perhaps you're discounting the importance of LLVM and at the same time overestimating performance of some of the languages you mentioned.

.NET: type system has some performance oriented features, but JIT compiler isn't all that great (currently).

Java: Java has a terrible performance model, but the JVM does heavy lifting to recover a good amount. However, there are still problems once your perf targets get tighter and tighter (ergonomics also degrade the more you push performance).


I am not overestimating the other languages, because unlike many HNers I differentiate between languages and implementations.

Just to pick on your example, for Java there is OpenJDK, Oracle's commercial JDK, IBM J9, Excelsior JET, HP-UX JDK and many many other compilers.

The new .NET compiler makes use of the Visual C++ backend, as another example.

Ada compilers have around 20 years history of production code.

Same goes for some of the other languages.

So when one bluntly states that the Rust 1.2 compiler already generates faster code than any of the compilers for the other languages, including commercial ones, it is a bit hard to believe.

Of course Rust's compiler will improve code generation.


Language vs implementation is Sufficiently Smart Compiler phrased differently. The main difference between Rust (and, say, C++) and .NET/Java is that the former have a stronger performance model baked into the language, which makes it easier to make an implementation fast.


However not all implementations are made alike.

For example, when one speaks about Java, are you aware that IBM's J9 compiler supports value types via the packed objects extension?

Or that Mono cannot compete with Visual C++'s backend for .NET native code generation?

Or that any commercial Common Lisp runs circles around the open source offerings?

So when speaking of language X, it always matters to refer to specific implementations.


I'm aware of PackedObjects. Note it's not a value type in the sense it doesn't get allocated on the stack (if used as a local), it's a flattened object graph on heap with a shared header.

More to the point, implementations are interesting to compare when they're for the same language, but not so much across languages as languages have different semantics and features that dictate how amenable they are to optimization with today's compilation techniques.


I disagree.

There is a good reason why we don't use e.g. Tiny C to discuss C performance vs other languages.


Maybe I didn't make myself clear. If you want to compare speed cross lang, take fastest impl of each and compare that; so long as we're talking about real compilers existing today.

Also, fundamentally C# and java aren't designed on zero-cost abstractions as core principle, which is fine in and of itself. Implementations try to recover performance but it's an uphill battle as compilers will always be limited by various constraints.


> Maybe I didn't make myself clear. If you want to compare speed cross lang, take fastest impl of each and compare that; so long as we're talking about real compilers existing today.

I agree.

> Also, fundamentally C# and java aren't designed on zero-cost abstractions as core principle,....

Sure, but then again Ada, Delphi and D are also on my language list.


Rust uses LLVM for codegen, which has had a _lot_ of time and resources poured into it.


True, but having a good engine is not enough to have a good car.

For example, apparently SIMD is only getting done in 1.3.

Again, my whole argument goes against someone stating that the Rust 1.2 compiler already generates better code than all the existing compilers for my language list, some of them with 20 years of production code.

It is quite clear to me that the Rust compiler will improve and take better advantage of LLVM. I just don't believe that 1.2 is already there.


I think there's no way we're going to know either way in terms of the current state of the art. I do think, however, that Rust has language advantages that will enable better optimization going forward than languages with unrestricted (or less restricted) aliasing and mutation.


I agree Rust will get better, but the comment I was replying to was about code quality today.


If you're "fighting the borrow checker", you're doing it wrong. The borrow checker is there to help you.


People often use the phrase to indicate their initial difficulty when learning Rust, not deliberately attempting to subvert it once they understand it.


Not when implementing graphs and doubly linked lists, apparently, without using unsafe.


Implementing a doubly-linked list using RefCell is no harder than implementing one in Haskell.
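A rough sketch of the `RefCell` approach being referred to (my code, not the commenter's): forward links are strong `Rc` pointers, back links are `Weak`, so the nodes don't keep each other alive in a reference cycle.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// One node of the list: the forward link is a strong Rc, the back link
// a Weak, so the two nodes don't form a leaking strong cycle.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>,
}

fn new_node(value: i32) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node { value, next: None, prev: None }))
}

// Build a two-node list and walk it forward then backward.
fn link_demo() -> (i32, i32) {
    let first = new_node(1);
    let second = new_node(2);

    // RefCell moves the aliasing checks to run time, which is what
    // lets us wire up the cycle at all.
    first.borrow_mut().next = Some(second.clone());
    second.borrow_mut().prev = Some(Rc::downgrade(&first));

    let fwd = first.borrow().next.as_ref().unwrap().borrow().value;
    let back = second
        .borrow()
        .prev
        .as_ref()
        .unwrap()
        .upgrade()
        .unwrap()
        .borrow()
        .value;
    (fwd, back)
}

fn main() {
    println!("{:?}", link_demo()); // prints (2, 1)
}
```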


How many average Joe developers do you see adopting Haskell in industry?


Yeah, so Rust's mission is memory safety without garbage collection.

Is there ever really a situation where you need to write a web service but cannot afford garbage collection?

Rust is a really cool and novel language. I considered learning it, but since I mostly program web apps and services, I decided to go with Haskell instead. It has much better performance than dynamic languages, but is still garbage collected, because managing memory manually in my code seems like a wasted effort.

I chose Haskell rather than Go just because I like functional idioms, but you could just as easily replace Haskell with Go in the above paragraph.

I mean, I guess the fact that you can write a web app in Rust is cool, I just don't understand why you'd really want to, when there are languages like Haskell and Go (and Nim and D...) that give you fast binaries and garbage collection.


If you can write programs without garbage collection, why should you use it? Because using it makes your life easier?

To me, Rust is easier to use than both Go and Haskell. Regarding Go, I don't like its inexpressiveness.

Regarding Haskell, I'm tired of its various aspects like

1. Insane defaults. `String` being `[Char]`, `-Wall` being almost always necessary, a weak record system, `import` importing all names by default, etc. Haskell is clearly showing its age.

2. Lazy evaluation. Not only does it complicate inspecting what's going on, it also leads to memory bloat. Don't say you can manage it easily; I've tried that.

3. No stack trace. No, enabling profiling won't help, because thanks to lazy evaluation the exact location of the error will not be given to you.

So, to me, Rust feels like a "saner Haskell", having a trait system similar to Haskell's type classes, but without many of its warts.

That said, if there's a mainstream language that is more similar to Haskell but uses strict evaluation, I will definitely be sold. Idris is one of the contenders, but it isn't quite mainstream yet.


The most mainstream lang similar to what you're talking about is F#. Excellent tooling support; lots of libraries via .Net, and a reasonable ML with some fun stuff like active patterns. Web dev can use the ASP.NET stack or F# specific stuff like WebSharper (cross compiler to JS, plus web framework).


Yeah, actually I recently installed Visual Studio and wrote some elementary F# code. Seeing the IDE and the debugging facility work out of the box was quite pleasant. I'm also sure I could get advantages from the huge .NET ecosystem, although I have no idea where to start because I haven't used MS products until recently. MS's recent open sourcing move made me think about liking and using their products.

That said, I have a few concerns before actually being sold to F#.

1. F# must interact with the rest of the .NET ecosystem. So, I think F# may have some interoperability features to ease the process. Wouldn't that increase the complexity of the language? This is a similar concern when I evaluate Scala as an alternative to Haskell. The feature set of Scala looks a bit messy to me, nearly on a par with C++. It could be OK if I used only the "functional" subset of the language. But does it really make me free from e.g. null pointer dereferences?

2. Do F# guys actually develop their "native" libraries not relying on the .NET ones? This is similar to #1 but slightly different. I think if the two languages are somewhat similar, and even share an ecosystem, there may be a tendency to make libraries that work in both languages. But if one makes a C#-focused library, it may not be very easy to use from F#. Is this not a concern in the F# community?


F# has very slick interop. It has first-class support for .NET objects, so nearly all C# libs work fine from F#. There are a few exceptions, where library writers take really moronic shortcuts to make up for C#'s lack of basic features, like tuples and list literals. But that just makes them slightly silly to use, that's all. That's probably less than 5-10% of libraries. F# contains some sugar to gloss over a lot of ugly C# idioms, so things generally work surprisingly smoothly.

What I find more confusing is C#'s love of mutation. For instance, "fluent" interfaces, where you call an object's method and it returns the same object. I find it confusing in C#, and it seems even more out of place in F#.

F# feels less complex than C# - fewer "axioms". I've not evaluated this even remotely formally, but it's the feeling I have, being familiar with both. F# really is faster and easier to work with. While yes, F# can't eliminate null references (even inside the language, you can force null), it is far less of an issue.

F# has some of its own libraries, and you'll also find wrapper libraries that F#-ise other libs. The F# community seems very helpful and excellent. Even the F# team is far more engaged than C#'s ever was. I'm so impressed that once I emailed them with an issue, and Don Syme offered to get me a custom build to tide me over until my problem was officially fixed. This was well after F# had started gaining steam. They are very nice people, and amazingly competent.


I've been writing a web app in Rust for a while now (using a different framework though; Iron's ergonomics didn't suit me for some reason). Overall I've found it a pleasant enough experience. The reason I like it is that I write code for a wide variety of domains, from web apps to compilers, and so far Rust has been a decently usable language in all the domains I've tried it in. I really like the "one language to rule them all" aspect of it because it simplifies my life - chances are good that whatever problem I'm looking at will either be trivial enough for shell scripting, or will be amenable to a Rust-based solution. Narrowing down the language set seems like it could also lead to some interesting interoperability potential, like an easy web UI for a disassembler library or something.

I still use half a dozen other languages on a regular basis, so the dream of being able to really focus on just one is still just a dream. However, rust is getting closer, and I'm gradually migrating most of my work to it when the choice is up to me.


The only case where Rust would really be better due to lack of garbage collection is when you need performance consistency.

It might be better than some of those languages for other reasons though. For instance I think Rust is more expressive than Go. I think it's easier to learn and more practical than Haskell (though I do enjoy me some Haskell too) and I think it has better tooling than all of those mentioned (except Nim, I don't know enough about Nim to pass judgement).


> Is there ever really a situation where you need to write a web service but cannot afford garbage collection?

Yes, anything that has a real-time requirement.


Doesn't the latency of connecting over the web make it unsuitable for real-time anyways?


Sometimes the important criterion for "realtime" is jitter, rather than latency. GC means jitter (inconsistent or unpredictable delay).


> No one would (should!) ever consider writing a webservice in C++, simply because it's unsafe to do so

That's a pretty bold statement. By the same account, you should not write a webservice in Rust because it's unsafe. One example is Rust's treatment of OOM cases.

Today, if you run into OOM, Rust will abort. Technically, you can have it call an OOM handler, but you cannot call many common functions in the handler since they allocate memory, so it's not super useful.

Here's the bug, https://github.com/rust-lang/rust/issues/27335 . And here is the original one, https://github.com/rust-lang/rust/issues/14674 . (Side note: I find the habit of closing bugs and reopening new ones kind of annoying, since it's hard to track how long an issue has been known / around.)

I imagine as a malicious client you could try to DoS an application by 1. forming a message that will cause the application to allocate a lot of memory (as temporary buffers), then 2. sending a lot of messages, putting pressure on the memory subsystem. You can of course try to not have these issues in your application, but Rust doesn't explicitly save you from them.

On the other hand, in a C application you can try to close the offending client's connection, and in C++ if you get a bad_alloc (and you deal with it) you'll end up unwinding your stack (freeing enough memory to continue on). C++ makes it even easier to handle it at the top level and let RAII take care of the rest.

Yes, on Linux you can die to the system OOM killer. But 1. you can turn that off (and you should, at least on some systems), and 2. depending on your overcommit policy you'll get an allocation error instead of an OOM kill if you allocate in a too-big chunk.

I personally like Rust and I'd like to see it succeed. But I think the Rust marketing is getting away from what Rust is able to do in its current incarnation (1.2). Simply put, in its current incarnation there are lots of unsolved problems (big and small).

So please stop making outlandish claims.


You say that as if C/C++ applications are particularly resilient to OOM situations. That hasn't been my experience. You can deal with OOM in C/C++, but that doesn't mean it is really viable. Even popular C libraries just abort when memory allocation fails. Regarding C++, is there really code out there that even tries to catch `bad_alloc`? If so, did they test the various OOM situations the program may encounter? I highly doubt it.

I don't think that the "just panic" strategy used by Rust is pretty. It is mediocre at best. But I also don't think the way C/C++ handle OOM is great. They produce similar results in practice.


C libraries are worse, esp. the desktop-focused ones (glib and all of GNOME). Network-focused libraries tend to be better (libevent / libev), and you see people send patches to fix OOM cases, since enough people disable overcommit / don't run Linux.

In my experience C++ libraries tend to do better because of the culture of using RAII for everything. And of the C++ servers I've worked with, most had a std::exception handler somewhere near the top of the connection/request handler and dealt with it fine. And over the years, the bad_allocs helped us deal with some dumb (but not malicious) clients without going down. In one case the client was essentially causing a multi-GB buffering window to happen.

I'd be curious whether anybody at FB could share their experiences, since it seems like they're building a lot of stuff on their C++ folly / proxygen libraries.


That's interesting, it seems that I'm biased because I've primarily seen desktop-ish libraries.

Regarding your usage of `bad_alloc`, I think relaxing the current behavior of aborting would help implement such a behavior. Long-running Rust applications should be resilient to panics anyway, so a request handler could regard OOM just like many other panicking situations. But I have no idea how hard it would be to change Rust's OOM handler to panic. Currently it just aborts.


While I think this is an important problem to solve, it's not lost on me that it isn't an easy one. It's complicated further by design decisions like no stack unwinding and the current behavior. I'm not sure they can change the current behavior without a backwards-incompatible change.

It's also not a unique problem. The Linux guys have been working on better handling of OOMs in the kernel. There have been some interesting discussions in the last year on how to guarantee forward progress in filesystem transactions in the face of OOMs (in the middle of transactions).


What? Stack unwinding on OOM would not be a backwards incompatible change if it could be made to work (which is no harder than doing it in C++).

"Design decisions like no stack unwinding" is a confusing thing to say, given that Rust uses stack unwinding.


Stack unwinding on OOM can't work properly right now given that stack unwinding can allocate memory. If anything trying to make that catchable locks you into aborting on double panic. It's a bad idea. The correct way to approach this is to design a standard library variant that doesn't assume success on allocation.
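For reference, this is roughly the direction the standard library later took (well after this 2015 thread, so it does not reflect Rust 1.2): `try_reserve` reports allocation failure as a `Result` instead of aborting, so a server can refuse one request and keep running. A hedged sketch:

```rust
// Allocate a buffer for a client-declared length, but treat allocation
// failure as an ordinary error instead of an OOM abort. The function
// name is illustrative, not from the thread.
fn read_blob(len: usize) -> Result<Vec<u8>, String> {
    let mut buf = Vec::new();
    // try_reserve returns Err on failure; no abort, no unwinding needed.
    buf.try_reserve(len)
        .map_err(|e| format!("allocation failed: {:?}", e))?;
    buf.resize(len, 0);
    Ok(buf)
}

fn main() {
    // A reasonable request succeeds...
    assert!(read_blob(1024).is_ok());
    // ...an absurd one fails cleanly (capacity overflow) rather than
    // taking the whole process down.
    assert!(read_blob(usize::MAX).is_err());
    println!("ok"); // prints "ok"
}
```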


Sure, I agree that these are challenges, but C++ has exactly the same problem with std::bad_alloc, so I don't see how this is an argument that C++'s approach is better than that of Rust.


The expected behavior for C++ destructors is pretty well documented: e.g. don't throw in a destructor / don't do things that would throw in a destructor. You're free to try to skirt these rules, but if you get caught, that behavior is documented. If you're building an application where you care about std::bad_alloc, you probably know that already.


The library ecosystem needs to know whether it's allowed to make allocations in destructors. With the current situation, it's no problem, because there's no unwinding on OOM. This is not backward incompatibility, but it makes it hard to turn unwinding on OOM into something useful.


When one says unsafe or safe, it's primarily about type safety and more so memory safety, so it depends if you consider OOM a violation of memory safety.

If you don't have a hard limit in terms of your buffer or memory allocations (and have an unlimited upper-bound) you'll be in trouble sooner or later. I'd argue that it would be a far more robust principle to institute an upper-bound than trying to catch OOM errors.
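The upper-bound principle can be sketched like this (my illustration; the 16 MiB limit and function names are made up, not from the thread): reject oversized requests before committing any memory, rather than trying to recover after the allocator has already failed.

```rust
// An arbitrary illustrative cap on per-request allocations.
const MAX_BODY: usize = 16 * 1024 * 1024;

// Check the declared length against the cap first; the allocator is
// only ever asked for amounts we have decided we can afford.
fn alloc_body(declared_len: usize) -> Result<Vec<u8>, &'static str> {
    if declared_len > MAX_BODY {
        // Back-pressure: the client is refused before any memory moves.
        return Err("request body too large");
    }
    Ok(vec![0u8; declared_len])
}

fn main() {
    assert!(alloc_body(1024).is_ok());
    assert!(alloc_body(usize::MAX).is_err());
    println!("ok"); // prints "ok"
}
```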


Rust's goals are both type and memory safety. I consider an OOM with a panic/abort a memory safety issue. Linux also does these kinds of things to your process, but there at least you can opt out of it.

I agree with you, in the real world you need some kind of well tuned upper limits above which you apply back pressure / start dropping connections. But in the absence of this the runtime should try to make every possible attempt at forward progress (even if we're just limping along).


How is OOM unsafe? By that definition, all languages are unsafe.

And note, this is waving over the real safety issues that C/C++ has, which allows small bugs to turn into remote shells.


It's not a memory safety issue any more than any other kind of abort is. If it were, abort would be marked unsafe.

> But in the absence of this the runtime should try to make every possible attempt at forward progress (even if we're just limping along).

So what is your proposal?


It's not a very bold statement. I bet Stroustrup would back me up on it, and if not he would add slight nuance. The basic thought process for choosing C++ is very simple: can I do this in any other language? If yes, choose one of those languages. The cool thing about Rust is that it covers a selection of problems where normally the answer would be 'no'.

I totally believe you that there's still problems where the answer is still no, but I don't think a webservice is going to be it.


I'm honestly wondering how this plays out in production, especially on Linux, where malloc() (with overcommit) essentially never fails. How does Rust detect OOM?


> Ruby powers a bunch of high throughput websites yet it's notoriously 200x slower than any other language

Have you considered the costs of the hardware that compensates for the software's slowness?


All that hardware is cheaper than a developer who can optimize the software without adding vulnerabilities. Plus Ruby/Rails apps rely very heavily on caching.


Considering the CPU architecture, everything relies heavily on caching. Not to mention it's the first thing any developer thinks about when performance is concerned. I'd be surprised to see any large application that doesn't rely heavily on some kind of caching.


Unless you're Facebook. Then a performance improvement of 0.5% can be worth $100k a year.


> No one would (should!) ever consider writing a webservice in C++

I can think of one area where that is done (and where Rust might fit quite nicely): Bittorrent trackers!


This surprises me; what's so special about bittorrent trackers?


To add to what throwaway2048 said, as well as having high performance requirements, they're small and don't change much. You wouldn't want to write your typical CRUD app in Rust or C++ because developer speed is more important than application speed. But bittorrent trackers, while using http, are operating to a strict protocol, and are nowhere near as large (in terms of features and code size) or as fast changing as typical web apps.


They are extremely performance-demanding (the actual tracker itself, which the clients communicate with), and they use HTTP. So much so that major trackers constantly run into performance limitations.

They often have to handle 1k+ requests per second.


> No one would (should!) ever consider writing a webservice in C++

Depends on what kind of web services you're talking about. C++ is widely spread in game development. It's nice to have the client and server code in the same language, so in game development, C++ web services (including multiplayer servers, if you count that as a web service) are not too unusual.

Two years ago, I even had a customer who wanted me to create a C++ backend for a large website. That was a surprise, of course. But it was fun, it's safe, and it works. You can easily test for safety, and the security logic stuff is just what you have in any other language.


> You can easily test for safety

How do you define easily? AFAIK, the state of the art is still: static analysis, maximum pedantic warning level from the compiler, 100% coverage, lots of fuzzing, lots of logic testing... and it still doesn't stop you from getting owned via one of the libraries you include (JSON, HTTP parsing, ...).

Is that not the case? It seems far from easy if you compare it to (for example) a Python framework where almost all of the code is pure Python.

Edit: I meant the practical state of the art. Sure, you can prove your webapp secure if you have enough time, but nobody would seriously do that.


Pretty sure Gmail's backend is C++, along with many other Google web services.


> Pretty sure Gmail's backend is C++,

Define "backend". Pretty sure the core business logic of Gmail is written with tons of Java. C++ is used for generic "infrastructure" software, not for writing Gmail's business rules.


> There's a web gateway in our cluster that receives binary blobs over REST and puts them onto the message queue. In total not much over 100 lines of Ruby. We've thought about reducing the server load a bit by porting it to Go or some other more performant language.

On a note unrelated to the actual thread, how do you port a piece like this to another language? Is the Golang part another webservice? Do you fire up a thread to start a process? Is it called via a bash script? If there's an error how do you get the error back from it?


Are you asking what application server I'd use? Or what framework? Currently the service is a Sinatra app managed by Passenger, it logs errors to stdout and airbrake. I guess if we would port it to Go it would be exactly the same. Frameworks or app servers are details I haven't thought about yet.


> But here comes Rust, it takes away the unsafety

There are no memory leaks at the language level in Rust but lots of other unsafety exists in web apps at a higher level: SQL injections, XSS, etc.

A battle hardened framework like Rails probably gives a safer end result until a Rust based framework is more mature.


    > There are no memory leaks at the language level in Rust
Rust absolutely can leak memory, trivially. There's even a function that does it, http://doc.rust-lang.org/stable/std/mem/fn.forget.html


(To be clear: "memory safety" doesn't mean "freedom from memory leaks", it covers things like buffer overflows and use-after-free violations, https://en.wikipedia.org/wiki/Memory_safety .)


Regarding your web gateway, you might be able to use heka with the HTTP Listen input and AMQP output.

https://hekad.readthedocs.org/en/v0.9.2/


Main author of iron here, on mobile but happy to answer any questions.

Here's a link to the first chapter of an iron tutorial I've been working on, which explains the "hello world" example in great detail and introduces some of Iron's core abstractions: https://github.com/iron/byexample/blob/master/chapters/hello...


Thanks for all the work on iron / hyper / web-related Rust! You've done an impressive amount of coding!

I was wondering what are your plans regarding iron-related extras. It seems like logger, staticfile, and others are not updated as often as iron itself (some fixes I noticed lately were PRs, staticfile fails on travis). Do you have plans on how to expand iron in the future? Will you produce more basic blocks yourself, or do you expect some community modules to start growing as people use iron itself?


I try to maintain and ensure the basic building blocks are usable and high quality, but I am just one person, and there is much to do. There are several other people associated with iron who also help with maintenance and feature additions to the key packages under the iron organization, and they make everything much much easier (thanks again!).

There are already some community crates that integrate with iron, providing things like handlebars templating; I hope that as the community expands, more third party crates will appear and make using iron even easier.

If anyone is interested in working on iron or wants to write a community crate using it, you should reach out on the #iron channel on the Mozilla IRC network. I and others hang out there and we can answer questions and provide help.


Crates.io uses Rust as a backend, Ember on the front. It uses about 35MB of resident memory, and (other than some weird DNS issues that aren't the server's fault) is just super rock solid. (It doesn't use Iron, though.)

I'm still not sure the application tier isn't best served by something that's easier to prototype in, but if you already know Rust, the web stuff is shaping up pretty nicely.

We also got Hyper, Iron, and Nickel entries into the Techempower benchmarks, I'm really interested in seeing the eventual results.


Just curious - what are you using on the backend? From-scratch rust code?


https://github.com/rust-lang/crates.io

It uses 'conduit', which is a web framework that's basically only used by crates.io, as far as I know.


We use conduit at Dropbox as well, but mostly for PEP 333 type reasons ("universal" request/response types and a pluggable coupling layer between handlers and servers.)


Ah, neat! Thanks for filling me in. I almost never hear people talking about it.


Based on my experiences thus far, I don't think Iron or Hyper will do very well in Techempower. A framework based on mio can handle far more connections. But let's wait and see.


There's ongoing work to get Hyper and mio to play nicely together.


Lacking an event loop is not the biggest problem. Both libraries have a lot of allocation and other abstractions that get in the way of performance. Mio allows you to work with the sockets directly and does not allocate (and its APIs are designed to be usable without allocating), making things much easier to reason about.


Warning: very little knowledge of Rust and Iron

Can someone explain why there is an extra Ok(...) in this? (I want to call it a function, but I'm not even familiar enough with Rust to be sure that it is a function).

Is it something that could be removed? Right now it just looks like boilerplate.

Edit: Thanks everyone! Ok is similar to a Right or Success of a Either/Result type of object.


You mean in the handler? The return value has to be of a type `IronResult<Response>`. That means it can be either `Ok(...)` for success or `Err(...)` for failure.

In other languages/frameworks (Python/Pecan, for example) you'd throw an exception when things go wrong. In Rust, exceptions are reserved for very exceptional things only (it's called panic). So the calmer way is to just return `Err(...)`.

It's not a function, it's more like a tagged union (called an enum in Rust). So in practice it's like C's union, except you know which member was chosen and only that one is accessible.
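A plain-Rust sketch of the idea, using a made-up `handle` function rather than Iron's actual API: the return type is the two-variant enum `Result<T, E>`, and `Ok(...)` / `Err(...)` are just its two constructors.

```rust
// Result is effectively: enum Result<T, E> { Ok(T), Err(E) }
// This handler-style function is hypothetical, not Iron's real signature.
fn handle(path: &str) -> Result<String, String> {
    if path == "/" {
        Ok("Hello world!".to_string())
    } else {
        Err(format!("no route for {}", path))
    }
}

fn main() {
    // match forces you to name which variant you got; only that
    // variant's payload is accessible in each arm.
    match handle("/") {
        Ok(body) => println!("200: {}", body),
        Err(e) => println!("500: {}", e),
    }
    assert!(handle("/").is_ok());
    assert!(handle("/missing").is_err());
}
```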


Small correction: Rust doesn't have exceptions. panic!() crashes the current thread with an optional error message, but it doesn't do stack traces and there's no try/catch in the language (there's the try!() macro, but it works with Result<T, E> values, has nothing to do with panics).


It records stack traces. Use the environment variable `RUST_BACKTRACE=1` to print them.


I assume you mean in the example at the top of the page.

In theory, a view function should just take a request object and return a response object that encodes 200 OK if all went well, and something wilder like 404 Not Found or 500 Internal Server Error if something goes awry. In practice, problems can happen at a much lower level than HTTP error codes are designed to handle, like "database connection refused" or "template file not found".

Rust's general-purpose error-handling system is the Result<T, E> type, where T is some useful return type, and E is some type representing an error. A function that does error-handling is declared as returning, say, Result<String, MyError>, and then in the body of the function you can "return Ok(somestring)" or "return Err(someerror)".

I see that the example function returns an instance of type IronResult<Response>, which I assume is a wrapper around Result<T, E> that hard-codes E to be some Iron-specific error type (in the same way that Rust's std::io::Result<T> is shorthand for Result<T, std::io::Error>), so the Ok() is telling the framework "this is an actual legitimate response you should send to the browser", as opposed to an excuse for not producing a response.
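That alias pattern can be sketched in a few lines; `MyError`, `MyResult`, and `render_template` here are invented for illustration, mirroring how `std::io::Result<T>` is shorthand for `Result<T, std::io::Error>`.

```rust
// A made-up error type standing in for the framework's own.
#[derive(Debug)]
struct MyError(String);

// The alias hard-codes the error side, like std::io::Result does.
type MyResult<T> = Result<T, MyError>;

fn render_template(name: &str) -> MyResult<String> {
    if name == "index" {
        Ok("<h1>hi</h1>".to_string())
    } else {
        // A low-level failure that no HTTP status cleanly encodes.
        Err(MyError(format!("template file not found: {}", name)))
    }
}

fn main() {
    assert_eq!(render_template("index").unwrap(), "<h1>hi</h1>");
    if let Err(e) = render_template("admin") {
        println!("error: {:?}", e);
    }
    assert!(render_template("admin").is_err());
}
```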


Because hello_world returns an IronResult, which in turn is just a simple Result type from Rust. Note that the status::Ok is different from the Ok(...).

Iron::new expects a function that returns a Result type, so it has to be there.


Also, because it is a Result it forces the caller to either handle the error or explicitly panic on it, right? They can't ignore it? (which is awesome!)


Yes, because it is a Result, the caller cannot access the success value without acknowledging the existence of an error -- via match, unwrap, or try!.

If the caller does not want to use the success value (esp. in cases which return Result<(),E>, i.e. return an optional error), there is a lint which tries to ensure that the error value is handled, though as steve says you can circumvent it (which is done in an explicit way so that it's pretty obvious that there is an error being ignored).


They could do something like

    let _ = something_returning_result();
But without the let, there's a warning. And culturally speaking, let _ is discouraged for this reason. If I want to ignore an error condition, I unwrap() instead, so at least my dumb decision will blow up in my face later instead of failing silently.
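The two choices side by side, with a hypothetical `might_fail` function standing in for anything returning a Result:

```rust
fn might_fail(ok: bool) -> Result<(), String> {
    if ok { Ok(()) } else { Err("boom".to_string()) }
}

fn main() {
    // Silently discards any error: compiles without a warning,
    // which is exactly why `let _` is culturally discouraged here.
    let _ = might_fail(false);

    // unwrap() instead: succeeds quietly on Ok...
    might_fail(true).unwrap();
    // ...but would panic loudly on Err, surfacing the bad decision:
    // might_fail(false).unwrap(); // panics with the error message
}
```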


I think they were talking about the compiler-enforced exhaustive match, not must_use.

The fact that you must acknowledge the existence of the error value before accessing the meaty success value (either with a match, an unwrap, or a try) is a great feature in Rust.

must_use forcing you to handle errors for things you don't need the meaty success value of is just icing on the cake.


It is needed because the handler doesn't return a Response, it returns a Result that wraps a Response if there isn't an error. This provides the means of error handling.


Every time I see new languages and frameworks being used for web applications I get scared.

One reason I am super gung-ho about Golang for an API or website is that it came with the net/http package and the html/template package, and has several fairly well-made open source packages (and standard packages) for secure sessions, CSRF tokens, output filtering, shelling out, and much more...

Without these types of features, people are doomed to build web-apps no more secure than everyone's first, broken, PHP webapp.

I'm not sure what the purpose of a truly systems-oriented backend similar to this[0] is; it kind of scares me. Some have mentioned embedded systems. I don't know if people are using Rust in embedded systems yet. But please be aware of just how wild-west writing a full webapp or API with this would be.

[0] - http://www.gnu.org/software/libmicrohttpd/


> One reason I am super gung-ho about Golang for an API or website is that it came with the net/http package and the html/template package, and has several fairly well-made open source packages (and standard packages) for secure sessions, CSRF tokens, output filtering, shelling out, and much more...

I'd argue that it's not a good thing. net/http is flawed. They should either have done like Node.js and provided something barebones, or provided something with more features. Why provide an embryo of a router (since it supports some form of string pattern matching) without supporting route variables? The result is that people jump through hoops trying to keep some compatibility with the default handler signature while adding features, and it leads to a galaxy of horrible packages, because "net/http is all you need"... no it isn't.

> Without these types of features, people are doomed to build web-apps no more secure than everyone's first, broken, PHP webapp.

But Go's net/http doesn't magically make an app secure either, on the contrary. It doesn't magically put a CSRF token in your forms, or force TLS, or force strong parameters when deserializing form data into an entity to be persisted in the DB, or come with any kind of extensive data validation framework, or force HTTP-only cookies by default... so it's not much more secure than your first broken PHP webapp.

My point is Go isn't magically more secure nor superior to the things you are "afraid" of.


All of the features you mentioned have Rust equivalents in the crates.io ecosystem. So what in particular do you prefer about Go?


What's the CSRF library? I couldn't find one on crates.


Yeah, I guess crates.io is missing that one. It should be straightforward to write though.


But PHP does come with lots of built-in functions for stuff like secure sessions, templates, output filtering, and shelling out! A lot of them aren't really designed well or cohesively, but you could argue that the fact they exist and are bundled with PHP causes people not to look for better third party libraries, which can evolve and don't need to eternally keep all old functions for backwards compatibility. (I have a similar complaint about Python. Python has both urllib and urllib2 bundled with it, and neither is that great, but the number of different ways to install 3rd party Python libraries appears to be ridiculous -- pip, distutils, setuptools, virtualenv, easy_install... -- which led me to just avoid them and settle for the built-in things when I use Python.)

Node has a great package manager for easily using 3rd party libraries, keeping them up-to-date, and avoiding dependency hell between different applications. The package manager is bundled with Node, so all libraries use the same system and people are encouraged to look for (and make) libraries for what they need. Rust also has a recommended package manager: Cargo.


I still haven't found a decent authentication library for Go that you can trust.


You'll never find a decent authentication library for any language that I can trust.


If you want to be pedantic about it then sure ;)

Show me a well-known, battle-tested, mature authentication library for Go? BTW, I was just showing the parent that Go doesn't have everything needed for a secure website/API, as he claims, and that other languages are way ahead of Go at letting the average Joe programmer write secure web services, because of their mature library ecosystems.


By your own statement, why would I bother with Go?

It is equally a new language/framework, hardly used compared to Java, Scala, Ruby, Python, etc. All of these have dozens of well-made open source packages which will be far more full-featured and better understood.

And for me, I would defer front-end security to the web server layer, i.e. Nginx.



