Writing a website in Rust (viraptor.info)
221 points by viraptor on May 30, 2015 | 135 comments


Well, IMHO, "the right hammer for the right nail".

Rust is a great systems language (besides the lack of bitfields) but I wouldn't use it as a web language. Just as much as I wouldn't use C++ to develop webapps.

Sure you can do it but then again there are far easier ways to achieve your goal.


Sure, that wasn't Rust's main goal. I did the app in Rust simply because I had the time and wanted to get more comfortable with lifetimes. (They're in my nightmares now.)

But honestly, I came away surprised at how good the experience was. I wouldn't start a big team project in it (at least not until more common middleware and some higher-level frameworks are available), but when Rust 1.1 or 1.2 lands I'll definitely consider it for HTTP microservices or RPC endpoints.


Some would say Scala is not a "web language" either, but I think Play Framework developers would disagree. The rules of the game are not as clear as they used to be. For instance, the web frontend is being dominated by JS frameworks, and all the backend does is handle REST calls. I would argue that when you need scalability/performance/reliability, when milliseconds start to count, static languages make very much sense, and when Rust can offer performance comparable with C, it looks like a very appealing option.


This may miss your point, but I would easily argue that Scala is a "web language," just as Java is and was before it. Scala's focus on concurrency helps drive this point, as it's not response time so much as throughput given IO constraints that makes a good environment for web applications.


Yeah, I don't really think people would argue Scala doesn't make sense for web dev. I think the main point is that webdev can generally tolerate GC, and it's not worth giving it up.


Out of curiosity (I've never done systems programming), what are some of the things that make a language more suited to systems programming rather than web (other than libraries and community support - I'm more interested in the intrinsic qualities of the language)?


Outside of userland tasks like file manipulation, it is impractical to use the popular web application languages like Java, or scripting languages such as JavaScript, PHP, or Python, for systems programming without employing interfaces or hooks to libraries or programs written in languages compiled to machine code, simply because programs in those languages are commonly deployed on virtual machines or interpreters that have only userland privileges. So good luck writing a driver. And I don't know... if your VM has heap allocated to it, are you able to address system memory outside of that to reach hardware? The language runtimes usually add a lot of overhead as well, and employ automatic garbage collection, which is generally something you don't want in performance-critical systems.


That is not strictly true. Look at Snabb Switch, written almost entirely in LuaJIT, for example: https://github.com/SnabbCo/snabbswitch - you need some kind of FFI to mmap device memory, but it is quite possible. (LuaJIT is fast, of course, which helps for 10Gb Ethernet.)


That's kinda what I was thinking when I commented, and there are sure to be exceptions. But I don't see anything different with your Snabb Switch example. You could probably use Java and JNI to call mmap, for that matter, but would you be able to write an implementation of mmap in LuaJIT?


Yes you can call mmap with the ffi in LuaJIT just fine. It is not very complicated, it just asks the kernel to do it. Implementing what the kernel does, well that does need some assembly.


Indeed we agree. Using any ffi for that matter.


For Prometheus's console templates (basically offering an HTML templating engine to end users to create monitoring consoles with), I used Go's own templating language, as nothing else seemed mature at the time.

Go is strongly typed, and so is its templating language. This means that you need to take a much more rigorous approach to using it, as a single unhandled edge-case value can break the entire page. Contrast this with something like Jinja2, where if you mess up it'll only break a small bit of the page.

I managed to produce a usable template system, but it's still a tad more verbose than I'd like. It'd have been nice if a more scripting-language style templating language had been an option.


> Contrast this with something like Jinja2 where if you mess up it'll only break a small bit of the page.

Do you have some examples of errors that are easy to make with Jinja2? I personally find it to be a thing of beauty, but it's possible that I don't try to make very elaborate things with templating engines and thus might be missing out.


For me it's strong static typing, control over the memory layout of your data, memory locality (arrays are arrays and not linked lists in disguise; you know what goes on the stack and what's on the heap, etc.), support for bit-based operations (which Rust lacks a little, as there are no bitfields), deterministic memory management, and a few things I didn't think of.
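To make the memory-layout point concrete, here's a minimal sketch (the `Packet` struct is made up for illustration) of how Rust lets you pin down layout with `#[repr(C)]` and then verify your reasoning with `std::mem::size_of`:

```rust
use std::mem::size_of;

// Hypothetical wire-format struct. #[repr(C)] guarantees C field order
// and alignment rules, so padding is predictable rather than up to the
// compiler's whims.
#[allow(dead_code)]
#[repr(C)]
struct Packet {
    kind: u8,     // offset 0
    len: u16,     // offset 2 (one padding byte after `kind`)
    payload: u32, // offset 4
}

fn main() {
    // 1 + 1 (padding) + 2 + 4 = 8 bytes, and you can rely on it.
    assert_eq!(size_of::<Packet>(), 8);
    println!("Packet is {} bytes", size_of::<Packet>());
}
```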


Some form of control over memory management. In GC-enabled systems programming languages like Oberon and Modula-3, there will be APIs for controlling memory.

Ability to convert between language types and raw addresses, in the form of pointers.

The toolchain should also provide AOT compilation to both static and dynamic executables/libraries.
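The types-to-raw-addresses point can be sketched in Rust with a toy round trip; this is purely illustrative (no real hardware access), and `read_back` is a made-up name:

```rust
// Toy illustration: convert a typed value to a raw address and back.
fn read_back(x: &u32) -> u32 {
    let addr = x as *const u32 as usize; // language type -> raw address
    unsafe { *(addr as *const u32) }     // raw address -> language type
}

fn main() {
    let x: u32 = 0xDEAD_BEEF;
    assert_eq!(read_back(&x), 0xDEAD_BEEF);
    println!("0x{:X}", read_back(&x));
}
```

The `unsafe` block is the escape hatch: safe Rust forbids dereferencing a raw pointer, but a systems language has to let you do it when you really mean it.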


Super fast, type safe (and thread safe in Rust), compiled, low/no cost abstractions



The benchmarks game's results are not very correlated with real-world performance.

In this context, Rust's semantics do allow C-level performance: they make it straightforward to describe the same machine code that C would. In some cases, they even enable more compiler optimizations than C does.

Java, on the other hand, makes it hard or impossible to do that (straightforwardly, at least). A more mature compiler will make a lot of difference; compare early Java performance to where it is now, for example.


>>not very correlated with real-world performance<<

Please share your measurements of "real-world performance" that are "not very correlated" with the benchmarks game's results.

edit: Please provide a better response than downmods.


Servo layout, painting, DOM performance. JSON serialization in serde.


Obviously not in any way an answer to the question asked.

edit: Please provide a better response than downmods.


Go actually check the things pcwalton mentioned instead of complaining.


Please provide comparison measurements of the things pcwalton mentioned.

Just a URL to published comparison measurements would be so much more interesting for anyone interested in Rust, than downmods.



How about some measurements made with rustc 1.0.0?

(There seem to have been plenty of breaking language changes in the last 6 months.)


http://xania.org/201505/on-rust-performance

(The "faster" bit isn't important; various reasons might have made C++ slower than Rust for this, but what is important is that perf was comparable)


Do you think a 100 line ray tracer is any more "correlated with real-world performance" than 100 line benchmarks game programs?

What I questioned was Rusky's technical-sounding but apparently unsupported claim that "the benchmarks game's results are not very correlated with real-world performance." For Rusky to know that, he'd need some kind of "real-world performance" measurements and a correlation coefficient, neither of which has been shown.


We've been focused more on language semantics than the Benchmarks Game for a while now. A few months ago, iirc, we were faster than C on some of them.


It looks like the worst offenders (vs C) are regex-dna (which is a young regex + unicode vs pcre + ascii) and fasta (where the rust version is still single-core). The others are at 1-2x cpu and on par for memory. I suspect most of them could be on par or better with a little effort - only regex looks hard.


> only regex looks hard.

I'm working on improving the regex engine now. Hopefully I can submit some improvements soon. :-) It will be hard to get near the top, but I think there's some low hanging fruit I can tackle first.


Let's not make this about the benchmarks game.

A couple of weeks ago you said -- "… optimizing performance hasn't been a focus, shipping good language semantics has been."

https://news.ycombinator.com/item?id=9565405

edit: Please provide a better response than downmods.


Optimizing language performance has been a focus.

Optimizing the workloads tested on the benchmarks game has not been a focus.


You seem to have a disagreement with what steveklabnik said, so please take that up with him.

https://news.ycombinator.com/item?id=9554676

edit: Please provide a better response than downmods.


Please provide better manners than those in your comments. :)


It isn't bad manners to ask Rusky or pcwalton to show measurements in support of their claims.

It isn't bad manners to trust that when steveklabnik answers "No, as optimizing performance hasn't been a focus, shipping good language semantics has been." that's what he means.

https://news.ycombinator.com/item?id=9554676


Okay, so I just tested the n-body benchmark [1] for Rust and C++ (program #8), and where they list runtimes of 24.62/9.4 (a factor of 2.62), I actually got a factor of 1.25 (5.45/4.35) on my machine [2]. I used the same version of Rust and nearly the same version of gcc (4.9.1 vs 4.9.2).

Conclusion: Don't draw conclusions from the benchmarks game.

[1] http://benchmarksgame.alioth.debian.org/u64q/performance.php...

[2] i5-3470 @ 3.20GHz


Conclusion: "Measurement is highly specific … Measurement is not prophesy." (benchmarks game homepage.)


> what are some of the things that make a language more suited to systems programming rather than web

The answer was about what qualities make a language suited for systems programming, not anything specific to Rust.


I think, seriously, the intrinsic thing is that the language itself doesn't marshal (right word? not sure) anything that's particularly fat. The classic case is whether allocating memory requires dealing with a general-purpose garbage collector. If that's the case, you're pretty much dead as far as systems stuff goes, because now you can't predict how long anything will take to run.

And the minimal libraries usually need to be lightweight as well, or at least such that you can strip out what's not wanted. Here again, garbage collection becomes an issue: if the language supports it natively, then it's going to be impossible in practice to prevent the libraries from using it for something unexpected.

The last bit is escape. There are always some cases where the language doesn't have the ability to implement a critical bit of stuff, and you need a way to seamlessly mess with that something. Simply put, you need to be able to insert a bit of arbitrary code and have it work properly.


> If that's the case you're pretty much dead as far as systems stuff goes because now you can't predict how long anything will take to run.

Erlang would argue with you. GC works fine as long as you have a bunch of really tiny heaps (attached to a bunch of really tiny processes) which can be collected in parallel. It just sucks for Unix-process-sized heaps.


Standard library - look at PHP, it's all about the web!


Plenty of people use C++ for web apps, and it doesn't take a good coder a lot of time to build them. Those apps also run crazy fast compared to Python apps, even on embedded boards. Plus, they support every security, speed, or reliability-enhancing technique ever developed for native code.

So, I'd rather use a higher-level language (and do) for most web-application development. I also love Python for its tradeoffs, esp productivity & readability. Yet, there are valid use-cases for native web apps esp where performance or memory-usage matters.


I think I saw an article about people using C++14 as a scripting language, since inferred types, closures, and other syntax niceties can make simple C++ as short as Python.


"As short as" doesn't necessarily mean "can be written as quickly as". I write a lot of C++ at work, and it's usually quicker to write a Python script to do glue type work.

There is another advantage: I mostly write Python for the 'glue' scripts in large build systems. It saves some complication if your build system doesn't first have to compile chunks of itself!


> "As short as" doesn't necessarily mean "can be written as quickly as".

Meaning, C++ semantics still require more care to have a running program, even a low-LoC one?


Bitfields are evil... the fact that the language designers left them out signals they have good taste... I have to get around to trying out Rust.


No, ill-defined non-portable C bitfields are evil.

If you can specify bit endianness, byte endianness, spans of bytes, and so on... bitfields are mighty nice.


But the thing is... the spec is ill-defined in C, and given that C runs on almost any processor, you can't specify endianness in the abstract, and pandemonium ensues.


Whenever someone writes bad things™ about some language, one of the language's fanboys shows up and starts nitpicking! Yes, now is the time!

> Another bad part is Rust’s JSON handling. It badly needs macros which make things easier.

Actually this is not the end of the world. As you mentioned, Rust's JSON library supports `ToJson` for primitive types, but also provides compile-time code generation for arbitrary `struct`s. Quoting the code from my project:[1]

  #[derive(RustcDecodable, RustcEncodable)]
  struct Msg {
      cmd: String,

      id: i32,
      x: Option<i32>,
      y: Option<i32>,

      speed: Option<i32>,
  }
What the mystical `#[derive]` thing does is direct the compiler to create the boilerplate for converting the `struct` to/from a string automatically. So now you can do this:

  // Decoding a JSON string (json::decode returns a Result, hence unwrap())
  let text = r#"{"cmd": "move", "id": 1, "x": 5, "y": 10, "speed": 200}"#;
  let decoded: Msg = json::decode(text).unwrap();

  // Encoding a Msg object to JSON (encode takes a reference, returns a Result)
  let msg = Msg { cmd: "new".to_string(), id: 1, x: Some(10), y: Some(20), speed: None };
  let encoded = json::encode(&msg).unwrap();
The `Msg` struct is full of `Option`s because my project allowed many fields to be missing, but if the JSON messages in your protocol are in fairly similar format, you can eliminate most of them.

One more benefit of this approach is that it type checks. If the JSON you received is missing some fields that are not defined as `Option`, the decoding process produces an error! So you can be certain that you're handling valid JSON after the decoding stage. This is analogous to schemaless vs. schema-enforced database design.

> Compared to many static languages, the handlers look tidy. Compared to dynamic languages, they’re terrible.

This is the property of the Iron framework, not of the Rust language itself! Another web framework for Rust, namely Nickel.rs[2], provides much simpler APIs.

I believe the author of Iron is trying to establish essential things first, and build more "user friendly" APIs on top of them. (FWIW, the Rust project itself follows a similar strategy.)

[1] https://github.com/barosl/pgr21-online-server/blob/master/sr...

[2] http://nickel.rs/


> Actually this is not the end of the world. As you mentioned, Rust's JSON library supports `ToJson` for primitive types, but also provides compile-time code generation for arbitrary `struct`s. Quoting the code from my project:[1]

Pardon my self-promotion: if you just need some one-off JSON and are willing to use the nightly compiler, I wrote a compiler plugin that expands JSON-like literals into the tedious expression for building up a JSON object that you'd otherwise write by hand.

https://github.com/tomjakubowski/json_macros

I find it's super handy for testing libraries or applications that emit JSON meant to go across some well-specified protocol. Some day I'll get around to expanding the library to support pattern matching with JSON literals as well (pull requests very welcome!).


I haven't actually gotten a chance to use Rust yet, but would something like this work?

    #[derive(RustcDecodable, RustcEncodable)]
    enum Msg {
        Basic { cmd: String, id: i32 },
        Positioned { cmd: String, id: i32, x: i32, y: i32 },
        Speed { cmd: String, id: i32, speed: i32 },
        Full { cmd: String, id: i32, x: i32, y: i32, speed: i32 }
    }
That would allow you to prevent weird cases like x being provided but y not being provided.


Yup, you can encode enums just fine. ADTs get encoded with an explicit `variant` tag at the front, and then an array of field values. For e.g:

    { "variant": "Basic", "fields": ["cmd-value", 42] }
    { "variant": "Positioned", "fields": ["cmd-value", 42, 0, 0] }
    // etc ...


> Actually the compiler checks cover most of the things I’d normally unit-test, so this is probably the only non-trivial project I wrote without checks and I’m OK with that.

This is interesting. The first time I read about Rust I had this gut feeling that its design might reduce the number of unit-testing cases. Would someone care to comment on that?


Just to add some context, here are the tests which I didn't write, but would in Python/Flask usually:

- Proper behaviour if I don't pass a parameter. Not needed, because it's an explicit `Option<>` which I have to handle.

- Is results list constructed properly / what's the None-vs-empty behaviour. Not needed, `Vec<>` is verified at type level and None is not possible.

- What happens if various database functions don't find a result. Not needed, because type signatures force either a returned `Entity` (if it's not there, it's an implementation bug: panic + 500), or `Option<Entity>` which again needs to be explicitly handled.
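As a sketch of that last point (the `Entity` struct and `find_entity` function are hypothetical names, not from the actual app), the `Option<Entity>` signature makes the not-found case impossible to forget:

```rust
#[derive(Debug, PartialEq)]
struct Entity { id: u32, name: String }

// Hypothetical lookup: the "may be absent" case is in the signature itself.
fn find_entity(id: u32) -> Option<Entity> {
    if id == 1 {
        Some(Entity { id: 1, name: "first".to_string() })
    } else {
        None
    }
}

fn main() {
    // The match must be exhaustive: forgetting the None arm is a compile
    // error, which is exactly the unit test you no longer have to write.
    let status = match find_entity(42) {
        Some(e) => format!("found {}", e.name),
        None => "404".to_string(),
    };
    assert_eq!(status, "404");
    println!("{}", status);
}
```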

But these are only unit tests. If it was a critical app, I'd still write functional / logic tests to make sure the right data goes all the way to the database and back in known scenarios. Types can't guarantee that.

One kind of test I'm tempted to write is for templates rendering properly, because handlebars-iron takes parameters as Json - all type guarantees go out of the window there. But it may be the same amount of effort to migrate to some type-safe templating like Maud.


I've had a similar experience writing web-apps in Scala.

The Option type alone handles so many cases which would otherwise require unit-tests in Python or another unityped language.


I'm a Python/Flask dev who has knowledge of the basics of Scala. Which web framework and ORM would you recommend to develop a (potentially) non-trivial web app?


If you want to use Scala, Play Framework 2.4 + Slick 3.0 would be the deadliest combination in my opinion.


Seconded. I'd certainly recommend Play Framework.


This is true IME. There are a lot of things you don't need to check for. Some are guaranteed by Rust, others by the APIs themselves; Rust gives you a lot of powerful tools for designing APIs with static checks. For example, if you return a Result<T>, you don't need to check in your tests whether the programmer forgot to handle a failure mode. The mode will be handled, or there will be an explicit panic.

Rust is harder to get to compile, but a lot of your boilerplate testcases go away. And if you design your API right, even higher-level guarantees can be provided statically.
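A minimal sketch of that `Result` pattern (the function name is made up for illustration):

```rust
// Hypothetical API: the failure mode is part of the signature, so callers
// must handle the Err arm (or explicitly unwrap) before using the value.
fn parse_port(s: &str) -> Result<u16, String> {
    s.parse::<u16>().map_err(|e| format!("bad port {:?}: {}", s, e))
}

fn main() {
    assert_eq!(parse_port("8080"), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
    println!("{:?}", parse_port("8080"));
}
```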

A lot of times when writing Rust code in Servo I'll just ensure that stuff compiles, and then run the tests once before making a pull request. When contributing to python codebases I generally make smaller, incremental changes and run tests.

Edit: Just to clarify, this doesn't mean you should go all #yolo and abandon testing entirely. But (a) you can do so temporarily without adverse effects, and (b) once you start, you'll find that you only need to focus on the higher level tests.


I've dabbled with rust, but did quite a bit of stuff in haskell.

A lot of unit testing is verifying failure modes. What happens if they pass in null? What happens if they ask for a value I don't know about? Stuff like that. Haskell (and Rust) give you a lot more control over what a function is willing to accept at compile time.

You could spend your time writing a test to ensure null is handled gracefully. With Rust, you have a bit more power: you can simply ensure the function can't be called with null. It's more general than just null checking, but that's the flavor of what happens.


That's my general feeling as well. Coming from Scala, the transition into "dabbling" with Rust was actually quite smooth. I try to stick to a functional style where possible - exhaustive pattern matching, everything wrapped in Result/Option, etc. - which eliminates the need for unit testing in many cases.


Thanks for the detailed write-up. You are a pioneer in the rust webdev space and have created a map (with the dragons labeled) for others :)


Haha, I didn't intend to go for detailed :) I'm pretty sure there's a lot more to tell about the details. I just wanted to at least list all the modules I've been looking for / using in this webapp. Thank you for the feedback.

I intend to write another post with some very basic skeleton / hello-world of a new Iron application using database with connection pooling, templates, logger, parameter parsing. But that's for another day.


Please do so, and tell HN. Many of us are paying attention.


I think D has shown that you can get a nice web stack in a compiled systems programming language:

http://vibed.org/

Lots of packages for it on code.dlang.org as well.


Interesting project. If Rust is to replace C++, then we need to see it exercised in all domains C++ has succeeded in. As I told another commenter, C++ with good frameworks does very well in web applications albeit with fewer libraries & utilities. A team of Rust programmers could catch Rust up to one of the C++ web frameworks and do a shootout between the two on realistic web apps. The results in terms of efficiency, productivity, and maintainability for each should tell us plenty about Rust's success in terms of its objectives.

That experiment among others, of course.


Regarding compile time, you can track the passes with `-Z time-passes` and wait for borrowck/lints to get over (or just typeck if you're only worried about types).

There was also this plugin: http://www.reddit.com/r/rust/comments/2krdbu/rest_easy_a_lin... - but it's outdated at the moment (I can upgrade it later).

Regarding imports, `use foo::bar::*` works.


For `use`, I meant that there are so many packages that you use up space just listing them. For basic Iron you need: iron::prelude::*, iron::status, staticfile, logger, router, handlebars_iron, and more. And these are all from different crates.


This is the same in even many dynamic languages, though, isn't it?


Almost. In Rust, to use a trait you have to explicitly import it. That means if you want to call `.to_json()` on something, you need to import the `ToJson` trait. Then again, if you want to force the type of something (for example, an empty hashmap), you need to import the types involved.

So what in a dynamic language could be:

    a.to_json() if a else {}.to_json()
In rust will start with:

    use rustc_serialize::json::{ToJson,Json};
    use std::collections::BTreeMap;
    ...
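
The same import rule applies to standard-library traits, which makes for a self-contained illustration (the `render` helper is a made-up name): `write_all` on a `Vec<u8>` only resolves once `std::io::Write` is in scope.

```rust
use std::io::Write; // delete this import and write_all() below won't compile

fn render(msg: &str) -> Vec<u8> {
    let mut buf = Vec::new();
    // Vec<u8> implements the Write trait, but the method is only visible
    // because the trait itself was imported above.
    buf.write_all(msg.as_bytes()).unwrap();
    buf
}

fn main() {
    assert_eq!(render("hello"), b"hello".to_vec());
}
```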


I'd love something similar to be part of Cargo, maybe as an option in the manifest. Like the author, I've learned to guess when the codegen starts, but having this information displayed would be useful.


I'm actually working right now on a `cargo check` command which only runs those phases of the compiler to do with typechecking, to accommodate workflows based around tweaking types and then running the compiler to check your work. Given that the vast majority of compilation time is currently based in code generation and linking, this should drastically improve usability for this sort of rapid-iteration, dynamic-language-style workflow. (Though also note that improving the speed of codegen and linking is an ongoing task as well.)


This is a great idea. I wish other compilers would offer this feature (scalac is in dire need of such an option).

Does the Rust macro system require compile-time compilation before type-checking?


Not really. Macro expansion occurs during its own phase of compilation (which, in my experience, is generally really fast).


Macro expansion can lead to programs that don't typecheck, unless a very restrictive typing system is used (e.g. MetaML, MetaOcaml). I don't think Rust has such restrictions, therefore I assume that type-checking happens after macro expansion.


It does, but what I was getting at was that full compilation doesn't need to occur first. Macro expansion is one of the very first phases of compilation (and doesn't have access to typechecking information, incidentally).


So Rust macros don't offer full compile-time meta-programming?


Not in Rust 1.0. In the nightlies there are syntax extensions available that give you more power, but those are likely to see significant revisions before they're available in a stable version since they're a major backwards compatibility hazard.


Who's working on this? I may have something to contribute in this direction.


Not something being actively worked on, but I and some others care about it.

Rust macro-esque things are of three types:

- Macro by Example (MBE): These are easily defined by the user via `macro_rules!`, which can match on their input and expand to some output at compile time. These don't need to be defined as a plugin; you can define a macro directly in your code.

- tokentree expansion plugins: These take in a token tree, run arbitrary code, and output an AST (syntax tree) node. Ish.

- AST expansion plugin: These take in the parsed AST, run arbitrary code, and output another AST to replace or augment it.

There also is support for custom lints and llvm passes.

All of this is at compile time.
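To make the first bullet concrete, here's a tiny macro by example (the macro itself is a made-up illustration):

```rust
// Macro by example: `macro_rules!` matches the input token pattern and
// expands to ordinary code at compile time - no compiler plugin needed.
macro_rules! max2 {
    ($a:expr, $b:expr) => {
        if $a > $b { $a } else { $b }
    };
}

fn main() {
    assert_eq!(max2!(3, 7), 7);
    assert_eq!(max2!(10, 2), 10);
    println!("{}", max2!(3, 7));
}
```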


I'm not sure what a token tree is. Usually compilers feed a token list to the parser, which then generates an AST. The AST is what compile-time meta-programming such as macro expansion works on.

In fully-blown compile-time meta-programming systems such as Converge or Template Haskell, the "AST expansion plugin" can and typically does itself invoke the compiler to compile arbitrary code which in turn outputs a new AST for further compilation.

I'm not fully sure what macro-by-example is.


I'm not sure who specifically is working on improving the macro system, but it's definitely something that's been prioritized. My primary recommendation for contacting the relevant people would be either the Rust internals forums (https://internals.rust-lang.org) or the #rust-internals channel on irc.mozilla.org.


Very nice write up. As a Go programmer, Rust is really interesting. Can't wait until you can build web APIs with it.


Crates.io is built this way: Rust serving JSON on the back end, Ember consuming it on the front end.


How mature could this setup be considered? Thanks for pointing it out.


As mature as anything for a language which just hit 1.0. Alex, the main developer, said he likes it a lot. https://m.reddit.com/r/rust/comments/2v1fe3/hows_rust_workin...

I think it's a pretty good architecture overall. We'll see how it all shakes out.


What kind of database does crates.io use?


Postgres: https://github.com/rust-lang/crates.io

It also stores the index via git and S3...


I've been using Typed Racket for backends: https://www.classes.cs.uchicago.edu/archive/2014/fall/15100-...

Recently, where I work, we switched to Golang for any network-facing daemons and it's been awesome. Rust was deemed 'too alpha' to try yet.


The tooling for the web written in JavaScript is excellent, so I was curious: are there any options for calling JavaScript from Rust code? I know there is the ffi module for Node.js that allows you to call Rust from JavaScript, but I was wondering if there is something that works well in the other direction.


Embedding JS in Rust by using V8 (as kibwen has mentioned) would be no problem, but using Node.js modules might not be so easy. It's like calling Ruby gems from C.

What do you mean by "tooling for the web"? If it's preprocessors like LESS (that are written in JS), you could run the JS in V8 to transform the CSS efficiently, but maybe the compiler "binary" (script) is enough. In this case it's the best approach, since performance will be good, and you don't have to reinvent everything.

But if it's something like express.js, it's probably not worth it.

It's surely possible somehow, but in a pretty hacky way and at the cost of high overhead and complexity caused by calling from a static language to a dynamic one.


I once experimented with this, and it worked. I wanted to do it because I needed a decent and fast CommonMark library, and Rust didn't have one. (Yes, I tried to embed JavaScript in Rust just to parse some Markdown text!)

There are two libraries that embed V8 in Rust, but both seem to be abandoned for now. However, if there's enough interest, a new project will arise, I think.

(FWIW, I ended up calling a CommonMark library written in Python from Rust, because embedding CPython was easier than embedding V8. Unfortunately, CPython's performance is worse than V8's, but as I know Python better than JavaScript, it was a reasonable choice.)


You know there's a CommonMark reference implementation in C, right?

https://github.com/jgm/cmark

I haven't tried it, but I would expect it would be easier to link with Rust than an entire other language VM.


Oh, yes. I had also tried that, but found some convenient features missing, e.g. line breaks treated as `<br>` (not two consecutive blank lines), GitHub-flavored tables, and the strikethrough syntax. I'll have to wait until the official CommonMark spec includes them.

Also the third-party CommonMark libraries are usually easier to extend, with your own syntax addition.


It's an interesting question. You can embed V8 in Node, so theoretically you could also embed the entirety of V8 in a Rust program just as you could embed Lua or Python in a Rust program. But just because it's possible doesn't necessarily mean that it's easy... it would be a fascinating project for someone adventurous. :)


Hey OP, have you tried the other "future" languages like Go or Elixir? If so, how do they compare with your experiences in Rust?


No, sorry. I did some Erlang (directly, not Elixir), but not for websites. I really missed strict types and compiler validation there. I know there's Dialyzer and others, but it's just not the same level of comfort.

I've done one website in C however and can tell you that Rust is way more predictable ;-)


63 dependency crates? That's insane.


Cargo makes it trivial to add dependencies to a project (which themselves may have their own dependencies), and Rust has been designed with versioning in mind from the start so that even incompatible versions of the same dependencies can exist in your dependency tree without any problems whatsoever. 63 dependency crates isn't insanity when your tools take 100% care of them for you. To me, a large collection of dependencies represents a proper and fine-grained separation of concerns.


Last I checked, a 'rails new' gives you over 40 dependencies off the bat.


Yes, the Rails and Node communities are known for insanity.


I put all node functions I might want to re-use in other projects on NPM by themselves (though this also usually pushes me to make a lot of unit tests for each of them, which is a nice benefit). I'm not really clear on what's insane about re-using a lot of small modular parts. This is way better than copying and pasting functions between codebases, and then trying to keep them (and their tests) all in sync.


Node/npm is terrible. Look at Popcorn Time: this client app has over 4000 files in node_modules, most of them totally unneeded, and many of them tiny, covering things that should be in a stdlib or rolled together with others.

I think it's the same kind of folks that create a huge Java object model, always one file per type. Gives the feeling of being big and doing real work, even when you're not.


Let's please not look to Rails to determine the way forward for web development.


I don't see anything wrong with that, particularly when they all get linked into one executable in the end.


So you're saying code reuse is bad?


Yes, it's like anything else: the dose makes the poison.


I'm not sure what that actually means.


The Rust community has tended towards lots of tiny libraries. Hey I made this cool function -> library.


When you need to do 63 things... maybe you need 63 dependencies


Here's something that most of you will probably learn the hard way: popularity and merit are not the same thing.

Rust is very popular now. That doesn't mean it has a great design. One of the very first design decisions/beliefs they adopted was "no code can perform with garbage collection". Well, garbage collection does cause quite a few performance problems, but that doesn't mean it can't work if you engineer it right.

Personally I think Go and Nim are better languages as a general statement, both for applications and systems programming.

I know I am going out on a limb making a negative statement like that, and Nim developers have previously tried to discourage me from saying negative things about Rust. Those guys are smart and have social skills, and they realize they should be careful to be nice to Rust developers: sophisticated developers with a C or C++ background, looking for things like performance and type safety, are really going to benefit from Nim if they give it a chance, and Rust developers are prime candidates for Nim conversion.

So let me just say clearly that I don't associate with the main Nim community, the core developers, or anyone really. Nothing I say here reflects on them I hope.

I literally have no friends in fact.

I don't understand people, or pay much attention to them, or interact with them very much. When I do interact I say what I really think, not what people want to hear.

What I DO understand is technology. From a very early age I have been programming in everything from different types of assembly language to C to C++ to Ruby, Javascript/CoffeeScript/ES7, different variants of SQL, Rust, Forth, OCaml, etc.

Mozilla is a leading technology organization with many contributors. Rust is a new technology with quite a few people innovating on it.

Unfortunately, Rust starts with a bad design decision and never really recovers from it. Its values and assumptions lack a modern, contemporary perspective.

The Rust worldview is trapped in the C++ era.

I have been primarily a JavaScript developer both on the front and back end for many years now. Why? Because I like to make useful applications and I am sane and paying attention.

But in a world where people were better informed and had better judgement and the best things won out rather than the most popular, Rust would be an obscure language being toyed with by a few academics for (perhaps?) implementing certain parts of kernels, the new browser from Mozilla would be fully peer-to-peer capable (using IPFS/gittorrent/swarm/ndn etc), written in Nim of course, use JIT/VM/something Nim for scripting, and have thrown out JavaScript/ES6/ES7 AND CSS for good. Of course, in this ideal world there would be no Google monopolizing all advertising and capturing most good engineering talent, since semantic markup and p2p query would also be built into this browser, making Google irrelevant. The default mode for the browser would be virtual reality, with the ability to render 2d operating systems or markdown/images on arbitrary 3d surfaces.

But instead we have the world we have. All advertising must follow the dictates of a giant all-powerful global corporation. Mozilla is trapped in a pseudo-C++ mindset and spending most of its efforts trying to reproduce the intractable mess of decades of CSS hacks, in the time the engineers have left after going through in excruciating detail all of the possible ways to 'borrow' memory. CSS: a brilliant system that is a pain in the ass not only for programmers but also designers, so complex that only two computer programs in the world are known to do a reasonable job of rendering it accurately.

It's time to stop dragging the old tools, mindsets, and technical debt forward. Stop judging things on the basis of momentum or authority (Google/Mozilla/Microsoft) and start using your brain to select things rationally.

Mozilla as an organization is not going to be capable of admitting they made a mistake with Servo/Rust and recoding it in Nim. Just like the world is not going to accept that we should throw out CSS and use any other simple system that can reproduce graphic designs. And we are not getting rid of JavaScript, with its horrible threading model and garbage collection, anytime soon. But we should. In a sane world we would learn from our mistakes and throw all of that out.


> The very first design decision/belief they made was "no code can perform with garbage collection". Well, garbage collection does cause quite a few performance problems, but that doesn't mean it can't work if you engineer it right.

That's not Rust's philosophy. Garbage collection is great—when it makes sense to use it. Rust is designed to make garbage collection optional, following the way systems software has been designed for decades.

> Mozilla as an organization is not going to be capable of admitting they made a mistake with Servo/Rust and recoding it in Nim.

I've already gone over multiple times why Nim would not be a good fit for Servo (which is not to say that Nim is a bad language, just that it would not be a good fit for Servo).


If you're going to write a block of text like that, you could at least fact-check the main premise. Rust had GC. They started off with GC, so it wasn't the very first decision that "no code can perform with garbage collection".

Here's a post about removing GC from core rust 2 years ago: http://pcwalton.github.io/blog/2013/06/02/removing-garbage-c...

Next: they didn't kill GC. They removed it from core, because Rust is capable of having GC implemented as a library. The standard library still has two GC-enabled pointers, Rc and Arc. You can use them in current code to have garbage-collected values.


Rc and Arc aren't exactly GC types... they are just reference-counted pointers, Rc being akin to C++'s shared_ptr. I get that in a sense, this is garbage collection, but certainly not full-featured like the ones you find in other languages.
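For anyone unfamiliar with Rc, a minimal sketch of the reference-counting behavior being discussed (modern syntax; the 2015-era API spelled some of this slightly differently): cloning the handle bumps a count instead of copying the data, and the value is freed deterministically when the last handle drops.

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(vec![1, 2, 3]);
    let b = Rc::clone(&a);            // bumps the refcount; no deep copy
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b);                          // refcount back down to 1
    assert_eq!(Rc::strong_count(&a), 1);
}   // last Rc dropped here; the Vec is freed immediately,
    // with no tracing pass and no collection pauses
```

Unlike a tracing collector, this frees memory at a statically knowable point, which is part of why people hesitate to call it "full" GC.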


You can use full GC in Rust--we use the SpiderMonkey GC to collect Rust DOM objects, for example. It's not the most easy-to-use thing, however.

Most systems software gets by fine with a combination of thread-safe and thread-local RC. Reference counting is a form of garbage collection that works really well when it's used only for the subset of data that needs GC--which is the style that Rust encourages anyhow.


Hmm.. Rust was created to support the new browser, browsers mainly manage DOM objects.. I wonder why you are using a legacy garbage collector in order to handle primary tasks rather than having that part of your core design?


SpiderMonkey's GC is very modern, it's not legacy at all. Furthermore, while Servo influences the design of Rust, it does not dictate the design of Rust (or else, for example, Rust would have had struct inheritance years ago).


SpiderMonkey uses a generational GC with some kind of compacting these days. Why do you think that's legacy? What models are strictly better than that?


Nim's garbage collector is better than Spidermonkey's.


Irrelevant here.

For Servo we need a javascript engine. We're already using Spidermonkey. It has a GC for Javascript; we're already paying those costs. The Rust-side representation of DOM objects is also managed by the GC; that makes sense because these are tied to Javascript things.

We don't use the GC elsewhere. I think at some point we did, but the only place now in Servo where GC is used is where the data is strongly connected to data already managed by the SM GC.

Nim's GC is for general-purpose use in a language. Spidermonkey's GC is for GCing javascript, which already has an extensive runtime (which the GC ties into heavily). "Nim's GC is better than Spidermonkey's" is a statement of no value (and oversimplifies the situation) unless the context is specified. Using the SM GC to collect random Rust objects would be a bad idea. Somehow rigging up spidermonkey to use a Nim-like GC in Rust (all other things being the same) would also be a bad idea. Two different scenarios, two different GCs.


No, it's not. Generational, incremental, precise GC generally outperforms non-generational, non-incremental, conservative deferred reference counting.


Because rewriting Spidermonkey is both tedious and doesn't benefit from Rust safety guarantees.


Refcounting and tracing are two different forms of GC, but you're right in the sense that most people mean tracing.

At the same time, we're putting a lot of thought into how to properly add an optional tracing GC. It's important that it doesn't impact the no-GC case, which is still, of course, primary.


I'm glad to know there is ongoing work on a tracing GC. Rust has many strengths aside from lifetimes and ownership (algebraic data types, sane generics, strong module support, very strong type system), so a few features to make it more usable for use in contexts where performance isn't as important as expressivity would be very nice to have.


IMO, "contexts where performance isn't as important" aren't very relevant to Rust (hence why I'm strongly against, for example, hardcoding a global GC into the language, or splitting the language into a GC'd and non-GC'd half). But I do understand why some people would like to use the same language for all these use cases, I suppose.


I don't think the primary use case for a GC in Rust would be "contexts where performance isn't as important" so much as another tool for lifetime management.

The current reference counted types don't just get used for convenience, they get used because they describe the actual life cycle of the data they contain. A tracing GC would be similar.
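A hypothetical illustration of that point: when two handles genuinely co-own data and neither statically outlives the other, Rc (plus RefCell for mutation) describes the actual ownership structure rather than being a convenience escape hatch.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // A log co-owned by two writers; neither writer's lifetime
    // statically encloses the other's, so a borrowed &mut Vec
    // could not express this shape.
    let log = Rc::new(RefCell::new(Vec::new()));
    let writer_a = Rc::clone(&log);
    let writer_b = Rc::clone(&log);

    writer_a.borrow_mut().push("from a");
    writer_b.borrow_mut().push("from b");

    assert_eq!(log.borrow().len(), 2);
}
```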


Actually, std::shared_ptr is more like Rust's Arc type, because std::shared_ptr's reference count is updated atomically, and Rust's Rc is not.
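A small sketch of that distinction (modern syntax, for illustration): Arc's count is updated atomically, so handles can cross thread boundaries, whereas the same code with Rc is rejected at compile time.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(vec![1, 2, 3]);
    let handles: Vec<_> = (0..2)
        .map(|_| {
            let data = Arc::clone(&data); // atomic refcount bump
            thread::spawn(move || data.iter().sum::<i32>())
        })
        .collect();
    for h in handles {
        assert_eq!(h.join().unwrap(), 6); // each thread sums 1+2+3
    }
    // Swapping Arc for Rc here fails to compile: Rc is !Send,
    // so it cannot move into the spawned threads.
}
```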


I have to disagree. I know what you mean in practice: the GC is not the default, and some types will always be different. But reference counting with destructor-bearing types is a valid implementation of GC.

GC is "just" automatic memory management. Rust is not a garbage-collected language, but it does have optional garbage collection available.


Nim's GC uses reference counting, so I'd say it's at least a fair comparison: http://nim-lang.org/docs/gc.html


Ok thanks. It was not the first decision then, just one of the early main design decisions. I edited the text to reflect that.


It was not an early main decision. Rust has been in active development for a full decade now; GC was removed from the stdlib two years ago.

The decision was only made at the point when the devs were confident that the ownership/borrowing system could handle all the normal workloads done by GC, and that GC could instead be implemented well as a library.


You're getting downvoted because people who have spent a lot of time engineering and building don't like it when people drive-by with hypotheticals. You have some neat ideas; show us we're wrong by writing code and profiling. Not claiming that something that was published earlier this week (gittorrent) is how the web (going on 23+ years) should be rearchitected. Claiming you don't get along with others doesn't give you a free pass to shit on other people's work.

As a side note; it's really easy to see the wrong in things. It takes more work to try to find the good in things. You might find that you're better able to connect with people by being more optimistic.


While I agree with some of your arguments, namely that "popularity and merit are not the same thing", I think saying that Mozilla is "trapped in a pseudo-C++ mindset" and that its new browser would ideally be written in Nim is plain wrong.

The fact that using Rust to write a simple website isn't very handy doesn't mean it's a bad language. Web development simply isn't its primary focus: this is a systems programming language we're talking about. The mere fact that it is being considered for writing web apps is impressive since Rust is supposed to be a better C++, not a better Ruby, Python or Node.js. Rust mainly emphasizes performance and safety. This means all the 'magic' that happens in a dynamic language is exposed to the programmer, and has a cost: the code is more verbose, and seemingly simple things are more complicated.


The idea that you can't get something good without a trade-off involving something bad is a false belief, an over-generalization that is very popular among people from all walks of life, including, unfortunately, technologists. Trade-offs do exist, but even a sword in fact has many more than two aspects.


OK, but we know precisely what the downsides of GC are (for some applications).



