
As a JS dilettante, I found this incredibly surprising.

    const data = { foo: 42, bar: 1337 };

> ...can be represented in JSON-stringified form, and then JSON-parsed at runtime:

    const data = JSON.parse('{"foo":42,"bar":1337}');

> As long as the JSON string is only evaluated once, the JSON.parse approach is much faster compared to the JavaScript object literal, especially for cold loads.
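A rough way to see the effect for yourself (a sketch, not a rigorous benchmark: `new Function` is used here to force a fresh parse on every call, which approximates the one-time cold-load parsing cost; absolute numbers will vary by engine and data size):

```javascript
// A rough cold-load comparison (illustrative; numbers vary by engine).
// new Function forces a fresh parse of the source text on each call,
// which approximates the one-time cost of parsing shipped code.
const obj = {};
for (let i = 0; i < 10000; i++) obj['key' + i] = i;
const json = JSON.stringify(obj);

function timeIt(label, fn) {
  const start = performance.now();
  const result = fn();
  console.log(label, (performance.now() - start).toFixed(2), 'ms');
  return result;
}

// Parse the same data as a JS object literal vs. via JSON.parse.
const viaLiteral = timeIt('literal   ', () => new Function('return ' + json + ';')());
const viaParse = timeIt('JSON.parse', () => JSON.parse(json));

// Both paths decode identical data; only the parse cost differs.
console.log(viaLiteral.key42 === viaParse.key42); // prints true
```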


I hate these microperformance hacks. There’s no way you’re going to have a bottleneck here and you’re making the quality of your code dramatically worse.


I hate that this micro performance hack actually is better - why don’t they just fix this in the JIT so that the literal is just as fast as the JSON parse? Why do we have to think about this?


This is happening way before the JIT phase, at the parsing phase.

And the reason it can't be as fast as a JSON parse is explained in the article: The grammar of JSON is much simpler. The literal parser in normal JS needs to deal with things like:

  const data = { foo: /* something */ 42, [barRef]: 1337 };

so it needs to do more branching than a JSON parser. And yes, you could pre-scan the string to make sure it's not doing anything like that... but that pre-scan probably already costs more than the tokenizer phase of a JSON parse, and it's overhead a JSON parse doesn't have at all.
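To make the contrast concrete, here's a small demonstration that the same text can be a valid JS literal while being a hard error for JSON.parse, which only accepts the narrow JSON grammar:

```javascript
// JSON.parse accepts only the narrow JSON grammar; constructs the
// full JS literal parser must handle (comments, unquoted keys,
// computed keys, expressions) are immediate SyntaxErrors for it.
let threw = false;
try {
  JSON.parse('{ foo: /* something */ 42 }'); // unquoted key + comment
} catch (e) {
  threw = true; // rejected immediately
}
console.log(threw); // prints true

// The same text is a perfectly valid JS object literal:
const data = { foo: /* something */ 42 };
console.log(data.foo); // prints 42
```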


It isn’t a JIT issue; it’s a parsing issue.

Also micro performance becomes macro at scale. This is not something that you worry or think about with a handful of literals. It would be good for an inner loop or code generator.


You wouldn't write this in an inner loop, you would do it once in the entire program. It's a literal object that doesn't change.


The runtime can't know whether it's parsing a plain old JSON-compatible object literal until it, well, parses it.


Sounds like some sort of compiler hint like ‘expect json’ or ‘use json’ right above a json literal would be useful. Assuming it matters enough to be worth it.


JSON.parse sounds like a good compiler hint :)


IANAProgrammer but couldn't it do like try..catch and assume it's JSON compatible and fall back to the [much] slower method if the parse fails? Does that cost too much time/resources?
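The proposal can be sketched in userland terms (illustrative only: a real engine would do this internally, and `new Function` here merely stands in for the full JS literal parser):

```javascript
// Sketch of the try/fall-back idea: attempt the cheap JSON parser
// first, and fall back to full JS evaluation only when the text
// isn't valid JSON.
function parseLiteral(text) {
  try {
    return JSON.parse(text); // fast path: JSON-compatible literal
  } catch (e) {
    // slow path: full JS parse (stand-in for the engine's parser)
    return new Function('return (' + text + ');')();
  }
}

console.log(parseLiteral('{"foo":42}').foo);      // prints 42 (fast path)
console.log(parseLiteral('{ foo: 40 + 2 }').foo); // prints 42 (fall-back path)
```

The cost the downstream comments are debating is exactly the failed first attempt: for non-JSON input you pay for the throw before falling back.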


Isn’t that effectively what a backtracking parser does? My naive guess is that’s already roughly how the JS parser works, and that process is what makes it slower than the simpler JSON parser.


That would make all JavaScript that has non-JSON object literals (i.e. most JS code) slower.


How much slower?

We already established it's far slower to parse JS; in most cases a JSON parse attempt would fail - IIRC - on the first character (^[^{]). So would it be worth making everything slower by the time it takes to check that first character, after which only JavaScript starting with { would be any slower? Which I guess is virtually none of it.

I suppose those sorts of questions are part of what makes language design interesting.


While it's possible that there's room for improvement, it makes sense that parsing a subset of JS (JSON) is faster than parsing JS. JSON is just so much simpler.


I don’t think this is meant to be hand-written, though. A bundler/compiler can do this optimization for you.
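As a toy illustration of what such a build step could do (a string-level sketch only; real plugins, e.g. Babel transforms, operate on the AST and check that the literal is actually JSON-compatible first):

```javascript
// Toy sketch of a build-time transform: given plain data known at
// build time, emit source text that ships it as a JSON.parse call
// instead of an object literal.
function toJsonParseSource(name, value) {
  // JSON.stringify twice: once to serialize the object, and once
  // more to turn that JSON text into a safely escaped JS string
  // literal that can be pasted into generated source.
  return `const ${name} = JSON.parse(${JSON.stringify(JSON.stringify(value))});`;
}

const src = toJsonParseSource('config', { routes: ['/home', '/about'], debug: false });
console.log(src);
// prints: const config = JSON.parse("{\"routes\":[\"/home\",\"/about\"],\"debug\":false}");
```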


This did not occur to me but is obvious now that you have said it. Interesting!

I wonder how many such tweaks it would take to be worthwhile for companies to add this as a build step.


I understand what you and OP are getting at, and I'm not saying it's wrong, but it misses the broader point that other posters were making - the exact same point I came to make, only to find myself in good company. Why?!?

And it's not specific to JS or a direct criticism of JS; the issue is becoming endemic. It's happening everywhere now. We shouldn't have to depend on language-specific hacks to make things "work" in that language. JS has gotten so out of hand that we have/need metalanguages like TypeScript, CoffeeScript, Babel (insert buzzwordy JS lang here) that transpile into JS because it's become so unwieldy. All this just to make a simple web app.

The fact that V8 can parse JSON faster than a native JS literal is a compiler problem. Why is it suddenly my problem? It's one thing if it's a bug we have to temporarily work around (these things happen), but now it's becoming the norm. How are people new to the industry supposed to learn something like JS if the only way to be proficient at it is through an endless, ever-changing array of hacks? Especially when it's something as counterintuitive as parsing a string somehow being faster than the equivalent actual code.

The "move fast and break things" mentality that has permeated tech as of late has done just that: gotten us nowhere fast and broken everything along the way.


Javascript is messy but it's everywhere and has gotten significantly better with recent versions. Typescript is just a different flavor offering strong typing and other features, just like Scala or Kotlin can also replace Java if you want.

This particular hack is absolutely nothing you have to worry about, just like esoteric performance tweaks available in any language stack. Write the JS you need and use it. Then profile and optimize as necessary.


You can apply all of the same reasoning to assembly. People did write that by hand at some point. Then they abstracted that away into higher level languages. I’m sure there’s a ton of hackery going on there as well.

It just seems like human nature to paper things over.

It’s an evolution.


I met someone recently who had to do pretty high performance JS to ship webapps targeting feature phones - presumably every bit of performance helps there and as someone else mentioned, the compiler can probably do this bit for you


Sure, but 99.99% of devs don’t need to know this and probably shouldn’t know this lest they decide one day “this will be faster so I’ll just do it this way” not thinking of how awful and confusing it will be for everyone else.


Plus, in some future implementation, the currently faster way could become slower, due to changes in the JS interpreter / JITter etc. At least, I think it could. I'm not a language lawyer or implementer.


Yeah I'm getting a frustrated feeling like the entire ""+"" debate in Java all over again.


And others will copy this pattern without knowing why, so quickly you have these weird patterns all over your code.


It's actually pretty common for single-page apps to have a large amount of configuration data (e.g. all the client-side routes) and potentially a large amount of application data sent from the server-side renderer. And code compilation can be a significant part of the cold-boot time for large single-page apps. I haven't verified, but it's very conceivable that this would significantly speed up time-to-interactivity for a very large single-page app.


Henrik Joreteg has a recent blog post on this: https://joreteg.com/blog/improving-redux-state-transfer-perf...

(note caveats/questions at the end of the article)


> A good rule of thumb is to apply this technique for objects of 10 kB or larger — but as always with performance advice, measure the actual impact before making any changes.

Might be worth investigating if you have a ton of JSON.


> There’s no way you’re going to have a bottleneck here and you’re making the quality of your code dramatically worse

A tweet was going around that this change led to a 30% improvement in time-to-interactive.

If you're dumping Redux state from the server onto the page, you could benefit from it, and it's really only one place, so it's not actually making your overall code "dramatically worse".
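A hedged sketch of that pattern (the state shape and the `__INITIAL_STATE__` name are common conventions, not anything from the article): the server serializes the state once and embeds it in the page as a JSON.parse call rather than an object literal:

```javascript
// Illustrative server-side rendering snippet: embed initial
// Redux-style state as a JSON.parse call in an inline script.
const initialState = { user: { id: 7, name: 'Ada' }, todos: [] };

// Double-stringify to get a JS string literal of the JSON text,
// then escape characters that could terminate the inline <script>
// early or be invalid in older JS string literals.
const serialized = JSON.stringify(JSON.stringify(initialState))
  .replace(/</g, '\\u003c')
  .replace(/\u2028/g, '\\u2028')
  .replace(/\u2029/g, '\\u2029');

const html = `<script>window.__INITIAL_STATE__ = JSON.parse(${serialized});</script>`;
console.log(html);
```

The client then pays the cheap JSON parse once at boot instead of compiling a large literal.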


The article suggests that for large static things like config data. It's not suggesting anyone write their object declarations that way.


Yep, it'd be nice if the normal way of doing things was the fast way in the language we decided to make ubiquitous, wouldn't it?


hear hear!


I worry this is going to become new accepted wisdom without being tested fully (it's probably already a Webpack plugin). My questions:

- What about other browsers?

- how big does it have to be to have a meaningful effect? I've seen an example out there with someone demonstrating amazing gains... on their 163KB of initial state. That's not typical. Or at least I really hope it isn't.


What do you mean "other" browsers? Legacy non-Chrome browsers are deprecated, anyway.


Umm, Firefox? Or am I misunderstanding?


It was a joke.


needs a /s


How the heck is this faster? What's going on underneath? This violates the hell out of Principle of Least Surprise lol


Since JSON’s syntax is much simpler and smaller, it’s faster to parse a very large object as JSON and return the newly built JavaScript object rather than parse the object tree in JavaScript, which has a much more complex syntax parser. Obviously there’s diminishing returns for smaller and smaller objects.


With the JSON version, the full, complex JS parser is only needed to find the end of the string argument. The string itself is then handed to a specialized JSON parser, which is faster because the language is a lot simpler and it has to check fewer possibilities at each point.

Apparently this beats running the whole string through the full JS parser, which has to deal with all kinds of potential shenanigans in that block, like references to variables instead of literals, etc.


But that only applies to the parsing step. With JSON.parse you will re-parse the string every time the code executes, so if that code is in a function that runs repeatedly, the object literal will probably be better, because the compiler can't simply optimize out the JSON.parse call (someone might redefine window.JSON or its parse property).
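The distinction can be sketched like this (illustrative config data; the point is where the parse happens, not what the object contains):

```javascript
// Good: the string is parsed once, when the module is evaluated.
const CONFIG = JSON.parse('{"retries":3,"timeoutMs":5000}');

function retriesHoisted() {
  return CONFIG.retries; // reuses the already-built object
}

// Bad: re-parses the same string on every single call.
function retriesReparsed() {
  const config = JSON.parse('{"retries":3,"timeoutMs":5000}');
  return config.retries;
}

console.log(retriesHoisted() === retriesReparsed()); // prints true
```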


That’s why you should put all your JSON literals at the topmost scope of the file. After all, global variables are the best variables.

(Per the article’s advice, unless I’m missing something)


> can be applied to improve start-up performance for web apps that ship __large__ JSON-like configuration object literals

They don't say how large, but it's clearly not going to be better for a two-key object like the one in the example.


I'd be interested to see some data on the real impact of "much faster". I can't imagine object parsing would be a large cost in the big picture.



It makes sense if you think about it, but I would really not have figured it out myself, that is true.


I read the comments before the article, and it took me a minute to figure out why: Of course, JSON is much simpler to parse than JS.

For anything large enough to matter, I think you'd want to deliver it as an application/json blob, anyway. Or use protobuf.


Indeed! And I was going to say the same thing. It makes perfect sense if you are a programmer with a basic understanding of how languages work. But like someone else said, I would not have thought about it first thing when writing code.


Perhaps the reason many (like myself) did not immediately expect the speedup is that they started on, or are still using, other non-scripting programming languages.

There, the language-parsing penalty is paid during the compile step, and as far as a user is concerned the object approach is obviously faster than parsing from JSON.

This does leave me wondering - is the result here applicable to other scripting languages too? The argument in the article would seem to apply in the general case, but perhaps other languages have particularities that change the result.


This is great! Is the optimization applicable in Firefox?


Is the perf discrepancy any different with 'use strict;'?



