Hacker News

To avoid overhead, people implement specialized compilers suited to the task at hand. If anything, going down to C, and especially assembly, will hurt performance, as low-level code is much harder to optimize for obvious reasons. Above all of that, real-world performance comes from proper system-level design, not micro-optimizations, and using a low-level language (be it C, C++, or assembly) will prevent one from quickly iterating over different ideas.


> will prevent one from quickly iterating over different ideas.

I consider that a good thing. Making implementation harder means you'll be forced to be more thoughtful in design; Asm is so explicit and "low density" that you will naturally want to make every instruction count. You won't be easily tempted to make copies of strings, allocate memory, or do frivolous data movement, because those things all take instructions - instructions that you have to write. Even if you're calling functions, you still have to write the instructions to call them and pass parameters every time. You'll be more careful about not doing work that you don't have to.

Contrast this with high-level languages that make copying data around and allocating huge amounts of memory as easy as '=' and '{}'. They're good for prototyping high-level "does it work" types of things and exploring concepts - the "quickly iterating over different ideas" that you mention - but once you decide what to do, they are a lot less controllable in the details because of their high-level nature. And the details, the constants in algorithms, do matter a lot in the real world. Moreover, the difference in constants can be so big that even "proper system-level design" in HLLs can't beat a theoretically less efficient design in Asm, because the constants with the latter are minuscule.

See KolibriOS, MenuetOS, or TempleOS for an idea of what Asm can do.


Asm can do anything. :)


> Above all of that real-world performance comes from proper system-level design

There are situations where "proper system-level design" just doesn't cut it, and even the traditional "rewrite this module in C" doesn't work, because there is no single module to optimize; rather, the system is being slowed down by many little overheads all over the place. JITs help with this, but they are not always available. It really pays off to switch to a language with less overall overhead and a focus on performance if you find yourself in such a situation.

> and using a low-level language (be it C, C++ or assembly)

C++ is not a low-level language.


It is by all accounts. It pretty much relies on the underlying machine memory model, and the ordering of operations in a C++ program directly corresponds to the resulting ordering of operations on the machine it is running on. Furthermore, the language itself is clearly specific to register machines and follows the corresponding semantics -- it'd be hard to target other kinds of computers from C++.


You start with a proper design and then hack it to squeeze performance out of it.

Low level code is not that difficult to optimize... especially not assembler.

Let's just say it's going to be a lot easier to get 200,000 req/sec from asm than from rails.


> low-level code is much harder to optimize for obvious reasons

What are the obvious reasons I'm missing?


As cgabios noted below, low-level code obfuscates intended behaviour. Ever tried to write a C optimizer? A trivial example is a for loop vs map -- the former has inherent ordering semantics, and the compiler has no way of knowing whether this behavior needs to be preserved, while the latter just says that a particular operation needs to be applied to each element, so the compiler is free to reorder/parallelize/etc. There are much worse situations that arise from a low-level language having to preserve the underlying machine memory semantics (that is one of the reasons why it is hard to compile low-level languages like C or C++ to e.g. Javascript. Compiling x86 assembly would require full machine emulation).

This is discussed in detail in most introductory CS books if you would like to learn more.


In asm you write loops because they are easy, map is hard (and generally slow because it's a lot more code.)

ASM derives performance from specialization. ASM asks: how often is this code ACTUALLY going to run on another architecture, OS, etc.? It then gains performance by not supporting those things via abstractions.

Throw away your CS textbook and run benchmarks, reality dictates theory, not vice versa.


Specialization is what compilers do really well. :) Humans -- not so much.


I'll give you a counter example.

GPU drivers spend a lot of time trying to optimize beneath their corresponding high-level API. This is more-or-less equivalent to compiling GPU machine code on the fly based on the GPU configuration - that is, very much like optimizing a high-level language.

If everything goes smoothly, the drivers can do a pretty good job of optimizing everything.

However, if you deviate slightly from the "fast path", the whole thing falls off a performance cliff, and because it's a high-level language with a secret black-box optimizer behind it, you're actually worse off investigating performance issues than you would be if you'd just written things at a lower level. Not coincidentally, graphics APIs are moving to lower levels precisely to remove the complexity from the compiler, increasing transparency and making things more predictable.

Now you might suggest that a "sufficiently advanced compiler" wouldn't do that, but such a thing is a fiction. In practice, the compiler is never sufficiently advanced to optimize in all cases effectively.

---

Consider Javascript, where exactly the same thing happens. Your definition of a "high level language" may not include JS, but it's hard to argue it's not higher than ASM.

Modern-day JS engines do a pretty good job of optimizing code just-in-time. However, you make some innocuous code change and suddenly your function is running in the interpreter instead of being optimized (see https://github.com/GoogleChrome/devtools-docs/issues/53 for examples).

If you were using a lower-level language, your chances of falling off mysterious performance cliffs are significantly reduced. Further, you have the capacity to do low-level optimizations that your compiler literally cannot do.

So what if your high level language can now do parallel-maps, if it ignores cache thrashing, or hits load-hit-stores or any one of myriad actual performance holes that real code can fall into?

Or you add a field to the objects you're iterating over and the parallel map implementation hits a weird memory stride and perf drops through the floor. How do you even debug something like this in a high level language where all you see is "map()"?

---

I also think your compiling-ASM-to-JS example is a bit of a strawman, FWIW. The parent was talking about how high-level languages yield higher performance than lower-level ones, not about the portability or transpilability of ASM->JS. (A "suitably advanced transpiler" would handle this problem perfectly anyway)


Low-level code often omits high-level intended behavior (description, pseudocode, documentation, test cases, etc.) and semantic meaning like variable names. In such codebases, the absence of these makes them harder to refactor, reuse, and/or modify than, say, concisely and precisely documented codebases in higher-level languages (Python, Ruby, Go) or quality asm.


I'm struggling to see what you mean about variable names.



