Hacker News

Really. Hash tables are one of my most used data structures.


In fact, what performance oriented devs need to be taught about these days is how to not use hash tables, or, more generally, dictionaries. They are a super-easy data structure to reach for, but often it is possible to arrange your data such that their use is unnecessary.

One example that was brought up elsewhere in the thread is the way Python looks up variables in successive scope dictionaries. This is obviously terrible for performance, and that's a big part of why Python is slow.

But how are other languages fast? Doesn't every language have to resolve variable names to memory locations? Well, yes, but fast languages do this mostly at compile time. How? Well, I'm not an expert in this, but at a high level, each variable is assigned a location in memory or registers, and then future references to that variable are rewritten to refer to the memory location by register name or memory address. This takes the whole "looking up a name" issue out of the path of work that has to get done at runtime. And you've switched from looking up things in tables to operating directly on registers and memory locations.

BTW, this has nothing to do with high-versus-low level. It's more about how mutable the process of name resolution is after program compilation. One could theoretically write an assembly language where memory locations are names, not numbers. If reflective features like runtime scope editing are available, this would be a very low-level language that still requires some kind of dictionary lookup at runtime.


> In fact, what performance oriented devs need to be taught about these days is how to not use hash tables, or, more generally, dictionaries. They are a super-easy data structure to reach for, but often it is possible to arrange your data such that their use is unnecessary.

A lot of devs are completely unaware of what's happening at the layer of abstraction below them and this is one of the ways that comes out. The number of elements it takes before hash tables are faster than iterating through an array can be surprisingly large and yet it's one of the first "optimizations" that get made.
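The crossover point is easy to measure rather than guess at. Here's a rough sketch for finding it on your own machine; the exact numbers vary wildly by runtime, hardware, and key type, and in CPython the dict tends to win earlier than in compiled languages because the linear scan is itself interpreted:

```python
import timeit

# Rough micro-benchmark: membership test in a small collection, using the
# worst case for the linear scan (target is the last element).
def bench(n):
    keys = [f"key{i}" for i in range(n)]
    target = keys[-1]
    table = {k: True for k in keys}
    t_list = timeit.timeit(lambda: target in keys, number=100_000)
    t_dict = timeit.timeit(lambda: target in table, number=100_000)
    return t_list, t_dict

for n in (4, 64, 1024):
    t_list, t_dict = bench(n)
    print(f"n={n:5d}  list scan: {t_list:.3f}s  dict: {t_dict:.3f}s")
```

In a compiled language the same experiment often favors the array for much longer, since a contiguous scan is extremely cache-friendly and has no hashing overhead.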

Some other related problems: not knowing how expensive network calls are, not knowing what the ORM is doing, caching in memory even though the database already does, and more. They just don't think about performance issues in code they don't write.


99.999% of projects are not going to have any meaningful hot path in variable resolution.

If your program is sensitive to that, even a simple GC pause is going to destroy your performance and you need to be out of managed memory languages at that point.

There are a lot of reasons Python can be slow, but this isn't one of them.


> 99.999% of projects are not going to have any meaningful hot path in variable resolution.

It won't show up in the hot path because the performance cost is so pervasive that profiling tools will effectively ignore it: it's everywhere, and you can't escape it without compiler optimizations. This cost will be in the hot and cold paths. This and data locality are the two biggest performance issues in dynamic languages, and a lot of effort goes into compiling it out.

Here is a good article on how V8 tries to deal with it: https://www.html5rocks.com/en/tutorials/speed/v8/

For statically compiled languages it can show up, but often you'll have to write a dictionary-free version to see it. Some profiling I've done with C# at various times over the years shows that a dictionary is slower than a list until you have more than 50 to 100 items. The caveat is that I normally keep the dictionary version because it's more semantically correct.


Eh, one of the standard performance tips for hot loops is to store everything relevant, including functions, in local variables, because the interpreter resolves local names first. The last time I optimized Python code, that made a significant difference.
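The trick looks roughly like this. A sketch, not a guaranteed win: the gap has narrowed in CPython 3.11+ thanks to inline caches on LOAD_GLOBAL, so measure before relying on it:

```python
import timeit

def slow(data):
    total = 0.0
    for x in data:
        total += abs(x)        # abs resolved via globals, then builtins, each pass
    return total

def fast(data):
    _abs = abs                 # bind the builtin to a local once, outside the loop
    total = 0.0
    for x in data:
        total += _abs(x)       # local access: array index instead of name lookup
    return total

data = [-1.0] * 100_000
print("global abs:", timeit.timeit(lambda: slow(data), number=50))
print("local abs: ", timeit.timeit(lambda: fast(data), number=50))
```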


To be clear, I don't think this is the only thing that makes Python slow. It's probably one of the top five or ten things preventing it from being fast, though.


> Well, I'm not an expert in this, but at a high level, each variable is assigned a location in memory or registers, and then future references to that variable are rewritten to refer to the memory location by register name or memory address. This takes the whole "looking up a name" issue out of the path of work that has to get done at runtime.

Python does this for local variable names in functions. Because of this, moving a tight loop from global scope to a function can make it a couple percent faster.
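A sketch of how you might measure that. One caveat: `timeit` can't be used naively for the module-scope case, since it wraps the statement in a function and would turn the names into locals, so this uses `exec` against a fresh globals dict to keep module-scope semantics:

```python
import time

# The same loop, once at (simulated) module scope and once inside a function.
MODULE_SCOPE_SRC = compile("""
total = 0
for i in range(1_000_000):
    total += i
""", "<module-scope>", "exec")

def in_function():
    total = 0
    for i in range(1_000_000):
        total += i
    return total

t0 = time.perf_counter()
exec(MODULE_SCOPE_SRC, {})           # total and i are dict-backed globals here
t_global = time.perf_counter() - t0

t0 = time.perf_counter()
in_function()                        # total and i are array-indexed locals
t_local = time.perf_counter() - t0

print(f"module scope: {t_global:.4f}s   function: {t_local:.4f}s")
```

On recent CPython versions the function version usually comes out ahead by more than a couple percent on a loop like this, but the exact ratio depends heavily on the interpreter version.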


Are you serious that dictionaries are a problem for performance oriented developers?

Performance oriented devs should be concerned with bottlenecks, not incredibly minute details. There's almost no situation I can think of where smallish dictionaries are much better or worse than any other data structure when it comes to performance.

Of course, if you're writing a compiler then it can be a serious difference. Most developers don't write compilers though.


Yes, if you write performance sensitive code you have to be very careful with dictionaries. See for example this nice talk by Chandler Carruth https://www.youtube.com/watch?v=fHNmRkzxHWs


Well of course. :) If you don't write perf sensitive code, don't worry about perf. But if you do, in many cases avoiding hash tables can become important.



