
Concurrency was the primary selling point for Go. Setting GOMAXPROCS=1 not only failed to deliver on that but also ran the risk of people not noticing concurrency issues because all of their development happened without parallel execution.


Prior to Go 1.5 the default of 1 made sense for most typical Go programs, but I know of many users who set the variable differently for better performance.

Data point: Inside Google we have been setting GOMAXPROCS=runtime.NumCPU for many years because most of the Go programs we run are highly concurrent network servers, for which a higher GOMAXPROCS value gives better performance.


Interesting, can you share much about the other work on the same machine? In particular I'm curious whether there are many other services running on those machines. I've written some services in Go (thanks!) but actually left GOMAXPROCS at 1 because there were other components running as separate processes on the same hosts.


When I was at Google the machines were typically oversubscribed and the workload varied quite a bit. Google has done a lot of work to let you ignore the other processes on the machine and just specify the resources your process requires. Then Borg, which they recently published a paper about, would take care of scheduling it on an appropriate machine.

As a consequence, answering the question of what other processes were on the same machine was not as easy as on an EC2 instance. We didn't mind, though, because Borg was so good at scheduling that we pretty much didn't have to care about the machine level. Instead we thought at the datacenter level.


Some machines run many other jobs, some machines run few. It depends on the resource allocation.


I think 1 was fine for many people, but I wish it'd been 2, simply to keep people from getting as far as they did before realizing there was a class of problem they weren't testing.


Concurrency in Go is a programming model, specifically CSP. They have been very clear about this from the beginning and saying they failed to deliver on it is disingenuous.


Note that I was not talking about the actual implementation or the indeed perfectly open discussion in the docs but rather the way the language was sold, most of which happened outside of Google. Even though e.g. Pike was very open about this, there was a lot of excitement of the magic scaling pixie dust variety around HN, Twitter, meetups, etc.

Almost every person I know who started with Go got a long way in before doing some benchmarks and realizing that they were only using one thread. I know the scheduler had issues but I think setting the default to 2 would have avoided people going so far before learning that the language can't handle everything they need to care about.


No, they also talked from the start about how Go makes it easy to leverage many CPUs...


And it did and does, provided you have a workload suited to many CPUs. It's just that with the ancient scheduler, setting GOMAXPROCS>1 with a concurrent but not inherently parallel workload would often be slower. It got a lot better around 1.2 and is finally a non-issue in 1.5.


Don't forget the race detector! It has also gotten easier to detect subtle concurrency errors, which in turn makes less deterministic execution easier to handle.


Concurrency doesn't rely on the number of available processing cores. You're thinking of parallelism.


I'm quite familiar with the distinction you're trying to make but the term concurrency is not that strict in common usage:

“In computer science, concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. The computations may be executing on multiple cores in the same chip, preemptively time-shared threads on the same processor, or executed on physically separated processors.”

https://en.wikipedia.org/wiki/Concurrency_%28computer_scienc...

If you were to go back and reread my comment, note that I used the broader term concurrency at the start – which includes both classic async patterns within a single thread where operations may be interleaved but shared resources wouldn't be updated simultaneously as well as true parallel execution – and then specifically referred to parallel execution at the end as a source of potential pitfalls.


Sure, but now some programs will break. This is a breaking change.


This just changes the default, which anyone can already override in their OS environment. If your program breaks when run by someone who set GOMAXPROCS... that's a bug in your program.


Those programs were already broken – the developer just didn't know about it.



