
The title mentions multithreaded programming, while the first line of the text (as well as the major emphasis throughout the article) is on parallel programming.

And surprisingly, there is no mention of GPU.

Almost as if they want you to think parallelism can only be achieved through CPUs (cores, and threads) but don't want to admit it in the title.



Of course, every article has bias, but I still think it does the title justice, as it talks about exposing concurrency via processes/threads.

Also note that it was written in 2011; I'm not sure GPU-based HPC was as common then, but I could be wrong.


I think the article's structure is quite standard. It first introduces the more general concepts (parallel programming, zooming in on MIMD on shared-memory machines) and then focuses on a subset (multithreading) while also alluding to other parallel programming techniques (MPI etc.).


Can a SIMD system really be considered "multi-threaded"?


SIMD is the classic example of how you don't need concurrency to have parallelism.


Technically, most modern GPUs are SIMT, meaning they aggregate lockstep threads into vector instructions, so yes.


I wouldn't think so. Not sure why you asked though.


There's a question upthread asking why GPUs are not mentioned in "Nuts and bolts of multi-threaded programming". Hence my wonderment.


Then you didn't read the comment upthread fully.

It did acknowledge the title, and contrasted it with the tone of the article (the first half at least), which is much broader (the basics of parallel algorithms, parallel APIs) yet makes no mention of GPUs.


But GPUs aren't SIMD. (In some sense they're almost the opposite.)


In an incorrect sense, yes. :-)

SIMD = Single Instruction, Multiple Data, meaning the same instruction being applied to multiple different values simultaneously. That's exactly what GPUs do.


That's exactly what GPUs do within a single thread, but GPUs are also about threading, usually sporting tens, if not hundreds, of small cores so they can compute that many pixels (or rather shader outputs) in parallel. Otherwise, they wouldn't scale much.

That being said, I understand you only wanted to point out the error in the post above.


Actually, they are.



