I think the article structure is quite standard. It first introduces the more general concepts (parallel programming, zooming in on MIMD on shared-memory machines) and then focuses on a subset (multithreading) while also alluding to other parallel programming techniques (MPI, etc.).
It does acknowledge the title, but the tone of the article (the first half at least) is way broader than it suggests (the basics of parallel algorithms, parallel APIs), minus any mention of GPUs.
SIMD = Single Instruction Multiple Data, meaning the same instruction being applied to multiple different values simultaneously. That's exactly what GPUs do.
That's exactly what GPUs do within a single thread, but GPUs are also about threading: they usually sport tens, if not hundreds, of small cores, effectively allowing them to compute that many pixels (or rather shader outputs) in parallel. Otherwise, it wouldn't scale much.
That being said, I understand you only wanted to point out the error in the upper post.
And surprisingly, there is no mention of GPUs.
Almost as if they want you to think parallelism can only be achieved through CPUs (cores and threads), but don't want to admit it in the title.