TIL how the old #Cray #supercomputer worked. Apparently it would run the same sequence of instructions over many different data elements at the same time. It could do lots of math in a hurry, but only if it was lots of the *same* math.
It could also run ordinary one-thing-at-a-time software like Unix, but on that kind of code the vector hardware just sat idle and it behaved like an ordinary (if very fast) one-thing-at-a-time CPU, conceptually no different from a Motorola 68000.
This all sounds awfully familiar. Isn't that how GPUs work?
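For anyone who wants to see what "lots of the same math" looks like in code, here's a rough sketch of the classic SAXPY operation (y = a*x + y) written two ways: an ordinary scalar loop, and a CUDA kernel where every GPU thread runs the same instructions on its own element. The names and sizes are just illustrative, not anything from the Cray or from any particular GPU library.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Scalar version: one element at a time, like ordinary CPU code.
void saxpy_scalar(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// GPU version: every thread executes the same instructions,
// each on a different element -- lots of the *same* math at once.
__global__ void saxpy_kernel(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;           // arbitrary size: ~1M elements
    const float a = 2.0f;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // Launch one thread per element, 256 threads per block.
    saxpy_kernel<<<(n + 255) / 256, 256>>>(n, a, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);     // expect 4.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

Every thread runs the identical kernel, differing only in its index i — same spirit as the Cray applying one vector instruction to a whole register full of numbers.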