Quote, originally posted by Cuchulainn:
> Quote, originally posted by ExSan:
>> Quote, originally posted by Cuchulainn:
>>> Linus speaks:
>>>> So give up on parallelism already. It's not going to happen. End users are fine with roughly on the order of four cores, and you can't fit any more anyway without using too much energy to be practical in that space. And nobody sane would make the cores smaller and weaker in order to fit more of them - the only reason to make them smaller and weaker is because you want to go even further down in power use, so you'd still not have lots of those weak cores.
>>>
>>> Read the comments. There is no consensus :(
>>
>> I have read the comments.
>
> Of course. I was quoting, not endorsing. Do you agree/disagree with the quote?
:)

He mentions the exceptions to "parallelism is a crock" as being graphics and servers. But what if more and more applications become graphics-like (4K video, Oculus VR glasses, multi-finger/gesture touch UIs, automatic voice/image-recognizing digital assistants) or server-like (end users' devices becoming servers/hubs for the end user's internet-of-things sensor networks)?

At one level he may be right: the biggest cores in the system are not going to get smaller, and the number of big cores will probably remain modest. But systems are adding (and will keep adding) smaller cores, both for low power (e.g., Apple's M-series coprocessor or ARM's big.LITTLE architecture) and for high performance (GPUs). In that regard he may be wrong, in that end users will come to expect the kind of low power and high performance that is only possible with parallelism implemented across a very heterogeneous set of cores.