Quote:
Of course, NVidia/CUDA boards are happiest with SPMD (data-driven) designs?

Quote:
I would say CUDA forces the developer to design in a certain way. I don't think it's just GPUs -- data parallelism is similarly applicable to CPUs: not only SIMD, but also superscalar multicores -- in a sense the ever-present ILP depends on this, too (a design full of data dependencies -- including the ones implied by the pointer chasing that a "hierarchical objects' tree" design brings with it -- may be just as hostile to ILP as it is to vectorization and multicore parallelism). This usually cannot be added on top of a parallelism-unaware design (as in the idealistic "optimize later" stage), unless we're okay with "optimize later" meaning a "complete rewrite".

True, and it also has implications for modularity and maintainability. One organization vs. another -- different trade-offs.
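To make the contrast concrete, a minimal sketch of my own (the Particle/ParticlesSoA names and the toy update are hypothetical, not taken from any of the linked material):

#include <cstddef>
#include <vector>

// "Hierarchical objects" organization: an array of pointers to
// heap-allocated objects. Every access goes through an indirection,
// and the objects may be scattered across the heap -- hostile to
// vectorization, the prefetcher, and (via dependent loads) ILP.
struct Particle {
    float x, y;    // position
    float vx, vy;  // velocity
};

void updateAoS(std::vector<Particle*>& particles, float dt) {
    for (Particle* p : particles) {
        p->x += p->vx * dt;
        p->y += p->vy * dt;
    }
}

// Data-parallel organization: one contiguous array per field
// (structure-of-arrays). Stride-1 accesses with no dependencies
// between iterations -- easy for SIMD, multicore, and the prefetcher.
struct ParticlesSoA {
    std::vector<float> x, y, vx, vy;
};

void updateSoA(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.x.size(); ++i) {
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
    }
}

Retrofitting the second layout onto a codebase built around the first touches every access site -- which is exactly why "optimize later" turns into "complete rewrite".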
The Data Locality chapter makes the same point:
http://gameprogrammingpatterns.com/data ...

Quote:
Interestingly, we haven't lost much encapsulation here. Sure, the game loop is updating the components directly instead of going through the game entities, but it was doing that before to ensure they were processed in the right order. Even so, each component itself is still nicely encapsulated. It owns its own data and methods. We simply changed the way it's used.
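In the spirit of that chapter (a simplified sketch from memory -- see the link above for the real version): each component type lives in its own contiguous array, the game loop updates one array at a time, and every component still encapsulates its own data and behaviour:

#include <vector>

// Each component still owns its own data and methods (encapsulation
// kept), but instances live in contiguous arrays rather than inside
// a tree of game entities.
class AIComponent {
public:
    void update() { /* decide what the entity does next... */ }
private:
    // AI state lives here.
};

class PhysicsComponent {
public:
    void update() { /* integrate velocity, resolve collisions... */ }
private:
    // Physics state lives here.
};

class RenderComponent {
public:
    void render() { /* submit a draw call... */ }
private:
    // Render state lives here.
};

// The game loop updates the components directly, one array at a time,
// in the same order it enforced before -- but now each pass streams
// through memory sequentially.
void gameLoop(std::vector<AIComponent>& ai,
              std::vector<PhysicsComponent>& physics,
              std::vector<RenderComponent>& render) {
    for (auto& c : ai)      c.update();
    for (auto& c : physics) c.update();
    for (auto& c : render)  c.render();
}

The processing order is unchanged; only the memory layout and the traversal changed.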
Quote:
Hardware engineers do not have this problem.

It's interesting, because it's the exact same problem as in computer architecture -- people living in "their" separate layers ("algorithms", "ISA", "microarchitecture", "digital logic"), in the past not communicating with one another, without enough real interdisciplinarity ("it's okay if I design hardware and don't understand algorithms -- if anything, I can always collaborate with CS theoreticians" _and_ "it's okay if I design algorithms and don't understand hardware -- if anything, I can always collaborate with hardware engineers" are both problematic!). Such assumptions become increasingly outdated with the end of the free lunch (Joy's Law): a thread-unaware but highly optimized (for a single CPU) memory controller may work great for a single-core uniprocessor, yet be terrible for a multi-core CPU.

Quote:
Interesting! Yet this conclusion rests on the assumption that the "model of the world" is a noun-focused model (which may be the prevailing thought pattern in Western cultures: that the world is composed of distinct and separable objects). But if the "model of the world" is a context-based one (which may be the prevailing thought pattern in Eastern cultures), then perhaps the "model of the world" is correct.

Interesting -- context may be a good thinking tool. At the same time, doesn't that exacerbate the "no objective test to evaluate benefits" problem?

Quote:
I'd also quibble with lie #1, in that many (but not all!) software systems actually are platforms, in that they define an instruction set that other programs or the end user can use in different ways or sequences to accomplish their goals. A "word processor" really is a processor, and different "word processors" are akin to different CPUs in how they are used.

I'd call that a "virtual platform" -- as in, distinct from a "physical platform" :-)

One more interesting talk:

A system is not a tree - Kevlin Henney:

Quote:
Trees. Both beautiful and useful. But we're not talking about the green, oxygen-providing ones. As abstract structures we see trees all over the place -- file systems, class hierarchies, call trees, ordered data structures, etc. They are neat and tidy, nested and hierarchical -- a simple way of organising things; a simple way of breaking large things down into small things.

The problem is, though, that there are many things -- from modest fragments of code up to enterprise-wide IT systems -- that do not fit comfortably into this way of looking at the world and organising it. Software architecture, design patterns, class decomposition, performance, unit tests... all of these cut across the strict hierarchy of trees. This talk will look at what this means for how we think and design systems, whether large or small.
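A tiny illustration of the talk's point (a hypothetical Widget example of mine, not from the talk): the ownership links form a clean tree, but focus order and change notifications cut straight across it, so the system as a whole is a graph:

#include <memory>
#include <string>
#include <vector>

// Ownership is a tree: each widget owns its children.
struct Widget {
    std::string name;
    std::vector<std::unique_ptr<Widget>> children;  // tree edges (ownership)
    Widget* nextInFocusChain = nullptr;             // cross-tree link
    std::vector<Widget*> observers;                 // cross-tree links
};

int main() {
    Widget root;
    root.name = "root";

    auto toolbar = std::make_unique<Widget>();
    toolbar->name = "toolbar";
    auto editor = std::make_unique<Widget>();
    editor->name = "editor";

    // Focus order and change notification connect siblings that live
    // in different subtrees -- relationships the hierarchy cannot
    // express.
    toolbar->nextInFocusChain = editor.get();
    editor->observers.push_back(toolbar.get());  // toolbar reflects editor state

    root.children.push_back(std::move(toolbar));
    root.children.push_back(std::move(editor));
}

Unit tests, shared styling, event routing and the like add more such cross-links -- none of them respect the ownership hierarchy.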