eDave wrote:
> > At some level, physical engineering disciplines have it easy because there's a one-to-one congruency between the blueprint drawing and the final object. Even a child can tell whether a blueprint (e.g., of a triangular peg) and an object (e.g., a round peg) do or don't match. I'm not sure that software has any such analog of a design document that can be easily compared to the final manufactured object by a novice inspector.
>
> That's interesting. I actually had thought of it almost the opposite way. One of the reasons I think that software engineers have it easy is that, given the blueprints (code), the final object is known, because the computer (after a compilation or interpretation step) will execute the blueprints exactly as they appear. So, in that sense, there is for software a one-to-one correspondence between the blueprints (code) and the final object. True, you don't know ahead of time how the user or environment will interact with the software, but for a given interaction, in a given state, you know (at least in principle) what the result will be.
>
> Now consider physical engineering blueprints -- say, an architect's blueprints for a house. Given the blueprints (a plan for a house), the final object is not known. The carpenters who build the house may or may not execute the blueprints exactly as drawn. In fact, in a typical situation, the carpenters will, here and there, intentionally do something that is different from the blueprints.
>
> So, while it might be easier for a novice inspector to read the architect's blueprints than to read the software engineer's blueprints (code), that inspector will have no idea whether the house will actually be built according to the blueprints. On the other hand, he does know that the software will be built according to its blueprints (code). Maybe.
In the mechanical engineering and aeronautical engineering realms, the machinist is not free to change the design, so it's much more like the use of a deterministic compiler. A part that does not adhere to the exact shape and dimensions of the blueprint is a rejected part, and the machinist will lose their job if they make too many mistakes. Architecture is much looser, and the as-built structure may vary from the blueprint design for consensual, unintentional, and financial reasons.

Nor is software as deterministic as it seems, except in the case of trivial code. If one uses the more advanced features of a language, one might find discrepancies between how one thinks a feature works and how one's compiler treats it. And any code that uses third-party libraries is very likely to run into discrepancies between the documented design of the library and its as-built runtime behavior, as well as face issues with different versions of a given library.

eDave wrote:
> That still leaves us with the problem you brought up: How is a "novice inspector" going to review my software design?

In my experience, the usual solution to the problem of "novice inspectors" needing to review software designs has always been to generate "higher level" documents from the blueprints (code). Once one has code, it is fairly straightforward to generate higher-level documents, in much the same way that a computerized architecture system might generate a high-level "walk-through" rendering, leaving out the details of where all the pipes, conduit, ventilation shafts and so forth need to run (traditional blueprints are also difficult to read if one includes all the little details).
However, to be honest with you, I do not think that is a good solution in most cases -- it has just been the usual solution at most of the places where I've worked. What the "novice inspector" (typically a Business Analyst in the finance world) really wants to know is: "Will this software actually do what I intend for it to do?" Thus, to me, a better approach is to provide the "novice inspector" with tests that he can run himself, with his own input data, written in a language that he can quickly learn to read. I've used two frameworks that provide such functionality: Cucumber ( http://cukes.info/ ) and FitNesse ( http://fitnesse.org/ ). (There are others -- these are just the two I've used.)

At one investment bank, we did not use either of these frameworks but provided similar functionality with a framework of our own, so I'll use that project as an illustration. It worked as follows: I realized that their Business Analysts preferred to communicate information in the form of spreadsheets, so I asked them to express their requirements as spreadsheets. The Business Analysts were excited by the idea, so they outdid themselves and created huge spreadsheets of expected inputs and outputs, along with expected performance metrics (speed, load and so forth), and we wrote a test harness that those BAs could use to suck in the data from the spreadsheets and exercise the system we were writing. That was one of the first things we did on the project, which means that, initially, all the tests failed. As we added more features, more and more of the tests passed (and, at the same time, the Business Analysts were busy adding more data to the spreadsheets, trying to break our system -- they weren't being mean; we asked them to do that). By the end of the project, we delivered software that the "novice inspectors" knew was *exactly* what they wanted. In a sense, the spreadsheets and our test harness (which they had their in-house software engineers review carefully!) were the "novice inspector level blueprints".

Not all software can be developed in *exactly* that way -- that system happened to have very well-defined interfaces. However, in my experience, most software can be developed in a similar manner, with "novice inspector level blueprints" testing system performance and functionality.

That's a very clever solution! The customer creates the prototype with whatever tool they like best, and then the programmer translates it into efficient code that adheres to the prototype's pattern of behaviour whilst also achieving performance requirements unattainable with the prototyping tool.
Nice! But the real key isn't the "tests", because that seems to imply the programmers are monkeys at a keyboard banging out random code until the code passes the test. The missing piece is some common, industry-standard way for the customer/analyst/designer to define "intended behaviour", and a common, industry-standard way for a programmer to propose a solution (a design document) that a non-programmer customer can review. Code doesn't feel like that design document -- it's too close to the programming side and too far from the customer/intended-behaviour side.
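For readers who want a concrete picture of the spreadsheet-driven harness described earlier, here is a minimal sketch in Python. To be clear, this is not the bank's actual code: the CSV layout, the `price_order` function, and the pass/fail tolerance are all invented for illustration, standing in for whatever system the BAs' spreadsheets exercised.

```python
import csv
import io

# Hypothetical system under test -- a stand-in for the real application,
# whose interfaces are not shown in the original discussion.
def price_order(quantity, unit_price, discount_pct):
    return round(quantity * unit_price * (1 - discount_pct / 100.0), 2)

# The BAs maintain rows of inputs and expected outputs; an inline CSV here
# stands in for a sheet exported from their spreadsheet tool.
SPREADSHEET = """\
quantity,unit_price,discount_pct,expected_total
10,2.50,0,25.00
4,9.99,10,35.96
100,1.00,25,75.00
"""

def run_acceptance_tests(csv_text):
    """Read each spreadsheet row, exercise the system, collect failures."""
    failures = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = price_order(float(row["quantity"]),
                             float(row["unit_price"]),
                             float(row["discount_pct"]))
        expected = float(row["expected_total"])
        if abs(actual - expected) > 0.005:  # illustrative rounding tolerance
            failures.append((row, actual))
    return failures

if __name__ == "__main__":
    bad = run_acceptance_tests(SPREADSHEET)
    print("all tests passed" if not bad else f"{len(bad)} failures: {bad}")
```

The point of the design is that the spreadsheet, not the harness, is the customer-facing artifact: the "novice inspector" edits rows and reads the pass/fail report, while the programmers make the failing rows pass.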