Wednesday, August 19, 2009

Six sigma due diligence

Questions to ask before applying six sigma:


  • Does your data have a normal distribution? See http://www.pyzdek.com/non-normal : "Observe that the best-fit curve analysis shows the process to be capable of meeting the requirements easily, while the usual (normal) method shows the process to be incapable. Of course, the opposite situation may also occur at times." If not, make sure you use a curve that fits the data (a minimal sketch of such a check follows this list). From the same article: "For instance, most business processes don’t produce normal distributions. There are many reasons why this is so. One important reason is that the objective of most management and engineering activity is to control natural processes tightly, eliminating sources of variation whenever possible. This control often results in added value to the customer. Other distortions occur when we try to measure our results. Some examples of “de-normalizing” activities include human behavior patterns, physical laws and inspection." Based on this, applying six sigma (which is all about eliminating sources of variation) can itself cause the distribution to stop being normal.
  • Is Six Sigma relevant to your process? See http://www.davenicolette.net/articles/six_sigma.html : "The similarities between software solutions are at the level of the general patterns that may be observed in the "environment." The individual solutions themselves are quite unique. In software development we use a solution "a million times over, without ever doing it the same way twice." A quality control mechanism that seeks to minimize variation applies to a process in which the solutions are done the same way twice, ten times, or a million times. Therefore, such a mechanism is fundamentally at odds with the basic nature of software development."
  • If six sigma is applicable, what are you measuring? Popular measures seem to be the number of defects, size, and time. The process that finds the defects is arguably not fully repeatable.
    • Defects are not found at random in software. Where and when they are found is strongly related to how the software is used.
    • I spent a short stint doing QA, and the one paradigm we lived by was: finding defects is not the goal of QA. That is a very fundamental rule. However, when the number of defects becomes our main measure, aren't we starting to violate it? Our measures (and there can be several, expressed in quite different ways) should express compliance with the goals or specifications we set.
    • There is no 'default set of tests' that we can apply to every project, so the defects measured on two different projects are the result of different tests. That's quite different from a widget to which we apply the same set of tests every time we make it.
    • It's also not always clear when something counts as a defect. If a behavior wasn't specified, should it be counted as a defect? In some cases it would just be an enhancement; but if it was an oversight in the specification, would that make it a defect? Or is a defect simply "not implemented as specified"? A first requirement is a consistent way of labeling defects across projects. This is different from a well-defined widget that has been fully designed and tested before going into production, and that has a set of tests with well-defined expected outcomes applied to it. Software is more like the widget prototype than the mass-produced widget.
    • The size of the software is not a true measure of complexity. The complexity of a task is related not only to the task itself but also to the skills of the people executing it. Take for example a team of developers that has been successful building large corporate data-driven applications, and have them develop a small module for an embedded operating system. Due to the completely different nature of the project, you'll probably see far more defects per line of code, even though the overall size is smaller. And are you sure measures like size can be compared across projects that use different technology stacks? Finally, using size as a measure of complexity feels to me almost like a last resort: we can't find anything better to measure than lines of code, a final acknowledgement that we fail to even grasp what complexity means for software development.
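
Here is the minimal sketch promised in the first question above. It is not from the referenced article; it assumes Python with NumPy and SciPy, and the cycle-time data is invented for illustration. The idea: test whether your measurements are plausibly normal, and if not, fit a curve that matches the data before drawing any capability conclusions.

    # Sketch: check for normality before trusting a normal-based capability analysis.
    # Assumes NumPy and SciPy; the sample data below is made up for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical, skewed process measurements (e.g. cycle times in days).
    cycle_times = rng.lognormal(mean=1.0, sigma=0.5, size=200)

    # Shapiro-Wilk tests the null hypothesis that the data come from a normal
    # distribution; a small p-value is evidence against normality.
    w, p_value = stats.shapiro(cycle_times)
    print(f"Shapiro-Wilk: W={w:.3f}, p={p_value:.4f}")

    if p_value < 0.05:
        # Don't force a normal model; fit a better candidate distribution instead.
        shape, loc, scale = stats.lognorm.fit(cycle_times)
        print(f"Lognormal fit: shape={shape:.3f}, loc={loc:.3f}, scale={scale:.3f}")

If the test rejects normality, any capability numbers should come from the fitted distribution (as the Pyzdek article argues), not from the usual normal-based calculation.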

Before applying six sigma, make sure you brush up on your degree in applied mathematics :) . And apply six sigma to your six sigma project: define your measurements, apply them, analyze how well they fit, improve them, and control six sigma so it doesn't get out of hand. Repeat as often as needed to make sure your measurements still fit your process.
