Reigle Stewart
@ReigleStewart · Member since May 3, 2004
Forum Replies Created


May 16, 2005 at 11:33 pm #119594
To Wuth:
The following excerpt is quoted from the "Ask Dr. Harry" section of this website:
The capability of a process has two distinct but interrelated dimensions. First, there is "short-term capability," or simply Z.st. Second, we have the dimension "long-term capability," or just Z.lt. Finally, we note the contrast Z.shift = Z.st – Z.lt. By rearrangement, we assuredly recognize that Z.st = Z.lt + Z.shift and Z.lt = Z.st – Z.shift. So as to better understand the quantity Z.shift, we must consider some of the underlying mathematics.

The short-term (instantaneous) form of Z is given as Z.st = (SL – T) / S.st, where SL is the specification limit, T is the nominal specification, and S.st is the short-term standard deviation. The short-term standard deviation would be computed as S.st = sqrt[SS.w / g(n – 1)], where SS.w is the sums-of-squares due to variation occurring within subgroups, g is the number of subgroups, and n is the number of observations within a subgroup.

It should be fairly apparent that Z.st assesses the ability of a process to repeat (or otherwise replicate) any given performance condition, at any arbitrary moment in time. Owing to the merits of a rational sampling strategy, and given that SS.w captures only momentary influences of a transient and random nature, we are compelled to recognize that Z.st is a measure of "instantaneous reproducibility." In other words, the sampling strategy must be designed such that Z.st does not capture or otherwise reflect temporal influences (time-related sources of error). The metric Z.st must echo only pure error (random influences).

Now considering Z.lt, we understand that this metric is intended to expose how well the process can replicate a given performance condition over many cycles of the process. In its purest form, Z.lt is intended to capture and "pool" all of the observed instantaneous effects as well as the longitudinal influences. Thus, we compute Z.lt = (SL – M) / S.lt, where SL is the specification limit, M is the mean (average), and S.lt is the long-term standard deviation. The long-term standard deviation is given as S.lt = sqrt[SS.t / (ng – 1)], where SS.t is the total sums-of-squares. In this context, SS.t captures two sources of variation – errors that occur within subgroups (SS.w) as well as those that are created between subgroups (SS.b). Given the absence of covariance, we are able to compute the quantity SS.t = SS.b + SS.w.

In this context, we see that Z.lt provides a global sense of capability, not just a "slice in time" snapshot. Consequently, we recognize that Z.lt is time-sensitive, whereas Z.st is relatively independent of time. Based on this discussion, we can now better appreciate the contrast Z.st – Z.lt. This type of contrast poignantly underscores the extent to which time-related influences are able to unfavorably bias the instantaneous reproducibility of the process. Thus, we compute Z.shift = Z.st – Z.lt as a variable quantity that corrects, adjusts, or otherwise compensates the process capability for the influence of longitudinal effects.

If the contrast is related only to a comparison of short- and long-term random effects, the value of Z.shift can be theoretically established. For the common case ng = 30 and a type I decision error probability of .005, the equivalent mean shift will be approximately 1.5 S.st. If the contrast also accounts for the occurrence of nonrandom effects, the equivalent mean shift cannot be theoretically established – it can only be empirically estimated or judgmentally asserted.
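To make the arithmetic concrete, here is a minimal sketch of these formulas in Python. This is an illustration only, not part of the quoted excerpt: the subgrouped data, the specification limit SL, and the target T are all hypothetical.

    import numpy as np

    def capability(data, SL, T):
        """Z.st, Z.lt, and Z.shift from g subgroups of n observations."""
        g, n = data.shape
        # SS.w: sums-of-squares within subgroups -> short-term standard deviation
        ss_w = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()
        s_st = np.sqrt(ss_w / (g * (n - 1)))
        # SS.t: total sums-of-squares -> long-term standard deviation
        ss_t = ((data - data.mean()) ** 2).sum()
        s_lt = np.sqrt(ss_t / (n * g - 1))
        z_st = (SL - T) / s_st             # instantaneous reproducibility
        z_lt = (SL - data.mean()) / s_lt   # long-term capability
        return z_st, z_lt, z_st - z_lt     # Z.shift = Z.st - Z.lt

    # Hypothetical process: g = 25 subgroups of n = 5, with subgroup-to-subgroup drift
    rng = np.random.default_rng(42)
    data = rng.normal(10.0, 1.0, size=(25, 5)) + rng.normal(0.0, 0.8, size=(25, 1))
    print(capability(data, SL=14.0, T=10.0))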
I hope this helps you.
Regards,
Reigle Stewart

April 28, 2005 at 2:30 am #118568

Issa, a process can be in a state of statistical control
(random variation only), yet be too wide relative to the
performance specification limits. For example, consider a
given manufacturing technology that is centered on the
target value such that M = T, where M is the process
mean and T is the design target. To further our
discussion, it will also be understood that the individual "deviations" that comprise the variance are fully unpredictable (i.e., random). In this case, it would not be
practical or economically feasible to track down and
eliminate the source(s) of such deviations. Hence, it can
be said that the given technology is centered and
operating at its maximum capability, say 2 sigma. In this
simple example, we can see that the process is "in control" but unfit for use – it is "in control," yet producing a high defect rate. To resolve this problem, one must: a) live with the variation, b) widen the specification limits, c) find a robust solution that minimizes the variance, d) upgrade the underlying technology, e) block the effect of causative variables, or apply some combination of the above.

April 27, 2005 at 8:47 pm #118541
Tom:
I love your last sentence: "Randomness is just another word for ignorance about the causes." Great numerical example.

Reigle

April 27, 2005 at 8:44 pm #118540

Tom:
Great post! The concept of “significant many” is interesting. But how should one define the term “significant”? This is another tributary of the analytical Amazon that should eventually be explored.
Do you believe Dr. Juran’s meaning to be “statistically significant”? Could he mean “practically significant?” Yes, the cumulative influence of the “trivial many” might be large in some situations. But, would any of the contributing effects prove “significant” in any way? I would think not.
For any reasonably complex cause-and-effect scenario, would you agree that the slope of any given X will likely prove to be statistically and pragmatically insignificant? Accordingly, for any complex transform involving a large range of independent variables, it is doubtful that the associated distribution of partial derivatives will prove to be uniform. I would tend to think that such a distribution would be skewed, to such an extent that only a few of the variables could be declared "statistically AND pragmatically significant."
Respectfully Submitted,

Reigle Stewart

April 27, 2005 at 7:33 pm #118535

Tom:
I would agree that this topic is worthy of further discussion. I would also agree that these two words, as well as their union, are frequently misunderstood and inappropriately applied (by novices and experienced practitioners alike).
Some would say that nothing in nature happens by chance – everything moves in some sort of trend, shift, or cycle. If this is true, then one would also tend to believe that every X plays a role (of some type or form) in the consequential determination of Y, given that Y = f(X1, …, XN). However, not all variables are created equal; some of the Xs are more influential than others. Hence, the "vital few" versus the "trivial many."
This is not to say the "trivial many" Xs have a random influence on Y. However, it is to say that the "trivial many" are not as influential in terms of their relative weight (with respect to a point estimate of Y or some parameter thereof). In other words, the partial derivatives related to the "trivial many" are of such small magnitude that the associated set of variables is of no practical value during an improvement effort.
The capability of DOE to discern the “vital few” is well established; however, would you not agree that sample size plays a highly interactive role in this capability? The practice of DOE may be capable of separating the independent effects, but if the sample size is too low, the DOE will not be able to detect a given amount of change (for a selected level of alpha and beta risk). As sample size is increased (for a given alpha and beta risk), the probability of being able to detect a fixed amount of change (in the response mean or variance) also increases.
From another angle, we can consider the temporal behavior of Y (or any given X). If the outcome of an autocorrelation study fails to reveal an association (across all possible lags), is it safe to say the observed behavior is not patterned (i.e., random)?
Is the idea of randomness a curiosity of the human imagination, or does it have some basis in the real world? If so, how do we conclusively prove it? If not, then what are the implications?
Reigle Stewart

April 27, 2005 at 6:57 pm #118534

Tom, your perspective is most interesting and has high appeal. RS.
April 27, 2005 at 6:27 pm #118531

Six Sigma Tom:
I would have to agree. The idea of a "random cause" is merely a label we attach to a minuscule assignable cause that our current array of analytical tools cannot effectively or efficiently discern. In this sense, random causes cannot be statistically separated for independent analysis (in a practical way).
Reigle Stewart

April 27, 2005 at 6:04 pm #118530

Terry:
You are absolutely correct. Dr. Harry did not do anything single-handedly, any more than the CEO of a company does it all. Over the years, many people have contributed to the furthering of Six Sigma. This is evidenced by the numerous books and articles on the topic.

However, it is undeniable that he was a pioneer in the field of Six Sigma. For example, the first published work on Six Sigma was authored by Dr. Harry in the mid-'80s. But he did not create the concept of Six Sigma (as discussed in this publication); Mr. Bill Smith created the concept. At that period in time, Six Sigma was merely a statistical target with virtually no direction, just a shadowy vision. Mr. Smith and Dr. Harry collaborated over several years to revise and extend the idea of Six Sigma (including those years he and I spent at SSRI). In many magazine interviews, Dr. Harry has acknowledged the work of others.

As a matter of verifiable fact, Dr. Harry has noted the contributions of many such individuals – either in the forewords to his books, as a co-author, or within the content of those publications. For example, his original instructional materials were dedicated to Mr. Bill Smith. As yet another example, his best-selling book spells out the contributions of several key individuals.

You ask how many people contributed to the Six Sigma development at Motorola. Well, why don't you review the corporate documents from that era of development? Why, at that point in time, did these mysterious individuals you refer to not publish their thinking on the subject (internally or externally)? Why are there no other publications on the subject from that period of time?

Many of the answers to such questions are available at Dr. Harry's biographical website (key documents for all to review). These documents specify many different people, often by name, job title, and location. But again, this is just one individual's perspective – a perspective drawn from documents and other such artifacts, not memory and opinion.

In answer to your other question, it should be obvious why I vanish from time to time. Like you, I have a job and other responsibilities. However, when possible, I come to this forum, contribute my two cents, state my beliefs, render some facts, and weather false statements and poor memories. On the other side, I also glean some really good information that I find valuable in the practice of Six Sigma.
Best of Regards
Reigle Stewart

April 27, 2005 at 3:18 pm #118514

Andy, your perspective is well taken and appreciated.
You ask what happens if one were to realize a new process set point – owing to a redesign, introduction of new technology, etc. Well, it's pretty simple: you have a new level of entitlement. The entitlement concept and supporting equations still apply; they are merely applied in the context of a new set point! Regardless of set point, the ideas and math underpinning "actual" and "potential" capability are still at hand. It's always great to realize the benefits associated with robust design – whenever and wherever possible. I wish you the best with your case studies, and may good times prevail.

April 27, 2005 at 2:02 pm #118510
Andy, I find your recent statement most peculiar. In your
recent post, you state “Accordingly, my ‘charge,’ as you
put it, is that Dr.Harry only documented what he believed
Motorola’s Six Sigma process should be and not what
was actually practiced. Therefore, it is hardly surprising
that so many companies in the West are still struggling
with low rolled first time yields, and low throughput, and
might explain why so many manufacturing companies are
relocating to the Far East!" So now the ills and woes of Western manufacturing are Dr. Harry's fault? You are really hanging out there on this one. Besides, I am still awaiting your specific citations and references. Thus far, you have provided nothing but recollections and opinions.

April 27, 2005 at 1:55 pm #118509
Tony, Darth is fully right about the general understanding of what process entitlement is. However, YOUR definition or MY definition or ANYONE's personal definition is not important. What is important are the core equations that describe process entitlement, how the data are collected to feed those equations, how the resultant outcomes are interpreted, how decisions are made on this basis, and the consequential actions that stem from those decisions. Do examine and study these equations, especially the concept of rational subgrouping. This exercise will provide you the desired insights. But as a starting point, go with what Darth has described.

April 27, 2005 at 1:42 pm #118508
Andy, I do appreciate and respect your recollection of things. We all have recollections and meaningful memories (as flawed as they may be). We all see things differently in retrospect. However, one thing that does not falter over time is the artifacts (documents). I do not dismiss the contributions of others, because they too have provided meaning. But the bottom line is simple and verifiable: Dr. Harry won out at Motorola and got the top management team to support his version of Six Sigma. Dr. Harry and Mr. Schroeder then took Six Sigma to ABB, then to Allied Signal, then on to GE, and from there, the world. The artifacts are clear. Dr. Harry was highlighted in Jack Welch's autobiography and the "GE Way," not any of the others you mention. A simple review of the literature explains why – Mr. Bill Smith and Dr. Harry were the primary pioneers of Six Sigma in the '80s. Mr. Smith came up with the idea, and Dr. Harry extended and exploited it.

April 26, 2005 at 9:20 pm #118457
Bighead Todd: Works for me if it works for you. My footsteps are small for self-evident reasons. My footsteps are deep because of the great mass of knowledge you have so graciously bestowed upon me. Lead on, my fine friend, and I will dutifully follow (but no longer in this thread).
April 26, 2005 at 9:04 pm #118454

Bighead Todd: We are overwhelmed by your keen sense of statistics and penetrating insights about human character. Keep up the good work.
April 26, 2005 at 8:58 pm #118453

Andy U:
Yes indeed, I have emotional issues – from the nagging grief induced by my overwhelming lack of intellectual capacity to my truck tire that won't retain air. It's very possible I missed your point. If you believe so, I humbly apologize.

Now, to my point: please provide specific references to substantiate your accusations (without zigzagging around the issue). To your other point, Motorola did win the MB award through the applied efforts of many. To refresh your waning memory, we won the award in 1988, not 1987 as you reference.

Dr. Harry's work in the theory and application of Six Sigma was one of several sources. As you know, he was appointed to launch and head the Six Sigma Research Institute in late 1989 (at the request of Mr. Bob Galvin). So maybe you are right – his work was unused and insignificant – but not by the words of Bob Galvin or other members of the executive council. In fact, they too would disagree with your position on this issue (at least by what they have published in official corporate documents).

However, Motorola did in fact distribute over 100,000 copies of Dr. Harry's publication entitled "The Nature of Six Sigma Quality." Not only did Motorola print and distribute this document internally, they sold it to other companies (in great quantities, I might add). How do I know this? Simple: I was the one at SSRI who had to keep track of and verify the MU Press distribution.
Enough of this silly bantering. Simply clarify your accusations and provide the references. This would be most appreciated.
Respectfully Submitted,
Reigle Stewart
April 26, 2005 at 8:13 pm #118450

Very nice contribution, Bighead Todd! Your insights will likely prove to be of significant value.
RS

April 26, 2005 at 8:09 pm #118448

Paul Gibbons:
Your point is well taken and makes perfect sense. I do know they just recently got under way with Six Sigma, but I am not privy to their specific plans and performance data. You should talk directly to the folks at ASU. I believe your contact there would be Mr. Jeff Goss, Assistant Dean, Ira A. Fulton School of Engineering, Arizona State University.
Thank You,
Reigle Stewart

April 26, 2005 at 8:02 pm #118446

To bbdavid:
To help you through this, consider the following:
1) V1 is the value of realized capacity per dollar; i.e., V1 = C1 / C3, where C1 is the realized capacity and C3 is the total cost of realization.
2) V2 is the value of potential capacity per dollar; i.e., V2 = C2 / C3, where C2 is the potential capacity (or some other idealized state that one elects to study).
The Velocity of Value (VOV) equation is:
VOV = ( V2 – V1 ) / T, where V1 is the current state of affairs, V2 is the desired state (potential) and T is the amount of time required to realize V2. The resultant of this computation represents the relative rate of improvement in realized capacity per dollar per unit of time.
Thus, one can now see that the "potential" state of affairs is duly accounted for. (A small worked example follows, with hypothetical numbers.)
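As a quick worked sketch in Python – every number below is hypothetical, chosen only to exercise the equation:

    # Hypothetical inputs, purely for illustration.
    C1 = 800.0   # realized capacity (units)
    C2 = 1000.0  # potential capacity (units)
    C3 = 500.0   # total cost of realization (dollars)
    T = 4.0      # time required to realize V2 (e.g., quarters)

    V1 = C1 / C3          # realized capacity per dollar  -> 1.6
    V2 = C2 / C3          # potential capacity per dollar -> 2.0
    VOV = (V2 - V1) / T   # -> 0.1 units of capacity per dollar, per quarter
    print(V1, V2, VOV)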
Best of Regards,
Reigle Stewart

April 26, 2005 at 7:44 pm #118445

Andy U:
I very much appreciate your responses, but I am not quite sure what Motorola fabs (during the '80s) have to do with anything. If you see a connection, OK, I'll run with that. More to the point, I offer the following commentary on your recent post.
A) YOUR STATEMENT: “As for jealousy – I have no interest in ‘dead knowledge.'”
A) MY RESPONSE:
1) So if it's not jealousy, then what would you call it? Obviously, you have a very strong emotional issue at play. What is the correlation between your accusations and Motorola wafer fabs, or is this just another sleight of hand to distract us from your seemingly impetuous tongue? Better yet, what exactly are your accusations? Be specific and do provide some verifiable quotes and references.

2) My understanding is that things like algebra, statistics, and certain physical principles are able to exist without a knowledge half-life. So what knowledge is dead? Again, please be specific and cite references.
B) YOUR STATEMENT: "The term process entitlement is in wide use in Motorola wafer fabs prior to 1987 … In fact, it was published in Semiconductor International in about 1984 … it certainly wasn't limited to what Dr. Harry documented in 1990."
B) MY RESPONSE:
1) Talk about dead knowledge! Everything you refer to is only 15–20 years old! Unlike algebra, this Motorola wafer fab thing has a half-life, like virtually every case study ever written. Maybe you need some new and fresh examples!
2) What document in 1990 do you refer to? Dr. Harry’s first published work on Six Sigma was in 1984: “Achieving Quality Excellence: The Strategy, Tactics, and Tools,” first printed by the Government Electronics Group, Motorola Inc. Within this book, there exists an entire section dedicated to a description of Six Sigma. The second major publication was in 1987, entitled “The Vision of Six Sigma,” also published by Motorola GEG and later on by Motorola Inc. Did I miss something here, or were there other publications on the topic of Six Sigma before these? Please be specific and provide your citations and references.
C) YOUR STATEMENT: “My only motivation has been to question some of the wild claims concerning Six Sigma.”
C) MY REPLY: Well, please enlighten us. What “claims” are you referring to? Here again, please give specific references.
For the sake of professional integrity, please provide us with specific, verifiable examples; something other than opinion, hearsay and/or hand waving. I would certainly think that, given the implied stature of your professional credentials, you (of all people) would want to substantiate your “claims.” In a recent post, one of this forum’s illustrious mentors had the moral courage to admit he was wrong and that his statements were false and groundless. Although he was wrong, and admittedly so, it takes a very strong and ethical person to step forward and admit to such.
With the deepest of respect.
Reigle Stewart

April 26, 2005 at 3:51 pm #118434

Paul, we would be most interested in your description of
the factors you mention. Please share your perspective
with us.

April 26, 2005 at 3:41 pm #118432

Vinny, that graph was too funny. I enjoyed it.
April 26, 2005 at 3:38 pm #118431

Andy, you are free to say what you like, whenever you
like. As you say, that is the nature of a public forum. You
may not see it, but the poster you mention made no
connection between Dr. Harry and the question of
process entitlement. It was made by way of your ill-intentioned association. Dr. Harry has used the term "process entitlement" for many years now, in numerous publications. I do not recall him laying any claim to the term. Could you please substantiate your accusations? Of course you cannot, because they are false, and you know it. I fully understand your envy. I understand how jealousy forces people to revert to bashing. Just look at many of the threads on this site. It's all right if you want to bash, but please, just recognize how foolish you look in doing so. Likewise, you too should expect a little criticism from time to time (per your own words). By the way, it is not necessary to wave the banner of "I believe I have the right…" We all know you do, along with everyone else. So maybe you should reconsider a membership in the Sheep Sigma Club.

April 26, 2005 at 3:23 pm #118429
Paul, this is a great question. They would be happy to
share their perspective with you.

April 26, 2005 at 2:49 pm #118426

Wendy, so that you know, Arizona State University is now
successfully applying Six Sigma to several of its key
processes. A retired Motorola executive is heading this
effort for ASU.

April 26, 2005 at 2:40 pm #118425

Stan, another one of your brilliant answers!
April 26, 2005 at 2:31 pm #118424

Don't listen to this malarkey from Andy; he is wrong. His
only motive here is to bash Dr. Harry, not answer your
question. Process entitlement capability is the level of
performance you enjoy when all assignable causes have
been removed from the system of causation. As you may
know, systematic effects are nonrandom in nature.
Process entitlement is the level of capability that exists
when only random causes are present. Such a level of
capability is the best that a given technology can be. You
can find the answer to this and much more by looking into
the “ask Dr. Harry” segment of this website. Take a look
and you too will see for yourself how wrong Andy is.

April 5, 2005 at 8:41 pm #117266

Mr. Stan:
It is most unfortunate that I was not blessed with a spot of simplistic genius or marked by the powers of extraordinary communication. I humbly apologize for being somewhat wordy in my explanation of the B vs C test.

Perhaps you could grace us with a definitive 10-second explanation of how to account for beta risk when using the method of end-counts. Please enlighten us with this knowledge for a selected range of required improvement.
It is always good to hear from such an accomplished practitioner.
Your Humble Student.
Reigle Stewart

April 5, 2005 at 6:34 pm #117258

Many years ago, Mr. Dorian Shainin introduced the B vs C test as a methodology to statistically contrast two groups (in much the same vein as the classical t-test). The primary advantage was the ability to use fewer samples for the experimental condition. To clarify the notation, we must first understand that "C" stands for the "control" condition and "B" signifies the "better" condition (or at least what is expected to be better).
At the extreme end of things, the B vs C test allows us to statistically examine an experimental condition with only one sample! However, the control condition always requires more than one sample. Also note that many would argue that a balanced condition (an equal number of Bs and Cs) is usually superior (for a variety of theoretical reasons). Nonetheless, some practical applications will not allow for such an ideal balance – like whenever expensive destructive testing is at hand. Hence the pragmatic value of the B vs C test as compared to a standard t-test or one-way ANOVA (two groups).
To illustrate, let us consider a simple case. Suppose an engineer wanted to test the effect of some coating on the strength of a certain material. Let's say three samples were randomly selected from the general population of existing parts (standard coating) and tested for strength. In this fashion, the "C" group was formed and the resultant dependent-variable measurements were properly recorded.
Next, three other parts were randomly selected; however, they were treated with the new coating. After application of the experimental coating, the strength of all three components was individually assessed and duly recorded. In this manner, the “B” group was formed.
At this point in the methodology, all N=6 parts were rank ordered in terms of the dependent variable called “strength.” In this case, it was noted that all of the Bs outranked all of the Cs. Consequently, the engineer accepted the alternate hypothesis (Ha) and concluded that the new coating makes a statistically significant difference in strength. Why?
Namely, because the engineer used 3 parts for the "C" condition and 3 parts for the "B" condition, there existed 20 possible outcomes (in terms of potential rankings). Hence, the random-chance probability of seeing all Bs outrank all of the Cs would be 1 / 20 = .05, or about 5%. Thus, by selecting C = 3 and B = 3 (for a total of N = 6), the engineer had 100% – 5% = 95% statistical confidence. But this is only one part of the total problem! Let us now move on to the less obvious portion of our discussion.
We will now say that the engineer began this exercise by declaring that she must observe a 3 sigma change (or greater) in the universe mean (in terms of strength) if switching over to the new coating was to be justifiably implemented.
To help with this part of the problem, Dorian Shainin set forth a table of "KS" values, where K symbolizes the number of standard normal deviates (Z) and S represents the pooled standard deviation. For example, his table of KS values shows that for the case of B = 3 and C = 3 (i.e., N = 6) with a "power" of 95% (i.e., a beta risk of 5%), KS = 3.3. So how did he arrive at these values? Where did he get "K"?
Without a drawn-out theoretical explanation, consider the application equation: =(NORMSINV(P^(1/(N-1))))*SQRT(V.B + V.C), where P is the "power of test," N is the total sample size, and V is the variance. To understand how this equation is applied, consider the following inputs.
Bs = 3
Cs = 3
P = .95
V.B = 1.0
V.C = 1.0
Thus, we compute KS = (NORMSINV(.95^(1/(6-1))))*SQRT(2) = 3.28, or about 3.3. This says that for the case of N = 6 and a detection power of 95%, the decision threshold would be a mean difference of 3.3 sigma, but only if all Bs outrank all Cs. Of course, for a given value of N, the alpha risk will vary depending on the specific number of Bs and Cs. Note also that the KS value is not dependent upon the total combinations; however, it is dependent upon the total sample size and the selected beta risk.
In other words, if all 3 Bs outrank all 3 Cs, then the B vs C test would be capable of detecting a mean difference of at least 3.3 sigma with a detection power of 95%. So, if N = 6 samples are prepared ( B = 3 and C =3) and the engineer observes that all Bs outrank all of the Cs, then there would exist P = 95% certainty of detecting a 3.3 sigma change if such a magnitude of change was really there to be detected!
Using the equation listed above, you will be able to construct your own table of KS values – beyond those published by Dorian Shainin.
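For those who would rather script it than table it, here is a minimal Python sketch of both calculations – the alpha risk of a perfect end-count and the KS threshold. The unit variances are the same assumption used in the inputs above:

    from math import comb, sqrt
    from statistics import NormalDist

    # Alpha risk of a perfect end-count: the chance that all Bs outrank all Cs
    # by luck alone is 1 over the number of possible rankings.
    B, C = 3, 3
    alpha = 1 / comb(B + C, B)   # 1/20 = 0.05 -> 95% confidence

    def ks(n_total, power, var_b=1.0, var_c=1.0):
        """KS threshold: a Python rendering of the application equation above."""
        k = NormalDist().inv_cdf(power ** (1.0 / (n_total - 1)))
        return k * sqrt(var_b + var_c)

    print(alpha)                  # 0.05
    print(round(ks(6, 0.95), 2))  # 3.28, i.e., about 3.3 sigma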
I hope this helps you in your understanding of the B vs C test and resolves some of the mystery around the infamous “KS” value.
Respectfully,
Reigle Stewart

September 13, 2004 at 3:30 pm #107293

Johnny:
Please recognize that Stan (and a couple of his cronies) is one of the very few contributors on this website with a personal vendetta against Dr. Harry. Just read through Stan's (and the others') historical postings and you will discover firsthand what I am talking about. You will instantly recognize that such extreme bias is rooted in factless opinion, obviously stemming from professional jealousy. You can verify these assertions for yourself.
Please feel free to contact me [email protected] and I will provide you (or anyone else) the website address that displays the referenced corporate documents. Several months ago I promised the Six Sigma community access to these documents and now I have delivered on my promise.
1) Stan claims Dr. Harry never gives Bill Smith credit for Six Sigma, yet virtually all of Dr. Harry's writings acknowledge Bill's pivotal role and contributions.
2) Stan claims Dr. Harry did not create the Black Belt terminology, yet there exist corporate documents that clearly show otherwise. Stan claims Dr. Harry did not invent MAIC, yet there are corporate documents that clearly show he was the inventor. In fact, Quality Pro recently posted a noncommercial website (Dr. Harry's biographical information) where these documents can be viewed as PDF files, yet the moderator of iSixSigma immediately deleted the posting while other clearly commercial and promotional postings were allowed to remain on the discussion forum.
3) Stan claims Dr. Harry is irrelevant, yet Quality Digest magazine recently called him out as "the world's leading expert on Six Sigma." CEO magazine is soon to release a very large article on Dr. Harry and his impact on the world's top companies. Dr. Harry is currently the lead consultant to the Chairman of POSCO, the world's fourth-largest steel manufacturer, and is also working with Samsung.
4) Dr. Harry’s work is endorsed by the Society of Manufacturing Engineers and the Korean Standards Association. In fact, the nation of Korea just named the new Six Sigma award in honor of Dr. Harry.
5) In one paragraph of his recent posting, Stan says, "If for no other reason Dr. Harry has become irrelevant in the Six Sigma community," yet in another paragraph he says, "To be honest, my objective is to get people past being in awe of an author having an ISBN." If someone is "irrelevant," then why would Stan be so concerned about others being in "awe" of them?

6) Dr. Harry has been recognized by many of the world's top CEOs and has been given many awards for his innovative contributions to Six Sigma.
7) Dr. Harry has been a best-selling author on the New York Times, Wall Street Journal, and Amazon.com best-seller lists (also documented on the aforementioned website).
8) Most of all, Dr. Harry offered to “debate” Stan at the ASU campus. Of course, Stan agreed, made claims, and then never showed (even after saying repeatedly he would be there). He did not even submit a paper. Of course, you can call ASU about the reality of this (as others have).
And the points could go on and on. The bottom line is simple: there are corporate documents that support every claim Dr. Harry has made, backed up by testimonials from top corporate executives and CEOs. Stan has nothing except opinion. He chooses to remain anonymous for obvious reasons.

August 11, 2004 at 6:53 pm #105483
Perhaps some fact-based insights can be realized by
running the suggested experiment. In this way, you too
can advance your application and theoretical knowledge
about Six Sigma and not be so dependent upon the
research and work of others. You might even find or
discover some new knowledge. Judging by your
responses in this thread, it would seem you still don’t
know the answer to H82BLATE’s question. Maybe you
have a better way to analytically investigate the question
and set forth insights (other than qualitative opinions). If
you need help running the experiment, please post and I
will guide you through it. Respectfully, Reigle Stewart

August 11, 2004 at 6:17 pm #105479

H82BLATE: As I understand your question, you can
“discover” an answer by first defining each of the 3C’s at
two levels (low and high) and then conducting a factorial experiment on the eight combinations. The response variable would be TDPU (total defects per unit). For example, capability can be set at Z.st = 3 and Z.shift = 1.0, thereby providing a long-term expectation of Z.lt = 2.0. Next, convert Z.lt into a first-time yield value (Y.ft) using a table of areas under the normal curve (or use "=NORMSDIST(Z.lt)" in Excel). Following this conversion, take the resultant yield value (Y.ft) and opportunity count (M) to establish the rolled throughput yield. This is done by computing Y.rt = Y.ft^M, where M is the defined number of opportunities. After this, simply compute the quantity TDPU = –ln(Y.rt). Do this procedure for each of the eight combinations of the factorial experiment. You will discover that when Z.st is low, the influence of Z.shift and complexity is high. But as Z.st increases in value, the TDPU becomes robust to shift and complexity. Also pay attention to the interactions. By examining the output tables and graphs (in Minitab), the conclusions will be fairly self-evident. Furthermore, it is interesting to extend this experiment using other DOE designs. (A scripted version of the eight-run computation appears below.)
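Here is one way the eight-run computation might be scripted in Python rather than run through Minitab. This is a sketch only; the low and high factor levels are assumptions to be replaced with your own:

    from itertools import product
    from math import log
    from statistics import NormalDist

    phi = NormalDist().cdf   # area under the standard normal curve (Excel's NORMSDIST)

    # Assumed low/high levels for the three factors.
    z_st_levels = (3.0, 6.0)      # short-term capability
    z_shift_levels = (1.0, 1.5)   # shift
    m_levels = (10, 1000)         # opportunity count (complexity)

    for z_st, z_shift, m in product(z_st_levels, z_shift_levels, m_levels):
        z_lt = z_st - z_shift     # long-term capability
        y_ft = phi(z_lt)          # first-time yield
        y_rt = y_ft ** m          # rolled throughput yield
        tdpu = -log(y_rt)         # total defects per unit
        print(f"Z.st={z_st}  Z.shift={z_shift}  M={m:4d}  TDPU={tdpu:.4f}")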
July 27, 2004 at 2:59 am #104407

To Gary Cone and All Others. The following question was
posed earlier in this thread: Consider four performance opportunities that define a system called "X." For our discussion we will say the opportunities are A, B, C, and D. Suppose only A and B are "active." This says they are regularly assessed in terms of "performance to standard." However, opportunities C and D are not assessed, so they are said to be "passive" in nature.

My discussion now picks up from here. Let's pretend that we make a single system called "X" containing A, B, C, and D. To expand the discussion, say we observe DPU = 1.0 following the creation of the first system. We will now say that defect opportunity B was found to be defective following the production of system X, but opportunity A was not defective; however, opportunities C and D were never surveyed or evaluated for conformance to standards. What is the DPO?

Seems to me that DPO = D / O = 1 / 2 = .50. Why? Because every defect opportunity must be capable of yielding a defect. If an opportunity is not evaluated for conformance to standards, its probability of defect is zero. If the probability of defect is zero for a particular opportunity, the opportunity is not really an OPPORTUNITY FOR DEFECT. Therefore, it should not be counted. This implies that the numerator term would be illustrated by the binary sequence 0 + 1 + 0 + 0 = 1 and the denominator term would be given as 1 + 1 + 0 + 0 = 2. It would be incorrect to say the denominator term should be 1 + 1 + 1 + 1 = 4, since only the first two "slots" of the denominator are "active." Notice that a "1" in the denominator term says the opportunity is active and a "0" says it is passive. Another way of saying this is that every slot of the numerator must be capable of yielding a "1"; if not, the corresponding "slot" in the denominator must be assigned a value of "0."

So each "blank" position (i.e., slot) in the denominator represents an opportunity that has been created. But if any such opportunity cannot produce a defect, then it is not a true defect opportunity – it is merely a "production opportunity." Since it cannot produce a defect (for whatever reason), it should be assigned a value of "0." The opportunity may have been created, but no defect density can be realized, even though a total of 4 production opportunities were created (recognizing that not all 4 were also defect opportunities). So, we should only count "active" defect opportunities. Seems to me this is a very simple rule for counting defect opportunities and ensuring the proper summations for the numerator and denominator terms. (A small numerical sketch of this rule follows.) Respectfully, Reigle Stewart
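A minimal Python sketch of the counting rule, mirroring the hypothetical A-B-C-D example above:

    # For each opportunity: (defective?, assessed?)
    opportunities = {
        "A": (False, True),   # active, conforming
        "B": (True, True),    # active, defective
        "C": (False, False),  # passive: never surveyed
        "D": (False, False),  # passive: never surveyed
    }

    defects = sum(defective for defective, active in opportunities.values() if active)
    active_count = sum(active for _, active in opportunities.values())
    dpo = defects / active_count
    print(dpo)   # 0.5 -- not 1/4 = 0.25, because only active opportunities count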
July 16, 2004 at 9:22 pm #103650

Stan:
Anyone and everyone is fully capable of calling Mr. Jeff Goss at the Center for Professional Development, College of Engineering, ASU. A simple phone call will confirm:
A) The debate was (in fact) scheduled for the given time (July 29th).
B) You did not contact either Mr. Goss or Dr. Montgomery.
C) Dr. Montgomery and Dr. Keats established the requirements.
D) Dr. Montgomery and Dr. Keats blocked the time on July 29th.
E) Dr. Harry complied with all of the debate specifications.
It is most unfortunate that you seek to continue this personal war of yours to discredit the accomplishments of others. I will no longer participate in future discussions with you since your “word” has proven to be false.
Respectfully,
Reigle Stewart

July 14, 2004 at 3:27 am #103394

Gabriel: The shift factor is related to sampling means, not
a shift in the population mean.

July 13, 2004 at 11:02 pm #103383
Gabriel: You state that "THE SHORT TERM STANDARD DEVIATION CAN BE SMALLER THAN THE LONG TERM STANDARD DEVIATION! A NEGATIVE SHIFT!" My answer is simple: you are not correcting for differences in the corresponding degrees of freedom. If the sums-of-squares between groups is zero, then the long-term standard deviation S.lt = sqrt[SST / (ng – 1)] is smaller than S.st = sqrt[SSW / g(n – 1)]. Even though SST = SSW, the standard deviations are not equal because the degrees of freedom are NOT equal. If you correct your equations to compensate for the differences in degrees of freedom, you will discover that S.lt = S.st when SSB = 0; but as SST > SSW, then SSB > 0. If there is no subgroup shifting (all subgroups have the same mean and variance), the between-group sums-of-squares will be ZERO once the degrees of freedom are corrected. As SSB increases, SST also increases. You really need a simple education on the components-of-variance model (i.e., 1-way ANOVA). With further investigation, you will also discover that Z.shift is a "natural artifact" of subgrouping. As I have so often stated, the Z.shift is NOT AN ACTUAL SHIFT IN THE UNIVERSE AVERAGE, BUT IT IS A "TYPICAL" SHIFT IN SUBGROUP-TO-SUBGROUP AVERAGES. I have said many times, it is a "compensatory offset" that models an expansion of the variance. Prove it to yourself: simply plot the cumulative sums-of-squares – SST, SSW, and SSB. For "typical" subgroup sizes (4 < n < 6), you will see firsthand that SSB > 0. You will find that the "typical" Xbar – Xbarbar is about 1.5 sigma. Pretty soon, you might get the problem defined correctly. Only then will you pursue the correct answer. Math is never wrong, only the problem definitions. (A tiny demonstration of the degrees-of-freedom point follows.) Reigle
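A tiny Python demonstration of the degrees-of-freedom point. The data are synthetic, with every subgroup mean forced to zero so that SSB is exactly 0:

    import numpy as np

    rng = np.random.default_rng(7)
    g, n = 50, 5
    data = rng.normal(0.0, 1.0, size=(g, n))
    data -= data.mean(axis=1, keepdims=True)   # force all subgroup means to 0 -> SSB = 0

    ss_w = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()   # SSW
    ss_t = ((data - data.mean()) ** 2).sum()                        # SST (= SSW here)
    s_st = np.sqrt(ss_w / (g * (n - 1)))   # 200 degrees of freedom
    s_lt = np.sqrt(ss_t / (g * n - 1))     # 249 degrees of freedom
    print(s_lt < s_st)   # True: an apparent "negative shift" from degrees of freedom alone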
July 13, 2004 at 9:09 pm #103379

Craig: Funny thing, the benchmarking data has been
published for nearly 20 years in several Motorola
documents, training materials, and other books and
articles … I would suggest you investigate deeper than a
“person” you happen to know that worked at Motorola
(then). Oh, by the way, my wonderful wife (Susan) has
worked at Motorola (Semiconductor Group) for 35 years
(and is still working there). The validity of your comments
seems to fit with the old phrase “a sample size of one
does not make a universe.” Again, believe what you want
… freedom of speech is the law, but the truthfulness of that speech is NOT guaranteed by that law. Have a great day and keep on blasting away without any facts. Cite me a
reference that I can read and verify for myself what you
are saying. This is not a Herculean task, just cite your
references. Reigle

July 13, 2004 at 7:53 pm #103376

To the poster "Sorry Reigle": OK, if you want me to be
wrong, so be it … but that does not change its historical
truthfulness, nor its mathematical validity. You cannot
convince anyone with an “opinion.” Show us your
evidence (verifiable facts), like any good Six Sigma
practitioner would do. As I have said many times on this
forum, and as Dr. Harry has published in many books and
articles, Bill Smith had the intuitive belief that a 1.5 shift
happens in a stable system, but it was Dr. Harry that “put
the math to it" so as to demonstrate the plausibility of
what Bill “suspected” to be true. This is also documented
in the eBook currently being offered on this website. I am
reminded of the old saying “you can lead a horse to
water, but you cannot make it drink from the bucket.” If
someone believes airplanes are “very unsafe,” then no
matter how much scientific data and information you
present to that person, they will never travel in an
airplane. In the world of human psychology, this is called
a “phobia.” Of course, phobias are not based in rational
thought, they are founded in irrational thought, to the
thinker, such things cannot be differentiated. Reigle
Stewart

July 13, 2004 at 7:23 pm #103373

To All on iSixSigma:
To help clarify the simplicity of the shift (which others try to make complicated), I am providing instructions for a most elementary Excel-based simulation. We will use a uniform distribution to keep things simple, but you can also use a normal distribution. The first time through, use the uniform, since it is simple and will illustrate the principles.
Here are the steps for constructing the simulation.
Step 1: Create a rational subgroup of data. To do this, we must create a random number in cell locations A1, B1, C1, D1 and E1; i.e., put the Excel equation “ = rand() ” in each of the 5 cells. You have now created the first row of n = 5 random numbers. This row constitutes or otherwise models a “rational subgroup.”
Step 2: Create 50 rational subgroups. To do this, we repeat step 1 for rows 2 through 50. Now, we have g = 50 rows of n = 5 random numbers. At this point, we now have a “process” that was operated over some period of time, but we are only sampling its performance on 50 occasions – each time making 5 measurements.
Step 3: Compute the “range” for each of the g = 50 rows. The range of each row is computed by subtracting the minimum value from the maximum value. As an example, we would input the equation: = max(A1:E1) – min(A1:E1) for the first row of data. This calculation would be repeated for each of the g = 50 subgroups (rows of data), thereby creating 50 unique ranges in column F.
Step 4: Compute the “grand range” for the aggregate set of ng = 5*50 = 250 random numbers. The Excel equation for computing the grand range is: = max(A$1:E$50) – min(A$1:E$50). Locate this equation in cell locations G1 through G50. Doing so will create a column with the same exact value in all 50 cells of column G.
Step 5: Create a simple line chart that graphs columns F and G. One of the lines on the graph should be a straight horizontal line. This is the composite range (i.e., grand range) of all 50 sets of data (i.e., it is the overall range of all 50 rational subgroups treated as a single set of data).
Step 6: Draw conclusions. Notice that all of the subgroup ranges are less than the grand range. In fact, the average within-group range is less than the grand range. So why is this true? Because no single within-group range can ever be bigger than the grand range; thus, the average within-group range will certainly be less than the grand range. The individual subgroup averages RANDOMLY bounce around, making the grand range larger than any given within-group range. So, the total variability of a process will always be larger than any given "slice in time." If we average the "slices in time," we have the "short-term" standard deviation. If we concurrently consider all of the measurements (not just individual slices in time), we can compute the "long-term" standard deviation. Thus, we have the ratio c = S.lt / S.st. As the value c gets bigger, the average group-to-group "shift" also increases in magnitude. In this manner, we are able to study "mean shift" by looking at the ratio of variances, just as the F-test is able to test "mean difference" by looking at variance ratios. (A Python rendering of this simulation appears below.)
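For those who would rather script it than build the spreadsheet, here is a rough Python rendering of Steps 1 through 6 (numpy's uniform generator stands in for Excel's RAND, MAX, and MIN):

    import numpy as np

    rng = np.random.default_rng(0)
    g, n = 50, 5
    data = rng.random((g, n))   # 50 "rational subgroups" of 5 uniform random numbers

    subgroup_ranges = data.max(axis=1) - data.min(axis=1)   # Step 3, column F
    grand_range = data.max() - data.min()                   # Step 4, column G

    print(subgroup_ranges.mean())                  # average within-group range
    print(grand_range)                             # always at least as large
    print((subgroup_ranges <= grand_range).all())  # True for every subgroup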
Regards,
Reigle Stewart

July 13, 2004 at 5:58 pm #103368

Gabriel: Transient random shifts will be "in control" on a control
chart but do have a practical impact. Consider a power
transformer. The "electrical losses" are directly related to the steel thickness of the laminations that comprise the transformer's core. If a certain section in a given "roll" of stamped steel is "on the high side," the losses will be larger than if that section is on the low side but also "in spec." In fact, the mean shift from T might be a "random event," but it will nonetheless induce "losses" in the transformer that is made from that particular section of steel. Overall, it is possible that the grand average is on target, but any given subgroup is off-target. Of course, the product built with the "randomly off-target" parts suffers losses, just as the Taguchi loss function shows us. Respectfully, Reigle

July 13, 2004 at 5:50 pm #103366
You can find the "original" thinking. Look at the header
bar on this web page under the red button called “New
eBook”. Reigle0July 13, 2004 at 5:46 am #103332
Reigle StewartParticipant@ReigleStewart Include @ReigleStewart in your post and this person will
be notified via email.Process capability can “shift” in two basic ways. The first
type is called “static mean offset.” It is characterized by
the equation Z.shift.static =  T – M  / S.st, where T is the
“target value,” also called the “nominal specification,” M is
the process mean, and S.st is the shortterm
(instantaneous) standard deviation. Thus, for any given
shortterm sampling period, the momentary “shift” in the
mean (from T) can be expressed in Z units of measure.
Do understand that the probabilistic value of Z.shift.static
can NOT be estimated for an unstable process (owing to
the nature nonrandom assignable causes), but it can be
estimated for a process that is in a state of “statistical
control, relative to T.” If a process is in a state of
“statistical control,” and that control is also relative to
T, the extent of expected mean shift (from T) is given by
the probability distribution of sampling averages
(reference central limit theorem). The second type of shift
is called “dynamic mean offset.” So, for a sampling
group of n = 4, we observe the shift expectation to be
about 1.5 sigma. For a process that is in a state of
statistical control in relation to T, the net effect, over many
sampling periods will be T = M, owing to the fact that 50%
of the means will be to the right of T and 50% will be to
the left of T when the “errors” are random in relation to T.
The net effect of such process behavior is manifested in
the form of an expanded standard deviation. It expands
because the “random error” occurring within sampling
periods is added to the “random error” that occurs
between sampling periods, thereby resulting in the case
S.lt > S.st. Hence, we observe that SST = SSW + SSB,
where SST is the total sums-of-squares, SSW is the
sums-of-squares within sampling periods, and SSB is the
sums-of-squares between sampling periods. Such random
error in process centering (between sampling periods)
expands the long-term standard deviation beyond the
short-term standard deviation. However, when SSB = 0,
we observe S.lt = S.st and therefore conclude no shift.
But when SSB > 0, we know S.lt > S.st. When SSB
reaches a predefined magnitude, it becomes “statistically
significant” and is no longer considered “random.” At this
threshold, such “expansion” can be expressed in the form
of an “equivalent mean shift [of process capability].” Under
this condition, we compute Z.shift.dynamic = (3S.lt – 3S.st)
/ S.st, which reduces to the case Z.shift.dynamic = 3(c – 1),
where c = S.lt / S.st. For typical sampling schemes
involving the use of “rational subgrouping,” it can be
shown that the “typical” mean offset (in subgroup means)
is about Z.shift.dynamic = 1.50. Of course, this shift is the
“threshold” maximum that can be expected for a process
that is in “statistical control.” This is the “shift” that is used
as a convenient “constant” in Six Sigma work.
Respectfully, Reigle
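For readers who want to try the arithmetic, the following is a minimal sketch in Python of Z.shift.dynamic = 3(c – 1) estimated from subgrouped data, using the S.st and S.lt definitions quoted above. The function name and the simulated data are illustrative assumptions only.

import numpy as np

def dynamic_shift(subgroups):
    # Estimate Z.shift.dynamic = 3(c - 1), where c = S.lt / S.st.
    data = np.asarray(subgroups, dtype=float)  # shape (g, n): g subgroups of n observations
    g, n = data.shape
    ss_w = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()  # within sums-of-squares (SSW)
    ss_t = ((data - data.mean()) ** 2).sum()                       # total sums-of-squares (SST)
    s_st = (ss_w / (g * (n - 1))) ** 0.5  # short-term: sqrt(SSW / g(n - 1))
    s_lt = (ss_t / (n * g - 1)) ** 0.5    # long-term:  sqrt(SST / (ng - 1))
    return 3.0 * (s_lt / s_st - 1.0)

# Example: 25 subgroups of n = 4 with random between-subgroup centering error.
rng = np.random.default_rng(1)
centers = rng.normal(0.0, 1.0, size=(25, 1))            # error between sampling periods
samples = centers + rng.normal(0.0, 1.0, size=(25, 4))  # plus error within sampling periods
print(dynamic_shift(samples))  # positive whenever SSB > 0 (S.lt > S.st)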
July 9, 2004 at 8:41 pm #103221
R2D2:
As we speak, there is a substantial effort currently underway to do just what you are speaking of. This effort is operating under the banner of the “Six Sigma Federation” and currently involves several universities (US and international schools of engineering and business) as well as several Fortune 100 corporations. Standards are being defined for BOK, deployment, implementation, management, metrics, certification, and a host of other factors. It is anticipated that the standards will be implemented during 2005.
Regards,
Reigle Stewart

July 9, 2004 at 12:35 am #103175
Stan: I know you like to be a little controversial and that is
OK by me … to each his own, and rightfully so. However,
some have invested a fair amount of work (and personal
expense) to put this debate together. Various key people
have organized their calendars to make their contribution
possible. These individuals (and myself) would greatly
appreciate some cooperation and information from you.
As you previously indicated, your white paper was
completed and sent to ASU (per one of your recent posts).
As of this afternoon, there is still no paper present at ASU
and no communication with ASU. People are beginning
to have serious doubts about your intentions (and perhaps
motives). Without any form of judgment, we kindly ask if it
is still your intent to: a) submit a white paper on the
defined topic and b) attend the debate. Please, let’s
practice some basic professional courtesies. If you do not
intend to provide a white paper and participate in the
debate, then please say so now. There is no point in
being disruptive to others’ time and resources. Thank you
for your immediate post concerning this issue. Reigle
Stewart

July 8, 2004 at 8:44 pm #103160
Stan: To reconfirm the status of your paper, I contacted
ASU today and nobody, including Jeff Goss or Dr.
Montgomery, has received a white paper from you, nor
have they had any communication with you concerning
your participation in the scheduled debate. Please
remember the deadline for your white paper is by close of
business on July 15, 2004. If you really did send the
paper, then immediately respond to me with the person’s
name and address. Perhaps we can track it down
through this person. Recall that I previously posted that
your paper must be sent to the email address of Mr. Jeff
Goss or Dr. Montgomery. Since you stated the paper has
been completed, I recommend that you resend it today
(to either of these two gentlemen). I look forward to your
reply on this issue. Respectfully, Reigle Stewart

July 8, 2004 at 8:31 pm #103157
Peppe: You are right on target. As the mean of each CTQ
approaches its target value (nominal specification) and
the variance is reduced, the probability of defect is
reduced. As the probability of defect is reduced for each
CTQ, the rolled yield is increased. As the rolled yield
increases, the total defects per unit is reduced which,
in turn, leads to a reduction in the latent defect content of
the product. Given a constant combined test and
inspection efficiency, the escape rate is reduced. As the
escape rate is reduced, the field failure rate is reduced. As
the field failure rate is reduced, the warranty period can be
extended. As further improvement in process capability is
realized, there is less need for test and inspection. As
rolled yield increases, cycle time goes down. As cycle
time goes down, work-in-process is reduced. Thus, we
have the “micro-economics” of Six Sigma. Respectfully,
Reigle Stewart

July 8, 2004 at 2:50 pm #103126
V. Laxmanan: Your position has lots of emotional appeal
but might be lacking in the operations management
arena. I would agree that external measures are the end
game. After all, it is the customer’s perspective.
Nonetheless, we must also have internal measures of
performance, especially those measures that correlate to
the external measures. Merely looking at external
measures is like trying to steer your boat by looking at the
wake. Of course, internal measures must be statistically
correlated to the external measures. When such
correlation exists, we then have “radar” to “see” where we
are going. By identifying and verifying the key internal
measures, we better understand what “knobs” must be
turned to make things better and keep the ship on course.
Respectfully, Reigle Stewart

July 8, 2004 at 6:37 am #103095
Stan: As of today, Mr. Jeff Goss or Dr. Montgomery at
ASU has NOT received any type of white paper or
communication from you. Their email addresses have
been posted several times for your convenience. The
stage is set, the parties are ready, the referees await your
position paper. Respectfully, Reigle Stewart

July 7, 2004 at 6:30 pm #103054
New BB: What Jamie says is so true, but often
overlooked by many practitioners. Given the statistical
relationship between the t-test, F-test, and ANOVA, one
can easily and confidently conclude that any given
difference between two or more means can be mathematically
equated to a ratio of the variances. In short, a mean shift
can be “calibrated” to an expansion of the standard
deviation. This relation is exploited in Six Sigma work,
specifically related to the “Six Sigma Model.” For
example, if we set Cpk = Pp, then we algebraically
determine that k = 1 – (1 / c), where c = S.lt / S.st and S.lt
is the longterm standard deviation and S.st is the short
term standard deviation. By some more simple algebra, it
can be demonstrated that the equation k = 1 – (1 / c) can
be further reduced to Z.shift = Z.st – Z.lt. So, for the Six
Sigma model, we observe that Z.shift = Z.st – Z.lt = 6.0 –
4.5 = 1.5. Remember, this is a “model” and not an
absolute. Regards, Reigle Stewart
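As a numeric illustration of the algebra above, here is a minimal sketch, assuming the mean sits on target so that Z.lt = Z.st / c; the function names are mine, not from any published source.

def k_from_c(c):
    # Setting Cpk = Pp forces k = 1 - (1 / c), where c = S.lt / S.st.
    return 1.0 - 1.0 / c

def z_shift(z_st, c):
    # Z.shift = Z.st - Z.lt, with Z.lt = Z.st / c when the mean is on target.
    return z_st - z_st / c

c = 6.0 / 4.5           # the Six Sigma model: Z.st = 6.0, Z.lt = 4.5
print(k_from_c(c))      # 0.25 -> the mean offset as a fraction of the half-tolerance
print(z_shift(6.0, c))  # 1.5  -> the familiar Z.st - Z.lt = 6.0 - 4.5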
July 7, 2004 at 5:25 pm #103051
Bev: I would respectfully disagree with your position
concerning the J.D. Powers data. Such disagreement is
based on two points. First, it was Mr. Bill Smith at
Motorola in the early 80’s that discovered an empirical
correlation between the field failure rate of electronic
equipment and the total defects produced during
manufacture. Mr. Smith further postulated a correlation
between the latent defect content of a product (flaws that
require activation energy to be made observable), total
defects per unit (TDPU) and the field failure rate.
Following this, Mr. Smith connected with Dr. Harry to
further examine this association in the mid 80’s, but from a
statistical and mathematical point of view. Subsequent
empirical and simulation studies conducted by both
gentlemen statistically ascertained that the “infant
mortality period” and “useful life period” of the classic
bathtub reliability curve are explained by three things;
namely, existing design bandwidth, process operating
bandwidth, and feature complexity. The interaction of
these three factors governs the probability of observing
an inline defect (noting that defects are most often
Poisson distributed). In turn, it was found that the latent
defect content is directly proportional to this probability.
From this, the two researchers concluded the
mathematical association and subsequently published
this information internally. Of course, the problem can be
worked “in reverse.” The field performance data can be
used to “back compute” the latent defect content which, in
turn, is used to back compute the TDPU. Once the TDPU
is estimated, the pre- and post-inspection performance
yield of a “typical” CTQ can be approximated. Do
recognize that such an approximation is just that, an
approximation. In this context, the resulting “sigma” is
merely a benchmark (not a precise measurement like
would be ascertained from direct measurement of the
related CTQs). Secondly, while deploying Six Sigma at
Ford Motor Company, Dr. Harry and Mr. Phong Vu
applied this reasoning and these equations to the internal
automobile quality data gathered at Ford and from the
related J. D. Power’s reports. Using this data, they were
able to successfully benchmark (and forecast) the
inherent production capability of several vehicles.

July 6, 2004 at 3:41 pm #102994
If we assume 200 defects per 100 cars, no matter how
you cut it, that’s 2 defects per car. This is AFTER the
influence of factory inspection and test. Assuming a
combined containment efficiency of 85% (which is
typical), we would see about 13 defects per car BEFORE
test and inspection. Further assuming 2,000 CTQ’s per
car, this gives us .0067 defects per opportunity (i.e., per
CTQ), or a yield of about 99.33%. Since this yield is “long
term” by nature, we can estimate the corresponding
“sigma value.” This would be approximated as 2.48 +
1.50 = 3.98, or about 4.00 sigma. If we consider 100
defects per 100 cars, the resulting Sigma value is about
4.2 sigma. On the other end of the spectrum, if we
assume 300 defects per 100 cars, the capability would be
about 3.8 sigma. This likely makes sense given the
“average sigma” per part is about 4 sigma. From a “worst
case estimate” point of view, if we assume 300 defects
per 100 cars, 95% containment efficiency, and only 500
CTQs per car, we see a capability of about 2.67 sigma per
CTQ. Bestcase might be 50 defects per 100 cars with a
containment efficiency of 50% and 3,000 CTQs, therein
realizing a capability of about 4.9 sigma. So, we have an
“extreme range” of 2.7 to 4.9 sigma with a “most probable”
scenario of about 4 sigma. Remember, this is a
“benchmarking” technique and it makes a lot of
RATIONAL assumptions. It is merely a firstorder
approximation of the process capability. To gain more
precision, one must dig to the next level of detail. But at
the high level we are cruising, this approximation is “good
enough.” We don’t need 7 digits of precision to make a
1-digit decision. This is what I meant by the phrase
“measure it with a micrometer, mark it with caulk, and then
cut it with a hand ax.” Reigle Stewart
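The arithmetic is easy to reproduce. Here is a minimal sketch in Python of the benchmarking recipe described above; the containment figures and the 1.5 shift are the post’s assumptions, and the function name is illustrative.

from statistics import NormalDist

def benchmark_sigma(defects_per_unit, containment_eff, ctqs_per_unit):
    # Back out the pre-inspection DPU, convert to defects per opportunity,
    # then report the long-term Z plus the 1.5 shift as a short-term sigma.
    dpu_before = defects_per_unit / (1.0 - containment_eff)
    dpo = dpu_before / ctqs_per_unit
    z_lt = NormalDist().inv_cdf(1.0 - dpo)
    return z_lt + 1.5

print(benchmark_sigma(2.0, 0.85, 2000))  # about 3.98, i.e. "about 4 sigma"
print(benchmark_sigma(3.0, 0.95, 500))   # about 2.67, the worst-case scenario
print(benchmark_sigma(0.5, 0.50, 3000))  # about 4.9, the best-case scenario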
July 6, 2004 at 3:01 pm #102993
Jude: Convert each Cpk to a standard normal deviate;
i.e., Z=Cpk*3. Next, you must convert Z to equivalent
yield (in Excel use normsdist(Z)). After this, you need to
multiply all of the yield values to get the rolled throughput
yield … Y.rt = Y1 * Y2 * … * Yn, where n is the number of
yield values (i.e., the number of Cpks you converted).
Then normalize the rolled yield by computing Y.norm =
Y.rt^(1/n). Finally, convert Y.norm to a Z value (using
Excel’s normsinv) and then divide by 3 to get the final
answer … Cpk.norm = Z.norm / 3. The resulting normalized
Cpk is “kind of like an average” but is statistically valid.
Kind Regards, Reigle.
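The same recipe, sketched in Python for those without Excel at hand (normalized_cpk is an illustrative name, not a standard function):

from math import prod
from statistics import NormalDist

def normalized_cpk(cpks):
    # Roll several Cpk values into one normalized Cpk via rolled throughput yield.
    nd = NormalDist()
    yields = [nd.cdf(3.0 * cpk) for cpk in cpks]  # Z = Cpk * 3 -> equivalent yield
    y_rt = prod(yields)                           # Y.rt = Y1 * Y2 * ... * Yn
    y_norm = y_rt ** (1.0 / len(cpks))            # Y.norm = Y.rt^(1/n)
    return nd.inv_cdf(y_norm) / 3.0               # Cpk.norm = Z.norm / 3

print(normalized_cpk([1.33, 1.00, 1.67]))  # a bit below the arithmetic average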
July 6, 2004 at 2:25 pm #102991
From benchmarking we know the average is about 4
sigma. The automobile data is close. Measure it with a
micrometer, mark it with caulk, and then cut it with a
hand ax. Your data reveals a sigma that is below
average. We know Toyota is better than average. So the
defect opportunity count is much too low. I heard from a
Ford executive there are about 2,000 opportunities in a
car.

July 1, 2004 at 4:06 am #102771
Matt: Obviously, the debate has narrowed down to just Stan … the debate format, plans, and time allocations have been made accordingly. Remember, the intent of this debate is to establish whether or not the shift factor has theoretical merit and practical application. This brings it down to a binary position. Owing to this, only two contestants are required. Dr. Harry’s book clearly presents his position. Stan’s paper (on the prerequisite topic) is due on July 15th to Dr. Montgomery. Stan has no wiggle room left … either he meets the debate specifications, or he does not. If he does not, he enters into a default position. Reigle
July 1, 2004 at 3:43 am #102769
Hey Stan: It’s good to see that you finally joined us in the
work function thread. I have really missed you … you
know, being on my case and all. My life has not been the
same over the last few days without you. I just wish you
knew what a kick we get from reading your posts. I must
admit that you really work at trying to induce an argument.
Well, old friend, our time in the limelight has come and
gone. If you have not noticed, I am off the shift discussion
since the debate was scheduled … so my job is done …
you committed publicly to several specific positions as
well as the debate (per all of your enlightening posts).
Look, everyone knows you won’t show up … you will have
some reason or other why you can’t debate. Personally, I
would love you forever if you did show up to the debate.
By the way, your cloak of “Stan” has since been revealed
by an associate of yours. I guess he got tired of your
arrogance. You certainly have an interesting history with
Motorola. By now, you should know that I use multiple
names when posting (as I have previously admitted)
simply because it is kind’a fun … but your detection rate is
not much better than today’s typical inspection efficiency.
You have picked up on some of these aliases while
others you have not. On some occasions, you have been
in left field while others you hit a home run. I have
reviewed the policies of this website and find nowhere
that such a practice is “improper” or violates some “rule.”
If I have overlooked such a reference, please point it out
and I will only use my given name, promise. But until
then, it’s a real gas watching you light off. By the way, the
new Annual National Six Sigma Korean Quality Award
(under the direction of the Korean Management
Association) was named in Dr. Harry’s honor. Funny
thing, the engraving did not say “Stan.” Wow, was I
disappointed. I thought by now you would be world
famous (other than in your own shower … I mean “mind,”
not “shower.”). Just kidding with you Stan. I love ya baby.
Keep the sarcasm coming … it makes you a colorful
character. Reigle Stewart

June 30, 2004 at 11:27 pm #102747
V. Laxmanan: Thank you for your kind and timely
responses to my questions. Do understand that I deeply
appreciate your innovative thinking. There is currently
enough information and data for me to chew on for some
time. Now, I need to just sit back and think about it.
Again, your contribution and efforts are greatly
appreciated. Respectfully, Reigle Stewart.

June 30, 2004 at 11:17 pm #102746
PB: Excellent points! Do recognize that the “model”
parameters of many control charts also change after each
new data point (or grouping of data) — much like the
EWMA chart. Also, the fundamental model of many SPRT
tests change (like Sequential Probability Ratio Tests; i.e.,
sequential hypothesis testing). So, a dynamic and
evolutionary model (like EVOP) does not trouble me as it
may concern you. What does trouble me is not knowing
how the “constants” should be established in his work
model: y = hx + c, especially for the case where c > 0.

June 30, 2004 at 8:06 pm #102728
Darth: I fully agree with your position and support your
assertions. I do believe he is entitled to a little more time
to “boil the ocean,” although I too am getting somewhat
anxious about the proverbial “bottom line.” He did
promptly reply to my request for a pragmatic example
(reference his DPMO example), although it too was
somewhat clouded with mysterious terms (like h and c).
But if he can explain these terms in the context of the
DPMO example, we may have something. If not, I would
agree that the context is more “theoretical” than
“practical.” Thank you for your kind consideration of my
position. Regards, Reigle Stewart

June 30, 2004 at 7:43 pm #102723
To All. Innovation must rise above all else in our field of
endeavor. Rote understanding of existing knowledge is
great, but what marks the pioneer is the ability and desire
to innovate. V. Laxmanan is doing a marvelous job at
trying to communicate a possible innovation. Until we
study it and comprehend the possibilities, we will never
know. To this end, I am reminded of the movie “Close
Encounters of the Third Kind.” Imagine upon first contact,
the human species, not understanding the “tones and
pulses,” simply launched a missile at the source with the
rationale “let’s shoot what we don’t understand.” We have
witnessed this so many times on this discussion board
(out-of-hand dismissals). Remember, many inventions
had little practical significance at their inception, but later
proved to be milestones in the course of human history.
V. Laxmanan is at least attempting to share a vision. He
may be wrong or off course, but he is certainly innovative.
His approach may be nothing new, but what a “packaging
job.” Even if it is the same as regression, it is compelling –
– when is the last time so much discussion was generated
about linear regression? Maybe this is a way to draw
management into the use of regression (if that is what it
turns out to be). Sometimes the mousetrap doesn’t work, but
the box it comes in is quite useful. Please lay down your
swords and open your minds to the possibilities because
such opportunities (innovations) are few and far between.
Let’s communicate with ET, not shoot him down before the
message is delivered. My Humble Opinion, Reigle
Stewart

June 30, 2004 at 7:27 pm #102715
Darth: The practical significance of all this is as follows.
Laxmanan has identified a simple equation that MIGHT be
useful to forecast the performance of something that
would normally be reported as a simple ratio. As you know,
many six sigma metrics are presented as a simple ratio,
but often fail to communicate what is really going on. For
example, final yield is often defined as the ratio of output
divided by input. However, the classic yield metric does
not account for rework (like rolled throughput yield does).
Therefore, the classic yield metric (output / input) can be
highly deceptive. It is possible to have a final yield of
100%, but in reality, the input exceeds the output.
Another example would be process capability, which we
recognize as a contrast of the design bandwidth to the
process bandwidth. Laxmanan contends that such ratios
are often misleading in terms of what is actually
happening and should be altered depending upon the
circumstantial conditions at hand (by way of changing the
constants). If his work function is merely a linear
regression model, then there is nothing “new.” On the
other hand, if the work function is not analogous to
regression AND there exists a practical way to establish
the constants, THEN it is POSSIBLE that the work function
should be further studied for inclusion in the six sigma
toolbox. If what Laxaman has put forth holds water, it
represents a type of “universal performance metric” with
many applications within our field of endeavor.
Respectfully, Reigle Stewart

June 30, 2004 at 6:19 pm #102708
V. Laxmanan: Thank you very much for your application
example (i.e., DPMO). The idea that y = defects and x =
opportunities greatly facilitated my understanding of your
value proposition. It seems to me that y=hx+c is
applicable, but does represent a significant overlap with a
statistical procedure called “linear regression.” As you
may know, simple linear regression assumes the form y =
b0 + b1x, where y is the dependent variable, x is the
independent variable, b0 is the intercept and b1 is the
slope. Essentially, it would appear that your work function
is of the same form as the linear regression model. If so,
we can say there is an association between the absolute
number of observed defects (y) and the absolute number
of defect opportunities (x). For any given situation, the
absolute number of observed defects will likely vary over
time even though the total number of available defect
opportunities remains constant (owing to influence of
random and/or nonrandom causes). But, the case y > x is
not physically possible given the conjugal relationship
between a defect and its corresponding opportunity. This
is to say that a single opportunity can only be realized in
one of two possible states; namely, 0 or 1 – either the
opportunity proves to be defective or it does not (once
brought into existence). If x=0, it is not physically possible
to realize the case y > 0, again due to the conjugal
relationship. Nor can x be a negative number in the
physical world. Given this discussion, how would you
propose to establish the “h” and “c” terms of the work
function? If the work function is analogous to regression, I
need no guidance, but if the work function is not
analogous to regression, I must seek your advice on how
these terms should be properly established. What is your
thinking? Respectfully, Reigle Stewart
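For what it is worth, if the work function is indeed ordinary least squares, establishing the “h” and “c” terms is mechanical. A minimal sketch follows; the defect and opportunity counts are invented purely for illustration.

import numpy as np

opportunities = np.array([100, 200, 300, 400, 500], dtype=float)  # x
defects = np.array([2, 3, 7, 9, 11], dtype=float)                 # y

h, c = np.polyfit(opportunities, defects, 1)  # slope (h) and intercept (c)
print(f"h = {h:.4f}, c = {c:.4f}")            # h then acts like a defect rate per opportunity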
June 30, 2004 at 3:11 pm #102694
Rod: According to Dr. Elena Averboukh [an industry
funded professor at the University of Kassel (Germany)],
“TRIZ basic postulates, methods and tools, including
training methodologies invented by H.Altshuller have
been further developed and significantly enhanced by his
followers, researchers and trainers (from 1985 to
present), particularly known as ITRIZ generation of
methodology and tools.” Respectfully, Reigle Stewart.

June 30, 2004 at 3:00 pm #102691
Sgollus: To help answer your question, I refer you to the
“Ask Dr. Harry” forum on this site. He states: “One must
necessarily understand that the short-term standard
deviation reports on the “instantaneous reproducibility” of
a process whereas the long-term standard deviation
reflects the “sustainable reproducibility.” To this end, the
short-term standard deviation is comprised of the “within-
group” sums-of-squares (SSW). The long-term standard
deviation incorporates the “total” sums-of-squares (SST).
Of course, the difference between the two constitutes the
“between-group” sums-of-squares (SSB). By employing a
rational sampling strategy it is possible to effectively block
the noises due to assignable causes from those due to
random causes. In this context, we recognize that SST =
SSW + SSB. By considering the degrees-of-freedom
associated with SST and SSW, we are able to compute
the corresponding variances and then establish the
respective standard deviations. In the case of a process
characterization study, we note that the short-term
standard deviation is given by the quantity Sqrt(SSW /
(g(n – 1))). The long-term standard deviation is defined as
Sqrt(SST / (ng – 1)). When computing Cp and Cpk, it is
necessary to employ the short-term standard deviation.
This ensures that the given index of capability reports on
the instantaneous reproducibility of the process under
investigation. So as to reflect the sustainable
reproducibility of the process, the long-term standard
deviation must be employed to compute Pp and Ppk.
Oddly enough, many practitioners confuse these two
overlapping sets of performance indices.”
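A short sketch of those computations, assuming a characterization study laid out as g subgroups of n observations; the function name and the specification limits are illustrative only.

import numpy as np

def capability_indices(subgroups, lsl, usl):
    # Cp/Cpk from the short-term sigma; Pp/Ppk from the long-term sigma.
    data = np.asarray(subgroups, dtype=float)  # shape (g, n)
    g, n = data.shape
    mean = data.mean()
    ssw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum()  # SSW
    sst = ((data - mean) ** 2).sum()                              # SST
    s_st = np.sqrt(ssw / (g * (n - 1)))  # Sqrt(SSW / (g(n - 1)))
    s_lt = np.sqrt(sst / (n * g - 1))    # Sqrt(SST / (ng - 1))
    cp = (usl - lsl) / (6 * s_st)
    cpk = min(usl - mean, mean - lsl) / (3 * s_st)
    pp = (usl - lsl) / (6 * s_lt)
    ppk = min(usl - mean, mean - lsl) / (3 * s_lt)
    return cp, cpk, pp, ppk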
June 30, 2004 at 2:28 pm #102688
Mike: You say I do not need to bring up Dr. Harry
every time I post. Interesting that you point this out to me when
a select few spew out their false statements about him
(i.e., Stan) in an obvious attempt to discredit and defame
his work (every chance they get). Why don’t you provide
commentary on this each time they do this and point out
their being unprofessional? Bias perhaps? Second, you
indicate that you know a couple of people at that place
[ASU] … that their implementation knowledge is shaky at
best. This too is interesting since it is only Dr. Harry that is
providing the instruction, not ASU. So, based on your
comment, what does ASU have to do with the course
content? Dr. Harry’s business partner Mr. Phong Vu, was
the senior champion at Ford Motor Company (reporting to
the CEO). Seems these two gentlemen likely know more
about Six Sigma and its deployment than most. Perhaps
your bias is showing through. In terms of the debate, ASU
has taken a neutral position, but we all know Stan will not
show up. You speak of this forum being turned into a
daytime talk show … wake up man and read some of the
“site cronies” posts … it already is a daytime (and night
time) talk show (and tabloid). False, misleading
information abounds and goes unchecked. Disrespect to
others is always on the menu and being served. Of
course, it is pointed out that this is a discussion forum and
not an “expert Q&A” site … many have said “let the buyer
beware” when it comes to obtaining information on this
site. In many cases, the inflammatory posts are little more
than bathroom graffiti. Unfortunately, the “respect policy”
of this site has little meaning and is not enforced. While
the vast majority of posters are well intentioned, a few use
the discussion forum to vent their alter egos using lies
and deceit to further their own agendas. Like Stan, when
called out into the open, they fail to appear. Biased?
Maybe so.

June 30, 2004 at 4:33 am #102661
V. Laxmanan: With the greatest respect, I have reviewed
each of your posts. I also reviewed (in detail) your posted
paper as well as the power point presentation. Perhaps
it’s my middle-aged eyesight, but I could not find
anywhere within these references where you related the
terms y, h, x, and c to certain specific variables within our
profession’s line-of-sight. I did read (in many portions of
your work) where you related these terms to the physical
variables from which the work function originated (i.e.,
physics), but not to the field of quality. Is it too much to
simply fill in the blanks for us: Y = ?, h = ?, x = ?, and c = ?.
Doing so would greatly help the others (and me)
understand your reasoning and therein support further
inquiry. Thank You for Helping Us Along. Reigle Stewart

June 30, 2004 at 2:54 am #102654
Dear V. Laxmanan: I have reviewed your posts
concerning the work function. I would agree there might
be some interesting implications for the field of business
management and quality management; however, it would
seem that the readership of this forum has asked a very
simple question of this “profound” thinking you offer. That
question appears to be about the variable labels in the
relation y = hx + c. For example, if one declares the
variable “Y” to be “Process Yield,” then what might the
other variables be (i.e., h, x, and c)? Please, offer us
something we can get our teeth into. Surely, finding the
“right” empirical data is not required to provide us with a
set of hypothetical labels (and perhaps an application
scenario). With such “profession specific” labels, it will be
much easier for many of us to evaluate your proposition
and share the same “profound” insight you say is there.
After all, if you cannot lead us back into our world of
application, the insight remains yours alone. I humbly
await your example labels. Respectfully, Reigle Stewart

June 29, 2004 at 9:30 pm #102645
Sorry, I messed up the one reference:
https://www.isixsigma.com/offsite.asp?A=Fr&Url=http://http://www.qualitydigest.com/feb04/articles/

June 29, 2004 at 9:28 pm #102644
Trev: Maybe you should also consider these references:
http://www.qualitydigest.com/feb04/articles/06_article.shtml
https://www.isixsigma.com/library/content/c031006a.asp

May 13, 2004 at 11:58 pm #100248
Matt: Thank you. Reigle Stewart.
May 13, 2004 at 11:09 pm #100244
SSNewby: Excellent point. Dr. Harry has been working
for the last 18 months on developing Generation III Six
Sigma. The focus is on Value Creation and looks at an
organization’s “Velocity of Capacity.” It utilizes the ICRA
approach: Innovation, Configuration, Realization, and
Attenuation. Hey Stan, I’ll bet Dr. Harry did not invent any
of this either … huh. The white paper on Gen III and
Velocity of Capacity was written by Dr. Harry some time
ago and is now coming into its own. You will be hearing
a lot about this in the near future. It is currently being
delivered to POSCO … Korea’s largest steel manufacturer.
That’s Dr. Harry’s job … I am doing mine too. Reigle
Stewart.

May 13, 2004 at 10:59 pm #100243
Matt: By the way, you are very quick to point out Gabriel’s
contribution to this website … why are you not as quick to
point out Dr. Harry’s contribution to the world? Why do you
not declare Gabriel’s supposed flaws, yet you are quick
to point out Dr. Harry’s? Seems you may not have a
balanced playing field. Respectfully, Reigle Stewart

May 13, 2004 at 10:49 pm #100241
Matt: You make an excellent point about Gabriel and I
will yield to your request and judgement. However, you
must look at it from my side as well. For twenty years I
have been with Dr. Harry … I have seen firsthand his
work put into action by so many corporations (that have
benefited so greatly). His contributions are tireless and
continuous. When you witness such things first hand and
then read the crap some say on this website, it does “stick
in your craw.” When you hold documents in your hand
that clearly demonstrate “first usage” (like the terms
Black Belt and so on) and show the dates on which these
documents were signed, and then hear Stan say things
like “He didn’t invent anything,” it really hurts on a
personal as well as professional level. Stan (and several
others) seem to have a need to attack the character and
contributions of others, thereby bringing shame to our
profession. We should all be willing to acknowledge the
truth when presented with evidence (just like in a court of
law, also based on the principles of logic and science). When
challenged, we should have the personal courage and
strength to step forward and provide our facts. If we do not
have the facts, then we should freely say so and concede
that our position is one of conjecture, not fact. Otherwise,
we simply make our profession look bad. Reigle Stewart

May 13, 2004 at 10:13 pm #100239
Gabriel: Praveen said it best. You really don’t know how
to extend credit to someone that has earned it. Bottom
line, your professional jealousy is peeking through …
everyone else can see it but you. You want Dr. Harry to
be wrong soooo bad, you even compromise your own
integrity (and don’t even know it, but others do). Greed
has the same effect you know. So does jealousy. Save
the words and just show your math that proves Dr. Harry’s
equations are wrong … it’s that simple … But then we know
you won’t do that either. Reigle Stewart.

May 13, 2004 at 10:06 pm #100238
Gabriel: Thanks for making my point … seems that when
you cannot offer math or facts or documents to support
your position on an issue, you then turn to taking
paragraphs fully out of context and skewing them to what
you want them to say. Everyone knows this age-old parlor trick …
it just makes the perp look even more foolish. So let me
ask you, what do the other 186 pages of the book say?
Wow, for a bunch of people that claim to be “Six Sigma
Professionals,” you really don’t follow the practices you
preach. Come on and get with it, Gabriel; make your case
… show me your math … not your limited understanding.
The same goes for Darth. Give me the references and
facts holistically, not in the fragments you want to
present. I won’t let you off that easy. Reigle Stewart, the
old-bald-fat-guy-that-don’t-know-squat.

May 13, 2004 at 7:54 pm #100231
Matt: One more thing, a simple “HTML Source Code”
report shows that the Juran Institute uses Dr. Harry’s
name as a meta-name keyword. As you know, such
meta names are used by search engines to find “relevant”
searches. While this is flattering for Dr. Harry, it would
seem to be a questionable use of his name. Again, why
would the Juran Institute stoop to such tactics? Reigle
Stewart. PS: The Juran Institute is not the only “Six Sigma
Consultancy” doing this.

May 13, 2004 at 7:48 pm #100230
Gabriel: I know you mean well when you say “No
mention is done to the long term variation, Pp and Ppk
(which are included in the SPC handbook from AIAG) are
not mentioned either.” So the bottom line is simple … the
Ford document you have does not reference Cp* and Cpk*
(now known as Pp and Ppk). So, Dr. Harry’s first usage
still stands. What year is the AIAG document you have?
Does this document call these things out before Dr.
Harry’s first publication (Producibility Analysis book,
1988)? Respectfully, Reigle Stewart.

May 13, 2004 at 7:24 pm #100228
Matt: Go to the Juran Institute site and look under “Who We
Are.” Hold your mouse over the title “Juran Global” or
“Juran Partners” and you will see Dr. Harry’s name
referenced in the alt tag (but not within the text). Sneaky
huh. I will guarantee you with 120% probability (if that
were possible) that Dr. Harry IS NOT a partner with Dr.
Juran or the Juran Institute. So what is all of this saying? I
do believe its selfevident … the Juran Institute believes it
can capitalize on the name “Dr. Harry.” Oh well, so it
goes with free enterprise … If you can’t get ’em with your
own name, then use someone else’s name … even if they
have not asked permission to do so. Wonder why they
didn’t use Stan’s name? Reigle Stewart

May 13, 2004 at 7:14 pm #100226
Matt: Dr. Juran has accomplished many great things in
his highly distinguished career, but Six Sigma is not one
of them. When did Juran’s work (books and papers) start
to talk about “Six Sigma?” In fact, Motorola got rid of the
Juran stuff in favor of Six Sigma. Interestingly, the Juran
Institute uses Dr. Harry’s name as an alternate description
(i.e., alt tags) for their website … wonder why? On some
computers, you can see Dr. Harry’s name appear many
times when the site first opens up. Reigle Stewart

May 13, 2004 at 6:23 pm #100222
JD: An affiliation with an institution does not determine
the validity of mathematics or the soundness of one’s
reasoning or the validity of one’s references … the math
stands on its own merits as do the references … for
anyone to independently examine. These judges are
exceptionally qualified and world-renowned within the
disciplines of statistics, engineering, DOE, SPC, reliability
engineering, and so on … Dr. Montgomery was a
Shewhart Medal recipient. The debate is not intended to
change anyone’s mind … simply provide the arguments
and let the referees decide. Once posted, you can decide
on your own. Bottom line, Stan is a big boy … he agreed
to prepare a white paper that provides the math to counter
Dr. Harry’s position … he agreed to participate in the
debate … if you are not personally interested in the
debate, then ignore it. No one forced Stan into this
position … he agreed, and has made his opinions and
allegations known to the world (as evidenced by his posts
on this website) … now, let him defend them in an
honorable way. The truth will prevail, unless you don’t
want the truth known (for any number of reasons). Reigle
Stewart

May 13, 2004 at 6:11 pm #100220
Stan: Wow, I checked again, but I cannot find the page
number where Juran uses the term “DMAI,” “MAIC,” or
“DMAIC.” Please, give me the previously mentioned
documents and page numbers and I will shut up … no
kidding around, give me a reference for “first usage” of
the terms: Black Belt, Green Belt, Brown Belt, Yellow Belt,
PTAR, DMAIC, MAIC, and so on, and I will be humbled
and simply go away forever (once I have verified your
references). Now how is that for an offer to your heart?
Reigle Stewart

May 13, 2004 at 6:04 pm #100217
Stan. Sorry for jumping back in but I could not resist when
I read your post that said “Yes” to the question: Have you
ever been responsible for the deployment of Six Sigma in
a multinational company. Will you give us the names of
those companies so we can verify that you had the
corporate leadership responsibility for deployment? All we
need are the company names … the rest can be easily
verified in a very short period of time (like a couple of
phone calls). Please, give us the company names … this
could be a big step in your direction. Reigle Stewart

May 13, 2004 at 5:29 pm #100209
Stan. I am done with this bantering. Good Luck at the
debate. Reigle.

May 13, 2004 at 5:25 pm #100206
Stan: Funny thing, they don’t know you. Reigle Stewart
May 13, 2004 at 5:24 pm #100204
Stan: More RHETORIC and ALLEGATIONS … without
facts, documents, or equations. Enjoy hiding behind your
“code name” now. Keep on making false rhetoric and
allegations while you can. Everyone knows you will find
some excuse not to debate. Stan, the world is “getting on
to you.” Your tactics of unsubstantiated rhetoric and
allegations are getting out of hand and sound more
ridiculous by the post. You won’t identify yourself, you
won’t give us your “first usage” references, you won’t
give us your math, you only provide a lot of unfounded
opinions. The only person not “playing fair” seems to be
you. There is an old saying: “Let the product do the
talking,” words you will likely never forget after the debate.
By the way, have you ever been responsible for
deploying Six Sigma across a multinational company …
of course you have not, but you are always on this site
putting out false advice (as if you had first hand corporate
leadership experience). Reigle Stewart

May 13, 2004 at 5:11 pm #100201
Stan: I have Juran’s book you refer to. Nowhere, and I
mean no where in the book, does it refer to the “DMAIC”
cycle of breakthrough … Juran just uses the word
“breakthrough” in an improvement context … PlanDoAct
stuff, not “DMAIC,” or even “MAIC.” Given me the source
and page number of first useage of “DMAIC” You will find
it was Dr. Harry that first used DMAIC. You say PTAR is
from “Adult learning model” … Given me the source and
page number of first useage of “PTAR.” Of course, you
won’t because you have no such sources. Seems I am
willing to provide sources, but you are not. No, Dr. Harry
was NOT an executive at Allied Signal, Rich Schroder
was. Dr. Harry was Corporate Vice President, Quality
Systems, Asea Brown Boveri … a 500,000 employee
company in Europe. He reported to Sune Karlson
(Executive Vice President and Member of the Board).
Again, your “facts” are in error. Reigle Stewart.

May 13, 2004 at 4:55 pm #100196
Stan: Sounds like you are having second thoughts. All
you have to do is prove that Dr. Harry’s equations are
without merit and are wrong. All you have to do is
produce your references to support your allegations. Of
course, to do this, you must demonstrate what is “right.”
This should be a piece of cake for your genius mind and
all-knowing experience. Sure looks like you are starting
to backpedal now … faster and faster. If you read Dr.
Montgomery’s and Dr. Keats’ credentials, I do believe you
will find they have little bias one way or the other. Sounds
like paranoia to me (or someone starting to run a little
scared). Besides, the papers will be published along with
the transcripts, so any bias will be in plain view … so I
doubt they will allow such bias. Reigle Stewart

May 13, 2004 at 4:48 pm #100194
Matt: I am only quoting from several source documents
that define “first usage.” These documents are currently
being readied for posting on the internet for all to see
(and for some to read and weep). Several people are
going to feel a little awkward when they see these
documents, given they have strongly asserted (without
documentation) a position to the contrary. Sometimes you
have to give a person enough rope to hang themselves.
Reigle Stewart.

May 13, 2004 at 4:43 pm #100193
Stan: You say “Ford,” but what document at Ford are you
referencing, and what is the date of that document? I keep
asking for documentation, but you don’t produce any. I
am sure you believe what you believe; now show the rest
of us that your beliefs are founded in verifiable
documents. Let’s practice some Six Sigma here, OK? By
the way, Dr. Harry’s business partner is Mr. Phong Vu
(Phong was the Senior Deployment Champion for Ford
Motor Company when they rolled out Six Sigma). Reigle
Stewart

May 13, 2004 at 4:11 pm #100184
Stan: Where are your references? You make public
statements that Dr. Harry did not do or create many of
these things, so provide us with a source document of
“earlier usage.” If what you say is true, that should be
very easy for you to do. After all, management by fact is the
Six Sigma way. Besides, you will have to produce such
documents during the debate. Reigle

May 13, 2004 at 4:03 pm #100180
DaveG: Yes, the position papers and the referees’ decision
will be published. It is likely that a court recorder will be
retained to document the event. If so, the transcripts will
also be published. Reigle Stewart.

May 13, 2004 at 4:00 pm #100178
Stan: Wow, you are really backpedaling on your position
now. Suddenly, you are saying that an expansion in the
standard deviation can be equated to a linear shift in the
mean (reference your reply to Fernando … it is the
“same”). Several months ago, you argued the opposite.
Seems the closer you move to this debate, the more your
position is “shifting and drifting.” Can’t wait to read your
white paper. Regards, Reigle.

May 13, 2004 at 3:51 pm #100177
Stan: You have made some statements that you will likely
not be able to back up. For example, if you reference
pages 6-7 through 6-16 of Dr. Harry’s book “Six Sigma
Producibility Analysis and Process Characterization” (first
published in 1988 and then again in 1992) you will find
the mathematics that prescribes Cp*, now known as Pp,
as well as Cpk*, now known as Ppk. Dr. Harry
demonstrates that the index Cp assumes a static standard
deviation (S.st), but over time, the standard deviation
inflates (S.lt), thereby compromising the Cp computation. In
other words, Dr. Harry demonstrated that Cp* = |T – SL| /
(S.st*c), where c is the rate of dynamic inflation. He also
demonstrates that Cpk* = (|T – SL|*(1 – k)) / (S.st*c). Stan, if
you dispute this “first usage,” then provide your
references prior to 1988. Please provide for all of us the
source name and page numbers. In terms of the Plan-
Train-Apply-Review (PTAR) cycle of learning, I am sitting
here holding a book entitled “The Vision of Six Sigma: A
Roadmap for Breakthrough” published by Dr. Harry in
1993. On page 25.3 it shows a graphic entitled “The
PTAR Training Model”. The page provides a picture of
the PTAR cycle and says “The PTAR model is executed
for each phase of the Breakthrough Strategy. In this
manner, training is introduced as it is needed and at a
rate it can be institutionalized.” Again Stan, provide a
source and page number for the same usage prior to
this date. In terms of the words “Black Belt” and “Green
Belt,” I am holding a copy of the contract Dr. Harry had
with the Unisys corporation in 1987. It has several pages
dedicated to the training of “Black Belts, Brown Belts, and
Green Belts.” Also, there is a letter from Cliff Ames (Unisys
executive) that says “I am writing this letter to confirm the
fact that we hired Dr. Mikel Harry in the fourth quarter of
1987 to help with the implementation of a high
performance management system” in the Unisys Salt
Lake Printed Circuit Facility. This program was conceived
and implemented during the time frame of Q4 ’87 through
Q2 ’89. During this period of time, the terms “Black Belt,”
“Brown Belt,” and “Green Belt” were introduced to the
facility by Dr. Mikel Harry. As the responsible Plant
Manager, I agreed with these terms and implemented
them to put a label on our statistical superstars.” Stan, I
again ask you to cite a reference and page number for
usage of these terms prior to Q4 1987. You keep saying
these things are not true, I keep citing references, and you
never cite anything verifiable (other than your opinion).
Show us the data, buddy. Regards, Reigle Stewart
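Incidentally, the two equations quoted from the book reduce to a few lines of code. A sketch, assuming |T – SL| is the half-tolerance and c = S.lt / S.st; the numbers below are invented for illustration.

def cp_star(t_to_sl, s_st, c):
    # Cp* = |T - SL| / (S.st * c): the index deflated by the inflation rate c.
    return abs(t_to_sl) / (s_st * c)

def cpk_star(t_to_sl, s_st, c, k):
    # Cpk* = (|T - SL| * (1 - k)) / (S.st * c).
    return abs(t_to_sl) * (1.0 - k) / (s_st * c)

print(cp_star(6.0, 2.0, 4.0 / 3.0))         # 2.25: sigma inflation lowers the index
print(cpk_star(6.0, 2.0, 4.0 / 3.0, 0.25))  # 1.6875: the mean offset k lowers it further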
May 12, 2004 at 1:27 am #100075
mjones: Your source of information and data is very
credible and is most consistent with what Dr. Harry has
asserted and demonstrated over the last 20 years … the
“typical” CTQ from a “typical” process in a state of “typical
control” within a “typical” factory will “typically” shift and
drift between 1.4 sigma and 1.6 sigma, with the average
shift being about 1.5 sigma (over many CTQs). His data
was based on the simple calculation Z.shift = Z.st – Z.lt. In
fact, Dr. Harry has published such “empirical findings”
resulting from “typical” process studies undertaken at
Motorola and ABB. Recently, on this website, there was a
headline paper on the shift factor that cited Dr. Harry’s
research data as being the only “published” data currently
available on the subject. You should consider publishing
your data. Do not be deterred by the band of renegades
on this website. They talk a lot, but never produce any
empirical data to support their claims, nor do they publish
any type or form of technical papers on the subject … they
don’t even use their real names (wonder why?).
Understand that they will “pooh-pooh” what you do simply
because they don’t want the shift factor to be true
(probably because they have publicly committed to this
position without any data or math). As one automotive
executive recently observed: “These handful of [deleted]
don’t live by their own gospel … that being the power
of data.” Keep up the good work and keep reporting your
findings. You really put them on the run with your recent
post. Reigle Stewart

May 7, 2004 at 9:24 pm #99948
Jonathon: Oh yeah, forgot that one. The bite has almost
healed. Bow-wow, baby. Reigle Stewart

May 7, 2004 at 8:17 pm #99940
Stan: Thank you for your opinion. As usual, it is opposite
to anything I might say. Let’s just save time … I am wrong
about everything … in the past, now, or in the future. There,
now you can go back to work a happy man, always
knowing you are right and I am wrong. I don’t have a life,
my kid hates me, I can’t get a job, I’m an alcoholic, broke,
and Dr. Harry is full of crap. Stan is good, loved, admired,
respected, and is the center of the world. Love ya. Reigle
Stewart.

May 7, 2004 at 7:05 pm #99933
Praveen. Very nice article … well written. Reigle Stewart
May 7, 2004 at 6:17 pm #99928
Guy. There is a lot of “speculation” on what was in Bill’s
mind when he coined the term “Six Sigma.” I doubt if
there is anyone that worked more with Bill Smith on Six
Sigma than Dr. Harry. Maybe you should ask Dr. Harry
what Bill had in his mind. I do believe Dr. Harry laid out
much of Bill’s thinking in his “Resolving the Mysteries of
Six Sigma” book. Reigle Stewart.

May 7, 2004 at 4:51 pm #99921
Stan: Focus on your paper … you will have your hands
quite full in a few weeks from now. Reigle Stewart.

May 7, 2004 at 4:47 pm #99919
Orlando: It is most unfortunate that your dates are wrong.
The producibility book was first printed and distributed in
Motorola in 1988 (according to the technical publications
information on the inside cover). The second printing was
then undertaken by Addison Wesley publishers (owing to
its popularity). It was again printed in 1992. The Vision of
Six Sigma (also authored by Dr. Harry) was first released
in 1986 as an extensive white paper and then in 1987 as
a book. Do you have a book or white paper from Mario
prior to this date … not likely … I wonder why? Motorola
University Press ultimately sold 500,000 copies of this
book (Nature of Six Sigma) … at least that’s what the
publications report from Motorola University Press shows.
Why didn’t Mario’s book get published by Motorola for
public distribution? These are the “documented” facts
buddy, unlike your memory. You may also be unaware of
the corporate documents in 1988 which call out Dr.
Harry to start working with corporate headquarters on a
global Six Sigma approach … before being transferred to
Corporate Headquarters in 1990. Interestingly, one of
these documents is from the Director of Quality at
Semiconductor Product Sector asking for Dr. Harry to
represent SPS. Wow, this is really odd given that Mario
was at SPS. Wait till you read the correspondence from
Scott Schumway. Reigle Stewart.

May 7, 2004 at 4:03 pm #99910
Stevo: Of course not. How absurd. Neither will I be
“pulled in” to an even more ridiculous discussion. I am so
sick and tired of reading wildly different versions of
history, I finally decided to do something about it. As we
speak, I am filtering through all of Dr. Harry’s old
documents (several file cabinets full) and then will
assemble them into a documented time line of events.
Then, beyond any doubt, people can see firsthand “who
said what and when.” For example, I have the Unisys
contract where Dr. Harry deployed (for the first time ever)
the terms “Black Belt” and “Green Belt.” This was in 1987.
Then, he took the concept to Motorola at SSRI (also
thoroughly documented). As another example, I will post
the General Electric contract that demonstrates the Six
Sigma Academy was in fact the “prime” contractor for the
deployment of Six Sigma (as signed by the corporate
officers). As you know, many consultancies try to claim
“they implemented Six sigma at GE.” These documents
will finally put all of this squabbling to rest. Reigle Stewart

May 7, 2004 at 3:06 am #99894
Since I am not a part of the debate and given the debate
has been structured, set in place, and the terms accepted,
there is little need for further discussion. Reigle Stewart