### d2 values for the distribution of the average range appear in the following table.

The columns and rows represent the subgroup size (n) and the number of subgroups (k). For a given subgroup size, say n=2, the value of d2 changes as the number of subgroups, k, increases. For example, d2=1.150 for n=2 and k=15, while for an infinite number of subgroups d2=1.128. The difference between these two values is small.

When we use control charts, such as the average and range chart, we often wait to collect k=20 to 30 subgroups. The reason is simple: by the time k reaches this range, the value of d2 is very close to the value derived for an infinite number of subgroups of the same subgroup size. This is why the table of d2 values is only published up to k=15.

Control charts use range statistics and d2 values to estimate the standard deviation needed to compute control limits. The average and range chart is a perfect example. Such a chart is often used to track the behavior of a product feature during production.

### Sample Parts made Under Like Conditions

Often production personnel sample n=5 consecutive parts from a process and plot the average and range. Sampling in this manner helps ensure the parts were made under like conditions. Once we have collected many subgroups, we can compute control limits about these averages. When a subgroup average or range falls outside a control limit, we should question whether the conditions have changed. So the calculation of control limits based on subgroups of parts made under like conditions is an important concept.

### How many subgroups do we need to compute control limits?

To compute the control limits, we need to collect a sufficient number of subgroups; often we use 30 or more. Once we compute the range for each subgroup, we calculate the average range: we add the ranges and divide by the number of range values. We use the following expression to compute the average range: R̄ = (R1 + R2 + … + Rk) / k.
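As a minimal sketch, the average range is just the arithmetic mean of the subgroup ranges. The range values below are made up for illustration (chosen so the average works out to the 10.82 used in the example later in this post):

```python
# Average range (R-bar): add the subgroup ranges and divide by the
# number of subgroups, k. These range values are made up for illustration.
ranges = [9.5, 11.2, 10.1, 12.0, 11.3]  # one range per subgroup, so k=5

r_bar = sum(ranges) / len(ranges)
print(round(r_bar, 2))  # → 10.82
```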

Once we know the average range, we need to look up the correct d2 constant. As an example, suppose we collected k=30 subgroups where each subgroup contains n=5 parts, and assume the average range is 10.82. The appropriate d2 value for n=5 is 2.326. Using the expression σ̂ = R̄ / d2 = 10.82 / 2.326 ≈ 4.65, we can estimate the standard deviation for a subgroup of parts. This standard deviation is often referred to as a measure of within-subgroup variation.
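That calculation can be sketched in a few lines, using the numbers from the example above:

```python
# Within-subgroup standard deviation estimate: sigma-hat = R-bar / d2
r_bar = 10.82  # average range from k=30 subgroups of size n=5
d2 = 2.326     # published d2 constant for subgroup size n=5

sigma_within = r_bar / d2
print(round(sigma_within, 2))  # → 4.65
```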

So there you have it. This is how we estimate the within subgroup standard deviation for a collection of parts made under like conditions. In another post, I’ll discuss how we can compute the control limits for the average and range charts using within subgroup variation. If you want to learn more about range statistics then click on the following links.

**Range Statistics and d2 Constant | How to Calculate Gage Repeatability Using the Average Range**

If you liked this post or have any questions then please let me know by adding your comments below. I look forward to hearing from you.

Sajid says

Dear Mr. Andrew,

Generally, while calculating GR&R, three different operators and 10 samples are used to get EV and AV. But the table for d2 only lists sample sizes up to 15.

If I’m using 3 operators and 10 samples for gage R&R, then m=10 and g=3*3=9,

so the d2 value is 2.981. Please confirm whether I’m calculating the d2 value correctly.

Support User says

Hello Sajid

Please refer to the MSA 4th Edition manual, page 203. This is a table of d2 values for the distribution of the average range. The columns and rows represent the number of subgroups (g) and the subgroup sample size (m). In your case, let's first compute the number of subgroups. If you have 10 samples and 3 operators, then we have g=30 subgroups. Now we need to know how many observations form a subgroup (m). The number of observations within a subgroup is the number of replicate measurements each operator will make for a given part feature. In your description, you did not mention how many repeat measurements will be made, so let's assume the number of replicate measurements is m=2.

Given m=2 and g=30, we can determine the appropriate d2 value. Referring to page 203 of the 4th edition MSA manual, the last row is g=20. Looking down the first column (which is m=2), the d2 value for g=20 is 1.144. Assuming an infinite number of subgroups, the d2 value for m=2 is 1.128 (refer to the bottom of page 203 for this value).

Since this table does not provide a d2 value for g=30, we know that value falls between 1.144 and 1.128. The difference is (1.144 − 1.128) = 0.016, so the worst-case error is about 1.4%, and the actual error for g=30 would be smaller. As such, the MSA manual is essentially saying that for g > 20 you can use the d2 values for an infinite number of subgroups.
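The lookup rule in this reply can be sketched as follows; the dictionary holds just a two-entry excerpt of the m=2 column (g=15 and g=20) from page 203, and the function name is my own:

```python
# d2 lookup for subgroup size m=2, following the rule described above:
# for g > 20 subgroups, fall back to the infinite-subgroup value.
# Only two table entries are reproduced here (from MSA 4th Ed., p. 203).
D2_TABLE_M2 = {15: 1.150, 20: 1.144}  # g -> d2 for m=2
D2_INF_M2 = 1.128                     # d2 for m=2, infinite g

def d2_for_g(g):
    """Return d2 for subgroup size m=2 and g subgroups."""
    if g > 20:
        return D2_INF_M2
    return D2_TABLE_M2[g]  # only the excerpted g values are available here

print(d2_for_g(30))  # → 1.128
```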

I trust this answers your question.

Amuthan says

Hello Andrew, hope you are safe and well.

Thanks for your blog and for sharing this knowledge.

My question is:

What would be the right formula to find the standard deviation for a moving range? In layman's terms, below is the case I want to solve.

A person's arrival times at the office over 5 days are given below, with reference to a start time of 8.00:

1. 7.45
2. 7.55
3. 7.50
4. 7.59
5. 8.00

For example, can I use SD = SQRT{ Σ(Xi − X̄)² / (n−1) }

(or)

SD = R̄/d2 (in this case, a minimum subgroup size of 2 is required to look up a d2 value. Since I have no subgroups here, how should I choose d2? Can I take the value for the minimum subgroup size of 2, that is, 1.128?)

Kindly help clarify, and thanks once again for this knowledge-sharing forum.

Andrew Milivojevich says

For these n=5 values I would use d2 = 2.326. The range of these 5 values (max minus min) is 0.55. Dividing the range by d2 we have: 0.55 / 2.326 = 0.24. If we compute the classical sample standard deviation we get 0.22. Both estimates of the standard deviation are similar. I trust this answers your question.
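Both estimates are easy to reproduce; this sketch treats the arrival times as plain decimal numbers, as the reply above does:

```python
import statistics

# The five arrival times, treated as plain decimal numbers.
times = [7.45, 7.55, 7.50, 7.59, 8.00]

# Range method: sigma-hat = (max - min) / d2, with d2 = 2.326 for n=5
sd_range = (max(times) - min(times)) / 2.326
print(round(sd_range, 2))  # → 0.24

# Classical sample standard deviation (n - 1 in the denominator)
sd_sample = statistics.stdev(times)
print(round(sd_sample, 2))  # → 0.22
```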

Craig Woodard says

Hi Andrew,

Was wondering if Chebyshev’s inequality would be useful here:

So when the shape differs we cannot estimate the proportion of non-conforming data. So here is the problem, to compute the proportion of non-conforming data you need to know the shape of the distribution.

Andrew Milivojevich says

Hello Craig. That is an awesome point! Yes, Chebyshev is useful in this case. I will need to keep this in mind and prepare a blog post on this topic. Many thanks for pointing this out to me! Hopefully I can prepare something soon and I trust you will come back and let me know what you think.

vinoth s says

Hi sir,

Let's say I need to calculate the process capability for a surface roughness value of 1.2 Ra.

Why is the Cp value shown as * when calculated in Minitab?

Please explain why the Cp value is not computed.

Andrew Milivojevich says

Because the Cp value requires both an Upper and a Lower tolerance limit (a bilateral tolerance). I believe surface roughness has a unilateral tolerance; as such, only Cpk will be reported.

Leonel Rodríguez says

Hello Andrew, good afternoon. Hope you are doing well.

Please, what is the correct way to calculate Pp/Ppk and Cp/Cpk when the process output does not follow a normal distribution (non-normal data)?

Andrew Milivojevich says

Hello Leonel.

This is an interesting question!

I trust the following explanation is helpful.

Let’s first discuss Pp and Cp. These two indices use a standard deviation estimate in their denominators. However, the standard deviation used to compute Cp should be based on a rational sampling program. This means that we should form subgroups of samples that were made under like conditions. Doing so assures that the standard deviation estimated from R-bar/d2 (d2 is a constant based on the subgroup size) reflects only common-cause variation. Thus, this estimate will resemble values derived under normal conditions (normally distributed).

However, the standard deviation estimate used to compute Pp is the total standard deviation. This estimate includes the variation both within and between the subgroups. Therefore, if there is any unusual variation, it will appear as a CHANGE in the subgroup averages. If this change exceeds the variation within the subgroups, then the total standard deviation becomes inflated beyond that computed from R-bar/d2, and the chance of non-normal data increases. So I will often examine Pp and Cp together. If these two indices are alike, then the variation between the subgroups is minimal and the data is most likely normally distributed.

Now, my assumption of normality also assumes that we have a sufficient number of distinct data categories. This I can verify by examining the RANGE chart. If I see more than 5 distinct levels for the range values, then I am happy. If there are fewer than 5, then I need to question the device I am using to collect my data and should consider adding a decimal place.

Now we need to talk about Cpk and Ppk. These indices measure a standardized distance to the nearest specification limit. This distance is based on the average used in the numerator of each index. If the data is not normally distributed (say, the data is skewed), then the average will not estimate the peak of the distribution; in this case the median may be more appropriate. So if the median and the average differ, there is cause to believe the data is skewed. What does this mean? It means that the SHAPE of the distribution is not symmetrical (not normal). When the shape differs from normal, we cannot estimate the proportion of non-conforming data, and here is the problem: to compute the proportion of non-conforming data you need to know the shape of the distribution.

Now this explanation comes down to one simple fact about statistical process control. The control chart is designed to find unusual events (special causes). When we detect these unusual events we should then attempt to investigate the root cause. Once we find the root cause we can then implement a solution that attempts to prevent it from occurring again. In doing so we improve Quality.

So my personal opinion is this. I don’t spend time trying to compute Cp, Cpk, Pp, and Ppk for non-normal data. Simply because I would rather spend that time searching for a root cause.

Now there is one condition under which I would consider computing these indices for non-normal data: if the data describes a phenomenon that is non-normal, and the shape of that distribution is known and does not change with time.

I hope this answers your question.

Best Regards,

Andrew Milivojevich

Bruce Siskowski says

Hello Andrew,

Your explanation of the GRR and SPC chart constant d2 was very helpful. However, how can one calculate d2* (d2 star)? For instance, d2*(k=1, n=2) = 1.4142. Thanks.

Andrew Milivojevich says

Hello Bruce.

I have never looked into d2*. I’ve always looked up the value in the appropriate table.

But, thanks for giving me something to think about.

Andrew

Anju says

Okay Andrew.

Thank you a lot for the clear explanation. It was really helpful for me to understand the concept in a better way.

Also, looking forward to your posts on this topic.

Thanks once again.

Anju


Anju says

Thank you so much Andrew.

But my actual requirement is to get d2 values for k=1 and n>20.

Currently, a case arose where I am in need of the d2 value for k=1 and n=40.

So I wanted to know if there is any formula that gives d2 values for k=1 and n>20, i.e., n=21, n=22, … up to n=100.

Once again, Thanks in advance.

Andrew Milivojevich says

Hello Anju. I am not aware of an empirical expression that can answer your question. In the absence thereof, I would suggest that you try to derive the values via simulation.
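One way to run such a simulation: d2(n) for an infinite number of subgroups is the expected range of n standard normal values, so a Monte Carlo average of simulated ranges approximates it. (Note this gives the uncorrected d2; the k=1 corrected value, d2*, is slightly larger.) The function name and trial count below are my own choices:

```python
import random

def estimate_d2(n, trials=100_000, seed=1):
    """Monte Carlo estimate of d2(n): the expected range of n standard normals."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += max(sample) - min(sample)
    return total / trials

# Sanity check against the published value d2 = 2.326 for n=5,
# then estimate d2 for the requested n=40.
print(round(estimate_d2(5), 3))   # should be close to 2.326
print(round(estimate_d2(40), 3))  # no small-k table entry exists for this n
```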

Best Regards,

Andrew

Anju says

Hello Andrew,

Thank you so much. Let me try that then.

Best Regards,

Anju

Andrew Milivojevich says

Hello Anju.

I found a reference and it might have an expression that estimates the d2 value. I will get back to you shortly.

Andrew

Anju says

Hello Andrew,

Wow! That’s great. Thank you. Looking forward to your reply.

Anju

Andrew Milivojevich says

Hello Anju.

I was only able to derive a regression model that estimates d2 to two decimal places. However, this regression model provides a d2 estimate based on an infinite number of k subgroups. In your case, you need corrected d2 values based on k=1 subgroup. Such d2 values are slightly larger. I did find an expression that estimates these corrected values; they depend on the degrees of freedom for k=1 subgroup and n samples within that subgroup. So I need to do some more digging to see how these degrees of freedom are estimated for the range of n you requested. If I find something, I will post it here. Sorry I could not have better news for you.

Best Regards,

Andrew Milivojevich

Anju says

Hello,

Thanks a lot for the page.

For k=1 subgroup and a subgroup size of n>20, can we get the d2 value? Please help.

We only have d2 values for n>20 assuming an infinite number of subgroups.

Thanks in advance.

Andrew Milivojevich says

Hello Anju.

For k=1 and n=20 I believe the d2 value is 3.80537.

I hope this helps.

Andrew

Christopher says

Hello Andrew,

Many thanks for this quick reply. I have found the table in AIAG's 4th Edition MSA Manual as you said; it is exactly what I was looking for.

Christopher

Andrew Milivojevich says

Hello Christopher.

Glad I could help!

Andrew

Christopher says

Dear Mr Milivojevich,

First of all, thank you for this post. It helped me better understand the way to use the d2 index.

However, I have a case where I have 10 subgroups (k), each with a size of n=20. How shall I proceed to determine the d2 index?

Thank you in advance for your reply.

Yours sincerely,

Christopher Thorne

Andrew Milivojevich says

Hello Christopher.

For k = 10 subgroups and n = 20 samples per subgroup, d2 = 3.74205. If you have an infinite number of subgroups, then d2 = 3.735.

You can get these values from AIAG’s 4th Edition MSA Manual, page 203.

Andrew