### Statistical Engineering is becoming popular and Statistically Designed Experiments play a big role in the Discovery Process.

From time to time, a Six Sigma Black Belt or Quality Engineer will need to study a number of factors at once. In such cases, the practitioner may choose to conduct a statistically designed experiment. So what makes a statistically designed experiment useful in the first place? It has much to do with its inherent properties. Let's take a closer look at these properties using an example.

Suppose we have a factorial experiment with two factors: **X_{1}** and **X_{2}**. From such an experiment, we can estimate two single effects (**X_{1}**, **X_{2}**) and one two-way interaction (**X_{1}X_{2}**). The data from such an experiment can fit the following model.

**y = β_{0} + β_{1}X_{1} + β_{2}X_{2} + β_{12}X_{1}X_{2} + ε**

In this expression, **β_{0}**, **β_{1}**, **β_{2}**, and **β_{12}** are regression parameters. **β_{0}** measures the population mean, while **β_{1}** and **β_{2}** measure the single effects of factors **X_{1}** and **X_{2}**. Finally, the effect of the two-way interaction, **X_{1}X_{2}**, is measured by **β_{12}**. The last term, **ε**, is an error term that represents the variation in **y** not explained by the regression or prediction model.

The error, **ε**, is computed by subtracting the predicted value, **ŷ**, from the actual value, **y**.
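As a quick numeric sketch (the values below are made up for illustration, not taken from Table 1), the residuals are just the element-wise differences between the observed and predicted responses:

```python
import numpy as np

y = np.array([12.0, 18.0, 14.0, 24.0])       # observed values (hypothetical)
y_hat = np.array([12.5, 17.5, 14.5, 23.5])   # model predictions (hypothetical)

# Each residual estimates the error term for that run.
e = y - y_hat
print(e)  # [-0.5  0.5 -0.5  0.5]
```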

Using matrix algebra, we can compute the regression coefficients (**b**'s) that estimate the regression parameters (**β**'s). In matrix notation, we can solve for these least squares estimates using the following expression.

**b = (X^{T}X)^{-1}X^{T}y**

In this expression, **b** is a vector of regression coefficients that estimate the regression parameters, **β**; **X** is our design matrix; **X^{T}** is the transpose of the design matrix **X**; and finally, **y** is a response vector. Using the following data set, we'll compute the regression coefficients by solving for **b**.
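This matrix solution can be sketched directly in Python with NumPy. The design matrix below follows the standard coded ±1 layout of a replicated 2^2 factorial, but the response values are hypothetical, not the article's Table 1 data:

```python
import numpy as np

# Design matrix for two replicates of a 2^2 factorial:
# columns are the intercept (1's), X1, X2, and the X1*X2 interaction.
X = np.array([
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
], dtype=float)

# Hypothetical responses (one physical property measurement per run).
y = np.array([12.0, 18.0, 14.0, 24.0, 13.0, 17.0, 15.0, 23.0])

# Least squares estimates: b = (X'X)^-1 X'y
b = np.linalg.inv(X.T @ X) @ X.T @ y
print(b)  # in order: b0, b1, b2, b12
```

With these made-up responses, the solve returns b0 = 17.0 (the overall average), b1 = 3.5, b2 = 2.0, and b12 = 1.0.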

In Table 1 there are two complete replicates measuring a physical property of interest, **y**. Here we examined the effects of two factors, **X_{1}** and **X_{2}**, each at two levels, on **y**. In total, there are eight observations.

We can express the information, shown in Table 1, in matrix notation as follows.

The first column of 1's is used to compute the overall average of **y**. The remaining columns represent the levels for factors **X_{1}**, **X_{2}**, and the two-way interaction, **X_{1}X_{2}**.

Using this data, we will fit the following regression model.

**ŷ = b_{0} + b_{1}X_{1} + b_{2}X_{2} + b_{12}X_{1}X_{2}**

With eight observations and coded ±1 factor levels, the **X^{T}X** matrix is **8I**: a 4 × 4 diagonal matrix with each diagonal element equal to 8, the number of observations.

The (**X^{T}X**)^{-1} matrix is the inverse of the **X^{T}X** matrix; here it is **(1/8)I**, a diagonal matrix with each diagonal element equal to 1/8.

Solving for **X^{T}y**, we obtain a vector whose elements are the sums of the products of each column of the design matrix with the responses **y**.

Solving for the least squares estimates, we have the regression coefficients **b**.
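We can verify the diagonal structure of **X^{T}X** numerically. The coded design matrix below is the standard replicated 2^2 layout, independent of any particular response data:

```python
import numpy as np

# Two replicates of a 2^2 factorial in coded (+/-1) units:
# intercept, X1, X2, X1*X2.
X = np.array([
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
] * 2, dtype=float)  # list repetition stacks the second replicate

XtX = X.T @ X
print(XtX)                  # 8 * identity: all off-diagonal elements are zero
print(np.linalg.inv(XtX))   # (1/8) * identity: trivially inverted
```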

At this point, there are some interesting things we should discuss. First of all, the (**X^{T}X**) matrix is a diagonal matrix with all its off-diagonal elements equal to zero. When this is the case, the levels for factors **X_{1}** and **X_{2}** are orthogonal. This just means the levels chosen for the factors in our experiment are not correlated or confounded with each other. As such, the sum of the products of the levels for the factors will equal zero. This is shown in the table below. Notice that the sum of the products of the levels for factors **X_{1}** and **X_{2}** is zero. This means the factor effects, **X_{1}** and **X_{2}**, are not correlated. When we analyze the effect of **y** versus **X_{1}**, it will not be tainted by changes in **X_{2}**. This is also true for **y** versus **X_{2}**.
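A quick numerical check of this orthogonality property, using the coded ±1 levels of the two factors over the eight runs:

```python
import numpy as np

# Coded levels for X1 and X2 across the eight runs (two replicates).
x1 = np.array([-1, 1, -1, 1, -1, 1, -1, 1], dtype=float)
x2 = np.array([-1, -1, 1, 1, -1, -1, 1, 1], dtype=float)

# Orthogonality: the sum of the products of the levels is zero,
# so the two factor columns carry no correlation.
print(np.sum(x1 * x2))            # 0.0
print(np.corrcoef(x1, x2)[0, 1])  # 0.0
```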

Secondly, notice that the diagonal values of the inverse matrix, (**X^{T}X**)^{-1}, are the inverses of the diagonal values found in the (**X^{T}X**) matrix. Thus, the (**X^{T}X**) matrix from an orthogonal design is easily inverted, and the computation of the regression coefficients is simplified because of this.

Orthogonality is a powerful property inherent to statistical experiments for the following reasons.

1. When the levels of a factor are orthogonal to all other factors, we can compute the regression estimates with ease.

2. Since the effect of a factor, estimated by **b**, is not influenced by any of the off-diagonal elements in the (**X^{T}X**) matrix, the conclusions we draw about a specific factor effect are independent of all other factor effects.

3. We can estimate the effect of interactions between factors.
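Because the (**X^{T}X**) matrix is diagonal, the first point above has a concrete payoff: each least squares estimate reduces to a scaled dot product, with no matrix inversion required. A minimal sketch, again using hypothetical response values:

```python
import numpy as np

# Replicated 2^2 design: intercept, X1, X2, X1*X2 in coded units.
X = np.array([
    [1, -1, -1,  1],
    [1,  1, -1, -1],
    [1, -1,  1, -1],
    [1,  1,  1,  1],
] * 2, dtype=float)
y = np.array([12.0, 18.0, 14.0, 24.0, 13.0, 17.0, 15.0, 23.0])  # made-up data

# With an orthogonal design, each coefficient is simply
# (column . y) / n -- no matrix inversion needed.
b_simple = (X.T @ y) / len(y)

# The full normal-equations solution gives the same answer.
b_full = np.linalg.inv(X.T @ X) @ X.T @ y
print(b_simple)
```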

All too often, experiments are designed without any consideration of orthogonality. When this is the case, the factors may be correlated, and we cannot cleanly draw conclusions about a single factor when it is confounded with another. We also lose the opportunity to estimate the interaction effect between factors. We therefore lose the opportunity to understand the system under investigation, and this impedes our ability to improve the system.

In summary, statistically designed experiments are the most efficient way to conduct experiments: they collect the required information for the least cost and use of resources.

To learn more about this topic I recommend the text: **Quality by Experimental Design**. It is an applied text in Statistical Experimental Design and Analysis.
