
Control charts have been widely used
tools of statistical quality control in industrial environments since their
inception by Shewhart in the 1920s. The major function of control charting is to
detect the occurrence of assignable causes so that the necessary corrective
action may be taken before a large quantity of nonconforming product is
manufactured. A survey conducted by Saniga and Shirland (1977) showed that, on
the continuous measurement scale, the control chart for averages dominates the use
of any other control chart technique. All control charts have a common
structure. A plot of the result of repeated sampling is made on a vertical
scale against the number of samples plotted horizontally. The center line of
the chart represents a long-term average of the process statistic or its
standard value. The upper control limit (UCL) and lower control limit (LCL)
represent the boundaries of typical statistical variation. The process calls for
adjustment if points fall outside the control limits. Departures from
expected process behavior within the limits (non-random patterns on the chart)
can be detected by using different run tests for pattern recognition (Nelson
(1985)). When control charts are used, two kinds of errors may occur: over-adjustment
and under-adjustment. The uncertainty of inferences based on sample statistics is
the major cause of these errors. The magnitude of the errors depends on the
decision-making method. It is beneficial that a control chart detect process
change quickly so that the causes of any undesirable changes can be identified
and removed. It is also beneficial that the rate of false alarms generated by
the control chart be low, in order to maintain the confidence of process
operators in the chart. Sampling cost is an issue in most
applications, so it is important that a control chart provide fast
detection of process changes and a low false alarm rate at a reasonable rate
of sampling. The statistical performance of a control chart is therefore often
evaluated by considering, for a given false alarm rate and sampling rate, the
expected time required by the chart to detect various process changes. It has
been found in recent years that the statistical performance of control charts
can be improved considerably by changing the rate of sampling as a function of
the data coming from the process. The basic idea is that sampling should be
more intensive whenever there is an indication of a problem with the process,
and less intensive when there is no such indication. There are many
ways in which the sampling rate can be varied as a function of process data.
One way is to vary the sampling interval: a short sampling interval is
used when there is an indication of a problem and a long sampling interval is
used when there is no indication of a problem. The resulting variable sampling
interval (VSI) control charts have been studied extensively (see, e.g., Reynolds et
al. (1988); Zee (1990); Runger and Pignatiello (1991); Baxley (1996); and
Reynolds (1996a, 1996b)). Another way is to vary the sample size: a large
sample size is used when there is an indication of a problem and a small sample
size is used when there is no indication of a problem. Variable sample size (VSS)
charts have been examined in various papers
(see, e.g., Prabhu et al. (1993); Costa (1994); Park and Reynolds (1994);
Reynolds (1996b); Rendtel (1990); Arnold et al. (1993); Prabhu et al. (1994,
1997); and Arnold and Reynolds (1994)). A variable sampling rate (VSR) control
chart varies both the sampling interval and the sample size. Tagaras (1998)
surveyed VSR charts.
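To make the VSI idea concrete, the following minimal Python sketch selects the interval to the next sample from the position of the current point. The warning limit w and the interval pair d_short and d_long are illustrative values of our own, not a published design; in practice they would be chosen so that the in-control average sampling rate matches that of the corresponding FSR chart.

```python
import numpy as np

def next_sampling_interval(xbar, mu0, sigma, n,
                           k=3.0, w=1.0, d_short=0.25, d_long=1.75):
    """Pick the waiting time before the next sample under a VSI rule.

    A point in the central region (|z| <= w) gives no indication of a
    problem, so the long interval is used; a point between the warning
    limit w and the control limit k suggests a possible problem, so the
    short interval is used; beyond k the chart signals.
    """
    z = (xbar - mu0) / (sigma / np.sqrt(n))  # standardized sample mean
    if abs(z) > k:
        return 0.0                # out-of-control signal: act immediately
    return d_short if abs(z) > w else d_long

# Example: a sample mean 1.4 sigma-units above target falls in the
# warning region, so the short interval is chosen.
print(next_sampling_interval(xbar=10.14, mu0=10.0, sigma=0.1, n=1))
```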
Most studies of VSR charts have compared VSR and FSR charts by fixing the
false alarm rate and the average sampling rate and
then comparing the expected time required to detect various shifts in the
process parameter of interest. These comparisons show that
the VSR chart detects small and moderate shifts much more quickly than the
FSR chart; for very large shifts, the VSR feature does little to reduce the
detection time. However, these studies do not attempt
to quantify the benefits of the VSR feature in an economic sense. Even though a VSR
chart will have more design parameters than an FSR chart, some of the issues
associated with chart design are the same for the two charts. In particular,
there is the question of how to choose the design parameters of the chart to
achieve a reasonable balance between the cost of sampling, the cost due to
false alarms, and the cost of not detecting process changes. If an FSR chart is
already in use in a particular application, then the VSR chart for this
application could be set up to have the same false alarm rate and the same average
sampling rate as the FSR chart. This approach, the one most often used to compare VSR and
FSR charts, would result in faster detection of most process shifts.
Alternatively, the VSR chart could be set up to provide approximately the same
detection ability as the FSR chart but with reduced sampling cost. Baxley
(1996) and Reynolds (1996a) examine the use of VSI charts to reduce sampling
costs. Reducing the problem to a univariate one by plotting a single
statistic depending on both the process mean and variance has been considered by
many authors. Reynolds and Ghosh (1981)
proposed plotting a statistic representing the squared standardized deviations
of the observations from the target value. Monitoring the value of such a
function is also discussed by Derman and Ross (1994). These charts are designed
to evaluate not the process mean or variance individually but process uniformity,
which can be characterized by a quality loss incurred when the process deviates from its
desired value and generates non-uniform products. Unfortunately, these charts
are incapable of distinguishing a shift in the mean from an increase in
variance. There is a close connection between control charts and hypothesis
testing. In a sense, a control chart is a test of the hypothesis that the
process is in statistical control. A point plotting within the control limits
is equivalent to failing to reject the hypothesis of statistical control, and a
point plotting outside the control limits is equivalent to rejecting the
hypothesis of statistical control (Montgomery, 1997). In other words, the aim of
quality monitoring is to test the null hypothesis

\[
H_0: \delta = 0 \quad \text{(in-control state of the process)}
\]
against the alternative hypothesis
\[
H_1: \delta \neq 0 \quad \text{(out-of-control state of the process)}
\]
(Pacella and Semeraro (2007)), where $\delta$ represents the mean shift. This hypothesis
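Under this view, each plotted point on a 3-sigma chart is a two-sided z-test of $H_0$ of fixed size; as a worked instance of the equivalence,

\[
\alpha = P(|Z| > 3) = 2\bigl(1 - \Phi(3)\bigr) \approx 0.0027,
\qquad \mathrm{ARL}_0 = \frac{1}{\alpha} \approx 370,
\]

so that, under independence, a false alarm is raised on average once every 370 samples.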
testing framework is useful in many ways, but there are some differences in
viewpoint between control charts and hypothesis testing. There are at least two
notable differences between SPC techniques and hypothesis testing.
First, the rejection of the null hypothesis must be supported by a follow-up
investigation to identify assignable causes, whereas the common procedure of
hypothesis testing permits rejection on the basis of a comparison of the test statistic
with the critical value only. Secondly, a null hypothesis in SPC can be
rejected not only for salient points on a control chart but also due to the non-random
nature of the received data, whereas hypothesis testing does not imply
any analysis of data patterns. Nevertheless, many specialists admit that "a Shewhart
control chart is equivalent to applying a sequence of hypothesis tests"
(Chengalur et al. (1989)). The centre line represents the hypothesized mean value of
the process parameter, the control limits represent the critical values of the
two-sided test (the boundaries of the null hypothesis acceptance region), and
each point represents a test value for the given sample.
The quality of the product depends upon the combined effect of various quality
characteristics. These characteristics may be correlated among themselves; for
example, the quality of linen thread depends upon the diameter and the breaking strength
of the thread, which are related to each other. Hence, the use of the
available process capability measures separately for each of the quality characteristics
may not yield an adequate idea of the overall process capability. SPC is a
well-known and widely publicized tool employed to ensure product
specification compliance through regulation of the production
process. SPC is generally effected by external intervention, unlike automatic
process control (APC), which is generally a closed-loop, online sub-system of
the process. Control charts play a prominent role in SPC applications, and
although they originated in the applied engineering sector, their use has since
spread to other areas of production and processing, such as the chemical industry.
A less conventional control chart application is that of machinery condition
monitoring, published by Raadnui (2000). The control chart is conventionally
demarcated by an upper control limit (UCL) and a lower control limit (LCL),
Kreyszig (1972), which are often symmetrically placed about a central value
line. If a system is under control, its output is statistically
uniform and the product is all from the same universe, Davis (1958). This does
not mean that the product necessarily complies with its quality specifications. The
quality will be specified in terms of a critical parameter target value (with a tolerance
range), typically a dimension, strength, weight or electrical resistance. For
example, the observed average of the output samples may not coincide with the
target value, or, even if it does, the scatter in the values may be too high. The
level of process control must match the quality specifications. If the former
type of fault is noted, it may be corrected by process adjustment under SPC or
APC, but if the latter type of fault occurs, it may entail more fundamental action
with offline intervention, perhaps up to system redesign. An alternative is to
amend the level of control to match specifications. In any application of SPC
the hypothesis being tested is that any detected variation in the data is due
only to stochastic effects. Two types of variation may be observed in sample
data: 1) chance variations, which are due to unassignable events within the
process or to small variations in environmental conditions, in inputs or in
operator actions; and 2) assignable variations, which are due to accountable causes
and usually produce identifiable patterns on a control chart; for example, they
are produced by deterioration, mechanical faults, a change in the source of raw
material or process operator fatigue. The effects of both types of variation are
additive. Within the chance cause category, part of the variation may be due to
random effects that can be termed 'noise'. It is well known that integration
reduces the effect of signal noise and, similarly, averaging the
sample values of, say, 5-10 production units will assist in mitigating the
effect of random or periodic variations. Clearly, the larger the sample size,
the greater the reduction, subject to control system and economic
efficiency. SPC cannot be applied to chance causes.
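A quick simulation of this averaging effect (independent unit-variance noise, purely illustrative) shows the standard deviation of the sample mean shrinking as 1/sqrt(n):

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(0.0, 1.0, size=(10_000, 10))  # unit-variance noise

for n in (1, 5, 10):
    means = noise[:, :n].mean(axis=1)
    print(n, round(means.std(), 3))   # empirical sd is close to 1/sqrt(n)
```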
In the case of non-random variations in the system output, the identification
and classification of causes are important. In the application of control charts,
Davies (1958) emphasized that each case needs to be considered in detail and
treated on its own merits to determine the appropriate form of control chart.
This is also true in the wider context of SPC-APC applications. For a control
system of any type, the time scale of the control response must be of a smaller
order of magnitude than that of the production process for it to be effective.
For SPC intervention, the process may need to be stopped, but APC intervention
is normally online. The
SPC concepts outlined above are mainly concerned with the observation of output
variable data, where individual measurements are made on specimen samples. The
background theory is based upon the normal probability distribution. If the data are
counts of defective items per output batch, the theory is based upon the
binomial distribution, and for the number of defects per unit (as in metal or plastic
sheet), the background theory is based upon the Poisson distribution, Davies (1985).
In the first case, it is known as analysis by variables, whereas in the latter
cases it is termed analysis by attributes, Riaz (1997).

          The
power of any test of statistical significance is defined as the probability that
it will reject a false null hypothesis. Statistical power is affected mainly by
the size of the effect and the size of the sample used to detect it. Larger
effects are easier to detect than smaller effects, while large
samples offer greater test sensitivity than small samples. Power analysis can
be used to determine the minimum sample size required so that one can be
reasonably likely to detect an effect of a given size. The concept of power can
also be used to compare different statistical testing procedures: for
example, a parametric and a nonparametric test of the same hypothesis.
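As a small illustration of such a power analysis, the following Python sketch uses the textbook sample-size formula for a two-sided one-sample z-test with known sigma; the function name and its default values are ours, not from any particular source.

```python
import numpy as np
from scipy.stats import norm

def n_required(delta, alpha=0.05, power=0.80):
    """Minimum n for a two-sided one-sample z-test (sigma known) to
    detect a standardized mean shift delta with the given power."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the test size
    z_b = norm.ppf(power)           # quantile matching the target power
    return int(np.ceil(((z_a + z_b) / delta) ** 2))

print(n_required(0.5))   # a medium shift of 0.5 sigma needs about 32 observations
```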
Correlation is a statistical technique that can show whether and how strongly pairs of
variables are related. Correlation is a bivariate analysis that measures the
strength of the relationship between two variables and the direction of the
relationship. For Pearson's correlation it is usually assumed that the
variables under study are normally distributed. Other assumptions
include linearity and homoscedasticity, i.e., a linear relationship
between the variables with the data normally distributed about the
regression line.
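A minimal computation of Pearson's correlation follows; the thread diameter and breaking strength numbers are hypothetical, chosen only to echo the linen-thread example given earlier.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired measurements: thread diameter (mm) and
# breaking strength (N) for six specimens.
diameter = np.array([1.02, 0.98, 1.05, 1.10, 0.95, 1.08])
strength = np.array([42.1, 40.3, 43.0, 44.8, 39.5, 44.1])

r, p_value = pearsonr(diameter, strength)  # r in [-1, 1], two-sided p-value
print(f"Pearson r = {r:.3f}, p = {p_value:.3f}")
```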
A basic assumption made in most traditional applications of control charts is
that the observations from the process are independent. When
the mean of the observations is being monitored, the mean is assumed to be constant
at the target value until a special cause occurs and produces a change in the
mean. However, for many processes, e.g., in the chemical and process
industries, there may be correlation between observations that are closely
spaced in time. For an FSI chart correlation is not a serious problem, but it may
become problematic for a VSI chart because some of the observations will be
taken using a relatively short sampling interval. The effects of correlated
observations on the performance of FSI control charts have been studied by
several authors. Goldsmith and Whitfield (1961), Bagshaw and Johnson (1975),
Harris and Ross (1991), Yashchin (1993), and Van Brackle and Reynolds (1994)
investigated the effect of correlation on EWMA charts. Vasilopoulos and
Stamboulis (1978), Maragah and Woodall (1992) and Padgett et al. (1992)
investigated $\bar{X}$ charts with correlated observations, where the
control limits are estimated from data. Alwan (1992) investigated the
capability of standard control charts for individual observations to identify
special causes, which are reflected as isolated extreme points in the presence
of correlation. Chou et al. (2001) studied the economic design of $\bar{X}$
charts for non-normally correlated data. Liu et al. (2002, 2003) examined the
minimum-loss design of $\bar{X}$ charts for correlated data; they also
studied the effect of correlation on the economic design of warning-limit
$\bar{X}$ charts. The optimal design of VSI $\bar{X}$ control charts for
monitoring correlated samples was discussed by Chen and Chiou (2005). Chen et
al. (2007) studied the economic design of VSSI $\bar{X}$ control charts for
correlated data. The economic design of double sampling $\bar{X}$ charts for
correlated data was studied by Torng et al. (2009). On examining these studies,
it may be concluded that if
control charts are applied under correlated observations the results can be
misleading. In particular, there may be more frequent false alarms when the
process is in control, and the detection of out-of-control situations may be
much slower than expected.
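The first effect is easy to quantify for within-sample equicorrelation: if 3-sigma limits are computed under the independence assumption, the actual false alarm probability becomes $2\Phi\bigl(-3/\sqrt{1+(n-1)\rho}\bigr)$, which grows with $\rho$. A short numerical check, with illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

# False alarm rate of 3-sigma X-bar limits set under an independence
# assumption, when the n observations in a sample are equicorrelated.
n = 5
for rho in (0.0, 0.1, 0.3, 0.5):
    alpha = 2 * norm.cdf(-3 / np.sqrt(1 + (n - 1) * rho))
    print(f"rho={rho}: alpha={alpha:.4f}")   # 0.0027 at rho=0, growing with rho
```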
In many situations where there is correlation between the observations, it is
not practical to assume that the process mean remains constant until a special
cause occurs. It may be more realistic to assume that the process mean wanders
continuously even though no special cause can be identified. For some
situations of this kind, the purpose of process monitoring may be the
application of some kind of engineering feedback control in addition to the
detection of special causes (MacGregor (1990)). To detect special causes in
correlated observations, several authors have studied alternatives to a control
chart that plots the original observations. Two general monitoring approaches
are recommended to deal with this problem. The first approach is to fit a time
series model to the data and then apply traditional control charts, such as the
Shewhart, EWMA (exponentially weighted moving average) and CUSUM (cumulative
sum) charts, to the residuals from the time series model. The second approach
is to use traditional control charts with modified control limits to monitor
the correlated observations directly. Alwan and Roberts (1988) showed that
using residuals from a time series (ARIMA) model may be appropriate if the
correct time series model is known, since the residuals of a correctly
specified model of a correlated process are independently and
identically distributed with mean zero and variance $\sigma^2$.
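A minimal sketch of this residual-chart approach, assuming a known AR(1) model (phi, mu and sigma_eps are taken as given here; in practice the model would be identified and its parameters estimated from the data):

```python
import numpy as np

def ar1_residual_chart(x, phi, mu, sigma_eps, k=3.0):
    """Shewhart chart applied to one-step residuals of an AR(1) model
    x_t - mu = phi*(x_{t-1} - mu) + eps_t, eps_t ~ N(0, sigma_eps^2).

    Returns the residuals and a boolean alarm flag for each of them."""
    x = np.asarray(x, dtype=float)
    resid = (x[1:] - mu) - phi * (x[:-1] - mu)  # approximately iid N(0, sigma_eps^2)
    alarms = np.abs(resid) > k * sigma_eps      # 3-sigma limits on residuals
    return resid, alarms

# Simulated in-control AR(1) data for illustration.
rng = np.random.default_rng(42)
x = [0.0]
for _ in range(200):
    x.append(0.6 * x[-1] + rng.normal(0.0, 1.0))
resid, alarms = ar1_residual_chart(x, phi=0.6, mu=0.0, sigma_eps=1.0)
print(alarms.sum(), "alarms in", len(resid), "points")
```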
Harris and Ross (1991) fit a time series model to the univariate observations
and then investigated the effect of correlation on the performance of CUSUM and
EWMA charts applied to the residuals. Montgomery and Mastrangelo (1991) showed that the
exponentially weighted moving average (EWMA) may be useful for correlated data
when the control chart is applied to the residuals of a time series model.
Wardell et al. (1994) demonstrated the ability of EWMA charts to detect a shift
more quickly than an individual Shewhart chart when the correlation follows an
ARMA(1,1) model; they also noted that residual charts are not sensitive to small
process shifts. EWMA control charts were studied by Lu and Reynolds (1995) for
monitoring the mean of a correlated process. They suggested that for low and
moderate levels of correlation, a Shewhart control chart of the observations is
better at detecting a shift in the process mean than a Shewhart chart of the
residuals, and they found that when there is high correlation in the process,
control charts based on estimated parameters must be used. To monitor a
multivariate process in the presence of correlation, Pan and Jarrett (2004)
proposed using the residuals of a vector autoregressive (VAR) model. Pan and
Jarrett (2007) extended Alwan and Roberts' approach to the multivariate case,
applying Hotelling's T-squared control charts to residuals from a VAR model.
Hwang and Wang (2010) proposed a Neural Network Identifier (NNI) for
multivariate correlated processes.

          In this chapter an attempt has been
made to examine the power of the $\bar{X}$ chart when the assumption of
independence of the data is violated. Expressions for the power of the
$\bar{X}$ chart are derived for different values of the correlation
coefficient and of the sample size. It is assumed that the process has a normal
distribution with mean $\mu$ and known variance $\sigma^2$. We further assume
that at the time of determining the control limits the process is in
statistical control.

2.2 Power of the $\bar{X}$ Chart in the Presence of Data Correlation

 In this development it is assumed that the process has a normal distribution
with mean $\mu$ and variance $\sigma^2$. It is further assumed that at the time
of determining the control limits the process is in statistical control, and
that the same measuring device is used as will be employed for later
measurements. We further assume that the observations come from a normal
population and are correlated; that is, the assumption of independence is
violated. Thus the data used for establishing the limits on the control chart
come from a process that is $N(\mu, \sigma^2)$. When the process shifts, the
data are assumed to come from a $N(\mu_1, \sigma^2 T^2)$ population. If samples
of size $n$ are taken from the population $N(\mu_1, \sigma^2 T^2)$ and the value
of $\bar{X}$ is plotted with control limits of $\mu \pm 3\sigma/\sqrt{n}$,
the power of detecting the change of the process is given by the following formula:

\[
P = 1 - P\left( \mu - 3\frac{\sigma}{\sqrt{n}} \le \bar{X} \le \mu + 3\frac{\sigma}{\sqrt{n}} \right). \tag{2.1}
\]

Converting to a standard normal distribution, we have

\[
P = 1 - P\left( \frac{\mu - 3\sigma/\sqrt{n} - \mu_1}{\sigma_{\bar{X}}} \le Z \le \frac{\mu + 3\sigma/\sqrt{n} - \mu_1}{\sigma_{\bar{X}}} \right), \tag{2.2}
\]

where $\sigma_{\bar{X}}^2 = \operatorname{Var}(\bar{X})$. As we assume that the
data come from a normal population and the observations are dependent, the
variance of $\bar{X}$ under the correlated data is given by

\[
\operatorname{Var}(\bar{X}) = \frac{1}{n^2}\left[ \sum_{i=1}^{n} \operatorname{Var}(X_i) + \sum_{i \ne j} \operatorname{Cov}(X_i, X_j) \right]
= \frac{\sigma^2 T^2 \left[ 1 + (n-1)\rho \right]}{n}, \tag{2.3}
\]

where $\operatorname{Cov}(X_i, X_j) = \rho\sigma^2 T^2$ for $i \ne j$, $\sigma^2$ is
assumed to be known, $\rho$ is the correlation coefficient and $n$ is
the sample size.
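Equation (2.3) is easy to verify by simulation. The sketch below (illustrative parameters, taking T = 1) draws equicorrelated normal samples and compares the empirical variance of the sample mean with the formula:

```python
import numpy as np

# Monte Carlo check of eq. (2.3): Var(X-bar) = sigma^2 * (1+(n-1)*rho) / n
# for equicorrelated normal observations (T = 1, illustrative parameters).
n, rho, sigma = 5, 0.3, 1.0
cov = sigma**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))

rng = np.random.default_rng(0)
samples = rng.multivariate_normal(np.zeros(n), cov, size=200_000)
print(samples.mean(axis=1).var())              # simulated Var(X-bar)
print(sigma**2 * (1 + (n - 1) * rho) / n)      # theoretical value: 0.44
```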

Now, substituting the standardized variable $Z = (\bar{X} - \mu_1)/\sigma_{\bar{X}}$
and writing $\mu_1 = \mu + \delta\sigma$, equation (2.1) can be written as:

\[
P = 1 - P\left( \frac{-3\sigma/\sqrt{n} - \delta\sigma}{\sigma_{\bar{X}}} \le Z \le \frac{3\sigma/\sqrt{n} - \delta\sigma}{\sigma_{\bar{X}}} \right) \tag{2.4}
\]
\[
= 1 - P\left( \frac{-3 - \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \le Z \le \frac{3 - \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \right)
\]
\[
= P\left( Z \le \frac{-3 - \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \right) + P\left( Z \ge \frac{3 - \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \right)
\]
\[
= \Phi\left( \frac{-3 - \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \right) + \Phi\left( \frac{-3 + \delta\sqrt{n}}{T\sqrt{1+(n-1)\rho}} \right), \tag{2.5}
\]

where $\delta = (\mu_1 - \mu)/\sigma$ is the change of the process average and
$\Phi(z) = P(Z \le z)$.
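A direct transcription of equation (2.5) into code (a Python sketch; we take T = 1, i.e., a pure mean shift, which is an assumption on our part):

```python
import numpy as np
from scipy.stats import norm

def xbar_power(delta, n, rho, k=3.0):
    """Power of the X-bar chart from eq. (2.5), with T = 1 and
    equicorrelation rho within each sample of size n."""
    c = np.sqrt(1.0 + (n - 1) * rho)   # sd inflation factor from eq. (2.3)
    return (norm.cdf((-k - delta * np.sqrt(n)) / c)
            + norm.cdf((-k + delta * np.sqrt(n)) / c))
```

Note that at $\delta = 0$ the same expression returns the false alarm probability, so the function covers both the in-control and out-of-control cases.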

 

2.3 Numerical Illustrations and Results

 For the purpose of illustrating the effect of correlation on the power of the
$\bar{X}$ chart, we have determined the values of the power function $P(\delta)$
for independent observations ($\rho = 0$) and for different values of the
correlation coefficient $\rho$. The values of the power function have been
calculated and the results are presented in Table 2.1 and Table 2.2 for $n = 5$
and $n = 7$ respectively. To give a visual comparison of the power functions for
different values of the correlation coefficient $\rho$ and of the change in
process average $\delta$, curves have been drawn in Fig. 2.3 and Fig. 2.4 for
$n = 5$ and $n = 7$; these illustrate the relationship between the change of
process average $\delta$ and the power of detecting this change in the presence
of correlation. The power depends upon the magnitude of the process change.
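For instance, using the xbar_power sketch given after equation (2.5), a small grid of values can be produced; these are computed directly from (2.5) with T = 1, and are not a reproduction of Tables 2.1 and 2.2.

```python
# Power for a grid of shifts delta and correlations rho (T = 1),
# using xbar_power defined in the sketch after eq. (2.5).
for n in (5, 7):
    for rho in (0.0, 0.25, 0.5):
        row = [round(xbar_power(d, n, rho), 4) for d in (0.5, 1.0, 2.0)]
        print(f"n={n}, rho={rho}: {row}")
```

Consistent with the figures, the computed power rises with the magnitude of the shift $\delta$ for every value of $\rho$.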