Guide to data analysis, quality control and improvement using cusum techniques

BS 5703-2:2003 - Part 2: Introduction to decision-making using cusum techniques.
Establishing the base criteria against which decisions are to be made is obviously an essential prerequisite. To provide an effective basis for detecting a signal, a suitable quantitative measure of "noise" in the system is required. What constitutes noise, and what constitutes signal, is determined by the monitoring strategy adopted, such as how many observations to take, how frequently to take them, and how to constitute a sample or a subgroup. The measure used to quantify variation can also affect the issue.
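To make the signal/noise idea concrete, the following minimal Python sketch (illustrative only, not taken from the standard) accumulates deviations of observations from an assumed target value; it is the slope of this cumulative sum, judged against a suitable measure of noise, that a cusum decision rule acts on. The target value and observations here are invented for illustration.

```python
# Minimal illustrative cusum calculation (not taken from BS 5703-2).
# The target value and observations below are made-up assumptions.
target = 10.0
observations = [10.2, 9.8, 10.1, 10.4, 10.6, 10.7, 10.9, 11.0]

cusum = []
running_total = 0.0
for x in observations:
    running_total += x - target   # accumulate the deviation from target
    cusum.append(running_total)

print(cusum)  # a sustained upward slope suggests the mean has drifted above the target
```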
In process management, two kinds of variation are taken into account.
a) Special cause variation: a special cause is a source of variation that arises because of specific circumstances that are not always present. The effect can be transient, sporadic or persistent. Instances are isolated extreme values, step changes, runs, trends, cycles and periodicity. A process that is subject to special cause variation is said to be "out of control". Elimination of special causes brings a process "under control".
b) Common cause variation: a common cause is a source of variation that is inherent in a process over time. A process that is subject only to common cause variation is said to be stable or in a state of "statistical control". Reduction of common cause variation, and/or adjustment of the mean to the preferred value, gives rise to process performance improvement.
It is usual to measure inherent variation by means of a statistical measure termed either of the following.
1) Standard deviation: where individual observations are the basis for plotting cusums. The individual observations for calculation of the standard deviation are often taken from a homogeneous segment of the process. This performance then becomes the more onerous criterion against which to judge. Any variation greater than this inherent variation is taken to arise from special causes, indicating a shift in the mean of the series and/or a change in the nature or magnitude of the variability.
2) Standard error: where some function of a subgroup of observations, such as the mean, median or range, forms the basis for cusum plotting. The concept of subgrouping is that variation within a subgroup is made up of common causes, with all special causes of variation occurring between subgroups. The primary role of the cusum chart is then to distinguish between common and special cause variation. Hence the choice of subgroup is of vital importance. For example, making up each subgroup from four consecutive items taken from a high-speed production process each hour, as opposed to taking one item every quarter of an hour to make up a subgroup of four every hour, would give very different variabilities on which to base a decision. The standard error would be minuscule in the first instance compared with the second. One cusum chart would be set up with consecutive-part variation as the basis for decision-making, as opposed to 15 min to 15 min variation for the other chart. Which is the appropriate measure of underlying variability will depend on which changes it is required to signal. A sketch contrasting the two measures follows this list.
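As a rough illustration of the two measures, the Python sketch below (the data and the subgroup size are assumptions for illustration, not values from the standard) computes the standard deviation of individual observations and an estimate of the standard error of a subgroup mean from pooled within-subgroup variation.

```python
import statistics

# Illustrative data only (not from the standard): 16 individual observations,
# grouped into four subgroups of four consecutive items.
individuals = [10.1, 9.9, 10.2, 10.0, 10.3, 9.8, 10.1, 10.0,
               10.2, 10.1, 9.9, 10.0, 10.4, 10.2, 10.0, 9.9]
subgroup_size = 4
subgroups = [individuals[i:i + subgroup_size]
             for i in range(0, len(individuals), subgroup_size)]

# 1) Standard deviation of the individual observations.
sd_individuals = statistics.stdev(individuals)

# 2) Standard error of a subgroup mean, estimated from within-subgroup variation:
#    pool the within-subgroup variances, then take sqrt(pooled variance / n).
pooled_variance = statistics.mean(statistics.variance(sg) for sg in subgroups)
standard_error = (pooled_variance / subgroup_size) ** 0.5

print(f"standard deviation of individuals: {sd_individuals:.3f}")
print(f"standard error of a subgroup mean: {standard_error:.3f}")
```

With consecutive items in each subgroup, the within-subgroup variation, and hence the standard error, tends to be small; spreading the four items across the hour would fold more of the hour-to-hour variation into each subgroup and give a larger standard error.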
However, the prerequisite that stability should exist over a sufficient period to establish reliable
quantitative measures, such as standard deviation or standard error, is too restrictive for some potential areas of application of the cusum method.
For instance, observations on a continuous process can exhibit small, unimportant variations in the average level. It is against these variations that systematic or sustained changes should be judged. Illustrations are:
i) an industrial process is controlled by a thermostat or other automatic control device;
ii) the quality of raw material input can be subject to minor variations without violating specification;
iii) in monitoring a patient's response to treatment, there might be minor metabolic changes connected with meals, hospital or domestic routine, etc., but any effect of treatment should be judged against the overall typical variation.
On the other hand, samples can comprise output or observations from several sources (administrative regions, plants, machines and operators). As such, there might be too much local variation to provide a realistic basis for assessing whether or not the overall average shifts. Because of this, data arising from a combination of sources should be treated with caution, as any local peculiarities within each contributing source might be overlooked. Moreover, variation between the sources might mask any changes occurring over the whole system as time progresses.
Serial correlation between observations can also manifest itself, namely, one observation might have some influence over the next. An illustration of negative serial correlation is the use of successive gauge readings to estimate the use of a bulk material, where an overestimate on one occasion will tend to produce an underestimate on the next reading. Another example is where over-ordering in one month is compensated by under-ordering in the subsequent month. Positive serial correlation is likely in some industrial processes where one batch of material might partially mix with preceding and succeeding batches. Budgetary and accounting interval ends, dueness of organization year-end results, project milestones and contract deadlines can affect the allocation of successive business figures, such as costs and sales on a period-to-period basis, and so on.
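Where serial correlation is suspected, the lag-1 autocorrelation coefficient gives a quick quantitative check before a measure of variation is chosen. The following Python sketch uses invented monthly ordering figures; a markedly negative value corresponds to the over-ordering/under-ordering pattern described above, a markedly positive value to situations such as batch mixing.

```python
def lag1_autocorrelation(series):
    """Estimate the lag-1 autocorrelation coefficient of a series."""
    n = len(series)
    mean = sum(series) / n
    numerator = sum((series[t] - mean) * (series[t + 1] - mean) for t in range(n - 1))
    denominator = sum((x - mean) ** 2 for x in series)
    return numerator / denominator

# Illustrative usage with made-up monthly ordering figures that alternate
# high/low, as in the over-ordering/under-ordering example above.
orders = [120, 80, 115, 85, 125, 78, 118, 84]
print(f"lag-1 autocorrelation: {lag1_autocorrelation(orders):.2f}")  # strongly negative
```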
In view of these aspects it is necessary to consider other quantitative measures of variation in the series or sequences of data and the circumstances in which they are appropriate.
Such measures of variation on which to base decision-making using cusums are developed, in a quantitative sense, in Annex A. Recommendations are also made as to which to choose depending on the circumstances.
5.3 Measuring the effectiveness of a decision rule
5.3.1 Basic concepts
The ideal performance of a decision rule would be for real changes of at least a pre-specified magnitude to be detected immediately and for a process with no real changes to be allowed to continue indefinitely without giving rise to false alarms. In real life this is not attainable. A simple and convenient measure of actual effectiveness of a decision rule is the average run length (ARL).
The ARL is the expected value of the number of samples taken up to that which gives rise to a decision that a real change is present.
If no real change is present, the ideal value of the ARL is infinity. A practical objective in such a situation is to make the ARL large. Conversely, when a real change is present, the ideal value of the ARL is 1, in which case the decision is made when the next sample is taken. The choice of the ARL is a compromise between these two conflicting requirements. Taking an incorrect decision to act when the process has not changed gives rise to "over-control". This will, in effect, increase variability. Not taking appropriate action when the process has changed gives rise to "under-control". This will also, in effect, increase variability.
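The ARL of a particular decision rule can be estimated by simulation. The Python sketch below (the reference value k, decision interval h, the one-sigma shift and the number of runs are assumptions for illustration, not values from the standard) applies a one-sided tabular cusum to simulated observations and averages the run lengths, once with no real change and once after a sustained shift.

```python
import random

def run_length(mean, sigma=1.0, k=0.5, h=4.0, max_samples=100_000):
    """Number of samples until a one-sided tabular cusum signals (illustrative)."""
    cusum_high = 0.0
    for n in range(1, max_samples + 1):
        x = random.gauss(mean, sigma)
        cusum_high = max(0.0, cusum_high + x - k)  # target mean taken as 0
        if cusum_high > h:
            return n
    return max_samples

def average_run_length(mean, runs=2000):
    """Average the run length over a number of simulated runs."""
    return sum(run_length(mean) for _ in range(runs)) / runs

random.seed(1)
print("ARL with no real change (mean 0):", round(average_run_length(0.0)))
print("ARL after a shift of one sigma:  ", round(average_run_length(1.0)))
```

A workable rule is one for which the first figure is large (few false alarms) and the second is small (prompt detection of a real change).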
