1. Studies in Quality Improvement: Dispersion Effects from Fractional Designs
by George Box and R. Daniel Meyer. (February 1986).
The expense of repeating measurements can sometimes be avoided by using unreplicated fractional factorials to identify factors that affect dispersion. Publication(s): Technometrics, 1986, Vol. 28, No. 1, pp. 19-27.
2. An Analysis for Unreplicated Fractional Factorials
by George Box and R. Daniel Meyer. (February 1986).
New procedures for analyzing unreplicated fractional factorial designs make them easier to use. Publication(s): Technometrics, 1986, Vol. 28, No. 1, pp. 11-18.
3. Analysis of Unreplicated Factorials Allowing for Possibly Faulty Observations
by George Box and R. Daniel Meyer. (February 1986).
Inaccurate data points are particularly troublesome in the analysis of unreplicated factorial experiments, but new techniques allow investigators to overcome this difficulty. Publication(s): Design, Data, and Analysis, Colin Mallows (ed.), Wiley, (1987), pp. 1-12.
4. Managing Our Way to Economic Success: Two Untapped Resources
by William G. Hunter. (February 1986).
American organizations could compete much better at home and abroad if they would learn to tap the potential information inherent in all processes and the creativity inherent in all employees.
5. My First Trip to Japan
by Peter R. Scholtes. (February 1986).
American visitors to Japan can learn much about what it takes to successfully implement quality improvement.
6. Total Quality Leadership vs. Management by Control
by Brian L. Joiner and Peter R. Scholtes. (February 1988).
To survive in increasingly tough markets, top management in American companies will have to forsake their desire to "control" their employees, and instead learn what it means to provide Total Quality Leadership.
7. Studies in Quality Improvement: Designing Environmental Regulations
by Soren Bisgaard and William G. Hunter. (February 1986).
There is a surprising similarity between what SPC provides for industry and what is needed to construct sensitive, reliable standards for environmental regulations. Publication(s): US EPA-230-03-047 publication, Paul I. Feder (ed.), Washington, DC, (1987), pp. 41-53.
8. Studies in Quality Improvement: Minimizing Transmitted Variation by Parameter Design
by George Box and Conrad A. Fung. (February 1986).
By properly designing products and taking the inevitable variation in components into account, engineers can minimize the amount of variation that ultimately shows up in finished products.
9. A Useful Method for Model-Building II: Synthesizing Response Functions from Individual Components
by William G. Hunter and Andrzej P. Jaworski. (February 1986).
Analyzing which components of a response are due to each factor is an alternative way to find the best model for studying the properties of a product or process (and thus for improving both). Publication(s): Technometrics, November 1986, Vol. 28, No. 4, pp. 321-327.
10. The Next 25 Years in Statistics
by William J. Hill and William G. Hunter (With contributions by Joseph W. Duncan, A. Blanton Godfrey, Brian L. Joiner, Gary C. McDonald, Charles G. Pfeifer, Donald W. Marquardt, and Ronald D. Snee). (February 1986).
A transformation of the American style of management has already begun; in order for it to succeed, statisticians must assume a leadership role. Publication(s): Chance, 1990, No. 1, pp. 38-39.
11. Signal to Noise Ratios, Performance Criteria and Statistical Analysis: Part I
by George Box.
Signal to noise ratios have been used to identify what combination of factors can actually produce the desired product characteristics with the minimum amount of dispersion (variance), but it turns out that these measures are dependent on how the data are transformed. (This report, along with Report 12, is no longer in print. A combination and extension of Reports 11 and 12 now appear as Report 26.)
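As a minimal sketch of the quantity at issue (not taken from the report; the data are invented), Taguchi's "nominal-is-best" signal-to-noise ratio can be computed, and its dependence on the chosen transformation observed, as follows:

```python
import numpy as np

def sn_ratio(y):
    """Taguchi's 'nominal-is-best' signal-to-noise ratio,
    SN = 10*log10(ybar^2 / s^2): larger values indicate less
    dispersion relative to the mean."""
    y = np.asarray(y, dtype=float)
    return 10.0 * np.log10(y.mean() ** 2 / y.var(ddof=1))

y = [10.2, 9.8, 10.5, 9.9]        # illustrative replicate measurements
print(sn_ratio(y))                # SN on the original scale
print(sn_ratio(np.log(y)))        # a log transformation changes the value
```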
12. Signal to Noise Ratios, Performance Criteria and Statistical Analysis: Part II
by George Box and José Ramírez.
Many of the criteria being used to measure performance of a process have hidden biases and inadequacies, but there are techniques that avoid these pitfalls. (This report, along with Report 11, is no longer in print. A combination and extension of Reports 11 and 12 now appear as Report 26.)
13. Doing More With Less in the Public Sector: A Progress Report from Madison, Wisconsin
by William G. Hunter, Jan O'Neill, and Carol Wallen. (June 1986).
The new quality improvement ideas can help public officials combat the effects of decreasing budgets just as they help private businesses increase productivity. Publication(s): Quality Progress, July 1987, pp. 19-26.
14. Drastic Changes for Western Management
by W. Edwards Deming. (June 1986).
This report is a compact summary of the most important points that Dr. W. Edwards Deming has been making about changes that must be made by American businesses if they are to be competitive.
15. How to Apply Japanese Company-Wide Quality Control in Other Countries
by Kaoru Ishikawa. (November 1986).
This report highlights the experiences of Kaoru Ishikawa, a leader in Japan's QC movement, who has spent the last 20 years visiting countries all over the world to give lectures and guidance on QC implementation. Publication(s): Quality Progress, September 1989, Vol. 22, No. 9, pp. 70-74.
16. Analysis of Fractional Factorials
by R. Daniel Meyer. (June 1986).
Statistically designed experiments, particularly fractional factorial designs, are key tools to use when the object is to screen a large number of variables in order to identify those with the most influence.
17. Eliminating Complexity from Work: Improving Productivity by Enhancing Quality
by F. Timothy Fuller. (July 1986).
Increasing quality does not increase costs; in fact, it is poor quality that increases "complexity," which in turn increases cost and decreases productivity. Publication(s): National Productivity Review, Autumn, 1985.
18. The World Class Quality Company
by William A. Golomski. (December 1986).
Through a long history of consulting with companies around the world, William Golomski has found some themes common to companies capable of achieving world class quality.
19. An Investigation of the Method of Accumulation Analysis
by George Box and Stephen Jones. (December 1986).
A discussion of Taguchi's method for analyzing ordered categorical data. Publication(s): Technometrics, November 1986, Vol. 28, No. 4, pp. 295-301 and also in Total Quality Management, 1990, Vol. 1, No. 1, pp. 101-113.
20. A Critical Look at Accumulation and Related Methods
by Mike Hamada and C.F. Jeff Wu. (November 1986).
Using accumulation analysis on ordered categorical data can often result in the detection of spurious effects. Publication(s): Technometrics, 1990, Vol. 32, No. 2, pp. 119-162.
21. A Process for Consulting for Improvement in Quality and Productivity
by Spencer Graves. (November 1986).
A process that consultants can use to improve their effectiveness.
22. Further Details of an Analysis for Unreplicated Fractional Factorials
by R. Daniel Meyer. (February 1987).
Some important implications and statistical details of Box and Meyer's formal approach to the analysis of unreplicated fractional factorial experiments.
23. Identification of Active Factors in Unreplicated Fractional Factorial Experiments
by R. Daniel Meyer and George Box. (February 1987).
How to pinpoint the most likely explanation for the results of unreplicated fractional factorial experiments.
24. An Investigation of OA-based Methods for Parameter Design Optimization
by C.F.J. Wu, S.S. Mao, and F.S. Ma. (April 1987).
There exist simpler alternatives for analyzing the results of a designed experiment than the orthogonal array methods proposed by Taguchi. Publication(s): Statistical Design and Analysis of Industrial Experiments, edited by Subir Ghosh, Marcel Dekker, New York, 1990, pp. 279-310.
25. The Scientific Context of Quality Improvement
by George Box and Soren Bisgaard. (September 1987).
Scientific method is a key ingredient in the new philosophy of quality and productivity improvement. This paper provides an overview, discusses new ideas on how to design quality into products and processes, and evaluates Taguchi's work and puts it in context. Publication(s): Quality Progress, June 1987, Vol. 4, No. 2, pp. 54-62.
26. Signal to Noise Ratios, Performance Criteria and Transformation
by George Box. (July 1987).
The relevance, efficiency and relation to transformations of Taguchi's signal to noise ratios are critically discussed. (This report is a combination and extension of Reports 11 and 12.) Publication(s): Technometrics, 1988, Vol. 30, No. 1, pp. 1-17.
27. On Quality Practice in Japan
by George Box, Raghu Kackar, Vijay Nair, Madhav Phadke, Anne Shoemaker, and C.F. Jeff Wu. (December 1987).
This report contains a summary of impressions from a study mission to Japan by researchers from AT&T Bell Laboratories and the Center for Quality and Productivity Improvement, University of Wisconsin. It describes important quality initiatives seen in Japan and provides a comparative analysis between the United States and Japan. Publication(s): Quality Progress, March 1988, Vol. 21, No. 3, pp. 37-41.
28. An Explanation and Critique of Taguchi's Contributions to Quality Engineering
by George Box, Søren Bisgaard, and Conrad Fung. (March 1988).
This paper presents an overview of Professor Genichi Taguchi's contributions and concludes that Professor Taguchi's quality engineering ideas are of great importance. However, many of the statistical design and analysis techniques he employs are often inefficient and unnecessarily complicated and should be replaced or appropriately modified. Publication(s): Quality and Reliability Engineering International, 1988, Vol. 4, No. 2, pp. 123-131.
29. Analysis of Incomplete Data from Highly Fractionated Experiments
by Michael Hamada and C.F. Jeff Wu. (April 1988).
An iterative method is proposed that provides a simple and flexible way to consider many models simultaneously. The method can be implemented with existing software, results in computational savings and promotes experimenter involvement. Publication(s): Technometrics, 1991, Vol. 33, No. 1, pp. 25-38.
30. Discriminant Upset Analysis
by Paul M. Berthouex, George Box, and Agustinus Darjatmoko. (May 1988).
This report presents an application of discriminant analysis in setting rules for early warning indicators of process upsets in wastewater treatment plant operation.
31. Quality Improvement: An Expanding Domain for the Application of Scientific Method
by George Box. (July 1988).
Sir Ronald Fisher's work on data analysis and experimental design made possible applications of scientific method to quality improvement in industry and everyday life. This paper discusses how quality improvement provides an expanding domain for scientific method. Publication(s): Philosophical Transactions of the Royal Society, 1989, "Industrial Quality and Reliability," pp. 139-152.
32. The Quality Detective: A Case Study
by Soren Bisgaard. (June 1988).
A case study is presented that illustrates the practical problems of conducting experiments in an industrial environment. Publication(s): Philosophical Transactions of the Royal Society, 1989, "Industrial Quality and Reliability," pp. 499-511.
33. A Contour Nomogram for Designing Cusum Charts for Variance
by José Ramírez and Jesús Juan. (February 1989).
A contour nomogram is given that helps in the design of cumulative sum charts for variance when the observations are normally distributed.
34. When Murphy Speaks - Listen
by George Box. (February 1989).
Every operating system supplies information on how it can be improved, but this information is often not acted on because people believe they are powerless to alter the system. The needed change in management philosophy and the necessity of input from those closest to the system are discussed. Three strategies for system improvement - corrective feedback, preemptive feedforward, and simplification - are described. Publication(s): Quality Progress, October 1989, Vol. 22, No. 10, pp. 79-84.
35. The Necessity of Modern Quality Improvement and Some Experience with its Implementation in the Manufacture of Rolling Bearings
by C. Hellstrand. (March 1989).
SKF restructured its manufacturing world-wide in response to competition from Japan in the early 1970s. The necessity of a company-wide quality procedure soon became evident. Its implementation later paved the way for both the implementation of statistical process control (SPC) and of experimental design throughout the company. Topics discussed include the structure of SKF quality procedures, difficulties and benefits experienced while implementing SPC and experimental design throughout the organization. Publication(s): Philosophical Transactions of the Royal Society, 1989, Series A, 327, pp. 529-537.
36. Quality in the Community: One City's Experience
by George Box, Laurel W. Joiner, Sue Rohan, and F. Joseph Sensenbrenner. (June 1989).
This report highlights the evolution of the quality movement in Madison, Wisconsin, and addresses what it takes to start a quality improvement network or similar organization. Publication(s): Quality Progress, May 1991, Vol. 24, No. 5, pp. 57-63.
37. Case Study: Experimental Design in a Pet Food Manufacturing Company
by Albert Prat and Xavier Tort. (October 1989).
Experimentation in the complex world of industry and service organizations requires a deep understanding of the basic engineering concepts underlying the process being studied, as well as relevant technical and economic constraints. The experimental design described in this report is a plant experiment where those constraints were taken into account. Several responses were measured, for the goal was not only to improve quality but also to increase productivity and reduce cost.
38. Teaching Statistics to Engineers
by Soren Bisgaard. (October 1989).
The fact that many engineers have only recently "discovered" statistics suggests that we need to reconsider our approach to teaching this important science. In this report, Soren Bisgaard reports on his experience teaching engineers using an approach that integrates statistics into engineering practice. Publication(s): The American Statistician, November 1991, Vol. 45, No. 4, pp. 274-283.
39. Integration of Techniques in Process Development
by George Box. (January 1990).
The reasons why an iterative approach to experimentation is needed are explained. The consequent implications for the use of screening designs (fractional factorials and other orthogonal arrays), response surface designs, and mechanistic modeling studies are discussed. Publication(s): Transactions of the 11th Annual Convention of the American Society for Quality Control, 1957, pp. 687-702.
40. Quality Engineering and Taguchi Methods: A Perspective
by Soren Bisgaard. (January 1990).
Robust product design and parameter design - methods to develop products that will perform well regardless of changes in uncontrollable environmental conditions or that are insensitive to component variation - are key concepts in the work of Dr. Taguchi. We should encourage design and manufacturing engineers to apply these useful ideas. But in designing experiments and analyzing data - key aspects of the practical implementation - better and simpler methods are available and should be preferred over Taguchi's less intuitive and more cumbersome approaches. Publication(s): Target, October 1989, pp. 13-19.
41. Statistical Process Control and Automatic Process Control-A Discussion
by George Box and Tim Kramer. (January 1990).
The roles of Statistical Process Control for process monitoring and of Automatic Process Control for process regulation are considered and common misunderstandings discussed. Simple examples are used to show how the characteristics of the disturbance affecting the system, of the process dynamics and of various costs, decide the nature of optimal control schemes. Publication(s): Technometrics, 1992, Vol. 34, No. 3, pp. 251-285.
42. Process Control From An Economic Point of View-Chapter 1: Industrial Process Control
by Tim Kramer. (February 1990).
Some of the issues mentioned in technical Report #41 concerning the proper uses of Statistical Process Control and Automatic Process Control are discussed in greater detail in this report.
43. Process Control From An Economic Point of View-Chapter 2: Fixed Monitoring and Adjustment Costs
by Tim Kramer. (February 1990).
The problem of choosing a control scheme that minimizes the combined costs of monitoring, adjustment and being off-target is discussed. In addition to the various costs, the choice depends on the nature of the disturbance and the dynamic relationship between control actions and their effects.
44. Process Control From An Economic Point of View-Chapter 3: Dynamic Adjustments and Quadratic Costs and Chapter 4: Summary and Future Research
by Tim Kramer. (February 1990).
When a process is such that there is appreciable delay in control action taking effect, minimal variance feedback control can require excessive adjustment. In this report it is supposed that the cost of making the adjustments is proportional to the square of the size of the adjustment and the minimal cost schemes that result are considered. These "damped" schemes are compared with minimum mean square error schemes which use a longer monitoring interval.
45. An Application of Taguchi's Methods Reconsidered
by Veronica Czitrom. (July 1990).
Two aspects of Taguchi's methods for analyzing parameter design experiments that can be improved upon are considered. It is shown how using interaction graphs instead of marginal graphs, and how using the sample variance instead of a signal-to-noise ratio, can lead to product designs that are more robust to variation. The advantages of the alternative analysis will be illustrated by re-analyzing a case study considered by Barker (1986). Publication(s): Presented at the ASA Meeting, Washington, DC, 1989.
46. Do Interactions Matter?
by George Box. (November 1989).
It has recently been argued that in an industrial setting the detection and elucidation of interactions between variables is unimportant. In this report a contrary view is advanced and illustrated with examples. Publication(s): Quality Engineering, 1990, Vol. 2, No. 3, pp. 365-369.
47. Must We Randomize Our Experiment?
by George Box. (December 1989).
The importance of randomization in the running of valid experiments in an industrial context is sometimes questioned. In this report the essential issues are discussed and guidance is provided. Publication(s): Quality Engineering, 1990, Vol. 2, No. 4, pp. 497-502.
48. Good Quality Costs Less? How Come?
by George Box. (March 1990).
It is sometimes supposed that the manufacture of high quality goods must be expensive. The reasons why this need not be so and why quality should cost less are discussed. Publication(s): Quality Engineering, 1990-91, Vol. 3, No. 1, pp. 85-90.
49. Design of Standards and Regulations
by Soren Bisgaard. (February 1990).
This report outlines how statistical terminology, concepts and methods can help in the design of better specifications of laws and standards. Publication(s): Journal of the Royal Statistical Society, 1991, Series A 154, Part I, pp. 93-96.
50. An Application of Box-Jenkins Methodology to the Control of Gluten Addition in a Flour Mill
by T. Fearn and P. I. Maris. (August 1990).
The approach of Box and Jenkins was used to design a control algorithm for a feedback loop controlling the addition of dried gluten to breadmaking flour in a flour mill. The variations to be controlled were modeled by an IMA(0,1,1) process and the system dynamics identified as a simple delay. The resulting optimal control strategy was implemented and worked well.
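A minimal sketch of the resulting control rule, assuming for illustration a unit process gain and a smoothing constant of 0.2 rather than the values estimated in the study: for an IMA(0,1,1) disturbance with a simple delay, minimum mean square error control reduces to integral adjustment proportional to each observed deviation.

```python
def integral_adjustments(deviations, lam=0.2, gain=1.0):
    """Integral (MMSE) feedback for an IMA(0,1,1) disturbance with a
    simple delay: change the manipulated variable by -(lam/gain) times
    each observed deviation from target.  lam = 1 - theta is the EWMA
    smoothing constant implied by the disturbance model; both values
    here are illustrative, not those of the flour-mill study."""
    return [-(lam / gain) * e for e in deviations]

# e.g. observed protein deviations from target -> gluten-feed changes
print(integral_adjustments([0.8, -0.3, 0.5]))
```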
51. Existence and Uniqueness of the Solution of the Likelihood Equations for Binary Markov Chains
by Soren Bisgaard and Laurel E. Travis. (December 1990).
The two-state Markov chain is a useful model when analyzing binary data that may be serially correlated. The likelihood equations for this model are two intersecting conics, and several solutions could be anticipated. However, we prove the existence and uniqueness of the solution, and give a simple method for numerical solution. Publication(s): Statistics and Probability Letters, 1991, Vol. 12, No. 1, pp. 29-35.
52. Quality Improvement Approaches for Chemical Processes
by William J. Hill and Lane Bishop. (August 1990).
Quality improvement of chemical processes through the use of design of experiments (DOE), variance component analysis, and process noise simulation models is the focus of this report. A case history of a nylon process serves as the backdrop for how effective these "second generation" tools can be in the process industries. The memory of Dr. William G. Hunter and his philosophy provide the central theme and message for the discussion. Publication(s): Quality Engineering, 1990-91, Vol. 3, No. 2, pp. 137-152.
53. Constrained Experimental Designs Part I: Construction of Projection Designs
by Ian Hau and George Box. (October 1990).
Experimental design is a powerful tool for quality improvement. In some situations, however, the design variables are subject to multiple linear constraints. In this report, we propose a class called projection designs which can be used when the design variables are constrained by linear relations. Fundamental issues such as region of interest and scaling of design variables are discussed. Then the construction of projection designs is illustrated.
54. Constrained Experimental Designs Part II: Analysis of Projection Designs
by Ian Hau and George Box. (October 1990).
In this report, we discuss the analysis of projection designs proposed in Report #53. We show that analyzing projection designs is essentially the same as analyzing some traditional unconstrained designs such as factorial designs and composite designs.
55. Constrained Experimental Designs Part III: Steepest Ascent and Properties of Projection Designs
by Ian Hau and George Box. (October 1990).
Steepest ascent is an important tool for process improvement. This report discusses how to use the steepest ascent method in the context of constrained designs. The properties of the projection designs proposed in Report #53 are also discussed.
56. Designing Products That Are Robust To The Environment
by George Box and Stephen Jones. (March 1990).
Professor Genichi Taguchi has emphasized the use of designed experiments in several novel and important applications. The engineering concept of robust product design is important since it is frequently impossible or prohibitively expensive to control or eliminate sources of variation due to environmental conditions. In robustness experiments, Professor Taguchi's total experimental arrangement consists of a cross-product of two experimental designs, an inner array containing the design factors and an outer array containing the environmental factors. Except in situations where both of these arrays are small, this arrangement may involve a prohibitively large amount of experimental work. One of the objectives of this report is to show how this amount of work can be reduced. Publication(s): Total Quality Management, 1992, Vol. 3, No. 3, pp. 265-282.
57a. A Simple Way to Deal With Missing Observations From Designed Experiments
by George Box. (September 1990).
A common difficulty in using designed experiments is that for one reason or another certain observations may be missing. This article discusses a simple way due to Draper and Stoneman to deal with this problem for two level factorials and fractional factorials. Some broader philosophical issues concerning missing observations are also discussed. Publication(s): Quality Engineering, 1990-91, Vol. 3, No. 2, pp. 249-254.
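A minimal sketch of the Draper and Stoneman device for a hypothetical 2^3 experiment: choose a contrast expected to be negligible (usually the highest-order interaction) and set the missing observation to the value that makes that contrast exactly zero.

```python
def fill_missing(signs, y):
    """Solve sum(signs * y) = 0 for the single missing response, where
    'signs' is the +/-1 column of a contrast assumed negligible
    (e.g. the highest-order interaction) and y contains one None."""
    miss = y.index(None)
    known = sum(s * v for s, v in zip(signs, y) if v is not None)
    return -known / signs[miss]

abc = [-1, 1, 1, -1, 1, -1, -1, 1]      # ABC column, standard run order
y = [60, 72, 54, 68, None, 52, 80, 83]  # run 5 was lost (invented data)
y[4] = fill_missing(abc, y)
print(y[4])  # 51.0
```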
57b. Finding Bad Values in Factorial Designs
by George Box. (September 1990).
Sometimes the results from a designed experiment contain "bad or suspect" values. This article discusses a simple way, due to Cuthbert Daniel, of detecting a bad value. It also describes how you might re-estimate its value. More general issues are considered surrounding observations that appear discrepant. Publication(s): Quality Engineering, 1990-91, Vol. 3, No. 3, pp. 405-410.
58. Cumulative Score Charts
by George Box and José Ramírez. (February 1991).
In this paper we develop Cuscore statistics, which can be used as an adjunct to the Shewhart chart to check for specific feared kinds of process deviation. These statistics use an idea due to Box and Jenkins (1966) which is in turn an application of Fisher's score statistic. We show how the resulting procedures relate to Wald-Barnard sequential tests and to Cusum statistics, which are special cases of Cuscore statistics. The ideas are illustrated by a number of examples. These concern the detection in a noisy environment of (a) an intermittent sine wave, (b) a change in slope of a line, (c) a change in an exponential smoothing constant, and (d) a change from a stationary to a non-stationary state in a process record. Publication(s): Quality and Reliability Engineering, January 1992, Vol. 8, pp. 17-27.
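A minimal sketch of a Cuscore for case (a), the intermittent sine wave; the frequency, amplitude, and data below are invented:

```python
import numpy as np

def cuscore(residuals, detector):
    """Cuscore statistic: the running sum of null-model residuals
    times a 'detector' series chosen to resonate with the feared
    departure (an application of Fisher's score statistic)."""
    return np.cumsum(np.asarray(residuals) * np.asarray(detector))

rng = np.random.default_rng(1)
t = np.arange(200)
signal = np.sin(2 * np.pi * t / 12)          # suspected waveform
y = rng.normal(size=t.size) + 0.4 * signal   # noisy record, target = 0
print(cuscore(y, signal)[-1])                # drifts upward if the sine is present
```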
59. Teaching Quality Improvement by Quality Improvement in Teaching
by Ian Hau. (February 1991).
In response to disturbing challenges ahead, leaders at the University of Wisconsin-Madison are committed to transforming the institution into a Total Quality University. As a pilot project in the transformation, this paper describes how students and the instructor worked as a team to improve the quality of teaching in a class. Treating students as customers, the team identified 50 areas that affected the quality of teaching. A class survey revealed six areas where most students indicated problems. The instructor then implemented changes which dramatically reduced the defect rate as viewed by the customers in these areas. For example, the defect rate dropped from 78% to 22% for computer instruction, 56% to 8% for blackboard presentation, and 82% to 20% for overhead presentation. The team also developed a system to transfer their knowledge to the next team to ensure never-ending improvement in the future.
60. An Analysis of Taguchi's Method of Confirmatory Trials
by Soren Bisgaard and Neil Diamond. (October 1990).
Taguchi has suggested a new method of confirmatory trials that is intended to test for possible interaction effects. The method is being promoted by many recent authors. A careful analysis of this method, however, shows that confirmatory trials are ineffective in detecting the presence of interaction effects. In fact in many cases, it is shown that the probability of detecting an interaction effect decreases rather than increases as the size of the interaction effect increases. Thus, we advise practitioners not to adopt this new method as a test for interactions.
61. Split-Plot Designs for Robust Product Experimentation
by George Box and Stephen Jones. (May 1990).
More details of the issues in Report 56 are discussed in this report. Consideration of the efficiency of split-plot designs indicates that experiments conducted in a split-plot mode can be of tremendous value in robust product design since they not only enable the contrasts of interest to be estimated efficiently but also the experiments can be considerably easier to conduct than the designs proposed by Professor Taguchi. Publication(s): Journal of Applied Statistics, 1992, Vol. 19, No. 1, pp. 3-26.
62. Robust Product Designs, Part I: First-Order Models with Design x Environment Interactions
by George Box and Stephen Jones. (May 1990).
In this paper we apply the strategy developed in Report 56 to a simple first-order model with interactions between the design and the environmental factors. For this model we derive the robustness measure and give a series of tables of designs that are appropriate for the possible objectives of the experimenter.
63. Robust Product Designs, Part II: Second-Order Models
by George Box and Stephen Jones. (May 1990).
In this paper we apply the strategy developed in Report 56 to a general second-order model. For this model we derive the robustness measure and give a series of tables of designs that are appropriate for the possible objectives of the experimenter.
64. Robust Product Designs, Part III: Second-Order Models with Additional Third-Order Terms
by George Box and Stephen Jones. (May 1990).
In this paper we apply the strategy developed in Report 56 to a general second-order model with additional third-order terms. For this model we derive the robustness measure and give a series of tables of designs that are appropriate for the possible objectives of the experimenter.
65. Sequential Methods in Statistical Process Monitoring, Chapter 1: Introduction and Chapter 2: Sequential Monitoring of Variances
by George Box and José Ramírez. (May 1991).
A CUSUM chart to monitor variability, based on the Wald-Barnard likelihood ratio test, is introduced. A nomogram to aid in the construction of the charts is also presented.
66. Sequential Methods in Statistical Process Monitoring, Chapter 3: Design of CUSUM Charts
by George Box and José Ramírez. (May 1991).
In this report we show how to design CUSUM charts to monitor process variability. An eight step outline for the implementation of these charts is included.
67. Sequential Methods in Statistical Process Monitoring, Chapter 4: Sequential Monitoring of Models, Chapter 5: Summary and Future Research, and Appendix: Table of ARL Values
by George Box and José Ramírez. (May 1991).
An extension of the cumulative sum charts based on Fisher's score function is introduced. This CUSCORE can be used to monitor parameter changes in a model. It is shown that the traditional CUSUM for location is a particular case of the CUSCORE.
68. A Simple Rule for Judging Compliance Using Highly Censored Samples
by P. M. Berthouex and Ian Hau. (April 1991).
A special case of judging compliance is when the effluent limit is set at a level below the method limit of detection (MDL) of the substance being monitored. In many such cases, almost all measurements on the effluent are reported as "not detected." A simple rule is proposed for judging compliance of the effluent in this situation. It makes allowance for random errors in measurements on the effluent and it recognizes that an effluent can be in compliance and still produce a proportion of values above the MDL.
69. Quality Improvement at the Design Stage- A Cyclic Incremental Approach
by Soren Bisgaard. (May 1991).
Quality control based on inspection and segregation is uneconomical and inefficient. To be effective, quality needs to be considered and planned at the product design stage. In this article we put what may seem like detailed problem solving, troubleshooting, and statistical experimental design work into the larger context of the design and product development process. To do this we have developed a conceptual model for the design process based on the idea of cyclic incremental improvement. Publication(s): Proceedings of the 17th NSF Design and Manufacturing Systems Grantees Conference, January 1991, University of Texas, Austin.
70. Process Optimization-Going Beyond Taguchi Methods
by Soren Bisgaard. (May 1991).
This paper summarizes a speech given at the May 25, 1990 National Thermal Spray Conference in Long Beach, California. It is an expository paper that motivates the use of response surface methodology (RSM) and a sequential approach to experimentation for product and process improvement, going beyond Taguchi methods. Publication(s): Proceedings of the Third National Thermal Spray Conference, Long Beach, California.
71a. Understanding Exponential Smoothing - A Simple Way to Forecast Sales and Inventory
by George Box. (April 1991).
This article gives a simple account of exponential smoothing and the way it can be used in forecasting. The meaning of an exponentially weighted moving average (EWMA) is given and the role of the smoothing constant in balancing the need to average data against the need for immediacy in the forecast is discussed. A way of estimating the smoothing constant is presented. Publication(s): Quality Engineering, 1990-91, Vol. 3, No. 4, pp. 561-566.
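The updating formula amounts to one line of code; a sketch with an illustrative smoothing constant of 0.3:

```python
def ewma_forecasts(y, lam=0.3):
    """Exponential smoothing: each new one-step-ahead forecast equals
    the old forecast plus lam times the latest forecast error.  The
    smoothing constant lam balances averaging of past data against
    responsiveness to recent change."""
    forecast, out = y[0], []
    for obs in y[1:]:
        forecast = forecast + lam * (obs - forecast)
        out.append(forecast)
    return out

print(ewma_forecasts([100, 104, 101, 98, 103]))  # forecasts of sales, say
```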
71b. Feedback Control by Manual Adjustment
by George Box. (April 1991).
While we should always make a dedicated endeavor to bring a process into a state of control by fixing causes of variation, there sometimes remains a tendency for the process to wander from the target. In such a case, some method of feedback adjustment may be needed. This article discusses an easily used manual feedback adjustment chart which is equivalent to integral control used by the control engineer. Publication(s): Quality Engineering, 1991-92, Vol. 4, No. 1, pp. 143-151.
71c. Bounded Adjustment Charts
by George Box. (April 1991).
The feedback adjustment charts discussed in the previous article are valuable when the cost of adjustment is essentially zero. However, when process adjustment is associated with a specific cost (for example, of stopping a machine and changing a tool), it is more economical to use a scheme that requires less frequent adjustment. For this purpose, bounded adjustment charts using an exponentially weighted average of past data may be used. A simple interpolation chart is presented for updating the forecast and indicating when and how large an adjustment is needed. A table is given allowing a scheme to be chosen by balancing a longer average interval between adjustments against the resulting increase in the standard deviation about the target value. Publication(s): Quality Engineering, 1991-92, Vol. 4, No. 2, pp. 331-338.
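A rough sketch of the chart's logic (the smoothing constant, action limit, and data are illustrative; the article chooses them by the cost balance described above): adjust only when the EWMA forecast of the deviation crosses the action limits, then reset.

```python
def bounded_adjustment(deviations, lam=0.3, limit=1.5, gain=1.0):
    """Track an EWMA forecast of the deviation from target and call
    for an adjustment of size -forecast/gain only when the forecast
    crosses +/- limit; between crossings the process is left alone."""
    forecast, actions = 0.0, []
    for t, dev in enumerate(deviations):
        forecast += lam * (dev - forecast)
        if abs(forecast) > limit:
            actions.append((t, -forecast / gain))  # adjustment called for
            forecast = 0.0                         # process re-centered
    return actions

print(bounded_adjustment([0.5, 1.8, 2.9, 3.4, 0.2, -0.1]))  # [(3, -1.945...)]
```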
72. A Method for the Identification of Defining Contrasts for 2k-p Designs
by Soren Bisgaard. (July 1991).
The defining relation for a two-level fractional factorial design uniquely characterizes a design; from it the design can be studied, all the aliases can be found, the resolution determined, and the design if necessary reproduced. Finding a set of generators, which is the key to finding the defining relation for a given two-level fractional factorial design, is, however, usually a tedious job. In this article an algebraic method is presented that simplifies this job. Publication(s): Journal of Quality Technology, January 1993, Vol. 25, No. 1, pp. 28-35.
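For the reverse direction, expanding a given set of generators into the full defining relation, a short sketch (the 2^(5-2) generators below are a textbook choice, not an example from the article):

```python
def defining_relation(generators):
    """Multiply generator 'words' together letter by letter, with
    repeated letters cancelling (A*A = I), to expand the complete
    defining relation of a two-level fractional factorial."""
    words = {frozenset()}                 # start from the identity I
    for g in generators:
        gset = frozenset(g)
        words |= {w.symmetric_difference(gset) for w in words}
    return sorted(''.join(sorted(w)) or 'I' for w in words)

# 2^(5-2) design with D = AB and E = AC, i.e. I = ABD = ACE:
print(defining_relation(['ABD', 'ACE']))
# ['ABD', 'ACE', 'BCDE', 'I'] -> shortest word has 3 letters: resolution III
```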
73. The Use of Statistics to Improve Manufacturing Systems
by Soren Bisgaard. (October 1991).
This article presents a general overview of statistical methods applied to solving manufacturing problems. We also provide a specific example of a statistically designed experiment used to study factors affecting robot accuracy. The robot experiment illustrates how manufacturing engineers can improve quality and productivity, and reduce costs by applying relatively simple statistical tools on the shop floor. Publication(s): Appeared as "Statistical Tools for Manufacturing" in Manufacturing Review, Vol. 6, No. 3, pp. 192-200.
74. Quality Improvement-The New Industrial Revolution
by George Box. (October 1991).
Beginning from Bacon's famous aphorism that "Knowledge Itself is Power", the underlying philosophy of modern quality improvement is seen as the mobilization of presently available sources of knowledge and knowledge gathering. These resources, often untapped, include the following: (i) that the whole workforce possesses useful knowledge and creativity; (ii) that every system by its operation produces information on how it can be improved; (iii) that simple procedures can be learned for better monitoring and adjustment of processes; (iv) that elementary principles of experimental design can increase many times over the efficiency of experimentation for process improvement, development, and research. Publication(s): International Statistical Review, Vol. 61, No. 1, pp. 3-19.
75. The Early Years of Designed Experiments In Industry: Case Study References and Some Historical Anecdotes
by Soren Bisgaard. (November 1991).
Case studies are important because they provide illustrations of the industrial use of designed experiments. However, they are usually hard to come by; nevertheless, unknown to many, there are quite a few case studies already published in the literature. Most of them are unfortunately scattered across many different technical and scientific journals, and are not readily available unless one knows where to look. In this article we provide a list of approximately 130 references to case study articles published over the past six decades. We also provide a causerie of historical anecdotes from early initiatives in the use of designed experiments in industry. Publication(s): Quality Engineering, 1992, Vol. 4, No. 4, pp. 547-562.
76. Teaching Engineers Experimental Design With a Paper Helicopter
by George Box. (November 1991).
How a paper "helicopter" made in a minute or so from an 8 1/2" x 11" sheet of paper can be used to teach principles of experimental design, including conditions for validity of experimentation, randomization, blocking, the use of factorial and fractional factorial designs, and the management of experimentation. Publication(s): Quality Engineering, 1992, Vol. 4, No. 3, pp. 453-459.
77a. Blocking Generators for Small 2k-p Designs
by Soren Bisgaard. (January 1992).
Small 2k-p designs are increasingly being used in industry for process and product experimentation. Blocking these designs can often significantly increase their efficiency, but blocking does not seem to be used to its full potential in industry. To facilitate the practical use of blocking, a comprehensive table has been developed providing blocking generators for all possible eight- and sixteen-run two-level fractional factorial designs. Examples of blocking and application of the table are also provided. Publication(s): Journal of Quality Technology, October 1994, Vol. 26, No. 4, pp. 288-296.
77b. A Note on the Definition of Resolution for Blocked 2k-p Designs
by Soren Bisgaard. (May 1992).
When 2k-p designs are blocked, the application of the standard definition of resolution requires careful consideration. The problem is that the degrees of freedom associated with a set of blocking contrasts are essentially all "first order effects." Hence contrasts that superficially may appear as higher order interaction effects in reality are first order effects. Experimenters might therefore inadvertently confound these first order effects with important effects among the primary factors. In this note we discuss this subtle problem and provide an additional rule to the usual definition of resolution that helps provide a conservative but more realistic estimate of the resolution of a blocked design. We also show that this amendment to the definition is useful when several two-level contrasts are combined to yield factors with more than two levels. A few illustrative examples are provided. Publication(s): Technometrics, August 1994, Vol. 36, No. 3, pp. 308-311.
78a. What Can You Find Out From Eight Experimental Runs?
by George Box. (February 1992).
Different ways to use n = 8 and n = 16 experimental runs are described to generate two-level factorial and fractional factorial designs for studying up to n - 1 factors. The roles of "aliases" and of design "resolution" are discussed and the rationales for the employment of designs with different degrees of fractionation are presented. Publication(s): Quality Engineering, 1992, Vol. 4, No. 4, pp. 619-627.
78b. What Can You Find Out From Sixteen Experimental Runs?
by George Box. (February 1992).
Different ways to use n = 8 and n = 16 experimental runs are described to generate two-level factorial and fractional factorial designs for studying up to n - 1 factors. The roles of "aliases" and of design "resolution" are discussed and the rationales for the employment of designs with different degrees of fractionation are presented. Publication(s): Quality Engineering, 1992, Vol. 5, No. 1, pp. 167-178.
79. Taguchi's Parameter Design: A Panel Discussion
edited by Vijayan N. Nair with discussants B. Abraham, G. Box, R. Kacker, T. Lorenzen, J. Lucas, J. MacKay, R. Myers, J. Nelder, M. Phadke, J. Sacks, A. Shoemaker, S. Taguchi, K. Tsui, G. Vining, W. Welch, and J. Wu. (March 1992).
It is more than a decade since Genichi Taguchi's ideas on quality improvement were introduced in the U.S. His parameter design approach for reducing variation in products and processes has generated a great deal of interest and debate among both quality practitioners and statisticians. This panel discussion provides a forum for a technical discussion of these diverse views. The topics discussed include the importance of variation reduction, the use of noise factors, the role of interactions, selection of quality characteristics, signal-to-noise (SN) ratios, experimental strategy, dynamic systems, and applications. The discussion also provides an up-to-date overview of recent research on alternative methods of design and analysis. Publication(s): Technometrics, May 1992, Vol. 34, No. 2, pp. 127-161.
80. Finding the Active Factors in Fractionated Screening Experiments
by R. Daniel Meyer and George Box. (April 1992).
Highly fractionated factorial designs and other orthogonal arrays are powerful tools for identifying important, or active, factors and improving quality. We show, however, that interactions, and important factors involved in those interactions, may go unidentified when conventional methods of analysis are used with these designs. This is particularly true of Plackett and Burman designs whose number of runs is not a power of two. A Bayesian method is developed that computes the marginal posterior probability that a factor is active, allowing for the possibility of interactions. The method can be applied to both orthogonal and nonorthogonal designs, as well as to other troublesome situations, such as when data are missing, extra data are available, or factor settings for certain runs have deviated from those originally designed. The value of the new technique is demonstrated with three examples in which potential interactions and factors involved in those interactions are uncovered. Publication(s): Journal of Quality Technology, Vol. 25, No. 2, pp. 94-105.
81. A New Design for Quality Paradigm
by Mikkel Morup. (April 1992).
Product development and design has a tremendous influence on the final product quality and the cost of quality. This paper presents a critical look at the position of Design for Quality in western industry and academia. It is suggested that Design for Quality should be enhanced in the context of design methodology in order to better fit the way that products are actually designed. Finally, the paper presents new concepts, models and a structured procedure for Design for Quality that have evolved by looking at quality from the viewpoint of design methodology. Publication(s): Journal of Engineering Design, 1992, Vol. 3, No. 1. pp. 63-80.
82. A Comparative Analysis of the Performance of Taguchi's Linear Graphs
by Soren Bisgaard. (June 1992).
In this article we use conventional concepts of aliasing and confounding to analyze several two-level fractional factorial designs constructed with the use of Taguchi's linear graph technique. We also compare these designs with more conventional alternatives and show that the conventional designs are often better in terms of resolution, and are robust to assumptions that are likely to be violated in practice. We also comment on the practice of making strong prior assumptions based on engineering knowledge about which two-factor interactions are active and which are inert. We conclude that the value of linear graphs is limited, that the designs obtained are non-robust, and that better and simpler conventional alternatives already exist. Publication(s): Applied Statistics, 1996, Vol. 45, No. 3, pp. 311-322.
83. Sequential Experimentation and Sequential Assembly of Designs
by George Box. (June 1992).
Because of the many uncertainties in choosing an appropriate experimental design, it is best to avoid "all encompassing" experiments which must necessarily be planned when least is known about the system. Instead, where possible, it is best to run smaller sets of experiments in sequence. Examples are given of how a strategy of "sequential assembly" of design can be used. Publication(s): Quality Engineering, 1993, Vol. 5, No. 2, pp. 321-330.
84. How to Get Lucky
by George Box. (June 1992).
Some principles for success in quality improvement projects are discussed; in particular, how to encourage the discovery of useful phenomena not initially being sought. A graphical version of the analysis of variance which can help to show up the unexpected is illustrated with two examples. Publication(s): Quality Engineering, 1993, Vol. 5, No. 3, pp. 517-524.
85. Comparison of Two Approaches for Feedback Control
by Alberto Luceño. (July 1992).
Two forms of feedback regulation that have been used for process adjustment in Automatic Process Control are considered. In the first, recommended originally by Box and Jenkins (1963) and by Box, Jenkins and MacGregor (1974), action is taken when the absolute difference between an exponentially weighted moving average (EWMA) of the past data and the target value T first crosses a threshold value L*. In the second, recommended by Taguchi (1981), the action is triggered when the absolute difference between the last observation zn and the target value T exceeds a given constant l. However, it is shown here that the actual cost of regularly applying the policy based on zn may be considerably more expensive than that based on the EWMA. For example, in practice the smoothing parameter θ is frequently in the range from about 0.6 to about 0.9, and it is shown here that an increase in the mean square deviation of 64% can occur with θ = 0.8. Publication(s): Communications in Statistics, January 1993, Vol. 22, No. 1, pp. 241-255.
86. An Iterative Non-Graphical Approach to Accommodate Interactions for Two-level Fractional Factorials
by Soren Bisgaard and Howard Fuller. (July 1992).
Taguchi and Wu (1985) introduced the method of linear graphs to accommodate pre-specified interaction effects in orthogonal arrays. Kacker and Tsui (1990) developed a method called interaction graphs and Wu and Chen (1992) proposed another graph-aided method that optimizes resolution by minimizing aberration subject to the requirement that all the specified main effects and interactions are estimable. We present a simple approach based on iterative substitution, easily implemented with commercially available word processing software, which guarantees a maximum resolution solution to the accommodation problem. Publication(s): Quality Engineering, 1994, Vol. 7, No. 1, pp. 71-87.
87. Experimental Optimization of Computer Models
by Soren Bisgaard, Bruce Ankenman and Tim A. Osswald. (July 1992).
Reducing the time and cost of product and process development is a key concern of today's competitive industrial firms. Although engineers often have available complex computer models of the product or process being developed, the use of these simulation models is often limited to ad hoc, one-factor-at-a-time exploration. In this article, a compression molding example is used to demonstrate a sequential strategy for optimizing complex computer simulation models based on response surface methodology. It is explained how the use of this approach can reduce the time and cost of prototype building and testing and thus aid in reducing the development time and cost. Publication(s): Manufacturing Review, 1994, Vol. 7, No. 4, pp. 332-345.
88. What Can You Find Out From 12 Experimental Runs?
by George Box and Soren Bisgaard. (August 1992).
Report 78 has shown how 8- and 16-run two-level factorial designs could be used to study a number of factors. In particular they could be used to generate fractional factorial designs whose projective properties made them excellent screening designs for finding a few vital factors having major effects on a system. These fractional designs are particular examples of orthogonal arrays developed by Plackett and Burman which are available when N is a multiple of 4. Among these, the authors derived such a design to study 11 factors in 12 runs. It turns out that this design has the remarkable property that it yields a full 2^3 factorial plus an additional optimal half replicate for any of the 165 choices of 3 factors out of the 11 factors tested. Publication(s): Quality Engineering, 1993, Vol. 5, No. 4, pp. 663-668.
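The projective property is easy to verify directly; a sketch assuming the standard Plackett-Burman generating row:

```python
from itertools import combinations, product

# 12-run Plackett-Burman design: 11 cyclic shifts of the generating
# row, plus a final row of all minus signs.
row = [1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1]
design = [row[-i:] + row[:-i] for i in range(11)] + [[-1] * 11]

# Every one of the 165 choices of 3 columns should contain a full 2^3
# factorial once, plus a half replicate (four points appearing twice).
for cols in combinations(range(11), 3):
    pts = [tuple(r[c] for c in cols) for r in design]
    counts = sorted(pts.count(p) for p in product([-1, 1], repeat=3))
    assert counts == [1, 1, 1, 1, 2, 2, 2, 2]
print("all 165 three-factor projections: full 2^3 + half replicate")
```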
89. Charts for Optimal Feedback Control with Recursive Sampling and Adjustment
by George Box and Alberto Luceño. (September 1992).
A cost model proposed by Box and Jenkins (1963) and later generalized by Box and Kramer (1992) for obtaining minimum cost for feedback control of processes is considered. Unfortunately, it is sometimes difficult to assign values to the costs of making an adjustment, of taking a sample and of being off target as is required by their approach. An alternative that avoids the direct assignment of values to these costs is discussed in this paper and charts are provided to aid in choosing a reasonable scheme. For different values of the action limit and the non-stationarity measure, it is possible to compute an envelope of optimal schemes from which a choice may be made by judging the disadvantage of an increased mean square deviation against the advantage of having to take samples less frequently and/or increasing the average adjustment interval. Publication(s): Appeared as "Selection of Sampling Interval and Action Limit for Discrete Feedback Adjustment" in Technometrics, Vol. 36, No. 4, pp. 369-378.
90. The Design and Analysis of 2k-p x 2q-r Inner and Outer Array Experiments
by Soren Bisgaard. (September 1992).
Inner and outer array designs are useful for the development of robust products. In this article we provide a discussion of their design and analysis from a classic factorial and fractional factorial design standpoint. In particular we focus on inner and outer arrays composed of two-level fractional factorials. Confounding, split plotting, split-plot confounding, and economics are discussed, and we show how savings in terms of runs or increased information can sometimes be achieved.
91. Sample Size Estimates for Two-level Factorial Experiments with Binary Response
by Soren Bisgaard and Howard Fuller. (September 1992).
When the number of defectives (nonconforming products) is used as the response in two-level factorial and fractional factorial experiments, it is important to have an estimate of the sample size needed for detecting a specific change. In this article we provide a set of easy-to-use tables for standard 8-, 16- and 32-run two-level factorial experiments that give rough guidelines for the necessary sample sizes at different average quality levels and different shifts. Examples of the use of the tables are also provided. Publication(s): Journal of Quality Technology, October 1995, Vol. 27, No. 4, pp. 344-354.
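As a rough indication of the arithmetic behind such tables (the normal-approximation formula and the numbers below are illustrative, not the report's):

```python
from math import ceil
from statistics import NormalDist

def n_per_half(p1, p2, alpha=0.05, power=0.90):
    """Normal-approximation sample size per comparison group for
    detecting a shift in proportion defective from p1 to p2.  In a
    two-level factorial, each effect compares two halves of the runs,
    so this is roughly the number of units needed in each half."""
    za = NormalDist().inv_cdf(1 - alpha / 2)
    zb = NormalDist().inv_cdf(power)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((za + zb) ** 2 * var / (p1 - p2) ** 2)

print(n_per_half(0.05, 0.03))  # ~2000 units per half: binary responses are costly
```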
92. Process Adjustment and Quality Control
by George Box. (November 1992).
To be competitive in the international marketplace, modern manufacturers need to pay keen attention to quality monitoring and control. In this paper I discuss process quality monitoring, control, and adjustment from a modern statistical time series point of view. In particular, I show how several past approaches can be unified. Publication(s): Total Quality Management, Vol. 4, No. 2, pp. 215-227.
93. Confounded Dispersion Effects in Robust Design Experiments with Noise Factors
by David M. Steinberg and Dizza Bursztyn. (December 1992).
Robust design experiments can be a very useful tool for improving quality. They enable engineers to reduce the variance of important quality characteristics by identifying design factors with dispersion effects and guiding the choice of nominal levels of those factors. Robust design experiments are especially effective when it is possible to build some variation directly into the experiment by including noise factors-factors that are impossible or too expensive to control during actual production or use. When noise factors are included, it is important to model their effects explicitly in the subsequent analysis. We present two examples in which failure to do so leads to incorrect conclusions about dispersion effects. Publication(s): Journal of Quality Technology, 1994, Vol. 26, No. 1, pp. 12-20.
94. Graphical Aids to Measurement System Analysis
by John Hallinan and Soren Bisgaard. (February 1993).
This report is not available online.
95a. Quality Quandries: Spreadsheets for Analysis of Two-level Factorials
by Soren Bisgaard. (March 1993).
Analysis of two-level factorial and fractional factorial experiments can be performed very simply in a standard spreadsheet environment on a personal computer. This article shows a simple way to program Yates' Algorithm for effects and the Reverse Yates' Algorithm for predicting values and residuals. Publication(s): Quality Engineering, 1993, Vol. 6, No. 1, pp. 149-157.
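Yates' algorithm itself takes only a few lines in any language; a sketch in Python rather than a spreadsheet, with invented 2^2 data:

```python
def yates(y):
    """Yates' algorithm for a 2^k factorial in standard order: k passes
    in which the first half of the new column holds pairwise sums and
    the second half pairwise differences; finally divide by 2^k (first
    entry, the mean) and 2^(k-1) (the remaining effects)."""
    col, n = list(y), len(y)
    for _ in range(n.bit_length() - 1):       # k = log2(n) passes
        sums = [col[i] + col[i + 1] for i in range(0, n, 2)]
        diffs = [col[i + 1] - col[i] for i in range(0, n, 2)]
        col = sums + diffs
    return [col[0] / n] + [c / (n // 2) for c in col[1:]]

# Responses for a 2^2 design in standard order (1), a, b, ab:
print(yates([60, 72, 54, 68]))  # [mean, A, B, AB] = [63.5, 13.0, -5.0, 1.0]
```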
95b. Quality Quandries: Iterative Analysis of Data from Two-level Factorials
by Soren Bisgaard. (March 1993).
Thorough analysis of data from two-level factorial and fractional factorials can help engineers gain further insight into the technical system under investigation. This article provides an example of how data can be analyzed iteratively to reveal the underlying structure of the data. An example by Taguchi is used to illustrate the iterative use of estimation, residual analysis, and transformations to model data from an experiment on viscosity. Publication(s): Quality Engineering, 1993, Vol. 6, No. 2, pp. 319-360.
96. The Design and Analysis of 2k-p x S Prototype Experiments
by Soren Bisgaard and David Steinberg. (March 1993).
Prototype testing and experimentation play a key role in the design of new products. These experiments enable engineers to assess the feasibility of the design. It is common practice to build a single prototype product and then test it at specified operating conditions. We argue that it will often be more fruitful to make several variants of a prototype according to a fractional factorial design. The information obtained can be of great value in comparing design options and improving product performance and quality. In such experiments the response of interest is often not a single number but a performance curve over the test conditions. In this article we develop a general method for the design and analysis of prototype experiments that combines orthogonal polynomials with two-level fractional factorials. The method we propose is simple to use and has wide applicability in the design of new products. We illustrate our ideas by applying them to an experiment reported by Taguchi (1986) on the carbon monoxide (CO) exhaust of eight motors made up according to a 2^(7-4) design and tested at three operating conditions. Publication(s): Technometrics, 1997, Vol. 39, No. 1, pp. 52-62.
97. Bringing Total Quality Improvement into the College Classroom
by W. Lee Hansen. (March 1993).
This paper describes a recent effort to infuse the Total Quality Improvement (TQI) approach, popularized by Deming and others, into an upper-division, junior-senior economics course at the University of Wisconsin-Madison. The process of infusing TQI into instruction has received relatively little attention. Most efforts to bring TQI into higher education focus on improving administrative operations and establishing courses and programs for students to learn how to apply TQI in their future jobs. The challenge is in using TQI to help students realize their potential for learning in traditional courses.
98. A Discussion of Scientific Methods for Setting Manufacturing Tolerances
by Paul R. Weiss. (April 1993).
Several traditional and newer techniques for setting manufacturing tolerances are discussed. The traditional methods include worst case, statistical case, and proportional and constant factor scaling. Newer methods, such as optimization and Monte Carlo simulation, are described more briefly. The Estimated Mean Shift model is included as a method for setting tolerances more realistically, while at the same time improving communication between design and manufacturing departments. Additionally, some techniques are described for setting initial tolerances when little or no data or tables are available. Three tolerancing examples are included.
99. Designing Experiments for Tolerancing Assembled Products
by Soren Bisgaard. (May 1993).
This article provides an outline of theory and methods for the experimental determination of tolerance limits for mating components of assembled products. The emphasis is on novel combinatorial problems of pre- and post-fractionation of certain products of two-level factorial designs. The cost of experimentation is discussed and used as a guide to allocating experimental runs. Several alternative design examples are provided. The article concludes with a comprehensive example of the experimental determination of tolerances for the components of a throttle handle for a small motor. Publication(s): Technometrics, 1997, Vol. 39, No. 2, pp. 142-152.
100. William G. Hunter: An Innovator and Catalyst for Quality Improvement
by George Box. (June 1993).
This is the text of a talk given at the Speakers' Dinner at the Sixth Annual William G. Hunter Conference on Quality in Madison, Wisconsin, on June 2, 1993. In it, George Box recalls Bill Hunter's pivotal role in the birth of the quality movement in the city of Madison. Without Hunter's catalytic contributions, Madison would not have its current leadership position in the improvement of quality in government, industry, and education.
101. Is Your Robust Design Product Robust?
by George Box and Conrad Fung. (June 1993).
Robustifying a product is the process of defining its specifications to minimize the product's sensitivity to variation. This article reviews two approaches to the problem of minimizing transmitted variation propagated from the product's components. The authors point out that no matter what approach is used, the solution can be extremely sensitive to certain assumptions which must be checked out. Sometimes tacit assumptions that seem innocuous turn out to be perilous. Thus we need to consider the robustness to assumptions of the robust design procedure itself. Publication(s): Quality Engineering, 1994, Vol. 6, No. 3, pp. 503-514.
102. Role of Statistics in Quality Control
by George Box. (June 1993).
The role of Statistics in Quality Systems depends on certain philosophical issues which the author believes have been inadequately addressed. Three such issues are the role of statistics in the process of discovery, the extrapolation of results from the particular to the general, and the management climate in which quality improvement needs to be conducted. Statistical methods appropriate to discovery are discussed as distinct from those appropriate only to the testing of an already discovered solution. The manner in which the tentative solution has been arrived at is shown to determine with what assurance the experimental conclusions can be extrapolated to the practical application in mind. Whether or not statistical methods and training can have any impact depends on the system of management. A vector representation of management strategies is discussed. This can help to realign policies so that members of an organization can work together for its benefit. Publication(s): appeared as "Statistics and Quality Improvement" in the Journal of the Royal Statistical Society, Series A, 1994, 157, Part 2, pp. 209-229.
103. Analytic Parameter Design
by Soren Bisgaard and Bruce Ankenman. (June 1993).
Parameter design, a method introduced by Taguchi for robustifying products to uncontrollable variation in components or the environment, has often relied for its solution on computer experiments using inner and outer array methods. In this article we use a simple electrical circuit to formulate the parameter design problem as a constrained optimization problem and use analytic methods of non-linear programming to solve it. Such an approach is now relatively simple with the proliferation of symbolic manipulation software to perform differentiation and other analytic operations. Our discussion is illustrated with graphics to further elucidate the basic structure of the parameter design problem. We also show that the solutions to the parameter design problem and the tolerance design problem, the phase where a design's tolerances are determined, cannot be separated. Our use of Lagrange multipliers also allows us to perform a sensitivity analysis. Publication(s): Quality Engineering, 1995, Vol. 8, No. 1, pp. 75-91.
104. Compensation and Employment Security
by Spencer Graves. (June 1993).
Research by economists supports a couple of Japanese management practices that seem to have been underemphasized in many Total Quality implementation efforts in the US - lifetime employment and linking pay to the accomplishments of the team. This paper illustrates the value of these policies with a few examples from consulting experience, then describes research by economists that suggests that the effects noted in the examples are commonplace and not isolated incidents. The focus is primarily on the link between management policies and productivity and profitability; this should make the conclusions largely independent of an understanding of the role of quality in organizational performance.
105. Total Quality Management and D*A*T Model
by Joe Van Matre. (June 1993).
Total Quality Management (TQM) is the current embodiment of the quality movement that began at AT & T in the early 1930's. Although initiated by Americans such as Walter Shewhart, W. Edwards Deming and Joseph Juran, it has been the Japanese who brought the quality movement to international attention. Japanese success in the global marketplace led their competitors to adopt similar strategies. In the United States, firms leading the way in TQM during the 1980's were primarily manufacturers such as Motorola, Ford and Xerox. Their experiences coupled with the success of "Japanese management" employing American labor in Ohio (Honda), Kentucky (Toyota), Tennessee (Nissan), and California (Sony), further increased the credibility of TQM as a major managerial development. Now many firms, service as well as manufacturing, are experimenting with and adopting the new philosophy. This paper reviews the essential elements of TQM (i.e., attitudes, tools and data) and proposes a conceptually simple but effective framework, the D*A*T model (Van Matre 1992), which focuses on those core elements and their interrelationships. Examples from the health care industry are used to show the role of TQM implementation in service industries. Publication(s): Journal of American Health Information Management Association, 1992, Vol. 63, No. 11.
106. Projections in the 12-run Plackett and Burman Design
by John Tyssedal. (June 1993).
In this article we prove the form of the projections in the 12-run Plackett and Burman design. We do this by exploiting the close relationship between Hadamard matrices, orthogonal two-level arrays and a special type of balanced incomplete block designs. Projections into 2-5 dimensions are treated.
107. Noise Factors, Dispersion Effects and Robust Design
by David Steinberg and Dizza Burnsztyn. (July 1993).
There has been great interest recently in the use of designed experiments to improve quality by reducing the variation of industrial products. A major stimulus has been Taguchi's robust design schema in which experiments are used to detect dispersion effects (that is, factors that affect process variation). We study here one of Taguchi's novel ideas, the use of noise factors to represent varying conditions in the manufacturing or use environment. We show that the use of noise factors can dramatically increase power for detecting dispersion effects, provided their effects are explicitly modeled in the subsequent analysis.
108. Changing Management Policy to Improve Quality and Productivity
by George E.P. Box. (August 1993).
It is generally accepted that the effectiveness of a quality improvement program often depends on changing the management culture in which it operates. Contemplated changes of policy affect different parts of an organization in different ways. A geometric representation of viewpoints on policy is introduced which makes it possible to comprehend difficulties in making changes and to find effective ways to overcome them. Publication(s): Quality Engineering, 1994, Vol. 6, No. 4, pp. 719-724.
109. Total Life Models - An Important Tool in Design of Quality
by Mikkel Morup (December 1993).
Product quality is far more than "fitness for use" and robustness in the manufacturing process. This paper discusses the phenomena of product quality in the entire product life. It presents a total life model which serves several purposes, such as expanding the design teams' understanding of quality and adding structure to total life scenarios in the specification phase.
110. Quality and the Bottom Line
by Soren Bisgaard. (December 1994).
Over the long term, Total Quality Management techniques must be validated economically or they will lose the support of management. In this article, a fictitious example is used to demonstrate how quality improvement tools can be applied to accounting data. These tools allow managers to make informed decisions about where quality improvement efforts will be most effective and show the resulting improvement in the bottom line. Publication(s): Quality Engineering, 1994-95, Vol. 7, No. 1, pp. 223-235.
111. Discrete Proportional-Integral Control with Constrained Adjustment
by George Box and Alberto Luceño. (February 1994).
It is well known that discrete feedback control schemes chosen to produce minimum mean square error at the output can require excessive manipulation of the compensating variable. Also very large reductions in the manipulation variance can be obtained at the expense of minor increases in the output variance by using constrained schemes. Unfortunately, however, both the form and the derivation of such schemes are somewhat complicated. The purpose of this article is to show that suitable "tuned" proportional-integral (PI) schemes in which the required adjustment is merely a linear combination of the two last observed errors can do almost as well as the more complicated optimal constrained schemes. If desired, these PI schemes can be applied manually using a feedback adjustment chart which is no more difficult to use than a Shewhart chart. Several examples are given and tables are provided that allow the calculation of the optimal constrained proportional-integral scheme and the resulting adjustment variance and output variance. Methods of tuning such controllers using Evolutionary Operation and experimental design are briefly discussed. Publication(s): The Statistician, JRSS Series D, Vol. 44, pp. 479-495.
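As a flavor of how such a scheme operates (a minimal simulation sketch with made-up gains and a random-walk disturbance; the article's tables give properly tuned constants), each adjustment is just a linear combination of the two most recent output errors:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical tuning constants; in practice these come from the
    # article's tables for the desired variance trade-off.
    c1, c2 = -0.3, 0.1

    n = 200
    disturbance = np.cumsum(rng.normal(0, 0.5, n))   # drifting disturbance
    x = 0.0                        # current level of the compensating variable
    errors = [0.0, 0.0]
    outputs = []
    for t in range(n):
        e = disturbance[t] + x     # observed deviation from target
        outputs.append(e)
        errors.append(e)
        # PI-type adjustment: linear combination of the last two errors
        x += c1 * errors[-1] + c2 * errors[-2]

    print("output std with adjustment:   ", np.std(outputs))
    print("output std without adjustment:", np.std(disturbance))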
112. Orthogonal Design of Life Testing with Replacement: Exponential Parametric Regression Model
by Ilya Gertsbakh. (March 1994).
This paper describes how to plan an "optimal" life testing experiment when the lifetime is assumed to have an Exponential distribution. We further assume that the mean lifetime is a log-linear function of the covariates xi, which form an orthogonal Hadamard-type matrix depending on the testing conditions; the θi are the unknown parameters. n0 devices are put on test. The period of testing, t0, is divided into k stages of length ti, i=1,...,k, and on each of these stages all devices operate under a fixed testing regime. (The number of different testing regimes, k, equals the number of parameters to be estimated.) Each device which fails is immediately restored and continues to operate. A closed-form maximum likelihood solution is given for the estimates of the θi, which exists if and only if at least one failure has been observed on each of the testing stages. The approximate optimal duration of the i-th testing stage, ti*, which minimizes the optimality criterion, is also derived. It is shown that a near-optimal testing policy is obtained when ti* is proportional to the square root of the mean lifetime for the corresponding testing regime. Finally, the expression for the Fisher information matrix is derived and the optimality criterion (the trace of its inverse) is expressed as a function of the model parameters θi, the duration of the testing stages ti, and the number of devices operating on each of the testing stages.
113. Standard Errors of the Eigenvalues of Second Order Response Surface Models
by Soren Bisgaard and Bruce Ankenman. (March 1994).
When second order response surface models involve more than three factors, confidence intervals for the eigenvalues of the second order coefficient matrix play an important role in the interpretation of their geometric shape. In this article, we propose a new method for estimating the standard errors, and thus confidence intervals of these eigenvalues. The method is simple in both concept and execution and involves the refitting of a full quadratic model to the data using the rotated coordinate system obtained from canonical analysis. The standard errors of the pure quadratic terms from this refitting are used to approximate the standard errors of the eigenvalues. Since it uses the canonical form as a basis, the method is geometrically intuitive and thus is easily taught. Our approach is intended to provide practitioners with quick estimates of the standard errors of the eigenvalues. In our justification of the method, we show that it is equivalent to using the delta method as proposed for this problem by Carter, Chinchilli and Campbell (1990). Publication(s): Technometrics, 1996, Vol. 38, No. 3, pp. 238-246.
114. The Impact of Measurement Error on Specifications
by Søren Bisgaard, Spencer Graves, and René Valverde. (April 1994).
Measurement problems are extremely common and sometimes so serious that even 100% inspection is counter-productive as a means of separating good from bad parts. In this article, we first discuss the impact of measurement error on specifications. In particular we will discuss how to analyze the relationship among alternative tolerance limits, process variances, and measurement error variances using Monte Carlo simulation. Publication(s): Quality Engineering, Jan. 1999, Vol. 11, No. 2, pp. 331-336.
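A minimal Monte Carlo sketch of the kind of analysis the abstract mentions (all process and gauge numbers below are made up for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Hypothetical process and gauge.
    true_vals = rng.normal(10.0, 0.5, n)            # true part dimensions
    measured = true_vals + rng.normal(0, 0.2, n)    # add measurement error

    lsl, usl = 8.8, 11.2                            # tolerance limits
    good = (true_vals >= lsl) & (true_vals <= usl)
    accept = (measured >= lsl) & (measured <= usl)

    false_reject = np.mean(good & ~accept)          # good parts rejected
    false_accept = np.mean(~good & accept)          # bad parts accepted
    print(f"false reject rate: {false_reject:.4f}")
    print(f"false accept rate: {false_accept:.4f}")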
115. Common Principles of Quality Management and Development Economics
by Spencer Graves. (May 1994).
Three principles seem to be basic for quality management and for economic growth in rich and poor countries: 1) Improvements in quality of life are built on improvements in work methods. 2) Improvements occur most often when they are expected, supported and rewarded. 3) One must continually improve just to stay even. These three principles are illustrated and tied to literature in both quality management and the economics of development and growth. Publication(s): Quality Management Journal, 1995, Winter, pp. 65-79.
116. Projective Properties of Certain Orthogonal Arrays
by George Box and John Tyssedal. (May 1994).
The projective properties of two-level orthogonal array designs are important in factor screening. General results are given which, in particular, allow the designs derived by Plackett and Burman to be categorized in terms of these properties. The following results are given: 1) every saturated fractional factorial design is of projectivity P=2; 2) a design obtained by doubling is always of projectivity P=2; 3) any saturated two-level design obtained from an orthogonal array constructed by cyclic generation is either a factorial orthogonal array with P=2 or else has projectivity P=3; and 4) any saturated two-level design obtained from an orthogonal array containing n=4m runs, with m odd, is of projectivity P=3. Publication(s): Biometrika, 1996, Vol. 83, No. 4.
117. Tolerance Analysis Considering Manufacturing Variability and the Cost of Deviating from the Nominal
by Spencer Graves. (May 1994).
A number of different formulae for tolerance analysis and synthesis have appeared over the years. This article discusses the interrelationships between alternative formulae, showing how each is best for a specific set of assumptions regarding the cost of deviating from the nominal and the distributions of dimensions of parts. To increase the use of appropriate statistical tolerancing, a procedure is outlined for converting process capability studies into a simple formula tailored to a given manufacturing organization.
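For concreteness (a minimal sketch with hypothetical tolerances, not the article's tailored formula), the two classical endpoints the article relates are worst-case addition and root-sum-of-squares stacking:

    import numpy as np

    # Hypothetical symmetric component tolerances for a linear stack.
    tols = np.array([0.10, 0.05, 0.08, 0.12])

    worst_case = tols.sum()            # tolerances simply add
    rss = np.sqrt((tols**2).sum())     # independent, centered components

    print(f"worst-case assembly tolerance: {worst_case:.3f}")
    print(f"RSS (statistical) tolerance:   {rss:.3f}")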
118. Normalizing Transformations for Shewhart-Type Control Schemes
by Haim Shor. (May 1994).
Traditional Shewhart-type process control schemes and process capability analyses assume that the process distribution (or the distributions of statistics derived from it) is approximately normal. When this assumption does not hold, normalizing transformations are sometimes needed. However, many of the currently used transformations are difficult to apply and at times require expertise that the common practitioner does not possess. In this paper, a new set of normalizing transformations is suggested that is simple to carry out, may be generally applied (since the transformations are distribution-free) and is associated with a non-iterative standard procedure that is easy to program. Implications for current practices regarding Shewhart-type control schemes and for capability analyses are discussed.
119. Analysis of Factorial Experiments with Defects or Defectives as the Response
by Soren Bisgaard and Howard Fuller. (June 1994).
The performance of a production process is often characterized by the number of defects in its products or the number of defective products. Typically, reduction of the number of defects or defectives is paramount to improving the quality of such a process. A powerful tool used for identifying variables that influence the process level of defects or defectives is experimental design. However, when using counts of defects or defectives as the experimental response the assumption of constant variance made with almost all standard analyses is violated. A common method for dealing with this problem is to transform the data before the analysis so that the assumption of constant variance is more likely. In this paper, we present various transformations that can be used to approximately stabilize the variance of counts of defects and the variance of proportion of defectives. We also re-analyze examples of each case where transformation of the experimental data followed by a simple analysis of the data led to significantly different conclusions. Publication(s): Quality Engineering, 1994-95, Vol. 7, No. 2, pp. 429-443.
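The transformations involved are the classical variance-stabilizing ones; a minimal sketch with made-up data (the square root for Poisson-like defect counts, the arcsine of the square root for binomial proportions of defectives):

    import numpy as np

    defects = np.array([3, 7, 12, 5, 9, 15, 4, 8])     # defect counts per run
    y_counts = np.sqrt(defects)          # stabilizes Poisson-like variance

    n = 50                                             # items inspected per run
    defectives = np.array([2, 5, 9, 3, 7, 11, 1, 6])   # defectives per run
    p = defectives / n
    y_props = np.arcsin(np.sqrt(p))      # stabilizes binomial variance

    print(y_counts.round(2))
    print(y_props.round(2))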
120. Assuring Product Success with ISO 9001?
by Gunhild Dalen (July 1994).
Several research projects have been conducted, and several reports and books have been written with the hope of finding the factors important for successful new product development. This article compares the portions of ISO 9001 related to new product development with relevant research results. The conclusion is that ISO 9001 is mainly concerned with the formal written documentation of the development process, the adherence to these documents, documentation of the result, and qualification of personnel and resources available to the project. But ISO 9001 does not include all the elements necessary for assuring a successful product development, such as customer contact, teamwork, consistent project team, authority of the team leader, or design for manufacturability.
121. The Importance of Data Transformations in Designed Experiments for Life Testing
by George Box and Conrad Fung. (July 1994).
Data transformation can sometimes yield big improvements in model simplicity, variance homogeneity, and precision of estimation, especially in the analysis of designed experiments for life testing. This article shows several simple ways to choose an appropriate transformation. Lambda plots are introduced as a useful graphical way to understand how transformation can affect model simplicity. The methods are illustrated with several real examples, including a life test where some of the test items survived the experiment, resulting in "censored" data that needed to be imputed. Publication(s): Quality Engineering, 1995, Vol. 7, No. 3, pp. 625-638.
122. Follow-up Designs to Resolve Confounding in Fractional Factorials
by R. Daniel Meyer, David M. Steinberg and George E. P. Box. (November 1994).
Fractional factorial and Plackett-Burman designs are often effective in practice due to factor sparsity. That is, just a few of the many factors studied will have major effects, and in those active factors these designs usually have higher resolution. We have previously developed a Bayesian method based on the idea of model discrimination that uncovers the active factors. Sometimes, the results of a fractional experiment are ambiguous due to confounding among the possible effects, and more than one model may be consistent with the data. Within the Bayesian construct, we have developed a method for designing a follow-up experiment to resolve this ambiguity. The idea is to choose runs that allow maximum discrimination among the plausible models. This method is more general than methods which algebraically decouple aliased interactions, and more appropriate than optimal design methods which require specification of a single model. The method is illustrated through examples of fractional experiments. Publication(s): Technometrics, 1996, Vol. 38, No. 4, pp. 303-313. Reply pp. 327-332.
123. Total Quality: Its Origins and Its Future
by George Box (January 1995).
This article discusses how an efficient organization is characterized by its knowledge and learning capability. It examines the learning ability of the human animal, the logic of continuous, never-ending improvement, the catalysis of learning by scientific method, and Grosseteste's Inductive-Deductive iteration related to the Shewhart Cycle. Total Quality is seen as the democratization and comprehensive diffusion of Scientific Method and involves extrapolating knowledge from experiment to reality which is the essence of the idea of robustness. Finally, barriers to progress are discussed and the question of how these can be tackled is considered.
124. A Case Study of the Use of Experimental Design and Multivariate Analysis in Product Improvement
by Marit Ellekjaer, M.A. Ilseng and T. Naes. (January 1995).
The overall purpose of this study is to identify an effective strategy for improving the sensory quality of a product. A study on processed cheese was used to develop and illustrate our ideas. A screening experiment, with seven processing and ingredient variables, was performed in order to identify the processing variables with the greatest effect on sensory quality. A fractional factorial design with resolution IV was used to keep the number of experimental runs to a minimum. ANOVA and normal plots were used to evaluate the effects of the different factors on the sensory variables one by one. The same factors were identified as being important when the scores from a principal component analysis (PCA) of the sensory variables were analyzed. PCA was found to be of value in identifying samples that had improved properties compared to today's product in addition to having a low intensity of undesirable properties.
125. Quality Quandaries - Analysis of Factorial Experiments with Ordered Categories as the Response
by Soren Bisgaard and Howard T. Fuller. (January 1995).
One of the more difficult aspects of setting up a quality improvement experiment is how to define an appropriate scale for the response. Often we must simply classify the outcomes into ordered categories such as how "discolored," how "clean," how "smooth," or how "good" an object is. A simple and useful approach to the analysis of such data, recommended by several prominent statisticians, is to attach a score to each of the categories and proceed with standard least squares techniques applied directly to the assigned scores. In this article we present an illustration of the application of this technique to a two-level fractional factorial experiment involving the identification of the bad part in an assembled product, originally due to Taguchi and Wu (1985). We also show how a very simple sensitivity analysis relative to the scale can be performed. Publication(s): Quality Engineering, 1995, Vol. 8, No. 1, pp. 199-207.
126. Variable Selection or Variable Assessment?
by R. D. Meyer and R. G. Wilkinson. (February 1995).
Variable-selection regression methods are oriented towards selecting a single model as the vehicle for further inferences. The appropriate inference about variables not included is unclear - the conclusion that they have no effect may be misleading. In many situations, the objective of the statistical method should be to assess the relative importance of every variable. The term variable assessment we think is more descriptive of this objective. We develop a method for variable assessment that makes use of Bayesian model-selection methodology. The marginal posterior probability that a variable is needed in the model is a measure of its importance. Using the Gibbs sampler for computation greatly reduces CPU requirements and also allows us to extend the model to one that allows for outliers. A simulation demonstrates that the method has good statistical properties.
127. Quality Quandaries - Reducing Variation With Two-Level Factorial Experiments
by Soren Bisgaard and Howard T. Fuller. (May 1995).
In many quality improvement projects the objective is to reduce variation. A powerful approach is to use factorial experiments with the log of the sample variance as the response. This paper demonstrates with an example how to reduce variation using this approach. Publication(s): Quality Engineering, 1995-96, Vol. 8, No. 2, pp. 373-377.
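A minimal sketch of the approach (simulated data for a 2^3 factorial with five replicates per run; the response is the log of the sample variance):

    import numpy as np
    from itertools import product

    rng = np.random.default_rng(2)
    X = np.array(list(product([-1, 1], repeat=3)))    # 8 runs x 3 factors
    # Pretend factor 0 at its +1 level increases the process spread by 50%.
    sigmas = 1.0 + 0.5 * (X[:, 0] == 1)
    reps = rng.normal(0, sigmas[:, None], size=(8, 5))

    y = np.log(reps.var(axis=1, ddof=1))   # response: log sample variance
    effects = 2 * X.T @ y / len(y)         # usual two-level effect estimates
    print(effects)     # a large estimated effect flags a dispersion factor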
128. On Robust Design in the Conceptual Design Phase - A Qualitative Approach and A Semi-Analytic Approach to Robust Design in the Conceptual Design Phase
by Peder Andersson. (July 1995).
One of the most important contributions to quality engineering over the last decades is the concept of robust design and its accomplishment through the use of various experimental methods. However, the prerequisite for parameter design in terms of a robust solution principle is seldom discussed, and methods that aid robust design in the conceptual design phase are, to our knowledge, few. This article puts forward the suggestion to use the principles behind the error transmission formula as a semi-analytic method for evaluating the robustness of concept solutions prior to entering Taguchi's parameter design stage.
129. Analysis of Unreplicated Split-Plot Experiments with Multiple Responses
by Marit Risberg Ellekjaer, Howard T. Fuller and Kirsti Ladstein. (July 1995).
The purpose of this study is to demonstrate an effective strategy for analyzing unreplicated split-plot experiments with multiple responses. Through principal component analysis (PCA) the response variables are reduced to only those that describe different phenomena among the experimental samples. These selected response variables are then analyzed individually using ANOVA and Normal probability plots to identify the factors with the greatest influence on the quality and cost of the product. This approach makes it possible to take both the preferred quality characteristics and the production costs into account when studying a process or product. A case study from a fish food manufacturing company is used to illustrate our ideas.
130. Redesigning the Introductory Statistics Course
by Ronald D. Snee and Roger Hoerl. (July 1995).
There is general agreement that the traditional statistics course does not meet the needs of customer groups such as students and their future employers. While positive reform efforts are under way, many appear to be based on the belief that the course is fundamentally sound and just needs some modernizing. While we support these efforts, we argue that the traditional introductory course must be completely overhauled - not incrementally improved - if statistics is to have broad impact. Principles to guide this redesign are presented and then applied to the design of the introductory statistics course for business students. It is emphasized that both the content and delivery of the introductory statistics course must be changed. The proposed changes are supported by learning theories developed by educational and behavioral scientists.
131. Quality Quandaries -- Split-Plot Experiments
by George Box. (August 1995).
Industrial experiments are frequently by necessity run in a "split-plot" mode. The structure and analysis of such experiments is explained and illustrated. It is shown how split-plot experiments can increase experimental efficiency. Publication(s): Quality Engineering, 1995-96, Vol. 8, No. 3, pp. 515-520.
132. A Comparison of Dispersion Effect Identification Methods for Unreplicated Two-Level Factorials
by Howard T. Fuller and Søren Bisgaard (October 1995).
Identifying factors that affect the variability of a process has become an important step in improving quality. Over the years many methods based on unreplicated two-level factorial experiments have been proposed for identifying such factors. In this article we present a comparison of the statistical power associated with several alternative methods. To this end we also present a method of operationalizing the Half-normal probability plot which can be used to assess the relative "power" associated with it.
133. X-bar Charts With Variable Sample Sizes and Sampling Intervals
by Antonio F.B. Costa. (September 1995).
Recent theoretical studies have shown that the X-bar chart with variable sampling intervals (VSI) and the X-bar chart with variable sample sizes (VSS) are quicker than the traditional X-bar chart for detecting shifts in a process. This article considers the X-bar chart with variable sample sizes and sampling intervals (VSSI). It is assumed that the amount of time the process remains in control has an exponential distribution. The properties of the VSSI chart are obtained using Markov chains. The VSSI chart is shown to be quicker than the VSI or VSS charts in detecting moderate shifts in the process.
134. A Total Quality Improvement Approach to Student Learning
by W. Lee Hansen. (August 1997).
This paper describes a Quality Improvement Instructional Approach whose purpose is to improve the quality of undergraduate education by helping students realize their potential for learning in traditionally-taught university courses, and particularly those in the economics major. The innovation comes in combining three key elements of Total Quality Improvement - customer focus, student involvement, and continuous improvement - and applying them to university instruction. This paper concentrates on customer focus which refers to the knowledge and skills - proficiencies - that students are expected to demonstrate by the time they complete a course or graduate in the major. These proficiencies reflect what institutions teach and what the public (including employers) expects of new graduates. Particular attention is given to determining who are the customers of the economics major and the unique role of students as customers. Publication(s): Educational Innovation in Economics and Business Administration - the Case of Problem-Based Learning, 1995, Kluwer Academic Publishers, The Netherlands, Chapter 3.
135. Projective Properties of the Sixteen Run Two-Level Orthogonal Arrays
by George Box and John Tyssedal. (December 1995).
In 1994, in a technical report in this series (Box and Tyssedal, 1994), the present authors provided a mathematical basis for previous empirical discoveries concerning the projectivity P of two-level orthogonal designs. A design for k factors in n runs was said to be of projectivity P, or to provide a (n,k,P) screen, if every choice of P out of k factors provided a 2^P factorial design, possibly replicated or partially replicated. In particular they showed why many of the standard designs in which n-1 factors were tested in n runs obtained from the orthogonal arrays of Plackett and Burman were of projectivity P=3 rather than P=2 and hence supplied (n,n-1,3) screens. However, in the important special case of n=16 the Plackett-Burman saturated design was equivalent to the corresponding fractional factorial and had projectivity P=2 and thus provided only a (16,15,2) screen. However, choice of a suitable subset of 8 columns from this design could produce a (16,8,3) screen, thus making it possible to test the activity of up to 8 variables at projectivity P=3. The present paper explores the projectivity of four additional 16-run orthogonal arrays of a different class discovered by Hall in 1961. It is shown in particular that three of those can produce (16,12,3) screens and one can produce a (16,14,3) screen. Publication(s): Biometrika, 1996, Vol. 83, No. 4, pp. 950-955.
136. Quality Quandaries - Two-Level Factorials Run as Split-Plot Experiments
by Soren Bisgaard, Howard T. Fuller and Ernesto Barrios. (October 1995).
Many industrial experiments are executed more economically in split-plot mode, with hard-to-change factors varied less frequently than others. This, however, needs to be considered when analyzing the data. An example illustrates a simple approach. Publication(s): Quality Engineering, 1996, Vol. 8, No. 4, pp. 705-708.
137. Importance of Graphics in Problem Solving and Detective Work
by Soren Bisgaard. (October 1996).
This article discusses how statistical graphics catalyze the problem-solving process. Two industrial examples are used to illustrate this idea. Publication(s): Quality Engineering, 1996-1997, Vol. 9, No. 1, pp. 157-162.
138. Discrimination and Criticism of Single-Response Models
by Warren Stewart, George Box and Thomas Henson. (February 1996).
Formulas are developed for assessing the probability and adequacy of rival models fitted to a common data set. Cases of full, partial and minimal variance information are treated. The use of the formula is demonstrated with three examples, including a modeling study of a heterogeneous catalytic reaction. Publication(s): AIChE Journal, 1996, Vol. 42, No. 11, 3055-62
139. Time Series Models for Forecasting Wastewater Treatment Plant Performance
by P.M. Berthouex and George Box. (February 1996).
This paper describes a time series modeling procedure that can be useful for calculating predictions, with confidence intervals, of effluent quality one to five days ahead, and it explains how these predictions can serve as an early warning of process upsets that will sometimes enable an operator to take preventive action. The time series model has the form of an exponentially weighted moving average (EWMA). The interpretation of the model is that the response of the system can be predicted by deviations from the EWMA smoothed values of the predictor variables.
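A minimal sketch of the forecasting recursion, assuming a smoothing weight lam chosen by the modeler (the paper fits its model to plant data; the series below is made up):

    import numpy as np

    def ewma_forecast(y, lam=0.3):
        """One-step-ahead EWMA forecasts; lam is the smoothing weight.
        For an EWMA model the forecast at every lead equals the
        one-step-ahead forecast, so preds[t] also serves for days t+2..t+5."""
        z = y[0]
        preds = [z]
        for obs in y[1:]:
            z = lam * obs + (1 - lam) * z
            preds.append(z)
        return np.array(preds)   # preds[t] forecasts y[t+1]

    y = np.array([42., 40., 45., 50., 48., 47., 52., 55.])  # e.g. effluent BOD
    print(ewma_forecast(y))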
140. Nonstatistical Skills That Can Help Statisticians Become More Effective
by Ronald D. Snee. (March 1996).
The new economic era we live in has resulted in a variety of new work situations for statisticians. Many are asked to be a member of a team that involves several different functions of the organization. Statisticians are also asked to work with groups in nontechnical areas. These groups tend to have less experience with data-based problem solving methods but, nonetheless, are working on problems critical to the success of the organization. Many statisticians have the opportunity to work with mid- and upper-level managers. All of these opportunities require new skills, and the skills and methods that can help statisticians become more effective are discussed. It is also shown how these new skills have much in common with statistical thinking.
141. Conditional Inference Chart
by Soren Bisgaard. (March 1996).
In many technological applications, experiments are conducted with the explicit purpose of generating new ideas about how a system operates. The emphasis is therefore primarily on hypothesis generation and on suggesting what the experimenter should do next and not on hypothesis testing. In such situations demanding excessive replication or large sample sizes based on the rigor required for confirmatory experiments can be counterproductive. This paper illustrates how we may be able to use prior knowledge about the error variance to make inference about which factors appear active in small two-level factorial or fractional factorial experiments. Publication(s): Quality Engineering, Jan. 1999, Vol. 11, No. 2, pp. 276-272.
142. Joint X-bar and R Charts with Variable Sample Sizes and Sampling Intervals
by Antonio F.B. Costa. (March 1996).
Recent studies have shown that the X-bar chart with variable sampling intervals (VSI) and/or with variable sample sizes (VSS) detects process shifts faster than the traditional X-bar chart. This article extends these studies to processes that are monitored by both the X-bar and the R charts. A Markov chain model is used to determine the properties of the joint X-bar and R charts with variable sample sizes and sampling intervals (VSSI). The VSSI scheme is shown to improve the performance of the joint X-bar and R control charts (in terms of the speed with which process mean and/or variance shifts are detected). Publication(s): Journal of Quality Technology, April 1997, Vol. 29, No. 2, pp. 197-204.
143. The Anatomy and Robustness of Discrete Proportional -Integral Adjustment and Its Application to Statistical Process Control
by George Box and Alberto Lucentildeo. (April 1996).
This paper explains the nature and importance of Proportional -Integral control and shows how it may be adapted to Statistical Process Control. The relation of this type of control to exponential smoothing, minimum mean squared error control, and optimal constrained schemes is discussed. Robustness properties which simplify considerably the practical application of this type of control are demonstrated.
144. Team Work and Design of Experiments
by Joseph G. Van Matre and Neil Diamond. (June 1996).
The quality movement has brought renewed attention to the contributions that statistically designed experiments can make to the quality of products and processes. Experiments are only successful when the team involved in designing, executing and managing the experiment works together effectively, and follows some simple principles that tend to promote success. In this report these principles are outlined and illustrated with a number of case studies, and the important role that managers have as de facto team members is discussed. Publication(s): Quality Engineering, 1996-1997, Vol. 9, No. 2, pp. 343-348.
145. X-bar Charts With Variable Parameters
by Antonio F. B. Costa. (June 1996).
The idea of varying the X-bar chart parameters has been explored extensively in recent years. Basically, the value of the sample mean X-bar establishes whether the control should be relaxed or not. When X-bar falls near the target the control is relaxed, because one will wait longer to take the next sample and/or the next sample will be smaller than usual. When X-bar falls far from the target, but not in the action region, the control is tightened, because one will wait less to take the next sample and/or the next sample will be larger than usual. In this paper, we extend this study to consider variable action limits too. The idea is to draw the action limits wider than usual when the control is relaxed and narrower than usual when the control is tightened. This new feature makes the X-bar chart comparable with the CUSUM and EWMA schemes in terms of the speed with which they detect small shifts in the process mean.
146. Scientific Statistics, Teaching, Learning and the Computer
by George Box. (June 1996).
It is argued that the domination of Statistics by Mathematics rather than by Science has greatly reduced the value and the status of the subject. The mathematical "theorem - proof paradigm" has supplanted the "iterative learning paradigm" of scientific method. This misunderstanding has affected university teaching, research, the granting of tenure to faculty and the distributions of grants by funding agencies. Possible ways in which some of these problems might be overcome and the role that computers can play in this reformation are discussed.
147. Choice of Repeated and Bounded Adjustment Schemes
by Alberto Luceño, Francisco J. Gonzalez and Jaime Puig-Pey. (July 1996).
An important problem in process adjustment using feedback is how often to sample the process and when and by how much to apply an adjustment. Minimum-cost feedback schemes based on simple, but practically interesting, models for disturbances and dynamics have been discussed in several particular cases. The more general situation in which there may be measurement and adjustment errors, deterministic process drift, and costs of taking an observation, of making an adjustment, and of being off target, is considered in this article. Assuming all these costs to be known, a numerical method to minimize the overall expected cost is presented. This numerical method provides the optimal sampling interval, action limits, and amount of adjustment; and the resulting average adjustment interval, mean squared deviation from target, and minimum overall expected cost. When the costs of taking an observation, of making an adjustment, and of being off target are not known, the method can be used to choose a particular scheme by judging the advantages and disadvantages of alternative options, considering the mean squared deviation they produce, the frequency with which they require observations to be made, and the resulting overall length of time between adjustments. Computer codes that perform the required computations are provided in the appendices and applied to find optimal adjustment schemes in three real examples of application.
148. Why Three-Level Designs Are Not So Useful For Technological Experiments
by Soren Bisgaard. (August 1996).
This paper explains in non-technical terms why three-level factorial and fractional factorial designs are not so useful in technological applications where factors often are quantitative and the experiments can be conducted sequentially. The basic ideas of response surface methods are explained and it is shown how those methods constitute a much more effective alternative. Publication(s): Quality Engineering, 1997, Vol. 9, No. 3, pp. 545-550.
149. Quality Quandaries - Regression Analysis Applied to Happenstance Data
by George Box. (October 1996).
Care is needed in interpreting results when regression analysis is applied to happenstance data. Although a fitted regression model may be useful for prediction of future values of a series, it may be totally misleading for explaining causal relationships between the variables. Furthermore, such analysis can be disastrously affected by lack of independence between residual errors.
150. Product Design With Response Surface Methods
by George Box and Patrick Liu. (May 1998).
In this article, methods for demonstrating the iterative process of investigation are presented. As one example, it is shown how the sequential use of response surface techniques may be applied to devise an improved paper helicopter design with almost twice the flight time of the original prototype. The purpose of this paper is to demonstrate the process of investigation and how it can be catalyzed by the use of statistics. Although individual designs and analyses are used, these are the "trees" behind which we hope the forest will be clearly visible. Publication(s): Journal of Quality Technology, Jan. 1999, Vol. 31, No. 1.
151. Quality Quandaries - Blocking Two-Level Factorial Experiments
by Soren Bisgaard. (April 1997).
To many people new to experimental design, blocking is an elusive concept. If you focus only on the mathematics of blocking, you are likely to miss its importance, because blocking is related to the physical layout and execution of an experiment. Blocking acts as a filter that screens out unwanted disturbances and thus helps reduce the influence of variability coming from the environment, the experimental material, or instabilities of the process. Blocking thus increases the experiment's sensitivity to detect the effects of the experimental factors, making it possible to use experiments that are smaller and hence more economical. Publication(s): Quality Engineering, 1997, Vol. 9, No. 4, pp. 753-759.
152. 2^(k-q) Experiments with Binary Response Sampling Until a Fixed Number of Defects
by Ilya Gertsbakh and Søren Bisgaard. (January 1997).
In this article we will discuss the use of two-level factorial and fractional factorial experiments with binary responses (defectives/non-defectives) where the purpose is to reduce the rate of defectives. Contrary to a traditional fixed sample size scheme, we will consider one where each factorial combination is sampled until a fixed number of defectives is observed. The total number of items until that occurs is then used as the response. Such an Inverse Binomial sampling scheme has many practical and economical benefits that will be discussed. For the design of experiments based on this idea, we provide a methodology for choosing the necessary number of defectives r, to detect a given change in the probability of producing a defective unit with fixed levels of type I and type II errors.
153. Statistical Quality Control of a Multi-Step Production Process Using Total Process Yield
by Spencer Graves. (April 1997).
Many production processes have several steps with non-zero defect rates. For such processes, an "overall yield" or "overall defect rate" is a concept with substantial utility but lacking today a standard definition. This article discusses the value of such a concept, reviews some of the difficulties encountered in defining it, and suggests a solution. The proposed solution is the product of the yields at the individual steps, called herein "Total Process Yield." A formula is derived for statistical control limits for this quantity. The technique is illustrated with data from a manufacturing company that is currently using it to help manage most of their production. Publication(s): Quality Engineering, Jan. 1999, Vol. 11, No. 2, pp. 187-196.
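A minimal sketch of the quantity itself, with made-up per-step yields; the article derives analytical control limits, while here the limits are roughed out by simulating binomial sampling at each step:

    import numpy as np

    # Hypothetical per-step yields for a five-step process.
    step_yields = np.array([0.98, 0.95, 0.99, 0.97, 0.96])
    total_yield = step_yields.prod()          # "Total Process Yield"
    print(f"total process yield: {total_yield:.4f}")

    # Rough in-control limits via simulation (an assumption standing in
    # for the article's formula).
    n_per_step = 500                          # units inspected at each step
    rng = np.random.default_rng(3)
    sims = rng.binomial(n_per_step, step_yields, size=(10_000, 5)) / n_per_step
    ty = sims.prod(axis=1)
    print("approx 3-sigma limits:",
          ty.mean() - 3 * ty.std(), ty.mean() + 3 * ty.std())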
154. Tolerancing Mechanical Assemblies Using Computer Aided Design and Experimental Design
by Soren Bisgaard, Spencer Graves, and Garrick Shin. (April 1997).
Component tolerances for assembled products are often set with the help of the error transmission formula. However, this approach requires knowledge of the partial derivatives of the functional relationship between the component dimensions and the assembly quality characteristic. In many practical situations, those numbers are not easily obtained. In this article we will demonstrate a novel combined use of computer aided design (CAD) and design of experiments (DOX) to obtain partial derivatives of the functional relationship. With knowledge of these, we can use the error transmission formula to establish functional tolerances. The intent of the present article is to demonstrate, with some examples, an idea and a set of techniques that can be used to set functional tolerances for mechanical components and assemblies.
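A minimal sketch of the idea, with a toy function standing in for the CAD evaluation (the function, nominal values, step size, and standard deviations below are all assumptions): run a two-level factorial around the nominal design, convert effect estimates to approximate partial derivatives, and apply the error transmission formula.

    import numpy as np
    from itertools import product

    def assembly_gap(x):
        # Stand-in for a CAD evaluation of the quality characteristic.
        return x[0] - x[1] + 0.5 * x[2]

    nominal = np.array([10.0, 4.0, 2.0])
    delta = 0.01                          # small step for each dimension
    X = np.array(list(product([-1, 1], repeat=3)))   # 2^3 factorial
    y = np.array([assembly_gap(nominal + delta * row) for row in X])

    # Two-level effect estimates scaled by the step size approximate the
    # partial derivatives at the nominal design point.
    derivs = (X.T @ y) / (len(y) * delta)

    sigmas = np.array([0.02, 0.015, 0.03])    # component std. deviations
    # Error transmission: Var(Y) ~ sum_i (df/dx_i)^2 * sigma_i^2
    var_y = np.sum((derivs * sigmas) ** 2)
    print("predicted assembly std:", np.sqrt(var_y))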
155. Designing Quality Into Products During the Design and Development Phase
by Soren Bisgaard and Marit Risberg Ellekjaer. (October 1996).
To be effective, quality needs to be built in and planned for already in the product design phase. In this article, we will provide an overview of how design engineers can use statistical experimental design to develop high quality, robust, low cost products. Specifically we will show how experimental design can be used to design, test and improve products, and how these tools can help reduce the cycle time from initial conception to market introduction. Our focus will be on general philosophy and ideas. Practical examples from industry will be used throughout to illustrate the concepts.
156. Improving Team Effectiveness
by Ronald D. Snee, Kevin H. Kelleher, and Sue Reynard. (April 1997).
Much has changed since teams became popular in the 1980's. Teams are now an established approach for getting work done, and organizations have learned important strategies for making teams effective. But barriers to team progress still exist, many of them the result of poor or ineffective management strategies for coordinating the efforts of many types of teams across an organization. Examination of selected case studies illustrates how effective use of teams is a skill that can be learned, practiced, and improved by managers and employees alike.
157. How to Reduce Costs Using a Tolerance Analysis Formula Tailored to Your Organization
by Spencer Graves. (April 1997).
In volume production, a few pennies per unit can add up quickly. In such cases, it may be worth considering whether cheaper components might be used without jeopardizing quality. One element in many tolerance analyses is the formula used to relate product tolerances to component tolerances. This article (a) discusses deficiencies with traditional tolerancing, (b) outlines a simple procedure for converting process capability information into an improved tolerancing formula tailored to a specific class of products, and (c) describes how this analysis can contribute to substantive improvements in profits by helping to identify improvement opportunities in production.
158. The Role of Scientific Problem Solving and Statistics in Quality Improvement: Some Perspectives
by Soren Bisgaard. (April 1997).
The scientific method of observation and experimentation plays a key role in quality improvement. In this article, I provide numerous examples of the use of the scientific method and argue that it is a vital catalyst for Total Quality Management. Perspectives for the future are also provided.
159. Five Ways Statistical Tolerancing Can Fail and What to Do About Them
by Søren Bisgaard and Spencer Graves. (September 1997).
In this article we explore the general non-robustness of traditional root sum of squares statistical tolerancing and describe, in particular, how it can fail. The failure modes are: deficiencies in the functional model; lower process capability in inputs than what is desired of outputs; biases; correlations; and non-normality. We also show that statistical tolerancing is extremely non-robust to the first four types of causes. Moreover, we provide examples of each type and discuss what to do about each.
160. A Negative Process Capability Index from Assembling Good Components. A Problem in Statistical Tolerancing
by Soren Bisgaard and Spencer Graves. (April 1997).
High values for the standard process capability indices of components do not guarantee low defect rates for an assembly. This article describes an electric transformer that experienced substantial problems in production when statistical tolerancing predicted essentially zero defects. It shows how such problems can arise in long tolerance chains where components have nonzero biases, no matter how large Cpk or Cpm are. Publication(s): Quality Engineering, 1997-1998, Vol. 10, No. 2, pp. 409-414.
161. Fast Cycle Change in Knowledge-Based Organizations: Building Fundamental Capability for Implementing Strategic Transformation
by Ian Hau and Ford Calhoun. (June 1997).
This paper discusses the experience of a knowledge-based organization in the pharmaceutical industry in building a fundamental capability for implementing strategic transformation. The organization developed and implemented a methodology called Fast Cycle Change (FCC) that maximizes the "implementability" of change initiatives. That is, FCC ensures that change initiatives are initiated so that high impact can be realized rapidly, with high probability of success, and with minimal resources.
162. Quality Quandaries - Models, Assumptions and Robustness
by George Box and Alberto Luceño. (July 1997).
The concept of robustness and its importance to modelling and to the iteration between empiricism and theory is discussed. As an example, evolution of the Exponentially Weighted Moving Averages (EWMA's) is described. Publication(s): Quality Engineering, 1998, Vol. 10, No. 3, pp. 595-598.
163. Needed Skills for Human Resource Professionals: A Pilot Study
by W. Lee Hansen, Robyn A. Berkley, Carolyn J. Craig, Diane R. Denby, Jill A. Fitzpatrick, Paola Gheis, David M. Kaplan, Deborah J. Ruelle, Mark R. Seiler, Qiang-Sheng Yu, and Lisa A. Voss. (November 1997).
This pilot study identifies the skills needed by human resource/industrial relations (HR/IR) practitioners and contrasts them with the emphasis currently placed on developing these skills in a single master's degree industrial relations program.
164. Maximum Likelihood Regression on Censored, Experimental Data Using a Spreadsheet Program
by Spencer Graves. (December 1997).
This article describes how to analyze censored, experimental data using an optimizing function in a spreadsheet program (such as Solver in Microsoft Excel or Lotus 1-2-3).
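The same calculation can be sketched outside a spreadsheet; below is a minimal Python analog of the Solver setup, assuming (for illustration only) lognormal lifetimes with right-censored observations:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    # Hypothetical life-test data: failure/removal times and censoring flags.
    t = np.array([105., 210., 340., 500., 500., 500.])
    censored = np.array([False, False, False, True, True, True])

    def neg_loglik(theta):
        mu, log_sigma = theta
        s = np.exp(log_sigma)
        z = (np.log(t) - mu) / s
        ll = np.where(censored,
                      norm.logsf(z),                          # survived past t
                      norm.logpdf(z) - np.log(s) - np.log(t)) # exact failure
        return -ll.sum()

    fit = minimize(neg_loglik, x0=[np.log(300.), 0.0])
    print(fit.x)   # lognormal location and log-scale estimates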
165. Quality Quandaries - Making Sure the Design Fits the Problem-An Example
by Soren Bisgaard. (January 1998).
For the design of experiments it is very important to carefully think through the objective as different objectives might require different designs. In this paper an example is presented where this is illustrated. Publication(s): Quality Engineering, 1998, Vol. 10, No. 4, pp. 770-775.
166. Joint X-bar and R Charts with Variable Parameters
by Antonio F. B. Costa.
Publication(s): IIE Transactions, 1998, Vol. 30, pp. 505-514.
167. Improving Problem Solving
by Ian Bradbury and Gipsie Ranney. (June 1998).
This report considers some popular methods applied in the context of problem solving, along with the supporting thought processes commonly observed in practice.
168. Quality Quandaries - The Impact of Measurement Error on Specifications
by Soren Bisgaard, Spencer Graves and René Valverde. (October 1998).
Measurement problems are more common than most people realize. Sometimes they are so serious that 100% inspection is counterproductive as a means of separating good from bad parts. In this article we discuss the impact of the measurement error on specifications. In particular we will discuss how to analyze the relationship between alternative tolerance limits, process variances and measurement error variances using Monte Carlo simulation. Publication(s): Quality Engineering, Jan. 1999, Vol. 11, No. 2, pp. 331-336.
169. Influence of the Sampling Interval, Decision Limit and Autocorrelation on the Average Run Length in Cusum Charts
by Alberto Luceño and George Box. (September 1998).
This paper shows how the average run length (ARL) for a one-sided Cusum chart varies as a function of the length of the sampling interval between consecutive observations, the decision limit for the Cusum statistic, and the amount of autocorrelation between successive observations. It is shown that the rate of false alarms can be decreased considerably, without modifying the rate of valid alarms, by decreasing the sampling interval and appropriately increasing the decision interval. It is also shown that this can be done even when the shorter sampling interval induces moderate autocorrelation between successive observations.
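As a feel for the quantities involved (a minimal simulation sketch for the i.i.d. normal case only; the paper's interest is in how autocorrelation and the sampling interval modify these numbers):

    import numpy as np

    def one_sided_cusum_arl(k, h, shift=0.0, n_sims=2000, seed=4):
        """Average run length of a one-sided Cusum with reference value k
        and decision limit h, for i.i.d. N(shift, 1) observations."""
        rng = np.random.default_rng(seed)
        run_lengths = []
        for _ in range(n_sims):
            s, t = 0.0, 0
            while s < h:
                t += 1
                s = max(0.0, s + rng.normal(shift, 1.0) - k)
            run_lengths.append(t)
        return np.mean(run_lengths)

    print("in-control ARL:", one_sided_cusum_arl(k=0.5, h=4.0))
    print("shifted ARL:   ", one_sided_cusum_arl(k=0.5, h=4.0, shift=1.0))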
170. Quality Quandaries-Use of Cusum Statistics in the Analysis of Data and in Process Monitoring
by George Box. (October 1998).
The uses of the Cusum Chart are discussed. It can be used for on-line monitoring of an operating process or for post-mortem analysis of data. The Cusum statistic is discussed as a particular example of a Cuscore statistic for the detection of a signal in noise.
171. Quality Quandaries-Proposals: A Mechanism for Achieving Better Experiments
by Soren Bisgaard. (October 1998).
In this paper we outline how a simple procedure of requesting a proposal before management signs off on an experiment can dramatically improve the effectiveness of experimenters. An eleven-step process is provided.
172. Statistics as a Catalyst to Learning by Scientific Method, Part II - Discussion
by George Box. (June 1999).
A discussion of Part I (Box and Liu, 1999) concerning the implications raised when response surface methodology (RSM) is considered, as originally intended, as a statistical technique for the catalysis of iterative learning in the manner illustrated.
173. Detecting Malfunctions in Dynamic Systems
by George Box, Spencer Graves, Søren Bisgaard, John Val Gilder, Ken Marko, John James, Mark Seifer, Mark Poublon, and Frank Fodale. (March 1999).
This article outlines some of the fundamental concepts of systems monitoring and general principles for the design of monitors to detect certain malfunctions in the powertrain system that may cause excessive emissions.
174. Adapting a Quality Function Deployment Model to Optimize Professional Education in Human Resources/Industrial Relations Programs
by W. Lee Hansen, Nicole Mehlek, Michelle Murphy, and Dianne True. (July 1999).
175. Quality in the Public Sector: The Employees' Perspective
by Christian Korunka, Dieter Scharitzer, François Sainfort, and Pascale Carayon. (May 1999).
This study investigated the effects of the implementation of quality in a public service organization on employees' strain and satisfaction.
176. Quality Quandaries-Six Sigma, Process Drift, Capability Indices, and Feedback Adjustment
by George Box and Alberto Luceño. (August 1999).
The Six Sigma specification makes an allowance of 1.5 standard deviations for process drift. Simple ways in which a major part of such drift can be removed are given. These employ feedback adjustment methods specifically designed for SPC applications.
177. Retargeting Higher Education Access and Persistence Efforts: Illustrating a 'System' Focused Process for Improving Public Policy
by Jacob O. Stampen and W. Lee Hansen. (July 1999).
This paper illustrates a "systems" approach, based on the quality function deployment model, to examine both the direct and interaction effects of multiple solutions aimed at improving access and persistence.
178. Split-Plots for Robust Product and Process Experimentation
by George Box and Stephen Jones. (April 2000).
Environmentally robust products and processes are designed to be insensitive to variation over the relevant ranges of the environmental conditions in which they need to operate. Split plots frequently provide efficient experimental arrangements whereby environmentally robust process and product design may be achieved. The various questions that arise in planning and analyzing such experiments are discussed and illustrated with examples.
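For concreteness, the following sketch (the factor names E1, E2, D1, and D2 are invented) lays out one such split-plot arrangement, in which hard-to-change environmental factors are set once per whole plot and easy-to-change design factors are varied within it:

    from itertools import product

    env_settings = list(product([-1, 1], repeat=2))     # whole plots: E1, E2
    design_settings = list(product([-1, 1], repeat=2))  # subplots:   D1, D2

    print("whole_plot  E1  E2  D1  D2")
    for wp, (e1, e2) in enumerate(env_settings, start=1):   # environment set once
        for d1, d2 in design_settings:                      # design varied within
            print("{:>10}{:>4}{:>4}{:>4}{:>4}".format(wp, e1, e2, d1, d2))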
179. Statistics for Discovery
by George Box. (March 2002).
This paper discusses why investigators in engineering and the physical sciences rarely use statistics. It is argued that statistics has been overly influenced by mathematical methods rather than the scientific method, and that as a consequence the subject has been greatly skewed towards testing rather than discovery.
180. Feedforward as a Supplement to Feedback Adjustment in Allowing for Feedstock Changes.
by George Box and Alberto Luceño.
This paper considers the complementary use of feedback and feedforward adjustments to compensate for anticipated step changes in the process mean as may be necessary in a manufacturing process each time a new batch of feedstock material is introduced.
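A minimal simulation sketch of the complementary scheme, under assumed parameter values (the predicted step size is deliberately imperfect so that feedback must remove the residual offset; none of this is the authors' code), is:

    import numpy as np

    rng = np.random.default_rng(2)
    T, lam, gain = 100, 0.3, 1.0
    t_change = 50                           # known feedstock changeover time
    true_step, predicted_step = 2.0, 1.6    # the anticipated step is imperfect

    x, errors = 0.0, []
    for t in range(T):
        if t == t_change:
            x -= predicted_step / gain      # feedforward: act before the error shows
        shift = true_step if t >= t_change else 0.0
        e = shift + gain * x + rng.normal(0.0, 0.5)
        errors.append(e)
        x -= (lam / gain) * e               # feedback mops up the 0.4 residual offset

    print(float(np.sqrt(np.mean(np.square(errors)))))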
181. Process-oriented Tolerancing for Multi-station Assembly Systems
by Yu Ding, Jionghua Jin, Dariusz Ceglarek, and Jianjun Shi. (October 2002).
In multi-station manufacturing systems, the quality of final products is significantly affected by both product design and process variables. Historically, however, tolerance research has focused primarily on allocating tolerances based on the product design characteristics of each component. There are currently no analytical approaches that optimally allocate tolerances while integrating product and process variables in multi-station manufacturing processes at minimum cost. The concept of process-oriented tolerancing expands current tolerancing practices, which bound errors related to product variables, to explicitly include process variables. The resulting methodology extends the concept of "part interchangeability" to "process interchangeability," which is critical given increasing requirements related to supplier selection and benchmarking. The proposed methodology is based on the development and integration of three models: tolerance-variation relation, variation propagation, and process degradation. The tolerance-variation model is based on a pin-hole fixture mechanism in multi-station assembly processes. The variation propagation model utilizes a state space representation, but with a station index in place of a time index. Dynamic process effects such as tool wear are also incorporated into the framework of process-oriented tolerancing, which provides the capability to design tolerances for the whole life cycle of a production system. Tolerances of process variables are optimally allocated by solving a nonlinear constrained optimization problem. An industry case study is used to illustrate the proposed approach.
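The station-indexed state space form mentioned in the abstract can be sketched as follows (the matrices below are arbitrary illustrations, not values from the case study):

    import numpy as np

    rng = np.random.default_rng(3)
    n_stations, n_state = 3, 2

    A = [np.array([[1.0, 0.1],
                   [0.0, 1.0]])] * n_stations   # part re-orientation between stations
    B = [np.eye(n_state)] * n_stations          # how fixture errors enter each station

    x = np.zeros(n_state)                       # deviation of the incoming part
    for k in range(n_stations):
        u_k = rng.normal(0.0, 0.05, n_state)    # station-k fixture (pin-hole) errors
        x = A[k] @ x + B[k] @ u_k               # x_{k+1} = A_k x_k + B_k u_k
    print("final assembly deviation:", x)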
182. Fault Diagnosis of Multistage Manufacturing Processes by Using State Space Approach
by Yu Ding, Jionghua Jin, Dariusz Ceglarek, and Jianjun Shi. (April 2002).
This report presents a methodology for the diagnosis of fixture failures in multistage manufacturing processes (MMPs).
183. Design Evaluation of Multi-Station Assembly Processes by Using State Space Approach
by Yu Ding, Jionghua Jin, Dariusz Ceglarek, and Jianjun Shi. (April 2002).
This report considers the problem of evaluating and benchmarking process design configuration in a multi-station assembly process.
184. Human Factors E-Security Workgroup Report of Findings
by Pascale Carayon and Sara Kraemer (April 2003).
One of the greatest barriers to effective e-security is the set of human and organizational factors that contribute to and cause the technical and social vulnerabilities of an organization's computer and information systems. The purpose of this report was to explore the most critical factors facing e-security, to identify methods used to characterize those factors, and to examine best practices in an attempt to relieve or solve these problems.
185. Reducing Workload and Increasing Patient Safety Through Work and Workspace Design
by Pascale Carayon, Carla Alvarado and Ann Hundt (November 2003).
This paper was commissioned by the Institute of Medicine Committee on the Work Environment for Nurses and Patient Safety. It was used by the IOM for its report Keeping Patients Safe: Transforming the Work Environment of Nurses, released on November 4, 2003 (http://www.iom.edu/Reports/2003/Keeping-Patients-Safe-Transforming-the-W...).
186. Web Survey Mailer System (WSMS1.1)
by Ernesto Barrios (November 2003).
Nowadays, with widespread access to computers and more particularly to the Internet, web-based questionnaires are another tool available for sampling surveys. This document describes the use of the Web-based Survey Mailer System 1.1 (WSMS1.1), a computer package that administers surveys over the Internet. Created by David J. Solomon, WSMS consists of a bundle of HTML and PHP scripts that send out personalized emails inviting people (in a database) to fill out questionnaires while anonymously tracking the individuals, so that reminder emails are sent only to persons who have not submitted the survey. For any particular application the WSMS scripts have to be modified; WSMS1.1 was developed in a modular form so that changes to the code are localized to only a few scripts. This layout facilitates survey administration by clearly separating the tasks of the server side of the survey from the respondents' side (the questionnaire and some informational web pages). In addition, a survey consent web page has been included so that respondents can explicitly agree or refuse to participate in the survey. Written more like a manual, this document also explains some of the basics about web servers and supporting software, so that researchers with little experience with computers and systems, and with a limited budget, can have web-based surveys as another tool available for their investigations. WSMS1.1 ran successfully under both Windows and Linux operating systems.
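The tracking-and-reminder mechanism described above can be sketched in a few lines. This is an illustrative Python analogue, not the PHP code of WSMS1.1; send_email, the addresses, and the survey URL are stand-ins:

    import secrets

    def send_email(address, body):              # stand-in for a real SMTP call
        print(f"To: {address}\n{body}\n")

    recipients = ["a@example.com", "b@example.com"]
    tokens = {addr: secrets.token_urlsafe(8) for addr in recipients}
    submitted = set()                           # tokens returned with completed surveys

    def invite(addr):
        send_email(addr, f"Please fill out: https://example.com/survey?t={tokens[addr]}")

    for addr in recipients:                     # initial personalized invitations
        invite(addr)

    submitted.add(tokens["a@example.com"])      # suppose a@example.com submits

    for addr in recipients:                     # reminders only to non-respondents
        if tokens[addr] not in submitted:
            invite(addr)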
187. Red Team Performance: Summary of Findings
by Sara Kraemer and Pascale Carayon. (June 2004).
This report summarizes a study of the performance of the red team program at Sandia National Laboratories. The study describes the factors that contribute to and hinder red team performance, as well as various measures of red team performance.
188. An Adversarial Viewpoint of Human and Organizational Factors in Computer and Information Security: Final Report
by Sara Kraemer and Pascale Carayon. (August 2006).
This report presents a multi-dimensional examination of the human and organizational factors that affect computer and information security (CIS) and explains how human and organizational factors contribute to CIS vulnerabilities, namely, the various pathways and mechanisms leading to a technical CIS vulnerability. Human factors in CIS, such as password memorability and the usability of encryption methods, are analyzed, along with organizational factors such as the implementation and monitoring of security policies and procedures.