The schedule may not be met.
The budget may be exceeded.
Before the experiment is authorized to proceed, it must be established to the satisfaction of the reviewers that the motivating question can be answered with the desired accuracy, within the time allowed, and within the allowable budget.
3.4 Questions to Guide Planning of an Experiment
Tables 3.1–3.3 outline the contents of the rest of this book – the topics that we feel should be addressed during the planning of an experiment.
Before some of the above topics can be addressed directly, some background material must be developed on the topology of experiments and the handling of experimental uncertainties.
Table 3.1 Overview of a research experiment plan.
Set up the experiment log. | Keep a detailed log of your decisions.
Identify: the motivating question, the form of an acceptable answer, and the allowable uncertainty. | What question are you trying to answer? What should the answer look like? What accuracy do you need?
Design the data interpretation program (DIP). | What equations provide the answer?
Specify the data you need. | Output data, peripheral data, and control values.
Establish the allowable uncertainties. | How accurately must variables be measured in order to get useful results?
Select the instruments. | Cross‐check with required uncertainties.
Specify the operable domain. | What range of conditions must be covered?
Estimate the shape of the response surface. | What will be the likely outcome?
Select the data trajectories and data‐density distribution. | How should the data points be distributed over the operating surface?
Design the hardware. | The apparatus must create the desired domain.
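Two of the steps in Table 3.1, designing the data interpretation program (DIP) and establishing the allowable uncertainties, can be made concrete with a short sketch. The Python fragment below reduces a set of measured inputs to a single response variable and propagates candidate instrument uncertainties by root-sum-square perturbation. The governing equation (a convective heat-transfer coefficient), the variable names, and all numerical values are hypothetical illustrations chosen only to show the mechanics, not a prescription from this book.

```python
# Minimal sketch of a DIP plus an allowable-uncertainty check.
# The equation, variable names, and numbers are assumed for illustration.
import math


def h_coefficient(Q, A, Ts, Tinf):
    """Reduce measured inputs to the response variable h = Q / (A * (Ts - Tinf))."""
    return Q / (A * (Ts - Tinf))


def rss_uncertainty(func, inputs, deltas):
    """Propagate instrument uncertainties into the result by root-sum-square
    of sequential perturbations: perturb each input by its uncertainty,
    record the change in the result, and combine the changes."""
    nominal = func(**inputs)
    total = 0.0
    for name, delta in deltas.items():
        perturbed = dict(inputs, **{name: inputs[name] + delta})
        total += (func(**perturbed) - nominal) ** 2
    return nominal, math.sqrt(total)


# Candidate instrument uncertainties (assumed values) checked against the
# allowable uncertainty in h before any hardware is committed.
inputs = {"Q": 150.0, "A": 0.010, "Ts": 355.0, "Tinf": 300.0}  # W, m^2, K, K
deltas = {"Q": 2.0, "A": 0.0001, "Ts": 0.5, "Tinf": 0.5}

h, dh = rss_uncertainty(h_coefficient, inputs, deltas)
print(f"h = {h:.1f} +/- {dh:.1f} W/(m^2 K)  ({100 * dh / h:.1f}%)")
```

Running a sketch like this during planning shows, at least roughly, whether the instruments under consideration can deliver the required uncertainty in the result, which is exactly the cross-check called for in the "Select the instruments" step.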
Table 3.2 Review the program plan. Do risk assessment and plan risk abatement. If satisfactory, go ahead. If not, go back.
Build apparatus. | Watch schedule and cost. Track critical path and critical person lines.
Write the DIP. | Convert equations and measurable inputs to selected response variables. Debug program.
Shake down apparatus. | First, make it repeatable. Then, make it work. Finally, make it work well.
Execute qualification runs and document credibility. | Calibrate. Document uniformity and stability. Certify baseline data.
Table 3.3 Assess the credibility of the program. Do risk assessment and plan risk abatement. If satisfactory, go ahead. If not, go back.
Take production data. | These are the required results.
Interpret the results. | What do the results mean to the client or target audience?
Document the experiment. | Present the results and interpret them. Record the data that support these conclusions and establish credibility of the experiment.
Homework
3.1 From your review of prior work and research, what risks might your experiment encounter?
3.2 What risks will your client accept?
3.3 What risks will your client face by not pursuing this experiment?
3.4 How much will your client stake on successful completion of this experiment?
3.5 What is the cost of a null answer?
3.6 How much risk will your client assume?
3.7 Do Occupational Safety and Health Administration (OSHA) regulations, state laws, or national codes limit the direction and extent of your proposed experiment?
3.8 Rinse and repeat from Exercise 3.1.
4 Identifying the Motivating Question
The motivating question is the question that, if answered, justifies the entire cost of running the experiment.
4.1 The Prime Need
I strongly believe in organizing every research‐type experiment around a question, for several reasons.
1 When your goal is to answer a question, you know you can quit when you have an acceptable answer!
2 If your experimental objective was “to study…,” or “to investigate…,” or “to document…,” then you may never know when to quit. There is no end to “studying” or “investigating.” You will quit only when the money runs out or you get bored or reassigned, but you will never be finished. If, on the other hand, you have a specific question to answer, you can quit when you have the answer or when you can prove that you can't get the answer by this kind of experiment.
3 Knowing the motivating question helps in making the trade‐off decisions during planning and debugging.
A research program generally begins with a “need to know,” an urge on someone's part to solve a problem. Unfortunately, the person who first feels the urge may have in mind some steps toward what he/she thinks is the solution and may present an experiment plan that is a path to that particular “solution” rather than a path to solving the general problem.
To some extent, this can be avoided by specifically addressing the issue of “What question are we trying to answer?” instead of “What are we going to do?”
I (RM) came to this approach to experiment planning after many years of dealing with talented graduate students, each eager to get on with their programs but frequently stalling out when obstacles arose in the lab. I would find them at my door wondering what to do about the latest nuisance. Trying the simple approach of answering their questions, I found that the pace of research was then dictated by the number of hours I spent dealing with their problems. It was like pushing a rope! Every obstacle would bring the program to a halt. This sort of experience must be more common than I had thought, because it is the focus of an American folk song, “There's a Hole in My Bucket,”