precision of the measurements of input and output and the density of data points, as well as on the process being studied and the factors being considered.
For example, one might ask: “Does the width dimension of the test channel affect the measured value of the heat‐transfer coefficient h on a specimen placed in the tunnel?” This is a question about the dimensionality of the problem. The answer will depend on how accurately the heat‐transfer coefficient is being measured. If the scatter in the h‐data is ±25%, then only when the blockage is high will the tunnel dimensions be important. If the scatter in h is ±1%, then the tunnel width may affect the measured value even if the blockage is as low as 2 or 3%.
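To make the detectability argument concrete, here is a minimal sketch in Python; the figure of a 2% shift in h from a 3% blockage is our hypothetical number, chosen only for illustration.

```python
# A minimal sketch of the detectability argument: a factor matters,
# practically, only if its effect on h rises above the scatter in the
# h-data.  The "2% shift from a 3% blockage" is hypothetical.

def detectable(effect_pct: float, scatter_pct: float) -> bool:
    """True if the factor's effect on h exceeds the measurement scatter."""
    return effect_pct > scatter_pct

effect = 2.0  # suppose a 3% blockage shifts h by about 2%
print(detectable(effect, scatter_pct=25.0))  # False: buried in +/-25% scatter
print(detectable(effect, scatter_pct=1.0))   # True: resolved at +/-1% scatter
```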
One common approach to limiting the dimensionality of an experiment is to describe the apparatus carefully, so it could be duplicated if necessary, and then run the tests holding as many variables as possible constant while changing the independent variables one at a time. This is not the wisest approach. A one-at-a-time experiment measures the partial derivative of the outcome with respect to each independent variable, with the secondary variables held at fixed values. Although this seems to limit dimensionality, it does not. Running only a partial-derivative experiment leaves unanswered the question of sensitivity to peripheral factors: holding a secondary variable constant does not make its interaction effect go away; it simply makes that effect harder to find.
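To see why, consider a hypothetical response containing an interaction term; the function and its coefficients below are ours, purely for illustration, not from the text. A one-at-a-time sweep about a baseline recovers only the partial derivatives there, while a 2 x 2 factorial design exposes the interaction directly.

```python
# Hypothetical response: y = 1 + 2*x1 + 3*x2 + 5*x1*x2, where 5*x1*x2
# is an interaction effect between the two independent variables.

def y(x1: float, x2: float) -> float:
    return 1.0 + 2.0 * x1 + 3.0 * x2 + 5.0 * x1 * x2

lo, hi = 0.0, 1.0

# One-at-a-time sweep about the baseline (lo, lo): partial derivatives only.
d1 = y(hi, lo) - y(lo, lo)  # 2.0 -> slope in x1 with x2 held at lo
d2 = y(lo, hi) - y(lo, lo)  # 3.0 -> slope in x2 with x1 held at lo

# 2x2 factorial contrast: recovers the hidden interaction term.
interaction = y(hi, hi) - y(hi, lo) - y(lo, hi) + y(lo, lo)  # 5.0
print(d1, d2, interaction)
```

The one-at-a-time sweep reports slopes of 2 and 3 no matter how large the interaction coefficient is; only the factorial contrast reveals it.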
Wiser approaches are discussed in Chapters 8 and 9: systematic investigation of the sensitivity of the results to the details of the technique and the equipment establishes the dimensionality of the experiment. Whatever remains unknown contributes to the experimental uncertainty.
Be ready to defend your factors. As an experimentalist, there will be times when you work with a client, or with a theoretician, who does not understand what can and cannot be measured, or who needs guidance in learning to tolerate uncertainty in measurements.
2.4.5 Similarity and Dimensional Analysis
Nature does not know how big an inch is (unless you experiment on inchworms), nor how big a centimeter is. The laws of physics are independent of the length, mass, and time scales familiar to us. For this reason, similarity analysis and the Buckingham Pi (Π) method are tools for casting the physics into nondimensional parameters that are independent of scale.
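As a brief worked sketch (a standard example of the Π method, not drawn from this chapter), consider fully developed flow in a pipe. The pressure gradient dp/dx depends on the diameter D, mean velocity V, density ρ, and viscosity μ: five variables in three base dimensions (M, L, T) leave 5 − 3 = 2 independent Π groups, one of which is the Reynolds number.

```latex
% Five variables, three base dimensions (M, L, T): two Pi groups.
\[
  \Pi_1 = \frac{(dp/dx)\,D}{\rho V^{2}},
  \qquad
  \Pi_2 = \mathrm{Re} = \frac{\rho V D}{\mu},
  \qquad\text{so}\qquad
  \Pi_1 = f(\mathrm{Re}).
\]
% Dimensional check that Re carries no units:
\[
  [\mathrm{Re}]
  = \frac{(M L^{-3})\,(L\,T^{-1})\,(L)}{M L^{-1} T^{-1}} = 1 .
\]
```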
Perhaps the most familiar nondimensional parameter is the Mach number, the ratio of speed to the speed of sound. A fighter flying at Mach 2 is traveling at twice the speed of sound.
In thermo‐fluid physics, we use various nondimensional parameters, including the Reynolds number (Re), Strouhal number (St), Froude number (Fr), Prandtl number (Pr), Mach number, and others. We will see these again in later chapters. The Reynolds number is a ratio relating size, speed, fluid density, and viscosity. When NASA tests a model plane in a wind tunnel, it matches the Re of the model to that of the full‐size plane. Experimental results for the model and the full‐size plane correspond even more closely when the Mach number is matched simultaneously, and so on as additional parameters are matched.
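Here is a minimal sketch of the matching arithmetic; the flight conditions and the 1/10 scale are illustrative assumptions of ours, not NASA data.

```python
# Reynolds-number matching for a scale model, at fixed density and
# viscosity.  All numbers below are illustrative assumptions.

def reynolds(rho: float, V: float, L: float, mu: float) -> float:
    """Re = rho * V * L / mu (dimensionless)."""
    return rho * V * L / mu

rho, mu = 1.225, 1.81e-5      # air at sea level: kg/m^3, Pa*s
L_full, V_full = 10.0, 100.0  # full-scale chord (m) and flight speed (m/s)
L_model = 1.0                 # 1/10-scale model

Re_full = reynolds(rho, V_full, L_full, mu)
# Matching Re at the same rho and mu forces the model speed up by the
# length ratio: V_model = V_full * (L_full / L_model).
V_model = Re_full * mu / (rho * L_model)
print(f"Re = {Re_full:.2e}, required model speed = {V_model:.0f} m/s")
```

Note that the required model speed, ten times the flight speed, would be supersonic at these conditions; this is one reason pressurized and cryogenic tunnels (which raise ρ or lower μ) are used in practice to match Re without violating the Mach condition.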
As an expert in your field, you know which nondimensional parameters pertain to your experiment. Nondimensional parameters expand the applicability of your measurements. In planning your experiment, ensure that you record all the factors, including environmental factors, so that every pertinent nondimensional parameter can be reported.
Upon reflection, percent (%) is likely the most familiar nondimensional quantity of all.
2.4.6 Listening to Our Theoretician Compatriots
Experimentalists and theoreticians need each other.
Richard Feynman, whose quote leads this chapter, was an experimentalist as well as a theoretician.
Einstein, whose paraphrased quote led off Chapter 1, received his Nobel Prize for explaining experiments on the photoelectric effect. His theory of Brownian motion showed that prior experiments had provided indirect evidence that molecules and atoms exist.
Yet just as Feynman stated, Einstein's theory of general relativity was “just a theory” until Arthur Eddington gave it experimental verification during a total solar eclipse in 1919.
NASA provides a good example of the interdependence of theory and experiment. The National Advisory Committee for Aeronautics (NACA) was the precursor of NASA; “Aeronautics” is the first A of NASA. As airplane designs advanced rapidly during the twentieth century, NASA deliberately adopted a four‐pronged approach: theory, scale‐model testing (wind‐tunnel experiments), full‐scale testing (in‐flight experiments), and numerical simulation (computational models verified by experiment). Each of the first three prongs has always been essential (Baals and Corliss 1981). Since the 1980s, numerical simulation has aided theory. Theory and experiment need each other. Since our numerical colleagues often refer to their “numerical experiments,” we advocate that they report the uncertainties of their results appropriately, just as we experimentalists do.
The science of fluid flow remains important, as another quote (from a personal letter) from Feynman makes clear:
Turbulence is the most important unsolved problem of classical physics.
Feynman spoke of turbulence in its basic form. Turbulence can be further complicated by heat transfer; more so by mass transfer; more still by chemical reactions or combustion; and yet more by electromagnetic interactions. Turbulence is key to weather, to breath and blood, to life, to flight, and to the circulation within stars and their evolution. Turbulence remains unsolved to this day.
To consider more viewpoints, we include three panels:
Panel 2.1, “Positive Consequences of the Reproducibility Crisis”
Panel 2.2, “Invitations to Experimental Research, Insights from Theoreticians”
Panel 2.3, “Prepublishing Your Experiment Plan”
This text focuses on experimental strategies, planning, techniques of analysis, and execution. That is our expertise, in addition to thermo‐fluid physics. We have taught experimental planning to students in many fields using draft notes of this text for more than 60 years.
Panel 2.1 Positive Consequences of the Reproducibility Crisis
As researchers and instructors, we have been promoting experimental repeatability and uncertainty analysis for more than 60 years. When the work of Dr. J.P.A. Ioannidis brought the Reproducibility Crisis in the medical field to public awareness, we welcomed the positive impact it produced.
Two papers by Dr. Ioannidis in 2005 brought the Reproducibility Crisis to the fore. One was the Journal of the American Medical Association (JAMA) article mentioned in Chapter 1, “Contradicted and Initially Stronger Effects in Highly Cited Clinical Research” (Ioannidis 2005a). The second was “Why Most Published Research Findings Are False” (Ioannidis 2005b).
The two 2005 articles by Dr. Ioannidis appear to mark a watershed moment for science. In various scientific disciplines, researchers have since produced reproducibility guidelines that major publishers have adopted.
Going deeper into the 2005 JAMA article, Dr. Ioannidis chose a notably high criterion for the publications he evaluated. He considered only:
“All original clinical research studies published in 3 major general clinical journals or high‐impact‐factor specialty journals
in 1990–2003 and
cited