Applied Univariate, Bivariate, and Multivariate Statistics. Daniel J. Denis

an elevated heart rate is much easier than convincing her that her son has a deficit in IQ points. One phenomenon is measurable. The other, perhaps so, but not nearly as easily, or at minimum, not as agreeably.

      Our point is that once we agree on the existence, meaning, and measurement of objects, soft science is just as "hard" as the hard sciences. If measurement is not on solid ground, no analytical method applied to its data will save it. All students of the social (and, to some extent, natural) sciences should be exposed to in‐depth coursework on the theory, philosophy, and importance of measurement in their field before advancing to statistical applications on these objects, since it is in the realm of measurement that the true controversies of scientific "reputability" usually lie. For readable general introductions to measurement in psychology and the social sciences, the reader is encouraged to consult Cohen, Swerdlik, and Sturman (2013), Furr and Bacharach (2013), and Raykov and Marcoulides (2011). For a deeper, philosophical treatment that includes measurement in the physical sciences as well, consult Kyburg (2009). McDonald (1999) also provides a relatively technical treatment.

      One of the most prominent advances in social statistics is structural equation modeling (SEM). With SEM, as we will survey in Chapter 15, one can model complex networks of variables, both measurable and unmeasurable. Structural equation modeling is indeed one of the most complex statistical methods in the toolkit of the social scientist. However, it is a perfectly fair and reasonable question to ask whether structural equation modeling has helped advance the cause of social science. Has it increased our knowledge of social phenomena? Advanced as the tool may be statistically, has it helped social science build a bigger and better house for itself?
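      For concreteness, one standard way of writing such a model (the classical LISREL formulation; the notation in Chapter 15 may differ) pairs a measurement model, which ties observed indicators to latent variables, with a structural model relating the latent variables to one another:

```latex
\begin{aligned}
y &= \Lambda_y \eta + \varepsilon, \qquad x = \Lambda_x \xi + \delta
  && \text{(measurement model)} \\
\eta &= B\eta + \Gamma\xi + \zeta
  && \text{(structural model)}
\end{aligned}
```

      Here $y$ and $x$ are observed vectors, while $\eta$ and $\xi$ are the latent (unmeasured) variables; the loading matrices $\Lambda_y$ and $\Lambda_x$ and the coefficient matrices $B$ and $\Gamma$ contain the parameters to be estimated.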

      Such a question is open to debate, one that we will not have here. What needs to be acknowledged from the outset, however, is that statistical complexity has little, if anything, to do with scientific complexity or the guarantee of scientific advance. Indeed, the two may rarely even correlate. A classic scenario is that of the graduate student running an independent‐samples t‐test on operationally well‐defined experimental variables, yet feeling somewhat "embarrassed" that he used such a "simple" statistical technique. In the lab next door, another graduate student is using a complex structural equation model, struggling to achieve model identification by fixing and freeing parameters at will, yet feeling as though she is more "sophisticated" scientifically as a result of her use of a complex statistical methodology. Not the case. True, the SEM user may be more sophisticated statistically (i.e., SEM is harder to understand and implement than t‐tests), but whether her empirical project is advancing our state of knowledge more than the experimental design of the student using a t‐test cannot even begin to be evaluated based on the statistical methodology used. It must instead be based on scientific merit and the overall strength of the scientific claim. Which scientific contribution is more noteworthy? That is the essential question, not the statistical technique used. The statistics used rarely have anything to do with whether good or bad science was performed. Good science is good science, which at times may require statistical analysis as a tool for communicating its findings.
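      Indeed, the "simple" technique in question takes only a few lines to carry out. A minimal sketch in Python, using simulated data (the group names and values here are hypothetical, purely for illustration):

```python
import numpy as np
from scipy import stats

# Hypothetical data: simulated scores for a treatment and a control group
rng = np.random.default_rng(1)
treatment = rng.normal(loc=102.0, scale=15.0, size=30)
control = rng.normal(loc=100.0, scale=15.0, size=30)

# The "simple" technique: an independent-samples t-test
t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

      The point stands regardless of software: the scientific value of the comparison lies in the design that produced the two groups, not in the difficulty of the computation.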

      In fact, much of the most rigorous science often requires the simplest and most elementary of statistical tools. Students of research can become dismayed and temporarily disillusioned when they learn that complex statistical methodology, aesthetic and pleasurable in its own right as it may be (i.e., SEM models can be fun to work with), still does not solve their problems. Research‐wise, their problems are usually those of design, controls, and coming up with good experiments, arguments, and ingenious studies. Their problems are usually not statistical at all, and in this sense, an overemphasis on statistical complexity can actually delay their progress toward conjuring up innovative, ground‐breaking scientific ideas.

      The cold hard fact, then, is that if you have a poor design, weak research ideas, and messy measurement of questionable phenomena, your statistical model will provide you with anticlimactic findings and will be nothing more than an exercise in the old adage "garbage in, garbage out." Quantitative modeling, sophisticated as it has become, has not replaced the need for strict, rigorous experimental controls and good experimental design. Quantitative modeling has not made correlational research somehow more "on par" with the gold standard of experimental studies. Even with the advent of latent variable modeling strategies and methodologies such as confirmatory factor analysis and structural equation modeling, statistics does not purport to "discover," in any literal sense, hidden variables. Modeling is concerned simply with the partitioning of variability and the estimation of parameters. Beyond that, the remainder of the scientist's job is to know his or her craft and to design experiments and studies that enlighten and advance our knowledge of a given field. When applied to sound design and thoughtful investigatory practices, statistical modeling partakes in this enlightenment, but it does nothing to save the scientist from a poorly planned or executed research design. Statistical modeling, complex and enjoyable as it may be in its own right, guarantees nothing.
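      To make "partitioning of variability" concrete, a minimal numerical sketch (hypothetical data, illustrative names) showing that the total variability in a set of scores decomposes exactly into between‐group and within‐group components:

```python
import numpy as np

# Hypothetical scores for two groups (values are illustrative only)
group_a = np.array([4.0, 5.0, 6.0, 5.5])
group_b = np.array([7.0, 8.0, 6.5, 7.5])
scores = np.concatenate([group_a, group_b])
grand_mean = scores.mean()

# Total variability: squared deviations of all scores from the grand mean
ss_total = ((scores - grand_mean) ** 2).sum()

# Between-group variability: group size times the squared deviation of
# each group mean from the grand mean
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2
                 for g in (group_a, group_b))

# Within-group variability: squared deviations of scores from their own
# group mean
ss_within = sum(((g - g.mean()) ** 2).sum() for g in (group_a, group_b))

# The identity SS_total = SS_between + SS_within holds exactly
assert np.isclose(ss_total, ss_between + ss_within)
print(ss_total, ss_between, ss_within)
```

      Whatever scientific meaning attaches to those components comes from the design that generated the groups, not from the arithmetic itself.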

      One might say that the ultimate goal of any science is still to establish causal relations, even if classical "Laplacian" determinism has been somewhat jettisoned by theoretical physicists, which would imply that there may not actually be "true causes" of events (despite our continued attempts to assign them). Our search for them may be entirely misguided. Still, and a bit more down to earth, nothing suggests a stronger understanding of a scientific field than being able to speak of causation about the phenomena it studies. However, more difficult than establishing causation in a given research paradigm is understanding what causation means in the first place. There exist several definitions of causality. Most definitions share at their core the idea that causation is a relation between two events in which the second event is assumed to be a consequence, in some sense, of the first.

      For example, if I slip on a banana peel and fall, we might hypothesize that the banana peel caused my fall. But was it the banana peel that caused my fall, or was it the worn‐out soles of the shoes I was wearing that day? Had I been wearing mountain‐climbing boots instead of worn‐out running shoes, I might not have fallen. Who am I to say the innocent banana peel caused my fall? Causality is hard. Even if it seems that A caused B, there are usually many variables associated with the problem that, if adjusted or tweaked, could threaten the causal claim. Some would say this is simply a trivial philosophical problem of specifying causality, and that it is "obvious" from the situation that the banana peel caused the fall. Nonetheless, it is clear even from such a simple example that causation is in no way an easy conclusion to draw. Perhaps this is also why it is extremely difficult to pinpoint the true causes of virtually any behavior, natural or social. Hindsight is 20/20, but assigning causes with any kind of methodological certainty in violent crimes, for instance, usually turns out to be speculative at best. True, we may accumulate evidence for prediction, but equating that with causation is, under most circumstances, the wish, not the reality, of a social theory.

      In our brief discussion here we will not attempt to define causality. Books, dissertations, and treatises have been written exclusively on the topic. At most, what we can do in the space we have is offer the following advice to the reader: if you are going to speak of causation with regard to your research, be prepared to back up your theory of causation to your audience. It is simply not enough to say A causes B without subjecting yourself to at least some of the philosophical issues that accompany such a statement. Otherwise, it is strongly advised that you avoid

