Social Work Research Methods. Reginald O. York
about the research question. We have covered this mistake in our examination of the purposes of scientific inquiry (discovery rather than justification). Likewise, it is not logical to start with data and formulate a research question that fits the data. (However, it is legitimate to use an exploration of data as a springboard for focusing a set of questions that guide the investigation of the literature.) Furthermore, as mentioned above, we should not start the process by selecting study methods. We need to know our research question before we can select the optimal means of measuring our variables.
Let’s go over the critical steps in the social research process. First, we decide on the purpose of our study. Do we want to describe the members of a class of students in a university program, so that we will know the distribution of these people by age, gender, race, and so forth? Or do we want to examine whether males and females differ with regard to satisfaction with life? Or do we want to know if after-school tutoring helps at-risk children improve their grades? The first of these examples is about descriptive research, our attempt to describe people. The one about gender and life satisfaction is sometimes referred to as explanatory research because we wish to determine whether there is a relationship between variables, a relationship that would help us explain one variable by reference to the other. The one about tutoring is evaluative in nature because we are examining whether a service program is effective with regard to the objectives it is seeking to achieve.
Two Heads Are Better Than One!
Because objective reality is so difficult to discover in the field of human behavior, we must rely on a method of inquiry that reduces human error in observation. One such method is to ask for more than one observation of a given phenomenon in order to become confident that we have a true picture of it. In research, we assume that reality is more likely to be discovered when many different people perceive things in the same light. We know, of course, that it is possible that one person who is in the minority has the true picture while those in the majority are incorrect. But given that we have so little truly “hard” evidence of reality about human behavior, we assume that our best bet is to go with the consensus of many people rather than the unsupported opinion of one person. Many methods have been developed to test the dependability of a given means of measuring our subjects of study. Thus, we could say that this principle serves as one of the assumptions of scientific inquiry.
Some Things Happen Just by Chance!
The fact that I had eggs for breakfast this morning does not necessarily mean that I prefer eggs over cereal for breakfast in general. It could be that I have eggs half the time and cereal half the time and I just happened to have had eggs this morning. If you observed me at breakfast several times and noted that I had eggs each and every time, you would have more reliable evidence that I prefer eggs for breakfast. The more observations you make, the more confident you would be in your conclusion that I prefer eggs for breakfast.
We are referring to a concept called “probability.” Let’s discuss it in a general way. Logic would suggest that there is a 50% chance of getting heads on a given flip of a coin because there are only two possibilities, heads and tails. But suppose that someone said that one coin in a set of coins was rigged to land on heads more often than on tails because of the distribution of the weight of the coin. You pick out one coin, and you want to know if this is the one that is rigged. Suppose that your first flip was heads and the second was also heads. Are you convinced you have the rigged coin? Probably not, because you have flipped it only 2 times, and we know that two heads in a row can happen just by chance. What if you flipped this coin 10 times and it came out heads every time? Now you have more reason to believe that you have the rigged coin. A similar result after 20 flips would be even better. If you do not have the rigged coin, you would not likely have very many flips in a row that were similar. The more flips you have that are similar, the better are your chances that you have found the rigged coin. Determining how many flips you need to be confident is a matter for statistics. If you knew how to use a statistical test known as the binomial test, you could see that 5 heads in a row from a fair coin would occur only about 3% of the time (1 chance in 32), which is so unusual that you would be safe to bet that you have found the rigged coin.
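The arithmetic behind that claim is simple enough to check. The short sketch below (the function name and the choice of Python are ours, for illustration only) computes the chance that a fair coin produces all heads in a run of n flips; 5 heads in a row has a probability of 1/32, or about .03, which falls below the conventional .05 threshold of statistical significance.

```python
# Chance that a FAIR coin lands heads on every one of n flips.
# If that chance is very small, an all-heads run is strong evidence
# that the coin we picked is the rigged one.

def p_all_heads(n: int) -> float:
    """Probability of n heads in a row from a fair coin: (1/2) ** n."""
    return 0.5 ** n

for n in (2, 5, 10, 20):
    print(f"{n:2d} flips, all heads: {p_all_heads(n):.7f}")
# 2 flips  -> 0.25     (easily happens by chance)
# 5 flips  -> 0.03125  (below .05; safe to bet the coin is rigged)
# 10 flips -> about 0.001, and 20 flips about one in a million
```

Notice that each added flip cuts the chance explanation in half, which is why longer runs of identical results are so much more convincing than short ones.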
Now let us put the same lesson to use with a more practical example. Suppose that you wanted to know whether males and females differ in their satisfaction with instruction in research courses. Are females higher or lower than males in their level of satisfaction? You could ask a given group of students if they are generally satisfied with their research instruction, with the options of YES or NO. You could then compare the proportion of females who answered YES with the proportion of males who answered YES. What if you found that 63% of females were satisfied and that 65% of males were satisfied? Does that mean you can conclude that there is truly a difference between males and females? If so, would you be prepared to bet a large sum of money that a new study of this subject would result in males having a higher level of satisfaction? I doubt that you would, because you would realize that such a small difference between males and females could easily be explained by chance. If you had found that 60% of females were satisfied as compared with only 40% of males, you would be more likely to see this difference as noteworthy. However, such a difference with a sample of only 10 students would likely make you wonder whether you should take these results seriously. Results with a sample of 100 students would be much more impressive.
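To see how sample size changes what chance alone can produce, consider a small simulation (ours, not the author’s; the group sizes, trial count, and 20-point gap are assumptions chosen to mirror the 60%-versus-40% example). It builds a world in which males and females truly do not differ, every student having a 50% chance of answering YES, and counts how often sampling alone still yields a gap of 20 percentage points or more:

```python
import random

# Simulate surveys in which the groups truly do NOT differ: every
# student, regardless of gender, has a 50% chance of answering YES.
# We count how often chance alone produces a 20-point gap anyway.

def chance_of_big_gap(n_per_group: int, trials: int = 10_000,
                      gap: float = 0.20, seed: int = 1) -> float:
    rng = random.Random(seed)  # fixed seed so the result is repeatable
    big = 0
    for _ in range(trials):
        yes_f = sum(rng.random() < 0.5 for _ in range(n_per_group))
        yes_m = sum(rng.random() < 0.5 for _ in range(n_per_group))
        if abs(yes_f - yes_m) / n_per_group >= gap:
            big += 1
    return big / trials

small = chance_of_big_gap(n_per_group=5)    # about 10 students in all
large = chance_of_big_gap(n_per_group=50)   # about 100 students in all
print(f"gap of 20+ points by chance, 10 students:  {small:.2f}")
print(f"gap of 20+ points by chance, 100 students: {large:.2f}")
```

With 5 students per group, a 20-point gap arises by chance roughly three times out of four; with 50 per group, only a few times in a hundred. That is why the same percentage difference is unimpressive in a sample of 10 but noteworthy in a sample of 100.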
We examine the theme of probability in scientific research with the use of statistics. A statistical test applied to your data will tell you the likelihood that these data could have occurred by chance. If you fail to achieve statistical significance with your data, you cannot rule out chance as a likely explanation of them, and thus you cannot take them seriously in your conclusions. Suppose you found that students had a slightly higher score on knowledge of scientific research at the end of a lesson than before the lesson began, but your data failed to be statistically significant. Under these circumstances, you should conclude that you failed to find that your students improved in research knowledge. You should not conclude that they had a slight improvement. Why? Because your data can be explained by chance, and you should not take them seriously. If you had found your data to be statistically significant, then you could conclude that your students had achieved a slight gain in knowledge.
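One common way to attach a probability to such before-and-after data is a one-sided binomial (sign) test, the same kind of test used for the coin. The sketch below is ours; the figures (7 of 10 students improving versus 70 of 100) are invented for illustration, not taken from the text. It shows how the same rate of improvement can be non-significant in a small sample yet clearly significant in a larger one:

```python
from math import comb

# One-sided binomial (sign) test: if the lesson had no effect, each
# student would be as likely to score lower as higher afterward
# (p = 0.5). The p-value is the chance of seeing at least this many
# improvers by luck alone.

def sign_test_p(improved: int, n: int) -> float:
    return sum(comb(n, k) for k in range(improved, n + 1)) / 2 ** n

p_small = sign_test_p(improved=7, n=10)    # 7 of 10 students improved
p_large = sign_test_p(improved=70, n=100)  # same rate, ten times the data

print(f"7 of 10 improved:   p = {p_small:.3f}")   # about 0.17: not significant
print(f"70 of 100 improved: p = {p_large:.6f}")   # far below .05: significant
```

With only 10 students, 7 improvers could arise by chance about 17% of the time, so chance cannot be ruled out. With 100 students, the same proportion of improvers would almost never arise by chance, so the gain, however slight, can be taken seriously.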
Limitations of Common Sense
There is much wisdom in common sense, but there are pitfalls as well. Common sense is not a form of knowledge based on scientific inquiry. It is discussed here to show the connections between ideas we may embrace and the nature of science. Many commonsense phrases from past times have been refuted by science, and so we no longer embrace them.
Pseudoscience as an Alternative to Science
Pseudoscience presents the appearance of science but lacks a scientific basis (Thyer & Pignotti, 2015). An idea asserted on the basis of pseudoscience may be accompanied by tables and charts that appear to support it, but these tables and charts have not been validated by scientific studies. Another characteristic of pseudoscience is reliance only on anecdotal evidence to support the idea or theory. Anecdotal evidence is the use of single examples that fit one’s theory. But anecdotal evidence is quite weak and is not considered legitimate evidence in scientific inquiry. You can find an example to prove just about any point you wish to make. Science is based on the systematic review of many facts, not just a few examples.
Another characteristic of claims based on pseudoscience is a tendency to cherry-pick facts to fit the theory rather than make an objective examination of all the facts relevant to it. One of the red flags of pseudoscience is an extraordinary claim of effectiveness. You have heard the statement “If something seems too good to be true, it probably is not true.” Solutions based on pseudoscience often claim great effectiveness in the absence of scientific evidence of any effectiveness at all.
Advocates of approaches in the category of pseudoscience are usually not inclined to engage in serious scientific work to test the approach, and they will work hard to make excuses when evidence is produced that refutes the theory. The approach of science is to put the burden of proof on the researcher, to prove