      A power analysis is often required as a condition of funding, approval and publication.

      There are several different types of power analysis, some being more robust than others. In a priori power analyses (Cohen, 1988), the sample size N is computed as a function of the required power level (1 − β), the pre-specified significance level α, and the population effect size to be detected with probability 1 − β. Cohen’s definitions of small, medium and large effects can be helpful when specifying the effect size.

      A variety of software is available to expedite power analyses, including G*Power 3 (Faul et al., 2007) and free online tools such as OpenEpi (Dean et al., 2014).
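
      As an illustration of how software can expedite such calculations, the following Python sketch (using the statsmodels library rather than the packages named above, and with conventional, assumed input values) computes the sample size needed per group for a two-group comparison:

```python
# A minimal sketch of an a priori power analysis for a two-group comparison.
# The effect size (Cohen's d = 0.5, "medium"), alpha = 0.05 and power = 0.80
# are conventional, assumed values rather than figures taken from the text.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed population effect size (Cohen's d)
    alpha=0.05,               # pre-specified significance level
    power=0.80,               # required power (1 - beta)
    ratio=1.0,                # equal group sizes
    alternative='two-sided',
)
print(f"Required sample size per group: {n_per_group:.1f}")  # roughly 64
```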

      Qualitative Research Methods

      Qualitative research methods aim to understand the meanings, purposes and intentions of behaviour, not its amount or quantity. A wide variety of methods is available; these are described in this A–Z under the following headings: diaries and blogs; discourse analysis; focus groups; grounded theory; historical analysis; interpretative phenomenological analysis; interviews, especially semi-structured; and narrative approaches. Figure 7.3 shows the rapid growth of qualitative research in health psychology over the last few decades, a trend that is expected to continue. Software packages such as NVivo, MAXQDA and QDA Miner Lite are available to support qualitative and mixed methods analyses.

      Questionnaires

      Questionnaires in health psychology consist of a standard set of questions, with accompanying instructions, concerning attitudes, beliefs, perceptions or values related to health, illness or health care. Ideally, a questionnaire will have been demonstrated to be a reliable and valid measure of the construct(s) it purports to measure.

      Questionnaires vary in objectives, content (especially generic versus specific content), question format, the number of items, and sensitivity or responsiveness to change. Questionnaires may be employed in cross-sectional and longitudinal studies. When looking for changes over time, the responsiveness of a questionnaire to clinical and subjective changes is a crucial feature. A questionnaire’s content, sensitivity and length, together with its reliability and validity, influence its selection. Guides are available to help users choose a measure covering the appropriate generic content or domain of interest (e.g., Bowling, 2001, 2004). These guides are useful because they include details on the content, scoring, validity and reliability of dozens of questionnaires measuring all of the major aspects of psychological well-being and quality of life, including disease-specific and domain-specific questionnaires as well as more generic measures.

      The investigator must ask: What is it that I want to know? The answer will dictate the selection of the most relevant and useful questionnaire. The most important aspect of questionnaire selection is therefore to match the objective of the study with the objective of the questionnaire. For example, are you interested in a disease-specific or broad-ranging research question? When this question is settled, you need to decide whether there is anything else that your research objective will require you to know. Usually the researcher needs to develop a specific block of questions that will seek vital information concerning the respondents’ socio-demographic characteristics. This block of questions can be placed at the beginning or the end of the main questionnaire.

      Questionnaire content may vary from the highly generic (e.g., How has your health been over the last few weeks? Excellent, Good, Fair, Poor, Very Bad) to the highly specific (e.g., Have you had any arguments with people at work in the last two weeks?). Questionnaires vary greatly in the number of items that are used to assess the variable(s) of interest. Single-item measures use a single question, rating or item to measure the concept or variable of interest. For example, consider the now-popular single verbal item used to evaluate health status: During the past four weeks, how would you rate your health in general? Excellent, Very good, Good, Fair, Poor. Single items have the obvious advantages of being simple, direct and brief.
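
      As a brief illustration of one common way in which the reliability of a multi-item scale can be checked (a procedure not described in this text), the following Python sketch computes Cronbach’s alpha, an index of internal consistency, from a small set of purely illustrative item responses:

```python
# A minimal sketch: Cronbach's alpha as an index of internal consistency for a
# multi-item questionnaire. The response matrix below is purely illustrative.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows are respondents, columns are questionnaire items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical responses from 5 respondents to a 4-item Likert-type scale.
responses = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 4],
    [1, 2, 1, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```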

      Questionnaires remain one of the most useful and widely applicable research methods in health psychology. A few questionnaire scales have played a dominant role in health psychology research over the last few decades. Figure 7.4 shows the number of items in the ISI Web of Knowledge database for three of the most popular scales. Over the 20-year period 1990–2009, usage of scales designed to measure health status has been dominated by three front-runners: the McGill Pain Questionnaire (Melzack, 1975), the Hospital Anxiety and Depression Scale (HADS; Zigmond and Snaith, 1983), and the SF-36 Health Survey (Brazier et al., 2002). The SF-36 is by far the most utilized scale in clinical research, accounting for around 50% of all clinical studies (Figure 7.3).

      Figure 7.3 Trends in numbers of health psychology studies using different research measures and methods, 1990–2009

      Source: Marks (2013)

      Randomized Controlled Trials

      Randomized controlled trials (RCTs) involve the systematic comparison of interventions using a fully controlled application of one or more ‘treatments’ with random allocation of participants to the different treatment groups. Many of the relevant statistical tests assume that participants have been randomly assigned to conditions. In real-world settings of clinical and health research, the so-called ‘gold standard’ of the RCT cannot always be achieved in practice, and may not even be desirable for ethical reasons.
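
      As a simple sketch of the random allocation that defines an RCT (not a description of any particular trial’s procedure), the following Python code assigns hypothetical participant IDs at random to an intervention arm and a control arm:

```python
# A minimal sketch of simple (unstratified) random allocation to two trial arms.
# Participant IDs and the fixed random seed are hypothetical/illustrative.
import random

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical participants
random.seed(42)                                     # fixed seed for reproducibility
random.shuffle(participants)

half = len(participants) // 2
allocation = {
    "intervention": sorted(participants[:half]),
    "control": sorted(participants[half:]),
}
for arm, members in allocation.items():
    print(arm, members)
```

      In practice, trials often use more elaborate schemes (e.g., blocked or stratified randomization) to keep the arms balanced, but the underlying principle is the same.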

      We are frequently forced to study existing groups that are being treated differently, rather than having the luxury of allocating people to conditions. Thus, we may in effect be comparing the health policies and services of a number of different hospitals and clinics. Such ‘quasi-experimental designs’ are used to compare treatments in as controlled a manner as possible when, for practical reasons, it is impossible to manipulate the independent variable (here, the policies) or to allocate participants at random.

      The advantage of an RCT is that differences in outcome can be attributed with more confidence to the manipulations of the researchers, because individual differences are likely to be spread randomly across the different treatments. As soon as that basis for allocating participants is lost, questions arise over the ability to identify the causes of changes or differences between the groups; in other words, the internal validity of the design is in question.

      Randomized controlled trials are complex operations to manage and describe, which makes them difficult to replicate. To help solve this problem, the CONSORT guidelines for RCTs published by Moher et al. (2001) and the TREND statement for non-randomized studies (Des Jarlais et al., 2004) were designed to bridge the gap between published intervention descriptions and attempted replications. These guidelines have driven efforts to improve the reporting of behaviour change intervention studies. Davidson et al. (2003) expanded the CONSORT guidelines by proposing that authors should report: (1) the content or elements of the intervention; (2) the characteristics of those delivering the intervention; (3) the characteristics of the recipients; (4) the setting; (5) the mode of delivery; (6) the intensity; (7) the duration; and (8) adherence to delivery protocols/manuals.

      Another issue with RCTs has been bias created by industry sponsorship. Critics argue that research carried out or sponsored by the pharmaceutical industry should be treated with a high degree of suspicion because the investigators may have a hidden bias that compromises their independence. Lexchin et al. (2003) carried out a systematic review of the effect of pharmaceutical industry sponsorship on research outcome and quality. They found that industry-sponsored studies were less likely to be published in peer-reviewed journals. Studies sponsored by pharmaceutical companies were also more likely to have outcomes favouring the sponsor than were studies with other sponsors (odds ratio 4.05; 95% confidence interval 2.98–5.51). Overall, they found a systematic bias favouring products made by the company funding the research.
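
      An odds ratio of this kind is derived from a 2×2 table of study outcomes by sponsor. As an illustration of the arithmetic only (the counts below are hypothetical, not Lexchin et al.’s data), the following Python sketch computes an odds ratio and its 95% confidence interval:

```python
# A minimal sketch of an odds ratio with a 95% confidence interval from a 2x2 table.
# The counts below are hypothetical, chosen only to illustrate the calculation.
import math

#                       outcome favours sponsor   outcome does not favour sponsor
# industry-sponsored             a = 80                      b = 20
# other sponsors                 c = 50                      d = 50
a, b, c, d = 80, 20, 50, 50

odds_ratio = (a * d) / (b * c)                # cross-product ratio
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # standard error of log(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```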

      There have been significant abuses of RCTs in clinical and drug trials. Many trials have not been registered so that there is no record of them having been carried out. Trials showing non-significant effects have been unreported,

