improvement, alterations, and/or modification in the evaluand, while the aim of summative evaluation is to determine its impacts, outcomes, or results. (Lincoln and Guba 1986: 550)

      A less technical, but similar definition is provided by Robson:

      Formative evaluation is intended to help in the development of the programme, innovation or whatever is the focus of the evaluation. Summative evaluation concentrates on assessing the effects and effectiveness of the programme. (Robson 2011: 181)

      However, the distinction between summative and formative evaluations is not absolute (Robson 2011). For example, determining whether or not a policy has had an impact often involves asking questions about how it has done so, for whom, why, and under what conditions (Government Social Research Unit 2007a). These two broad approaches tend to carry with them some assumptions about the nature of the evaluation undertaken. For example, a formative evaluation needs to be carried out and reported in time for modifications to be made to the policy, programme or project (Robson 2011), implying that the audience for the evaluation is more likely to be programme managers than policymakers or the public. This in turn implies that the evaluator might have a more interactive role and that data collection will be continuous, possibly with a greater emphasis on qualitative data collection. A summative evaluation implies that the audience are more likely to be policymakers, programme funders or the public, and that the role of the evaluator could be more independent and removed from evaluation subjects, with a focus on outcome measures and the production of formal evaluation reports. Potential distinctions are illustrated in Table 1.1.

      Process (implementation) and impact (outcome) evaluation

      Another common distinction in the evaluation world is that between process (sometimes referred to as implementation) and impact (sometimes referred to as outcome) evaluation. Traditionally, evaluation was restricted to questions concerning impact or outcome, and typical questions might include (based on HM Treasury 2013):

       What were the policy, programme or project outcomes?

       Did the policy, programme or project achieve its stated objectives?

       Were there any observed changes, and if so, how big were these changes and how much could be said to have been caused by the policy, programme or project as opposed to other factors? (A simple illustration of this attribution question follows this list.)

       How did any changes vary across different individuals, stakeholders, sections of society and so on, and how did these compare with what was anticipated?

       Did any outcomes occur which were not originally intended, and if so, what were they and how significant were these?
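
      The attribution question above is usually answered by comparing observed outcomes against an estimate of what would have happened anyway. As a minimal sketch, using entirely invented outcome data rather than figures from any of the sources cited here, the following compares the mean outcome of a group receiving a programme with that of a comparison group standing in for the counterfactual:

```python
# A minimal sketch of attributing change to a programme: compare
# participants' outcomes with those of a comparison group standing in
# for what would have happened without the programme.
# All figures are invented for illustration.
from statistics import mean

# Hypothetical outcome scores (e.g. weeks in employment over a year)
programme_group = [34, 40, 29, 44, 38, 41, 36]
comparison_group = [30, 31, 27, 35, 28, 33, 29]

programme_mean = mean(programme_group)    # outcome with the programme
counterfactual = mean(comparison_group)   # estimate of 'other factors' alone
estimated_impact = programme_mean - counterfactual

print(f"Programme group mean:  {programme_mean:.1f}")
print(f"Comparison group mean: {counterfactual:.1f}")
print(f"Estimated impact:      {estimated_impact:.1f}")
```

      The credibility of such an estimate rests, of course, on how well the comparison group approximates what would have happened to participants in the absence of the programme, a question taken up in later chapters.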

      Process evaluation answers the question ‘How was the policy, programme or project delivered?’ (HM Treasury 2013) or the ‘What is going on?’ question (Robson 2011). Process evaluation may provide a useful additional element to an outcome evaluation, helping to explain why an intervention did or didn’t work. However, a process evaluation may be a ‘standalone’ exercise designed to provide a detailed description of the implementation process or a counterpoint to an ‘official’ view of what should be happening in a policy, programme or project (Robson 2011).

      Impact evaluation therefore looks similar to summative evaluation, and process evaluation looks similar to formative evaluation, but there are distinctions. For example, a summative evaluation occurs at the end of a programme, whereas an impact evaluation need not necessarily do so.

      A note on terminology: impact versus outcome and process versus implementation

      In this book we generally use the term ‘impact evaluation’ rather than ‘outcome evaluation’. The two terms overlap, but have slightly different meanings. An outcome evaluation assesses whether a programme delivered the outcomes it specified, whereas an impact evaluation is broader and also considers unintended or wider outcomes. We also generally use the term ‘process evaluation’ rather than ‘implementation evaluation’. Again, the two terms overlap, but whereas an implementation evaluation assesses whether a programme was delivered as intended, a process evaluation also considers unintended or wider delivery issues.

      Economic evaluation

      A summative or outcome evaluation might demonstrate the impact of a policy, programme or project but will not by itself show whether those outcomes justified the investment (HM Treasury 2013). Evaluators may ask (based on Dhiri and Brand 1999):

       What was the true cost of an intervention?

       Did the outcome(s) achieved justify the investment of resources?

       Was this the most efficient way of realising the desired outcome(s), or could the same outcome(s) have been achieved at a lower cost through an alternative course of action?

       How should additional resources be spent?

      In general, attempts to address these issues fall into one of three forms:

       Cost Analysis is a partial form of economic evaluation that deals only with the costs of an intervention (Drummond et al. 2005).

       Cost Effectiveness Analysis values the costs of implementation and relates these to the total quantity of outcome generated to produce a ‘cost per unit of outcome’ estimate. The consequences of the policy, programme or project are not valued in monetary terms, and the results are expressed as a cost-effectiveness ratio such as ‘the cost per additional individual placed in employment’ (HM Treasury 2013).

       Cost Benefit Analysis (CBA) goes further than a cost effectiveness analysis: the consequences of the policy, programme or project are valued in monetary terms (Drummond et al. 2005). Results are often expressed as a cost-benefit ratio such as ‘for every dollar spent placing an individual in employment there is a return of two dollars’. Potentially this makes it the broadest form of economic evaluation; however, as we will discuss later, difficulties in capturing and measuring the wider consequences of an intervention mean that, in reality, its scope can be limited. (A worked example of both ratios follows this list.)
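
      To make the two ratios concrete, the short sketch below computes a cost-effectiveness ratio and a cost-benefit ratio for a hypothetical employment programme. All figures (programme cost, number of additional placements, monetised benefit per placement) are invented assumptions for illustration, not data from the sources cited above.

```python
# Illustration of the two ratios described above, using invented
# figures for a hypothetical employment programme.

def cost_effectiveness_ratio(total_cost: float, units_of_outcome: float) -> float:
    """Cost per unit of outcome, e.g. cost per additional person employed."""
    return total_cost / units_of_outcome

def cost_benefit_ratio(total_benefits: float, total_cost: float) -> float:
    """Monetised benefits returned per unit of currency spent."""
    return total_benefits / total_cost

# Hypothetical figures (assumptions, not data from the text)
programme_cost = 500_000.0   # total cost of delivering the programme
additional_jobs = 250        # additional individuals placed in employment
benefit_per_job = 4_000.0    # monetised benefit attributed to each placement

cer = cost_effectiveness_ratio(programme_cost, additional_jobs)
cbr = cost_benefit_ratio(additional_jobs * benefit_per_job, programme_cost)

print(f"Cost per additional individual employed: {cer:,.0f}")  # 2,000
print(f"Return per dollar spent: {cbr:.1f}")                   # 2.0
```

      On these invented figures the programme costs 2,000 per additional placement, and each dollar spent returns two dollars in monetised benefit, matching the form of the example ratio quoted above.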

      We look at economic evaluation in more detail in Chapter 6.

      Ex ante and ex post evaluations

      Evaluation can be prospective or retrospective. A prospective or ex ante evaluation takes place before a programme or project has been implemented, whereas a retrospective or ex post evaluation takes place once a programme or project is in place, and seeks to establish whether it has had an impact (Rossi et al. 2004).

      Ex ante evaluations are most commonly undertaken by governments or similar bodies as part of the policy and programme development cycle. They normally have a strong economic component. The European Commission (2001) defines ex ante evaluation thus:

      Ex ante evaluation is a process that supports the preparation of proposals for new or renewed Community actions. Its purpose is to gather information and carry out analyses that help to define objectives, to ensure that these objectives can be met, that the instruments used are cost-effective and that reliable later evaluation will be possible. (European Commission 2001: 3)

      Ex post evaluation is far more common than ex ante evaluation and the bulk of this book concentrates on approaches more commonly associated with ex post evaluation.

      The distinction between ex ante and ex post evaluation alerts us to the idea that different types of evaluation will be relevant at different points in the policymaking or programme development process. The Public Service Transformation Network (2014) identified a series of stages in a project or programme life-cycle: development and design; implementation; delivery; and scaling-up. They suggest that various types of evaluation are likely

