SCPA 529-1 International Development & CED Part I

From ced Wiki
Revision as of 11:27, 7 June 2009 by Katherine (talk | contribs) (A synthesis of ''Research on the Utilization of Evaluations: A Review and Synthesis by Laura C. Leviton and Edward F.X. Hughes'')

Utilization of Evaluations - A Synthesis

Introduction

There is an ongoing concern about the utilization of evaluations, particularly their usefulness in informing policy or improving programs. The scientific document that I reviewed for this assignment, Research on the Utilization of Evaluations: A Review and Synthesis by Laura C. Leviton and Edward F.X. Hughes, examines three aspects of the utilization of evaluations: a critical discussion of definitions of utilization, a discussion of methodology, and a review of variables that have been found to affect utilization.

The concern about the utilization of evaluations stems from the fact that, although there is an enormous number of studies dealing with the utilization of evaluations, few researchers can adequately differentiate the utilization of evaluations from other forms of research use. It is important to note that utilization is confined here to the use of evaluation results for programs and policy only. Leviton and Hughes assert that if findings from several different sources, such as social science (where the dominant body of information exists), experienced evaluators, political science, and organizational behavior, corroborate each other, and a range of methodologies is used, then it is possible to assess the convergent validity of concepts of utilization and of the variables that affect it.

Definition and Criteria

It is important, first, to define the utilization of evaluations as limited to programs and policy. While there is one main criterion for all types of utilization, which must be inferred from observables, the authors assert the use of a second criterion. First, for evaluations to be considered used, there must be serious discussion of the results in debates about a particular program or policy; merely reading evaluation reports is not considered using them. Second, the authors suggest that there must be evidence that, in the absence of the research information, those engaged in policy or program activities would have thought or acted differently. This second criterion makes sense when one considers that people could give serious thought to evaluation information and then simply choose not to use it. These criteria are relevant for determining the impact of utilization on programs and policy and the utility of evaluations.

Types of Utilization

There are three broad categories of evaluation use, each distinguished by its purpose: instrumental, conceptual, or persuasive.

1. Instrumental use is where an evaluation directly affects decision-making and influences changes in the program under review.

2. Conceptual use is more indirect and relates to generating knowledge and understanding of a given area that may influence thinking without any immediate new decisions being made about the program. Over time conceptual impacts can lead to instrumental impacts and ultimately to program changes.

3. The third type is persuasive and involves the justification of decisions already made about a program. For example, an evaluation is commissioned with no intention of utilizing the evaluation findings, but rather as a strategy to defer a decision. Alternatively, the evaluation follows after decision-making and provides a mechanism for retrospectively justifying decisions made on other grounds.

There are difficulties with the current definitions of these categories, namely that not all of them meet the “bottom-line” criteria. Moreover, it can be difficult for respondents to trace a specific decision back to particular sources of information. Determining conceptual and instrumental use of evaluations is difficult because problems (in programs or policy) are defined gradually over time, and decisions are ultimately reached on the basis of information from many sources.

Leviton and Hughes propose a reconceptualization of utilization categories in which bureaucratic decision-making and policy-revision cycles determine the type of use to which evaluations can be put. The difficulty here, though, is that policy-making does not in practice follow clearly defined cycles.

Methodological Issues

There are problems of method in the study of utilization, resulting from the types of research strategies relevant to the subject matter, from the low priority given to documenting utilization, or from misconceptions about utilization itself. Most research on utilization has relied either on the case study method or on policy makers’ statements in interviews and questionnaires. Case studies, however, suffer from important problems in the study of utilization: it is difficult to document that utilization occurred, retrospective research may be biased, and there remains the question of the unit of analysis, namely, ‘what is an instance of utilization?’

There are a range of factors that affect utilization. These factors can be broadly categorized into two groups:

1. characteristics of the evaluation, that is, the way the evaluation is conducted; and

2. characteristics of the organizational setting in which findings are to be utilized, that is, factors within the organization in which the evaluation is conducted.

With respect to the characteristics of the evaluation, Leviton and Hughes offer a tentative review of five major clusters of variables that are consistently related to utilization: relevance, communication between evaluators and users, information processing by users, credibility of the evaluation, and user involvement and advocacy.

In their research, Leviton and Hughes fully deconstruct each variable and then summarize each one in a series of tables, indicating whether the variable enhances utilization (+), detracts from it (-), or interacts with other variables, thereby either enhancing, detracting from, or having no effect on utilization. The authors measure characteristics of evaluations against the following criteria: whether evaluations address (1) clients’ needs, (2) policy makers’ needs, and (3) program managers’ needs, and (4) whether they are timely.


To briefly summarize these variables:

1. Relevance – evaluations that were relevant to the needs of a particular audience were used more frequently.

2. Communication – good communication between those who produce evaluations and potential users is important, and tends to be obstructed within bureaucracies.

3. Information processing – the way an evaluation is presented to users affects their understanding and thereby the extent of use; readable reports are used more.

4. Credibility – the trustworthiness of the producer of an evaluation is likely important, especially if there is suspicion that the researchers have been co-opted or have suppressed information.

5. User involvement and advocacy – advocates of a program can become advocates of evaluations that support their position, whereas evaluations that run counter to advocacy will be attacked.

Leviton and Hughes clarify existing concepts of the utilization of evaluations while suggesting improvements in the methods of detecting use. The authors assert that by studying utilization, evaluation methods can be improved, because utilization is directly related to the plan for the evaluation.

Conclusion

While this paper was written in 1981, it is interesting to note that subsequent research identifies an additional category of evaluation utilization: process use. Process use concerns how individuals and organizations are affected by participating in an evaluation. Being involved in an evaluation may lead to changes in the thoughts and behaviors of individuals, which in turn results in cultural and organizational change. An example of process use is when those involved in the evaluation later say, “The impact on our program came not so much from the findings as from going through the thinking process that the evaluation required.” (Bayley, 2008)

It is also worth noting that the third category of use, which the authors call persuasive, is today also referred to as political or symbolic use.

Furthermore, in today’s context, additional characteristics affecting whether evaluation findings are used in conceptual or instrumental ways include personal characteristics (the attitude of individuals towards evaluations, and their influence and experience within organizations) and the financial climate (the economic impact of any changes to the program stemming from the evaluation; changes are more likely to be accepted if they require limited financial resources).

Lastly, while increasing the use of evaluations remains important, most empirical studies on the utilization of evaluations focus on instrumental use rather than on other types of use.

“Over and over again, the most important factor in assuring the use of evaluation findings was not the quality of the evaluation but the existence of a decision maker who wants and needs an evaluation and has commitment himself to implementing its findings.” (Chelimsky, 1977)

References

Bayley, J.S. (2008). Maximizing the Use of Evaluation Findings. Based on T. Penney's Draft Research Proposal for PhD Candidature.

Chelimsky, E. (1977). A Symposium on the Use of Evaluation by Federal Agencies.

Leviton, L. C., & Hughes, E. F. (August 1981). Research on the Utilization of Evaluations: A Review and Synthesis. Evaluation Review.