Review of the Bamberger and Crawford & Bryce papers on international evaluation

From ced Wiki

Summary and comparison of International CED articles -

  • The Evaluation of International Development Programs: A View from the Front (Michael Bamberger)
  • Project monitoring and evaluation: a method for enhancing the efficiency and effectiveness of aid project implementation (Paul Crawford, Paul Bryce)

Bamberger summary

Bamberger's article focuses on the differences between US and international development project evaluation by donors. A review of current approaches is conducted, with a focus on international practice. For the purposes of the document, development projects are defined as “social and economic programs in developing countries funded by multilateral and bilateral development agencies or by international non-government organizations (NGOs).”

The difference between monitoring and evaluation is examined: in some usages, monitoring occurs during the project and evaluation refers to a summary assessment at its end, while in other usages “evaluation” describes both activities.

Challenges include donor influence over evaluation in foreign countries, an emphasis on monitoring rather than on summary and comparative review, and the difficulty of collecting and exchanging data between projects. Trends include participatory evaluation using qualitative methods (with challenges around consistency), thematic evaluations (for example, women's issues), assessment of policy impact, the development of reusable evaluation databases, and the building of national evaluation capacity.

Solutions include the Living Standards Measurement Study (LSMS), providing a standard set of socio-economic survey instruments which has generated data sets on living standards in over 40 countries. These include questionnaires around topics such as health, education, income, employment, and access to basic services, as well as specific themed modules.

Other sectors, such as agriculture and education, have developed their own data sets. The World Bank is often involved in these standardization efforts.

Participatory methods are intended to give voice to the project beneficiaries (or affected groups) in the identification, design, and management of projects. Rapid rural appraisal (RRA), beneficiary assessment, stakeholder analysis, and a wide range of social assessment methods are mentioned, as well as the Participation Tool Kit. The World Bank-financed Angola Social Assistance Fund (FAS) provides an example.

The “cross fertilization” between US and international projects in these efforts is mentioned as a promising direction.

A challenge mentioned in evaluation is to develop guidelines for the integration of international quantitative and qualitative methods. The Structural Adjustment Participatory Research Initiative (SAPRI) is mentioned as one initiative working to design and implement such methodologies. Issues around including local individuals have led to increased time being required to develop these approaches.

The use of quasi-experimental yet rapidly deployed, methodologically sound approaches is also considered appropriate, including the use of small samples and key informants.

Crawford and Bryce summary

This paper focuses on the inadequacies of logical framework analysis (LFA) for the purposes of monitoring and evaluation. A solution is proposed which provides a “time dimension” to the methodology, creating a “3D-Logframe.”

While funding of project management systems has been increasing alongside project funding, and monitoring and evaluation systems are often required, operational environments inhibit monitoring and evaluation effectiveness compared to “hard” industries. This affects both accountability (transparency and reporting through an organizational structure) and performance (responsive project management decision making).

LFA is typically used to plan a project. A matrix “logframe” document includes a hierarchy of objectives and assumptions based on cause-and-effect logic, called the “vertical logic” (from its position on the matrix), while the “horizontal logic” addresses the means by which achievement of objectives can be verified, using objectively verifiable indicators (OVIs) and their means of verification (MOVs).
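As an illustration only (the project, indicators, and assumptions below are invented, not taken from either paper), a conventional four-level logframe matrix can be sketched as a simple data structure, with the vertical logic as rows and the horizontal logic as columns:

```python
# Illustrative sketch of a conventional logframe matrix (hypothetical example
# project). Rows form the vertical logic (hierarchy of objectives); columns
# form the horizontal logic (indicators, verification, assumptions).

COLUMNS = ("narrative_summary", "ovi", "mov", "assumptions")

logframe = {
    "goal":       ("Improved rural livelihoods", "Household income up 20%", "National survey",    "Stable economy"),
    "purpose":    ("Higher crop yields",         "Yield per hectare",       "Field measurements", "Adequate rainfall"),
    "outputs":    ("Farmers trained",            "500 certificates issued", "Training records",   "Farmers attend"),
    "activities": ("Run training workshops",     "Workshops delivered",     "Activity reports",   "Trainers available"),
}

# The vertical (cause-and-effect) logic reads bottom-up:
# activities -> outputs -> purpose -> goal.
for level in reversed(list(logframe)):
    summary, ovi, mov, assumptions = logframe[level]
    print(f"{level:10s} | {summary:26s} | OVI: {ovi} | MOV: {mov}")
```

The point of the sketch is that each row pairs a level of the objective hierarchy with the OVIs and MOVs used to verify it, which is the structure the paper then argues is inadequate for ongoing monitoring.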

A discussion of monitoring compared to evaluation, similar to the one in the Bamberger paper, appears. Project effectiveness relative to the project goal is examined through the what, who, and why of a project. Strategic stakeholders are deemed to have more responsibility for effectiveness (and learning), while operational stakeholders have more responsibility for efficiency (and accountability). Monitoring therefore relates more to efficiency, and evaluation to effectiveness.

The paper then discusses the operationalisation of the logframe document beyond the design phase. Problems include the missing time (tracking) dimension (resulting in no way to track tasks, for example), inappropriateness in assigning efficiency level OVIs (since they are focused on effectiveness), inadequacies with MOVs (lack of detail), and the overall static nature of the document.

A time dimension is therefore proposed, substituting a timeline for the indicator column. This emphasizes the one-to-many relationships and causality between a single project goal, its several outcomes, the outcomes' outputs, the outputs' activities, and the activities' inputs. A triangle visualization is used: the “Planner's View” is the right-hand axis representing time (interactions such as cashflow, activities, etc.), while the “Project Manager's View” on the front face represents the vertical logic. In addition to extending the logframe into a project management tool, performance, accountability, organisational learning, and the fostering of a common language are considered benefits.
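A minimal sketch of that one-to-many hierarchy, with the time dimension attached at the activity level, might look like the following (the node names and dates are invented for illustration; the papers do not prescribe an implementation):

```python
# Illustrative sketch (hypothetical example, not from Crawford & Bryce) of the
# 3D-Logframe's one-to-many hierarchy: one goal fans out to outcomes, outputs,
# activities, and inputs, with start/end dates supplying the time dimension.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    level: str                      # goal / outcome / output / activity / input
    start: str = ""                 # ISO dates give activities a timeline
    end: str = ""
    children: list = field(default_factory=list)

goal = Node("Safe water access", "goal", children=[
    Node("Wells operational", "outcome", children=[
        Node("Wells drilled", "output", children=[
            Node("Drill site A", "activity", "2024-01-01", "2024-02-15",
                 children=[Node("Drilling rig", "input")]),
            Node("Drill site B", "activity", "2024-02-01", "2024-03-15",
                 children=[Node("Local crew", "input")]),
        ]),
    ]),
])

def count_at(node, level):
    """Count nodes at a given hierarchy level (the one-to-many fan-out)."""
    total = int(node.level == level)
    return total + sum(count_at(c, level) for c in node.children)

print(count_at(goal, "activity"))  # prints 2: two activities under one goal
```

Walking the tree top-down gives the Project Manager's View (vertical logic), while reading the activities' dates in order gives the Planner's View (the timeline).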

Potential barriers include the overly conceptual nature of the 3D-Logframe; most project planners use a table in a word processor document. Computer software is proposed as a solution; while this could create an information divide, it is suggested that adoption could be considered capacity building. Another objection is philosophical: the design school focuses on an objective, rational approach, whereas soft systems methodology (SSM) focuses on the social and “organic” nature of projects. A proposed resolution is to use SSM for operations.

Comparison of articles and comments

Both articles point out that there are subtle differences between monitoring and evaluation. Both suggest that donor/sponsor agencies influence the methodology of evaluation, and thus introduce bias into its conclusions. These stakeholders, while initiating programs, may also steer them in directions that are not beneficial to recipients, particularly where there is a divide in methodologies and approach.

The Crawford & Bryce article appears much more theoretical than grounded, geared primarily to upper managerial levels without much consideration of input from the ground up, from beneficiaries and those in the field implementing interventions. Bamberger's article, by contrast, encompasses the gathering of such data prior to, during, and after project completion. Given the conceptual and jargon-heavy 3D-Logframe, it is unclear how the proposed design-school and SSM approaches would be reconciled, and further large disconnects between those “in the trenches” and those designing projects seem likely. Perhaps this choice is made for the sake of efficiency (letting the funders describe the project and judge successes or failures from a remote geographical and methodological position), but it would not always effectively serve the actual needs of international development as recognized by those affected; all participants should be working from the same notes. Assuming the availability of Internet access, a common, lightweight Web-based system would seem more practical, inclusive, and effective than a desktop application (though one could imagine nifty 3D visualizations using the 3D-Logframe's pyramid).

The proposed 3D-Logframe does not address adaptation in unfolding projects; methodologies should be in place to prevent waste of effort and money and to adjust course, emphasizing better information from the recipient community (and two-way communication in general). It does, however, permit the correlation of variances.

This seems to be a major difference in approach between the two articles: Bamberger makes a point of discussing the usefulness (with examples) of beneficiaries' input, while the Crawford & Bryce proposal is a top-down methodology of information collection and distribution that does not define exactly where this information comes from. As a new and theoretical approach, it would need to be reconciled with how funding bodies, high-level management, operations, and affected communities work and should communicate.

In addition to Bamberger's discussion of developing available data sets and instruments, ways of organizing and describing data are required. Initiatives such as SERONTO are designed to categorize and codify data in reusable semantic formats.