Evaluation in the Practice of Development
Main Author:
Language: English
Published: World Bank, Washington, DC, 2012
Subjects:
Online Access: http://documents.worldbank.org/curated/en/2008/03/9056351/evaluation-practice-development http://hdl.handle.net/10986/6561
Summary: Knowledge about development effectiveness is constrained by two factors. First, the project staff in governments and international agencies who decide how much to invest in research on specific interventions are often not well informed about the returns to rigorous evaluation and (even when they are) cannot be expected to take full account of the external benefits to others from new knowledge. This leads to under-investment in evaluative research. Second, while standard methods of impact evaluation are useful, they often leave many questions about development effectiveness unanswered. The paper proposes ten steps for making evaluations more relevant to the needs of practitioners. It is argued that more attention needs to be given to identifying policy-relevant questions (including the case for intervention); that a broader approach should be taken to the problems of internal validity; and that the problems of external validity (including scaling up) merit more attention.