On 1 July 2014, we – Michaela Raab and
Wolfgang Stuppert – presented the findings of our review at the DFID office in
Whitehall (London). Some 30 participants from DFID and external organisations –
representing chiefly NGOs and consulting firms working on violence against women and girls – attended the 2-hour event, including DFID staff outside Whitehall (who joined us
via videoconference). After our presentation (slides available via this link), Sam
Coope from DFID in Zimbabwe and Asmita Naik, an independent consultant,
reported on their experience of VAWG-related evaluation.
The discussion that followed drew on the participants’ rich and diverse programming and research experience. We would like to flag a few points that we found particularly interesting. Those who were at the meeting will notice that we have added a couple of extra thoughts.
Do we want evaluation without high-quality data? Put that way, no. The fact that our review did not find “compelling evidence” for most paths to evaluation effectiveness does not mean that good data is superfluous. If your evaluation is going to be influential, you had better make sure it is based on accurate data.
However, there are evaluations that do not
need to collect data according to social science standards. For instance, one
evaluation we examined traced the history of a long-term programme on the basis
of interviews and group discussions with key staff only. The commissioner found
it effective because it made a previously implicit theory of change more
explicit, and helped the organisation to further develop its strategies.
Arguably, this type of participatory sense-making exercise does not require any
particularly rigorous data collection.
Choice of qualitative or quantitative
approach: We did not discover any pattern that
would link a certain evaluation approach to a certain type of intervention. It
is commonly assumed that qualitative approaches work particularly well with formative
evaluation, while in impact evaluation, quantitative research or a combination of qualitative and quantitative approaches can yield robust
evidence.
Every so often we run into people who
believe that the only rigorous evidence you can get is quantitative. That is
wrong. Poor qualitative research yields poor data, and so does poor
quantitative research. Good research of any kind tries to prevent bias through appropriate sampling and question design; it gathers data from different
perspectives, and is transparent about the tools it uses and their limitations.
Many impact evaluations happen too early to make sense, for instance if the theory of
change of an intervention is still emerging, or if it is applied inconsistently
or incompletely. There are programmes that start without a clear idea of the
outcomes they want to achieve and how exactly they intend to get there. Where a
programme is still finding its form, it makes no sense to spend money on rigorous impact evaluation – it won’t yield evidence that can be used elsewhere. It is better to opt for a different form of research and reflection, possibly
something that involves substantive participation and sensitivity to
VAWG-related issues, to improve the programme.
There seems to be a divide between
evidence generated by academic research and evidence generated through
evaluations. Lori Heise (London School of Hygiene and Tropical Medicine) noted
that she could not see any overlap between the research-based publications on
VAWG her team worked on, and the evaluation reports our review was based on.
She advocated for tearing down the “Chinese Wall” between research and
programming.
One could argue that the primary purpose
of academic research is to generate knowledge (generalisable answers), while
the primary purpose of evaluation is to improve programmes and to assess their
effectiveness (specific answers). VAWG programming is mainly about
improving women’s and girls’ lives – not necessarily about increasing
knowledge.
Where a programme strives to produce scientific
evidence, researchers should be involved in the programme starting from its
design phase, to ensure a good match between programme implementation and the
conditions for research. Our review report describes an excellent example of
such work, Julia Kim’s evaluation of the Refentse Model of Post-Rape Care (2009;
summarised on pages 30-33 of our review).