Tuesday, 7 October 2014

More reading on QCA and decision tree analysis

For those following our conversation with Rick Davies about the respective virtues of Qualitative Comparative Analysis (QCA) and decision tree analysis, there is a new instalment on Rick's blog, Rick on the Road. Renewed thanks to Rick and to everyone who has shown interest in Qualitative Comparative Analysis!

Monday, 6 October 2014

Qualifying Rick Davies's findings from 'triangulating QCA'

We are delighted to see that there is interest in the resources we have posted on this blog (more than 5,000 page views as of today). Most recently, Rick Davies has used our dataset for a presentation at the 11th biennial EES conference (see earlier posts below) which compares decision tree analysis with Qualitative Comparative Analysis (QCA). Rick concludes that (1) a decision tree analysis of our data would have yielded fewer paths to evaluation effectiveness, and that (2) those paths would have differentiated more precisely between effective and ineffective evaluations. Rick's presentation is available on YouTube. We have found it stimulating to examine the merits of decision tree analysis (and other methods) as compared to QCA.
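For readers who would like to try this kind of analysis themselves, here is a minimal sketch of a decision tree analysis in Python, using pandas and scikit-learn. The condition names and scores below are invented placeholders, not the actual variables in our dataset.

```python
# Minimal sketch of a decision tree analysis on a QCA-style dataset.
# Column names and values are invented for illustration only.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per evaluation; conditions and outcome coded 0/1,
# as in a crisp-set QCA dataset.
data = pd.DataFrame({
    "conducive_circumstances": [1, 0, 1, 1, 0, 1, 0, 1],
    "powerful_mandate":        [1, 1, 0, 1, 0, 0, 1, 1],
    "convincing_methodology":  [1, 0, 1, 1, 1, 0, 0, 1],
    "effective_communication": [0, 1, 1, 1, 0, 0, 1, 1],
    "effective_evaluation":    [1, 0, 1, 1, 0, 0, 0, 1],  # outcome
})

X = data.drop(columns="effective_evaluation")
y = data["effective_evaluation"]

# A shallow tree keeps the number of 'paths' small and readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))
```

Each root-to-leaf path in the printed tree is the decision-tree analogue of a QCA path: a combination of conditions that sorts cases into effective and ineffective evaluations.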

However, we believe that Rick's conclusions are flawed, for the following reasons:

The poster - again

Apparently some organisations' servers make it difficult to download documents from Dropbox, which is where many of the links on this site will take you. Please send in a comment if you experience any such problems and we'll send the file by e-mail. Meanwhile, here is our poster in its .jpg incarnation - a bit difficult to read on this page, I am afraid.


The link to our award-winning poster

As promised in the previous post, this is the LINK to our poster presenting the findings of our Qualitative Comparative Analysis (QCA). For those who are interested in innovative ways of visualising QCA findings, the diagram in the middle of the poster (reproduced below) may be particularly interesting. A 2-pager presenting the findings can be downloaded here; links to the full review report are available in the 'welcome' section on the right side of this blog.

Paths to effective evaluation in the field of VAWG - Diagram from the Raab/Stuppert poster


Friday, 3 October 2014

Our poster at the EES Conference

The poster presenting the results of our review has won the Best Poster Award at the 11th biennial conference of the European Evaluation Society (EES) in Dublin. If you happen to be around, come and have a look at it - find it in poster area 06 in Liffey A on the 1st floor (where the coffee and lunch breaks happen). 

We are delighted about this enthusiastic acknowledgement of our efforts to present the results of a QCA truth table in an accessible form.

Wednesday, 1 October 2014

EES Conference - pick up a leaflet

If you are at the 11th EES Conference these days, please have a look at our poster (poster area Liffey A, 1st floor) summarising our review findings, and pick up one of the leaflets we have placed near the poster. In case no leaflets are left, you can download a copy using the link in the previous blog post.

And you are warmly invited to join us in Zoe Stephenson's panel session on Thursday 2 October at 9h45 in Wicklow (2nd floor), meeting room 2b.

Monday, 22 September 2014

Presentations at the EES Conference

For your diaries in early October:
The findings of our review will be presented at the 11th Biennial Conference of the European Evaluation Society (EES) on two occasions:

Tuesday, 29 July 2014

Presenting and discussing our findings

On 1 July 2014, we – Michaela Raab and Wolfgang Stuppert – presented the findings of our review at the DFID office in Whitehall (London). Some 30 participants from DFID and external organisations – chiefly NGOs and consulting firms working on violence against women and girls – attended the 2-hour event, including DFID staff outside Whitehall who joined us via videoconference. After our presentation (slides available via this link), Sam Coope from DFID in Zimbabwe and Asmita Naik, an independent consultant, reported on their experience in VAWG-related evaluation.

The discussion that followed drew on the participants' rich and diverse programming and research experience. We would like to flag a few points:

Monday, 21 July 2014

Dataset available now!

At an early stage of our review, we promised to share the QCA dataset. We – Michaela Raab and Wolfgang Stuppert – hold the intellectual property rights to the information we have generated. Our DFID counterparts would like the data to be freely accessible to people who want to explore it, replicate our analysis or carry out further analysis – and so do we.

Wednesday, 9 July 2014

Full Review report on-line!

Our full review report is now available on-line, on the DFID Research for Development (R4D) site. This link will take you there; you can download the report for free. The report proper runs to 30 pages, with plenty of tables in between; the annexes are far longer, as we have appended all our data collection instruments and plenty of extra information.

For very rushed readers, a 2-page document summarising practical recommendations will be added at a later point. (Meanwhile feel free to study the short executive summary, which is part of the report.)

We would be delighted to have your comments, here or via the e-mail address you'll find in the report.

Thursday, 3 July 2014

Presentation of our findings and recommendations

On Tuesday 1 July, the two of us - Michaela Raab and Wolf Stuppert - presented our review findings and recommendations at the DFID office in Whitehall, London. Many thanks to everyone who attended and who contributed useful questions and insights. Within the next couple of weeks, we will summarise a few points that we have found particularly interesting in the discussion, and share them here.

Q&A on definitions

A snag in our 'comments' function means that comments are only shown when readers click on the titles of our posts. A couple of weeks ago, Rick Davies asked a few questions about our definitions:

Tuesday, 6 May 2014

Definitions for QCA

As announced earlier, we are now in a position to share the definitions we have used in our Qualitative Comparative Analysis (QCA). You can download the definitions as a PDF document by clicking here.

Monday, 31 March 2014

Thoughtful comments

Our inception report has received a thoughtful review by Rick Davies on his blog Rick on the Road. Titled "The Challenges of Using QCA", the posting summarises some of the methodological difficulties we have encountered so far.

Wednesday, 26 March 2014

Inception report ready for sharing

The inception report, which summarises our review methodology, is finally ready for wider dissemination. You can download it by clicking on THIS LINK. It is a large file of some 1.9MB. 

Due to a technical complication that we do not want to explore in detail, some text disappears when you read the report in your browser. (For instance, in Firefox our response to the points raised by SEQUAS disappears.) So: if you wish to read the report, do not read it in your browser - click on "download" and open the file in Adobe Reader to make sure you get the full text.

The annexes to the report include the tools we have used so far (coding instructions, survey questions and interview guides), as well as documentation on our initial dialogue with the External Reference Group, and with the Specialised Evaluation and Quality Assurance Service (SEQUAS). We will post more updates in the coming weeks.

Final report planned for June 2014!

We have received a few queries as to when our final report will be available. The plan is to complete it by June 2014. It will include our findings from Qualitative Comparative Analysis and process tracing, as well as short descriptions of 15-20 evaluation approaches and designs that we have found effective or promising.
Furthermore, we will produce a couple of papers:
  • A paper summarising our findings for development and evaluation practitioners.
  • A more academic, peer-reviewed article that will explain our review methodology, in particular the use of QCA in this study. Peer review means that it will probably take until 2015 or even 2016 before the article is published.
We will also post summaries of our findings and of key steps in our research on this blog. It has been a bit quiet here in recent weeks because we wanted to complete our dialogue with the external reference group that accompanies the review before sharing the details explained in our inception report (see post above).
We are still looking for suitable international events in late May or in June to present our findings. If any interesting events come to your mind, please let us know!

Thursday, 30 January 2014

Helpful comments

A few comments have been posted on our blog. Apologies for our slow reaction: we have been so absorbed by our hunt for contact details (see earlier post) and the launch of our survey of evaluation stakeholders that we lost sight of the useful comments that have appeared.

Rick Davies has shared a reference – thank you! – and directed us to an interesting blog post on the question of whether evaluations must fulfil certain quality standards to produce positive effects. The comments appear when you click on the titles “Inception” and “Evaluations identified for the first coding round” below.

Carol Miller (under “Soon to come: QCA conditions”) hopes that we will look at evaluation processes in terms of how they contribute to the empowerment of key stakeholders. Empowerment of stakeholders is indeed among the effects we intend to measure. By the way, our model for QCA will be published here - on this blog - with our final inception report, by March 2014. A couple of updates will be posted before that date.

One caveat: this project is not a huge piece of original research. It is a review of evaluation reports, enriched by some primary data collection on evaluation effects, chiefly through a short web-based survey of four types of evaluation stakeholders: evaluators, people who commission evaluations, people who have implemented the interventions evaluated, and representatives of organisations that have funded the interventions. (We realise these categories sometimes overlap.) So we won't go into the fine detail of every possible evaluation effect, but we are confident we can find some interesting contours.

Thursday, 23 January 2014

Message from DFID to the survey participants

This week we have sent out our web-based survey. Our DFID counterparts, Zoe Stephenson and Clare McCrum, are sharing the following message to encourage prospective respondents to engage with the survey:

This piece of work is a review of VAWG evaluations, commissioned by DFID’s Evaluation Department. It’s a really interesting piece of work exploring what makes evaluations effective/influential. The consultants have identified about 70 VAWG evaluations to explore and they will be using QCA (Qualitative Comparative Analysis) and process tracing to explore the factors that influence whether the evaluations were used/useful. Doing this depends on seeking the views of some key stakeholders who were involved in the evaluations – ideally the commissioner, the evaluator and someone involved with the programme’s implementation.


The work is funded by DFID, and we would greatly appreciate the engagement of those who have received the survey. It is important for a large number of people to participate - otherwise we may have to remove some evaluations from the set that the researchers work on, and that would be a shame. The larger the set, the better!

Tuesday, 7 January 2014

Your help is needed: names and contacts

As explained in the previous post, we plan to contact (i) evaluators, (ii) people who have commissioned evaluations ("evaluation commissioners") and (iii) representatives of the organisations whose work has been evaluated - for the full set of evaluations. It has proven difficult to identify contact persons for all evaluation reports. 

Hence this crowdsourcing action: we would be immensely grateful if you could have a look at the list available under THIS LINK (click on "THIS LINK" to get there). Would you happen to know anyone who is knowledgeable about any of these evaluations? If so, please write to review-team@gmx.de, ideally with the name and e-mail address of a person who can share information about the evaluation.

Monday, 6 January 2014

Inception

The review process encompasses three phases: Scoping, Inception and the actual Review, i.e. the analysis of evaluations. We have completed the scoping phase, and we are deep into the inception phase now.

In parallel with our search for evaluations (Scoping Phase), we initiated a virtual discussion with the Review Reference Group on the dimensions of evaluation practice. We are interested in the characteristics of evaluations and the positive or negative results they produce. To obtain a first understanding of the characteristics and effects we need to look for, we have studied relevant literature. (See our Scoping Report for the full literature list.)

We have identified a wide range of elements that are considered to influence the effects of evaluations – in QCA terminology, likely conditions for positive evaluation effects. These conditions have been provisionally clustered into five dimensions:
  1. Conducive circumstances, which are present when the intervention is evaluable and the political environment (among and beyond evaluation stakeholders) favourable.
  2. Powerful mandate, something evaluators have if resources are appropriate, the evaluation is timely and the evaluation team commands high esteem.
  3. Convincing methodology that leads to compelling evidence and is well documented, participatory and ethically sound (‘do no harm’).
  4. Effective communication, which rests on presentation and dissemination of findings, conclusions, recommendations and lessons learned.
  5. High context sensitivity, in particular regarding gender, cultural and professional issues.
This is tentative and fairly abstract – our inception report will come with more precise definitions and calibrations to make fuller sense of these concepts. There is no hierarchy among these conditions. For the time being, the purpose of this initial inventory is to find out what could possibly influence evaluation effects. The provisional model we have built is a ‘maximum model’ in that it attempts to integrate a wide range of possible conditions.
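To make this a little more concrete: in crisp-set QCA, every case is scored 0 or 1 on each condition, and cases with identical scores end up in the same row of the truth table. Here is a minimal sketch in Python; the case and its scores are invented, and the actual coding follows the definitions and calibrations in our inception report.

```python
# Illustrative crisp-set coding of one (invented) case against the
# five provisional dimensions; real coding follows our calibrations.
case = {
    "conducive_circumstances":  1,  # evaluable intervention, favourable politics
    "powerful_mandate":         1,  # adequate resources, timely, esteemed team
    "convincing_methodology":   0,  # evidence not compelling in this case
    "effective_communication":  1,  # findings well presented and disseminated
    "high_context_sensitivity": 1,  # gender, cultural, professional issues
}

# A configuration is the tuple of membership scores; cases sharing a
# configuration fall into the same truth table row.
configuration = tuple(case.values())
print(configuration)  # (1, 1, 0, 1, 1)
```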

We have also looked more closely at the effects of evaluations, clustering them into three groups:
  • Effects on development practice – i.e. changes in the further implementation of the intervention evaluated, or in the implementation of subsequent interventions.
  • Effects on accountability and advocacy.
  • Effects on the wider knowledge base – in terms of learning beyond the actual intervention, for example the contribution an evaluation makes to the global knowledge base on “what works” in efforts to end violence against women.
The Review Reference Group (RRG) examined the tentative model in October and provided rich comments. The dialogue with the RRG and our DFID counterparts has helped us to clarify the terminology used and to appreciate the many facets of these dimensions.

Following from that, we have developed detailed reporting sheets for the coders. The coders have started their first coding round, examining all 74 reports we identified in our search (see earlier post). At this point, their job is to map the data on conditions that they find in the reports.

As to the effects generated by the evaluations, we cannot rely on the reports for data. Therefore we are building a web-based survey, to be sent out in early January. For every evaluation in our set, we plan to question at least two out of three types of stakeholders: (1) the evaluator, (2) a person who has commissioned the evaluation, and (3) a representative of the organisation that has implemented the intervention evaluated, who can report on the effects of the evaluation. We have also interviewed 2-3 representatives of each category to further enrich our picture of the effects evaluations can generate.
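As a small illustration of the 'at least two out of three stakeholder types' rule, here is a sketch in Python; the evaluation IDs and responses are invented.

```python
# Which evaluations meet the 'at least two of three stakeholder types
# responded' rule? All identifiers and responses are invented.
responses = {
    "eval_01": {"evaluator", "commissioner", "implementer"},
    "eval_02": {"evaluator"},
    "eval_03": {"commissioner", "implementer"},
}

covered = {eid for eid, kinds in responses.items() if len(kinds) >= 2}
print(sorted(covered))  # ['eval_01', 'eval_03']
```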

By the end of January, we expect to have:
  • An accurate picture of the data available from each of the 74 evaluation reports.
  • Rich data on many of the conditions in our model, from 74 evaluation reports.
  • Information from our survey respondents on evaluation characteristics which the reports have not provided sufficient data on.
  • Data on the effects the evaluations have produced.
Qualitative Comparative Analysis (QCA) is at the heart of our review methodology. If we obtain meaningful data on conditions and effects, we can go ahead with QCA. That is why we have put in extra shifts to make sure we can contact a large number of evaluation stakeholders – a task that has proven more difficult than expected! (See the post below, “Review and detective work”.)
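For readers new to QCA, the sketch below shows the basic object at the heart of the method: a truth table that groups cases by their configuration of conditions and computes, for each configuration, the share of cases showing the outcome (its 'consistency'). The data are invented, and a real analysis would use dedicated software such as the QCA package for R or the fsQCA program.

```python
# Minimal crisp-set truth table, built from invented cases.
from collections import defaultdict

# ((condition scores), outcome) -- one tuple per case.
cases = [
    ((1, 1, 0), 1),
    ((1, 1, 0), 1),
    ((1, 0, 1), 0),
    ((0, 1, 1), 1),
    ((1, 0, 1), 1),
]

rows = defaultdict(list)
for config, outcome in cases:
    rows[config].append(outcome)

# Each row: configuration, number of cases, consistency with the outcome.
for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(config, f"n={len(outcomes)}", f"consistency={consistency:.2f}")
```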

Review and detective work

Most of the 74 evaluation reports in our first coding round do not display the evaluator’s or the commissioner’s contact details. In some cases, the evaluators remain anonymous; in other cases, the only e-mail address available in the report is a generic info@xyz.org. This has surprised us – in our own evaluation practice, we always include our e-mail addresses so that our counterparts can get in touch with us in case, say, they wish to work with us again.

Even where we could find an e-mail address, it was not easy to elicit a response. One could blame the busy season - early December, when in many countries the festive season starts and/or the fiscal year is about to end. But I was puzzled to see that even in organisations with dedicated monitoring and evaluation staff, knowledge about evaluations – including fairly recent ones (2011-2012) in the public domain – appeared uneven.

Our hunt for addresses continues; we are seeing light at the end of the tunnel. Many thanks to everyone who has helped us find evaluation stakeholders around the world! We are particularly indebted to people in organisations with several evaluations of VAWG-related work, who have been especially generous in sharing information at this busy time: extra thanks to CARE, FOKUS (fokuskvinner), the International Rescue Committee, Oxfam, the Population Council, UNICEF and WOMANKIND. We have also benefited from the support of the Review Reference Group members (special thanks to Amanda Sim, Helen Lindley and Krishna Belbase). Some UNDP country registries have also proven effective in identifying evaluation stakeholders when we had no other contacts.

For a handful of evaluations found via the web, we have not yet managed to obtain any addresses that work. We will post the list shortly to ask for 'crowdsourcing' support in identifying stakeholders. If the authors and users remain shrouded in mystery, we will have to remove these evaluations from our QCA set. That is OK – QCA also works with small sets of cases. But it would be hard to draw conclusions about the overall evaluation landscape if we ended up with, say, just a dozen evaluations.

Saturday, 4 January 2014

Evaluations identified for the first coding round

One thing that is special about our approach is that we do not only apply established quality standards to the evaluations we review; we also look into evaluation effects. Whether or not an evaluation has to fulfil established quality standards to produce positive effects is an open research question. To answer it, we have to include in our review evaluations that vary in the degree to which they fulfil certain methodological standards. We hope that our research will shed light on the factors that contribute to positive and negative evaluation effects.

We initially cast a wide net, searching for any evaluations of work related to violence against women and girls. A first, cursory examination of the reports we netted showed that summaries tended to contain too little information on evaluation approaches and methods. Therefore we decided to work with full evaluation reports only.

We found 140 such reports. In many reports that included VAWG as a secondary component (e.g. evaluations of multi-sector country programmes, reproductive health initiatives and humanitarian aid), VAWG-related work occupied a marginal position. Analysing those reports could yield useful information on the quality and effects of evaluations in general – but our focus is on evaluations specifically designed for interventions on violence against women and girls.

In a further step, we narrowed down our set to reports completed in 2008-2012, excluding evaluations produced in 2013. This is because we will question evaluation stakeholders (through interviews and a web-based survey) about the effects the evaluation has produced. To make sure we can take into account effects that occur after an evaluation, we must allow for some time. One year seems a reasonable time-frame, even though we realise that some effects occur only at a later stage (for instance, the use of ‘lessons learned’ published in an article).

Of the 140 full evaluation reports, we have excluded 16 because they fell outside the 2008-2012 period, 43 because they evaluated interventions which included VAWG as a minor component, and 6 because both exclusion criteria applied. (One report did not show the year of publication.)
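As a quick sanity check, the exclusion arithmetic from the paragraph above, in Python:

```python
# Reproduce the exclusion arithmetic described above.
full_reports = 140
outside_period = 16   # outside the 2008-2012 window
vawg_minor = 43       # VAWG only a minor component
both_criteria = 6     # both exclusion criteria applied
no_year = 1           # publication year not shown

remaining = full_reports - outside_period - vawg_minor - both_criteria - no_year
print(remaining)  # 74
```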
The remaining set includes 74 evaluations of VAWG-related interventions in low- to middle-income countries. This is the full set of evaluations we have found to meet all our criteria – i.e. we do not draw any sample. The evaluations cover three different contexts – development, humanitarian and conflict / post-conflict – and the four strategic priorities that inform DFID’s work on violence against women and girls (see table below). Figures refer to the number of evaluations that match the criteria; the total exceeds 74 because some evaluations match several criteria.

DFID priorities                                       Development   Humanitarian   Post-/Conflict
Building political will and institutional capacity        22              7              12
Changing social norms                                      37              2               6
Empowering women and girls                                 12              3               3
Providing comprehensive services                           16              7               7
The evaluations deal with a broad spectrum of interventions of varying complexity carried out by public and not-for-profit actors (including women’s rights organisations), ranging from single training projects to multi-country programmes that bring together different types of interventions. Most of the evaluations took place near or after the end of an intervention; a smaller number are mid-term reviews.
The reports vary in size (8-258 pages); their median length is 52 pages (average: 62 pages). The degree to which they fulfil established quality standards (with regard to the methodology employed, the protection of VAWG survivors and other aspects) is being assessed in the first coding round. What can be said at this point is that quality, understood in this way, appears to vary considerably. This is also true for the appearance of the reports.
All published reports we have identified will be shared with DFID. 19 out of the 74 reports are unpublished or of uncertain publication status. We cannot share these reports with others, but we have obtained permission to extract data from these reports. It is important to keep them in the set of evaluations to be reviewed, as this is an opportunity to work on material that is not easily accessible to a wider public. 
For those who would like to take a peek at the published evaluation reports: the reports can be retrieved via this link. The link takes you to a folder which includes our full scoping report and a brief guide to the folder. 

Meet the coders

We are delighted to announce that we have found highly qualified coders: Miruna Bucurescu, Scout Burghardt, Sanja Kruse, Astrid Matten and Paula Pustulka. 

Astrid Matten holds a Master's degree in Political Science, Sociology and Social Politics from the University of Göttingen. Her research focuses on migration, state borders and gender issues.

With a background in development studies, Miruna Bucurescu is a polyglot with expertise in women, peace and security. Since 2011 she has worked as an independent researcher focusing on social science statistics and evaluation.

Paula Pustulka is a sociologist, involved mostly in qualitative research projects in the fields of gender and migration studies. 

Sanja Kruse is a sociology graduate with a wide range of practical experiences in research and international cooperation, particularly related to employment, education and gender issues.

Scout Burghardt has a degree in Gender Studies and European Ethnology. She participated in the 2-year research project “Samenbanken – Samenspender” on reproductive technologies at Humboldt University Berlin.

For some reason, our provider refuses to upload the photographs that come with the text. You'll find them here as soon as it works again!