To generate realistic, practice-oriented findings and recommendations, the Review needs to differentiate between a wide range of evaluation approaches, methods and contexts. The number of existing evaluations in the field of VAWG is too small for statistical analysis to yield reliable conclusions. Yet it would not do justice to the variety of evaluation settings if we selected only a few evaluations for detailed analysis, as a conventional comparative case study would. Qualitative comparative analysis (QCA) enables us to make full use of evidence from a wide spectrum of evaluations without jeopardising the applicability and generalisability of our findings. QCA has been designed for “medium-N” situations, i.e. situations where there are more than a handful of cases, but too few for meaningful statistical analysis.
QCA rests on the assumption that several cause-to-effect chains coexist: different combinations of conditions can lead to the same outcome. It matches sets of characteristics (in our case, the characteristics of evaluations) with specific outcomes (for instance, improved results of advocacy efforts). This method helps reveal which interactions between different kinds of methodology, resources and other conditions are necessary to achieve high-quality evaluations under specific sets of circumstantial factors.
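To illustrate the logic, here is a minimal, hypothetical sketch (in Python) of the truth-table step at the heart of crisp-set QCA. The condition names (participatory design, adequate budget, mixed methods) and the coded cases are invented for illustration; in the Review they would be derived from the evaluation reports.

from collections import defaultdict

# Each case is an evaluation coded on binary conditions plus the outcome
# (1 = high-quality evaluation). All values below are invented examples.
cases = [
    {"participatory": 1, "adequate_budget": 1, "mixed_methods": 1, "high_quality": 1},
    {"participatory": 1, "adequate_budget": 1, "mixed_methods": 0, "high_quality": 1},
    {"participatory": 0, "adequate_budget": 1, "mixed_methods": 1, "high_quality": 0},
    {"participatory": 1, "adequate_budget": 0, "mixed_methods": 1, "high_quality": 0},
    {"participatory": 1, "adequate_budget": 1, "mixed_methods": 1, "high_quality": 1},
]
conditions = ["participatory", "adequate_budget", "mixed_methods"]

# Group cases by their configuration of conditions (one truth-table row each)
# and compute each row's consistency: the share of its cases showing the outcome.
rows = defaultdict(list)
for case in cases:
    config = tuple(case[c] for c in conditions)
    rows[config].append(case["high_quality"])

for config, outcomes in sorted(rows.items(), reverse=True):
    consistency = sum(outcomes) / len(outcomes)
    print(dict(zip(conditions, config)), "n =", len(outcomes),
          "consistency =", round(consistency, 2))

Rows with a consistency of 1.0 are candidate sufficient configurations for the outcome; a full QCA would then apply Boolean minimisation to reduce these configurations to their simplest form, a step handled by dedicated software such as the R package QCA.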
QCA is transparent and replicable: it both enables and requires us to make explicit the iterative process of categorising and coding the evaluation reports included in the analysis. We will go back and forth between conceptual work (categorisations of evaluation practice) and the evidence (evaluation reports and users’ narratives on evaluation processes and outcomes). In the process, we will refine the definitions of the dimensions of evaluation practice and the indicators used to categorise evaluations. New factors will be taken into account when they prove necessary; old differentiations between evaluation settings will be dropped if they prove superfluous.
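One iteration of that loop might look like the following sketch, which continues the hypothetical example above: a condition that proved superfluous is dropped, a newly necessary one is added, and the cases are then re-coded and the truth table rebuilt. All names remain invented.

def revise_scheme(conditions, to_drop, to_add):
    """Drop differentiations that proved superfluous and add factors that
    proved necessary, keeping the order of the remaining conditions."""
    kept = [c for c in conditions if c not in to_drop]
    return kept + [c for c in to_add if c not in kept]

conditions = ["participatory", "adequate_budget", "mixed_methods"]
conditions = revise_scheme(conditions,
                           to_drop={"mixed_methods"},           # proved superfluous
                           to_add=["evaluator_independence"])   # proved necessary
print(conditions)
# ['participatory', 'adequate_budget', 'evaluator_independence']

Because every such revision is recorded explicitly in the coding scheme, another researcher can retrace each step of the analysis.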
Statistical methods and “conventional” comparative case studies may involve similarly iterative processes, but their movement between theory and evidence tends to remain unsystematic and implicit. This “black box” situation can lead to the omission of important explanatory factors and makes the findings difficult to replicate.