(There have been a few non-methodological ones as well. For example, the detective work we carried out to identify and contact hundreds of evaluation stakeholders so as to ask them about the effects the evaluations had produced.)
A couple of clarifications, to read alongside Rick's blog post: The "model" reproduced in Rick's post has a very minor place in our inception report (which is why it appears in the annexes). We have used it only as a communication device to solicit ideas and comments from the Reference Group. The purpose of our QCA is _not_ to test the model, but to identify combinations of conditions that lead to effective evaluations. In a way, the model is just a list of potential ingredients for effective evaluation.
Since our dialogue with the Reference Group in late 2013, we have adjusted and defined the five central conditions and their components. They look different now. For instance, "convincing methodology" has been replaced by "compelling evidence", which is about quality standards in research (such as triangulation of data sources and transparent documentation). Methodological choices enter the analysis as separate conditions.
We will post the precise definitions of the conditions (or "ingredients") in April. Watch this space.