Surprised, anyone? Putting the debate about QCA into context
As is well known, QCA has come under intense scrutiny in recent years and has been subject to criticism (sometimes quite strong). I am not going to review the critical studies on the validity of QCA, although it would be worthwhile because I am not always convinced that the simulations are set up properly (most inquiries are based on some form of simulation). If, for the moment, we take the findings at face value, it is helpful to take a step back and ask how surprised one can and should be by them.
In my view, the critics have hardly provided anything that should come as a surprise. Measurement error threatens the validity of QCA solutions? Well, all empirical research is bedeviled by measurement error, regardless of the method that is used. The QCA solution you get can depend on which cases you include? This is to be expected, given that the truth table rows that feed into the QCA solution depend on which cases populate a row and on the row's consistency value (a small sketch of this mechanism follows below). The QCA solution is not valid in the presence of overspecification (too many conditions in the analysis)? I would be surprised if the QCA solution were wholly insensitive to the conditions we use.
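To illustrate the point about case inclusion, here is a minimal sketch, assuming crisp-set conditions and a consistency measure defined as the share of cases in a truth table row that show the outcome. The cases, conditions, and threshold are hypothetical, and the code mimics no particular QCA package:

```python
# Hypothetical illustration: truth table rows and their consistency
# values depend on which cases enter the analysis.

def truth_table(cases):
    """Group crisp-set cases (A, B, Y) by their configuration (A, B) and
    compute row consistency: the share of cases in the row with Y = 1."""
    rows = {}
    for a, b, y in cases:
        rows.setdefault((a, b), []).append(y)
    return {config: sum(ys) / len(ys) for config, ys in rows.items()}

cases = [
    (1, 0, 1),
    (1, 0, 1),
    (1, 0, 0),  # removing this single deviant case changes the row's consistency
    (0, 1, 1),
]

print(truth_table(cases))
# {(1, 0): 0.667, (0, 1): 1.0} -> row (1, 0) misses, say, a 0.75 threshold

print(truth_table(cases[:2] + cases[3:]))
# {(1, 0): 1.0, (0, 1): 1.0} -> without the deviant case, the row passes
```

Whether row (1, 0) clears a given consistency threshold, and hence enters the minimization, thus hinges on a single case. That the solution depends on case selection is built into the procedure and not, by itself, a surprising flaw.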
In short, all of the issues that we know to be a problem for empirical research – sampling bias, measurement error, etc. – can also be expected to pose a threat to valid causal inference in QCA. No one could seriously argue anything to the contrary, and I would be surprised if anybody ever claimed that QCA gets the right solution regardless of measurement error, the calibration of conditions, etc. (Readers are invited to point me to such statements, but we should distinguish claims made about QCA from the method's inherent qualities and not hold a method hostage to incorrect perceptions of it.)
The question is less whether QCA is affected by these problems than how and with what consequences. It is at this point that many inquiries into QCA overstep by making overly strong claims about it. (Here, it would be important to reconstruct in detail how studies of QCA produce their results, because some turn out surprisingly badly for QCA, which is not to say they have to be wrong.) If one finds that QCA is sensitive to some issue such as measurement error, one has only demonstrated what has always been obvious. Dismissing QCA as a method on the basis of such an insight takes the point too far and would imply that we should cease doing empirical research altogether, because all methods have problems with measurement error. Did knowledge about potential omitted-variable bias hinder the widespread application of regression analysis? It did not, and that is a good thing, because we not only know what the problem is, but also what its consequences and remedies are.
The conclusion that QCA is affected by a problem can only be the first step toward a better understanding of how the validity of QCA results is threatened and of whether and how QCA can be improved to diminish those adverse effects. This is an important route for future work on QCA because, unfortunately, the work that has critically engaged with QCA has so far taken only the first step.