Adam, I was quite alarmed when I first read your post. Heck, I appeared to have co-authored a systematic review without even knowing that I had done so. I went back and re-read the Adam et al. paper, and the words “systematic review” do not occur anywhere in it other than in the bibliography. It seems a little unfair to hold the paper to the standards of a systematic review if it does not purport to be one. Perhaps this is akin to criticizing benefit incidence analyses for not ensuring that findings were validated with the poor: a standard hallmark of qualitative research (but typically not of econometric research).

Your post, however, raises some deeper concerns in my view. There is currently momentum to document and standardize research approaches used within the health systems field, both for systematic review methods (as you describe) and for health systems research methods more broadly. The PLOS Medicine series on health systems guidance for policy makers (see Bosch-Capblanch et al. 2012) reflects this. As David Peters and I argued in our commentary on the PLOS Medicine series, though there is undoubtedly a need to build rigor and standards in the field, there is also a real danger of discrediting important but perhaps less well recognized approaches within the HSR field, and of closing off promising avenues of enquiry in an attempt to ensure standardization.

To apply these thoughts to systematic reviews: the Cochrane Collaboration has developed rigorous standards for the effectiveness reviews included in its database. But the standards around which there is broad agreement are applicable only to research questions that concern the effect of an intervention.
As we know, this is not the only thing that policy makers are interested in. They are also concerned about how feasible it may be to implement a reform within their particular health system, what the likely reaction of the population may be, or how a reform may affect other aspects of their health system (as discussed by Adam et al.). It would be a major mistake to employ the same inclusion criteria (in terms of study design) in responding to this diversity of questions: while a good ethnographic study may illuminate how people react to a reform, it will not be very good at revealing impacts on service utilization, for example. No one could support “naïve critical appraisal,” but getting agreement on appropriate critical appraisal for anything other than a straight effectiveness review appears difficult.

I have had several conversations recently with people in policy and decision-making positions in international or donor organizations who have expressed frustration with the rather limited insights that they find in systematic reviews of health systems questions. This goes much deeper than missing some studies or not using quite the right search terms. The primary concern I have heard expressed is that such reviews can be time consuming and costly, and yet, despite significant efforts to search and extract data from the literature in a systematic fashion, they fail to deliver new insights and often remain frustratingly inconclusive. I am not convinced that all reviews need to be systematic reviews or exhaustive in their search. I think that in addition to Cadillac effectiveness reviews there is a need for scoping reviews, and for relatively quick-and-dirty reviews that provide relevant evidence in a timely fashion.
Even more importantly, we need to think more carefully about how we engage stakeholders in systematic reviews, whether by filtering questions and evidence to ensure their relevance or by asking stakeholders to help interpret review findings in light of their own experiences. The work of Sandy Oliver and colleagues at the EPPI-Centre is very helpful in this respect. Clearly any research endeavor needs to be systematic, explicit about the methods employed, and rigorous (in the sense that the methods used should be appropriate to the question asked). In this vein we clearly need guides to systematic review methods and processes, but we also need open and enquiring minds that are willing to experiment with alternative types of review questions and systematic review approaches.