Submitted by Berk Ozler on
Hi Hugh,

My view is that secondary analysis, including ex-post identification of a new independent variable or redefinition/recategorization of an existing one, is acceptable, and in fact sometimes necessary, as long as it is clearly labeled as such and done as transparently as possible.

Sometimes new information, not anything regarding the effect sizes and their p-values but rather substantive new insights, can lead to such secondary analysis, whose results may be worse left out than presented. As for the funding agency in this case, while I don't know the specifics, it is clear that they have been waiting to find out about the results of this review, presumably with the intent to apply some lessons from it (hopefully carefully and judiciously) to cash transfer programs that they may be supporting in Asia and the Pacific. Presenting pooled effect sizes using a binary categorization of such programs that we now know to be noisy and at best a poor representation of reality could be a waste of their money and could potentially lead to future programs with inferior designs. On the other hand, uncovering a strong moderator of the effects of cash transfer programs, one that a program designer can realistically manipulate, may turn out to be a bargain for the same funder. Again, however, the caveat about full disclosure and the need for theoretical or empirical grounding for the ex-post analysis applies.

This has implications for the final version of the systematic review that is eventually published, at least in my humble opinion. First, in the section titled 'Differences between the protocol and the review,' there would be a full disclosure of the redefinition of the moderator variable concerning the 'degree of monitoring.' Second, in the section reporting the findings, EITHER the results based on the original plan could be presented first, followed by a clearly demarcated section presenting the secondary analysis, OR any result based on analysis that deviates from the original protocol could be clearly marked as such. We'd be happy to do all of these and will likely look to the CC's Methods Editors for guidance.

One more point: in this particular case, we have tried only one other variable instead of the binary CCT/UCT variable. The coefficient estimate for the moderator variable has a p-value of 0.01. I am not familiar with multiple inference corrections that apply to random effects models in meta-analysis, but it seems to me that the coefficient estimate would still be statistically significant at the 5% level even if we adopted a correction for the fact that we examined two moderators rather than one.
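To make that intuition concrete, here is a minimal sketch of the simplest such correction, a Bonferroni adjustment for having examined two moderators rather than one. (This is an illustration of the general idea, not a claim about which correction is most appropriate for a random effects meta-regression.)

```python
# Bonferroni adjustment: multiply the raw p-value by the number of
# hypotheses tested (here, the two moderators examined), capped at 1.
p_raw = 0.01  # p-value for the 'degree of monitoring' moderator
m = 2         # moderators examined: the original CCT/UCT binary plus one other

p_adjusted = min(1.0, p_raw * m)
print(p_adjusted)  # 0.02, still below the conventional 0.05 threshold
```

Since the Bonferroni correction is the most conservative common adjustment, surviving it suggests the result would also survive less conservative procedures such as Holm's.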

On test scores, I would not hold my breath if I were you. In the end, only a small number of studies reporting impacts on test scores were eligible for our review, and the preliminary finding is that the effect sizes are very small and mostly statistically insignificant. The pooled effect for all cash transfer programs is a 0.06 standard deviation improvement for children in households offered cash transfers of some type (95% CI 0.01-0.12).
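For readers unfamiliar with how such pooled effects and their confidence intervals are computed, here is a minimal sketch using inverse-variance weighting with entirely hypothetical study-level numbers (not the review's actual data). A random-effects model like the one in our review would additionally estimate a between-study variance and add it to each study's sampling variance before weighting.

```python
import math

# Hypothetical per-study standardized effect sizes (in SD units)
# and their sampling variances -- placeholders, not the review's data.
effects   = [0.02, 0.08, 0.10]
variances = [0.001, 0.002, 0.004]

# Inverse-variance weights: more precise studies count for more.
weights = [1.0 / v for v in variances]

# Pooled effect: weighted average of the study effects.
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)

# Standard error of the pooled effect and a 95% confidence interval.
se = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(round(pooled, 3), round(ci_low, 3), round(ci_high, 3))
```

Note that, as in our pooled estimate, a confidence interval whose lower bound sits just above zero corresponds to an effect that is statistically significant but substantively small.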