Recent comments

  • Reply to: Starting life off on the wrong foot   1 day 10 hours ago
    Thanks Gaby. One interesting thing about this result is that the parents seem to choose systematically who goes to school and who doesn't, based on cognitive ability. On your broader point on the role of cash transfers in avoiding this -- that is something I understand the team is looking at (since this was a baseline for the cash transfer evaluation).
  • Reply to: Starting life off on the wrong foot   1 day 17 hours ago

    Thanks for the very neat summary! Certainly important, but looking at the situation in Burkina Faso from a holistic point of view (including from a policy perspective), it would have been good to also know if and how the cash transfer programme has helped mitigate the negative effects of shocks, and what types of programs are best suited to certain types of shocks. I think we all know and acknowledge (researchers, practitioners, civil society and governments) that life in the early years (starting from the mother's health, in utero, and the early years) is important. No one questions that any further. What we sometimes miss is knowing why certain programs work better than others, and what the "soft" components are that can make a program successful or not. Not only looking at the technical issues (type of targeting, inputs, etc.) but at capacities, institutional arrangements and coordination mechanisms. It is on these issues that evidence and research are most needed, and where they are most difficult to come by. A sort of bridge is needed between academic research and policy questions. Is there a space to feed into any of these, and to have people in the field raise some of these? I'm sure there are more than a handful of important and interesting research questions to be addressed.

  • Reply to: Evaluating after the barn-door has been left open: Evaluating Heifer’s Give-a-Goat or Give-a-Cow Programs   2 days 3 hours ago

    Obviously randomizing individuals would be difficult given the organization's workflow; is there any reason that randomizing communities wouldn't be effective as a tool for evaluation?

  • Reply to: An addendum to pre-analysis plans: Pre-specifying when you won’t use data collected   3 days 15 hours ago
    Thanks for these thoughts. I totally agree in terms of wanting to use only control group information, to avoid treatment effect issues -- although I think it is worth also thinking more carefully about what to do if the treatment group decides not to respond to one set of measures (perhaps because they are fatigued from your intervention) while the control group does respond. This is clearly a treatment effect, but may then be completely uninformative for telling you about a particular outcome.

    I'm not sure "report everything but provide flags" can completely deal with this issue, since i) we may not want to include, in multiple-testing corrections and indices, measures which are not useful and which just make it harder for us to detect impacts on our other measures after correcting for multiple testing, or which average in some noise (a rough sketch of this power cost appears after this comment); and ii) I do worry about appendix arms races, where the appendices get to be several times longer than the (30-40 page) papers, and I see part of the role of the pre-analysis plan as being to prioritize what you will look at and report.

    But I totally agree these are tough questions, and I'm not sure what I suggest here is the right approach either. So more thoughts/comments/critiques are very welcome.
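
    A minimal sketch of the power cost in point (i) above -- this illustration is mine, not part of the comment: the more uninformative outcomes you fold into a family of tests, the harder it is for the one genuinely affected outcome to survive a multiple-testing correction. The sample size, effect size, and the choice of a simple Bonferroni correction here are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200          # hypothetical sample size per arm
effect = 0.25    # hypothetical true effect (in SD units) on the one real outcome
n_sims = 2000

def power_with_noise_outcomes(n_noise):
    """Share of simulated trials in which the genuinely affected outcome
    survives a Bonferroni correction across (1 + n_noise) outcomes."""
    alpha = 0.05 / (1 + n_noise)
    rejections = 0
    for _ in range(n_sims):
        treated = rng.normal(effect, 1.0, n)
        control = rng.normal(0.0, 1.0, n)
        _, p = stats.ttest_ind(treated, control)
        rejections += p < alpha
    return rejections / n_sims

for extra in [0, 5, 20]:
    print(f"{extra:2d} uninformative outcomes added -> "
          f"power ~ {power_with_noise_outcomes(extra):.2f}")
```

    Swapping in a less conservative procedure (e.g. Benjamini-Hochberg) typically softens this cost but does not remove it: each extra outcome in the family still raises the bar for the rest.
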
  • Reply to: An addendum to pre-analysis plans: Pre-specifying when you won’t use data collected   3 days 16 hours ago

    Hi David -- these are really hard issues. My first instinct would be to worry about what biases might accompany any rules like this. For example, it seems obvious that there is no point using a variable that has no variation. But of course a variable with no variation is one for which there was no treatment effect (all the patients are still sick), so discarding such variables could introduce a kind of fishing. Similarly, a noisy variable might only be noisy in one arm because of an absence of an effect.

    So some other quick reactions:

    * Whatever rule you use, investigate the kinds of effects it might have on bias or MSE -- e.g. use Monte Carlo simulation (soon http://declaredesign.org/ should be able to help with this, I hope!) to get a handle on the conditions under which the rule introduces biases (a rough sketch of this kind of simulation follows the list)

    * Such exercises might reveal the need for some form of correction

    * Use criteria that do not sneak in information on treatment effects -- e.g. look at variation in the control group only.

    * Be cautious about applying the rule if the criterion performs differently in treatment and control groups -- e.g. if there is differential non-response (of course, failure to find a difference between groups is not a guarantee that there are no differences)

    * Rather than dropping, why not report everything but provide flags (e.g. indicate which analyses are underpowered)?
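
    A rough Monte Carlo sketch of the first and third bullets -- again, my illustration rather than anything from the comment. It simulates a family of rare binary outcomes, applies a "drop outcomes with no variation" rule based either on the pooled sample or on the control group alone (the rule names are mine), and reports how retention and the average estimated effect among retained outcomes depend on which rule is used. All numbers (arm size, base rate, effect size, number of outcomes) are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50              # hypothetical observations per arm
n_outcomes = 40     # hypothetical family of rare binary outcomes
base_rate = 0.01    # rare enough that "no variation" happens often in small arms
true_effect = 0.10  # hypothetical effect on the first half of the outcomes
n_sims = 1000

has_effect = np.arange(n_outcomes) < n_outcomes // 2
treated_rates = base_rate + true_effect * has_effect

retention = {"pooled-sample rule": [], "control-only rule": []}
estimates = {"pooled-sample rule": [], "control-only rule": []}

for _ in range(n_sims):
    control = rng.binomial(1, base_rate, size=(n, n_outcomes))
    treated = rng.binomial(1, treated_rates, size=(n, n_outcomes))
    diff = treated.mean(axis=0) - control.mean(axis=0)

    # Two candidate "drop if no variation" rules.
    rules = {
        "pooled-sample rule": (control.sum(axis=0) + treated.sum(axis=0)) > 0,
        "control-only rule": control.sum(axis=0) > 0,
    }
    for name, keep in rules.items():
        retention[name].append([keep[has_effect].mean(), keep[~has_effect].mean()])
        estimates[name].append(diff[keep].mean() if keep.any() else np.nan)

true_average = true_effect * has_effect.mean()
for name in retention:
    kept_effect, kept_null = np.mean(retention[name], axis=0)
    print(f"{name}: keeps {kept_effect:.0%} of true-effect outcomes, "
          f"{kept_null:.0%} of null outcomes; "
          f"mean estimated effect among kept outcomes = {np.nanmean(estimates[name]):.3f} "
          f"(true average effect over all outcomes = {true_average:.3f})")
```

    With rare outcomes and small arms, the pooled-sample rule tends to retain the outcomes that happened to respond, which is exactly the conditioning-on-the-effect worry raised above; retention under the control-only rule does not depend on the true effect, though it has its own consequences worth checking in the same way.
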