Recent comments

  • Reply to: Finally, a way to do easy randomization inference in Stata!   6 days 10 hours ago

    Hi Jacobus,

    What the right approach is depends very much on your original randomization, about which you did not provide all details. I can however give you some general hints which might help you compute what you want.

    (1) If the two treatments (and control) are mutually exclusive, i.e. they can be described by a single treatment variable taking on the values 0, 1, or 2, you might be able to run:

    ritest treat (_b[1.treat]-_b[2.treat]), cluster(cid) strata(sid): reg y x i.treat

    to obtain what you are looking for.

    (2) If your treatments are overlapping, it is only slightly more complex to let the re-randomization follow the original procedure. For this you can use the samplingprogram() option. A user-defined re-randomization program is allowed to alter more than just a single treatment-indicator variable and can thus re-randomize your data in the way you want. You can then still specify "(_b[1.treat]-_b[2.treat])" as the statistic of interest, as in the sketch below. You can always double-check what ritest does using the saveresampling() option.
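
    For illustration, here is a minimal sketch of such a program, assuming a cluster-level assignment within strata. The names sid, cid, and treat, and the assignment rule itself, are placeholders for your original procedure; please check help ritest for the exact calling convention of samplingprogram():

    capture program drop myreshuffle
    program define myreshuffle
        * draw one random number per cluster (cid) within strata (sid)
        tempvar u
        bysort sid cid: gen double `u' = runiform() if _n == 1
        bysort sid cid: replace `u' = `u'[1]
        * order clusters randomly within each stratum and cycle them
        * through the three arms 0, 1, 2 (placeholder rule)
        sort sid `u'
        by sid: replace treat = mod(sum(cid != cid[_n-1]) - 1, 3)
    end

    ritest treat (_b[1.treat]-_b[2.treat]), samplingprogram(myreshuffle): reg y x i.treat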

    Please let me know if I misunderstood your question or if you have any further questions.

  • Reply to: Finally, a way to do easy randomization inference in Stata!   6 days 11 hours ago

    This is an excellent resource and write-up. Thanks David and Jason for blogging, and thanks Simon for writing the code.

    I have also been struggling with how to test equality of coefficients with two treatments, using randomization inference.

    I think the p-value should be constructed by calculating the proportion of replications in which the *difference* in the re-randomized treatment impacts is larger in absolute value than the true difference in treatment impacts.

    For example, take a dataset where “b1_estimated” and “b2_estimated” are the stored coefficients of the two treatment dummies, after running the main regression thousands of times, each time randomly re-assigning clusters to treatment. The code to calculate the randomization inference p-value would be:

    count if abs(true_difference) < abs(b2_estimated - b1_estimated)
    gen p1 = `r(N)' / obs

    where "true_difference" is a constant equal to the true difference in treatment impacts, and "obs" is the number of repetitions. With this code I get a p-value very similar to the one from a simple t-test of equality of coefficients. A sketch of the full loop follows below.
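
    For concreteness, a minimal sketch of the loop I have in mind (the re-assignment rule and all variable names here are placeholders; in practice it should mirror the original stratified, clustered assignment):

    * estimate the "true" difference once, on the actual assignment
    reg y T1 T2 $controls, cluster(uid)
    scalar truediff = _b[T2] - _b[T1]

    tempfile sims
    postfile handle b1 b2 using `sims'
    forvalues i = 1/1000 {
        preserve
        * randomly re-assign clusters (uid) to the three arms
        * (placeholder rule; replace with the original procedure)
        bysort uid: gen double u = runiform() if _n == 1
        bysort uid: replace u = u[1]
        egen crank = group(u)
        replace T1 = mod(crank, 3) == 1
        replace T2 = mod(crank, 3) == 2
        reg y T1 T2 $controls, cluster(uid)
        post handle (_b[T1]) (_b[T2])
        restore
    }
    postclose handle

    * load the simulated coefficients and compute the p-value
    use `sims', clear
    count if abs(scalar(truediff)) < abs(b2 - b1)
    display "randomization-inference p-value: " r(N)/_N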

    But I am not sure whether this can be done with the ritest command. At first I thought I could take a short-cut by setting one of the treatment arms as the omitted category and throwing in a dummy for the control (e.g. gen T0 = treat == 0):

    E.g. ritest assigntreat _b[T2], reps(1000) strata(strata) cluster(uid) seed(124): reg y T0 T2 $controls, cluster(uid)

    But this does not take into account the statistical uncertainty in assignment of treatment to T1. With this method one calculates:

    count if abs(true_difference) < abs(b2_estimated)
    gen p2 = `r(N)' / obs

    But this does not recover the correct p-value: in my data, p2 > p1. (Intuitively, for independent variables Var(X - Y) = Var(X) + Var(Y), so the variance of the difference between two normally distributed random variables is higher than the variance of either variable alone.)

    Let me know if you think this is the correct approach.

  • Reply to: What happens when you give $50,000 to an aspiring Nigerian entrepreneur?   6 days 17 hours ago

    It's of great importance to see the World Bank being proactive too.
    What is wrong with the World Bank developing a huge database of sample businesses and business plans that could be tweaked by prospective entrepreneurs for use in their various countries?

    Knowledge-based empowerment matters and is key to greater success in systematically bringing everyone aboard the empowerment train.

    So if this is done, then it is possible to have a large number of would-be entrepreneurs who are knowledgeable about what they want to venture into.

    This is my take.

  • Reply to: Why don’t economists do cost analysis in their impact evaluations?   1 week 2 days ago

    As a management consultant I have been confronted with impact evaluations, and to put it in layman's terms I must say:

    1) The benefits of an intervention in most cases continue to accrue over a long time in the future, so CBA becomes tedious and less accurate as it requires too many assumptions about the future.

    2) Behavioral change is indeed the most sought-after goal of most interventions, and such changes have a snowballing effect after a certain period of inertia. It is difficult to project with any accuracy what those effects would be, so it becomes an exercise in fantasy.

    3) The benefits cut across sectors, at least in the long run. Again, the same problem.

    So, in my view, a more realistic and cogent set of KPIs, rather than an accounting approach, would be a better solution. KPIs set at the beginning of a project, after a baseline to benchmark against, would be the nearest to accurate and acceptable.

  • Reply to: Information-driven voter disagreement over more authoritarianism: Experimental evidence from Turkey: Guest post by Ceren Baysan   1 week 3 days ago