We -- along with the Government of Malawi, which runs the program we evaluated, and the World Bank, its major funder -- thought the country's large public works program would improve food security and increase fertilizer use. We set out to study design variants that might make the program more effective, but instead learned that it has no effect on either food security or fertilizer use (https://www.sciencedirect.com/science/article/pii/S0304387817300354). These aren't just imprecisely estimated effects -- we can rule out any meaningful improvement in the outcomes the program specifically targets.
We had a hard time getting either academics or policymakers to accept these results. Part of that is a shortcoming of the research itself: we can't explain why the program fails, though we can rule out many of the mechanisms that have been suggested, so the research doesn't offer specific advice about what would fix it. But I concur -- it's hard to publish, or to engage in policy discussions around, a null result, even when it's a null for an expensive, large-scale program!