Richard Feynman’s advice for science in 1974 resonates still

Since we’ve just had the latest Nobel Prizes announced, I thought I’d share this. I just finished listening to the audiobook of “Surely You’re Joking, Mr. Feynman!”, a collection of stories (released in 1985) from the life of Richard Feynman, the 1965 Nobel Prize-winning physicist. I almost didn’t get very far into it, because he is a somewhat annoying character, especially at the start of the book, regaling the listener with stories of how clever he was and how he tricked this or that person, not to mention sexist attitudes and behaviors about which he was not only unrepentant, but oddly proud. However, the book also displays a lot of curiosity about other cultures and about learning a myriad of odd topics, and three things stood out to me.

First, in discussing his work at Los Alamos as part of the Manhattan Project, there is an interesting discussion of management and the need to explain to workers why they are being asked to do what they are doing. He notes that secrecy concerns meant young workers were asked to work hard on problems without knowing why; once he was able to tell them the reason, they drastically increased productivity and came up with lots of new solutions to problems on their own.

Second, his account of the Brazilian education system of the time, with its enormous dedication to rote learning and students able to regurgitate definitions from books but not apply them to the real world, describes something I think we still see in many educational settings today.

But perhaps the most interesting is his 1974 Caltech commencement address, with which the book ends, and which he titled Cargo Cult Science. It is amazing to me how strongly several of its themes, written in the year I was born, still resonate today when applied to economic theory and applied work. Here are some examples I thought I’d share.

On theory, empirics, and the need to point out what doesn’t fit

“It’s a kind of scientific integrity, a principle of scientific thought that corresponds to a kind of utter honesty—a kind of leaning over backwards.  For example, if you’re doing an experiment, you should report everything that you think might make it invalid—not only what you think is right about it: other causes that could possibly explain your results; and things you thought of that you’ve eliminated by some other experiment, and how they worked—to make sure the other fellow can tell they have been eliminated.

Details that could throw doubt on your interpretation must be given, if you know them.  You must do the best you can—if you know anything at all wrong, or possibly wrong—to explain it.  If you make a theory, for example, and advertise it, or put it out, then you must also put down all the facts that disagree with it, as well as those that agree with it.  There is also a more subtle problem.  When you have put a lot of ideas together to make an elaborate theory, you want to make sure, when explaining what it fits, that those things it fits are not just the things that gave you the idea for the theory; but that the finished theory makes something else come out right, in addition.” – i.e. does your theory merely fit the facts you had when putting it together – or does it generate new (and testable) additional predictions?

On publication bias and policy advice bias

“The first principle is that you must not fool yourself—and you are the easiest person to fool.  So you have to be very careful about that.  After you’ve not fooled yourself, it’s easy not to fool other scientists.  You just have to be honest in a conventional way after that.

One example of the principle is this: If you’ve made up your mind to test a theory, or you want to explain some idea, you should always decide to publish it whichever way it comes out.  If we only publish results of a certain kind, we can make the argument look good.  We must publish both kinds of result.  For example—let’s take advertising again—suppose some particular cigarette has some particular property, like low nicotine.  It’s published widely by the company that this means it is good for you—they don’t say, for instance, that the tars are a different proportion, or that something else is the matter with the cigarette.  In other words, publication probability depends upon the answer.  That should not be done.

I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state.  If you don’t publish such a result, it seems to me you’re not giving scientific advice.  You’re being used.  If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all.  That’s not giving scientific advice.”

And on the importance of replication and building on past evidence

“When I was at Cornell, I often talked to the people in the psychology department.  One of the students told me she wanted to do an experiment that went something like this—I don’t remember it in detail, but it had been found by others that under certain circumstances, X, rats did something, A.  She was curious as to whether, if she changed the circumstances to Y, they would still do, A.  So her proposal was to do the experiment under circumstances Y and see if they still did A.

I explained to her that it was necessary first to repeat in her laboratory the experiment of the other person—to do it under condition X to see if she could also get result A—and then change to Y and see if A changed.  Then she would know that the real difference was the thing she thought she had under control.

She was very delighted with this new idea, and went to her professor.  And his reply was, no, you cannot do that, because the experiment has already been done and you would be wasting time.  This was in about 1935 or so, and it seems to have been the general policy then to not try to repeat psychological experiments, but only to change the conditions and see what happens.

Nowadays there’s a certain danger of the same thing happening, even in the famous field of physics.  I was shocked to hear of an experiment done at the big accelerator at the National Accelerator Laboratory, where a person used deuterium.  In order to compare his heavy hydrogen results to what might happen to light hydrogen he had to use data from someone else’s experiment on light hydrogen, which was done on different apparatus.  When asked he said it was because he couldn’t get time on the program (because there’s so little time and it’s such expensive apparatus) to do the experiment with light hydrogen on this apparatus because there wouldn’t be any new result.  And so the men in charge of programs at NAL are so anxious for new results, in order to get more money to keep the thing going for public relations purposes, they are destroying—possibly—the value of the experiments themselves, which is the whole purpose of the thing.  It is often hard for the experimenters there to complete their work as their scientific integrity demands.”

He concludes with a wish for students that is still a hope for us all today

“I have just one wish for you—the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a need to maintain your position in the organization, or financial support, or so on, to lose your integrity.  May you have that freedom.”


Authors

David McKenzie

Lead Economist, Development Research Group, World Bank
