It seemed to me that there were two problems. First, it is hard to compare welfare before and after the experiment. If the problem at stake is the take-up of HIV medication, then a treatment that increases the take-up rate is almost indisputably an improvement. But other interventions, say on savings behavior, are less clear-cut.
The second problem is about the intervention itself. Mullainathan discussed an experiment where he and some colleagues were simply sending reminders to save money for a future purchase of fertilizer. The reminders "worked", in that the purchase of fertilizer increased. But what was the reminder actually doing? What behavioral bias did it solve? Was the problem actually a behavioral problem?
In any case, what I wanted to discuss is that the main issue with behavioral econ right now seems to be the sheer number of biases that have been tested and "proven". Have fun. So we don't really know what we're doing.
So let's say we choose the biases we want to study. We might then be entering into Hughes' theory of technology:
the technologies we end up using aren’t determined by any objective measure of quality. In fact, the tools we choose are often deeply flawed. They just happened to meet our particular social needs at a particular time and then became embedded in our culture.
Think of the car story: we're stuck today in an equilibrium with plenty of gasoline refueling stations and cars that run on gasoline. But it would make a lot of sense to shift toward more natural-gas cars and stations: they work the same way, natural gas has a far lower energy-equivalent impact on greenhouse gas emissions, and it is now far cheaper, at least in the US. Here's James Hamilton quoting Christopher Knittel:
Large-scale adoption of natural gas vehicles requires coordination between vehicle manufacturers, consumers, and refueling stations-- either existing gasoline stations or replacements. This creates a chicken-and-egg problem, or a network externality issue.
I was thinking about this more generally because, as Sendhil mentioned, there are a lot of theories we can test in behavioral economics. The choice of which ones we test first might be important...
As a last point, one thing that might help us solve the problem is to go one step back and look for models of the underlying thought process that will help us understand all those biases. This is what I feel Mike Woodford has been trying to do in an amazing paper that everybody should read. The simple idea is that people make choices not under a budget constraint, but under a cognitive constraint. With a budget/monetary constraint, you have to make choices between goods, say. With a cognitive constraint, you have to make choices about the precision of the information you're collecting. For instance, is it worth it to be able to distinguish between two different shades of blue? Is it worth it to make a distinction between two interest rates a couple of basis points apart? More generally, you discretize any continuous variable. You can also think of how you interpret probabilities. Certainty is easy to understand, but distinguishing 65% from 55% is pretty hard.
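To make the discretization idea concrete, here is a toy sketch of my own (not Woodford's actual model): an agent who can only hold a handful of mental categories coarsens a continuous interest rate into bins, so two rates a few basis points apart become literally indistinguishable to them. The function name, bin count, and rate range are all hypothetical choices for illustration.

```python
def coarse_perception(rate, lo=0.0, hi=0.10, k=4):
    """Map a continuous interest rate to one of k mental categories.

    Returns the midpoint of the bin the rate falls into -- the agent's
    internal representation of the rate.
    """
    rate = min(max(rate, lo), hi)          # clamp to the range the agent considers
    width = (hi - lo) / k                  # each mental category covers this much
    bin_index = min(int((rate - lo) / width), k - 1)
    return lo + (bin_index + 0.5) * width  # the agent "sees" the bin midpoint

# Two rates 25 basis points apart land in the same mental category,
# so the agent treats them identically...
assert coarse_perception(0.0400) == coarse_perception(0.0425)
# ...while a large enough gap crosses a category boundary and gets noticed.
assert coarse_perception(0.02) != coarse_perception(0.06)
```

The point of the sketch is just that coarse internal categories mechanically generate "bias-looking" behavior, like insensitivity to small differences in rates or probabilities.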
Rational inattention, or Xavier Gabaix's "sparsity-based model of bounded rationality", can also generate a list of behavioral biases and could provide a more general approach. Importantly, Mullainathan concluded his talk on the impact of scarcity on what we choose to focus on, which is quite consistent with the three models mentioned above.
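For readers who want the gist in one line, the standard rational-inattention formulation (this is Sims's generic setup, not anything specific from the talk) has the agent choose a signal structure, trading off expected utility against the cost of attention, measured by the Shannon mutual information between the state \(x\) and the signal \(s\):

\[
\max_{f(s \mid x)} \; \mathbb{E}\left[u(a(s), x)\right] - \lambda \, I(x; s),
\]

where \(\lambda\) prices attention. As \(\lambda\) rises, signals get coarser, and nearby states, like two interest rates a few basis points apart, become indistinguishable, exactly the discretization intuition above.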