Saturday, November 19, 2011

Improve cooperation easily

Update: link to NYT story on Philippines' politics

Experimentalists have tried to understand what could increase cooperation in games such as the prisoner's dilemma. Charness, Frechette and Qin allow for side payments. Dal Bo and Dal Bo show that moral messages have a significant but temporary effect. Goette, Huffman and Meier, in a very interesting paper, show that if people are assigned to groups, cooperation will be higher between two members of the same group than between two members of different groups. You also have papers using trust games, typically of the form:
one agent, the “trustor” (A), can send some or none of an endowment provided by the experimenter to another agent, the “trustee” (B), who receives triple the amount sent. B can then return some or none of what he or she received to A.
Ben-Ner, Putterman and Ren show that two-way communication improves trust (and trustworthiness) substantially. Charness and Dufwenberg find that the content of the communication is important: bare promises are as good as no communication at all. And so it goes on and on.
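To fix ideas, here is a minimal sketch of the payoff arithmetic in that trust game (the amounts and the function name are mine, purely for illustration):

```python
# A minimal sketch of the trust game described above; amounts are made up.
def trust_game(endowment, sent, returned):
    """Payoffs for trustor A and trustee B: A sends `sent` out of
    `endowment`; B receives triple that and returns `returned` to A."""
    received_by_b = 3 * sent
    assert 0 <= sent <= endowment and 0 <= returned <= received_by_b
    payoff_a = endowment - sent + returned
    payoff_b = received_by_b - returned
    return payoff_a, payoff_b

# Example: A sends 5 of a 10 endowment; B returns 7 of the 15 received.
print(trust_game(10, 5, 7))  # -> (12, 8)
```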

This is all good, but here is a more efficient way to prove your willingness to cooperate, provided by Filipino politicians. The background story is that the former president, Gloria Arroyo, is reportedly sick and cannot get treatment for her ailment in the Philippines. She wants to leave, but the current government is afraid she will not come back because of potential corruption cases against her. Her husband's lawyer found a way to make her return credible:
Earlier in the week, Ferdinand Topacio, the lawyer for Mrs. Arroyo’s husband, said in a television interview that he was so confident that his clients would return if allowed to leave that he would have one of his testicles removed if they did not.
After the arrest warrant was issued on Friday, Edwin Lacierda, a presidential spokesman, said: “The order in the Pasay court has allowed Attorney Topacio to save his family jewels.”

FRANCE FRANCE FRANCE

Via Charles Blow, Pew had very interesting insights into people's perceptions of their own country. I had always felt, without data, that French people had no strong feeling about their Frenchness, or that the French were less "patriotic" (for lack of a better word) than people in other countries, say other OECD countries. Thanks to Pew, here is a small snippet of data to back this up:
And this makes this graph even more startling:

So, this being said, I wanted to have a patriotic moment and put on my French hat for something that has been slightly upsetting me in the past months. A lot of articles have mentioned the widening French/German spread in bond yields while, in parallel, mentioning the actual Spanish and Italian yields. This means that in the same paragraph, you hear about a relative value and an absolute value. This has been used as evidence that France is next. There is no denying that the spread has increased:
10-Year French/German spread

Now, I haven't seen in any of those articles a way to distinguish whether the widening spread was due to a flight to safety towards Germany or an aversion to French risk. Via MarketBeat, here are the French and German 10-year bond yields over the last 30 years:
Now, looking at the data, I would actually lean towards the position that France is considered riskier:


German and French 10-year government bond yields
But I still think there is an empirical test that has not been done, and that should be, so that we can know the whole story.
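One simple version of that test: over any episode of spread widening, split the change in the spread into the part coming from rising French yields (French risk) and the part coming from falling German yields (flight to safety). A minimal sketch, assuming daily yield series in a pandas DataFrame whose column names are mine:

```python
import pandas as pd

def decompose_spread_change(yields: pd.DataFrame, start: str, end: str) -> dict:
    """Split a 10-year spread change into France-risk and flight-to-safety parts.

    `yields` is assumed to have columns 'fr' and 'de' (daily 10-year yields),
    indexed by date.
    """
    d_fr = yields.loc[end, "fr"] - yields.loc[start, "fr"]
    d_de = yields.loc[end, "de"] - yields.loc[start, "de"]
    return {
        "spread_change": d_fr - d_de,
        "france_risk": d_fr,        # widening driven by rising French yields
        "flight_to_safety": -d_de,  # widening driven by falling German yields
    }
```

If most of the widening shows up in the france_risk leg, the "France is next" reading looks more plausible; if it is mostly the German leg, it is a flight to safety.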

On a related note, Floyd Norris gives us a nice chart today showing that Germany rebounded quite well from the bottom of the crisis.
That's interesting, but here are the quarter-on-quarter real GDP growth rates since 2005:
And the quarterly growth rate in German real GDP between 2008Q4 and 2009Q1 was -4%, versus -1.5% for France. So yeah, Germany grew quicker afterwards - but it also had a much deeper hole to climb out of.
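A back-of-the-envelope calculation (my own, with a made-up baseline of 100) shows why the rebound comparison is partly mechanical:

```python
# Index both economies at 100 before the worst quarter (made-up baseline).
germany = 100 * (1 - 0.040)  # 96.0 after a -4% quarter
france = 100 * (1 - 0.015)   # 98.5 after a -1.5% quarter

# Growth needed just to return to the pre-crisis level:
print(f"Germany: {100 / germany - 1:.2%}")  # ~4.17%
print(f"France:  {100 / france - 1:.2%}")   # ~1.52%
```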

The rising cost of financial intermediation

A lot of the recent debate on regulation in the finance industry has revolved around issues of size. How big is too big, in the financial sector? Should the largest financial corporations be taxed and monitored more than others? This set of issues has been dubbed "systemic risk", and a lot of measurement and theory work has gone into it.

Systemic risk, however, is about the size of one financial corporation relative to others. Thomas Philippon, in ongoing research, asks a different question: how big is the financial industry in the US as a whole? In relation to its output, does that size seem reasonable?

The graph above answers the question of the size of the financial sector: it shows the income share of the financial sector. Income is measured as wages and profits; there are some difficulties in obtaining those on a consistent basis, so Philippon looks at different data sources, which correspond to the different lines. Across sources, the message is the same: the income share of finance has fluctuated over time, but it is now larger than it has ever been.

Now, of course, this need not be, per se, a bad thing. Perhaps there are good reasons to allocate a larger share of income to the financial industry. In fact, Baumol showed in 1967 that in a standard neoclassical growth model, the income share of a sector can grow over time if 1) it experiences lower technological progress than other sectors and 2) the elasticity of substitution across sectors is less than one.
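A quick way to see the mechanism, in a two-sector sketch of my own (my notation, not Baumol's): with a CES aggregator with elasticity of substitution $\sigma$ and weights $\omega_i$, relative expenditure shares satisfy

\[ \frac{p_1 y_1}{p_2 y_2} = \left(\frac{\omega_1}{\omega_2}\right)^{\sigma} \left(\frac{p_1}{p_2}\right)^{1-\sigma}, \qquad \frac{p_1}{p_2} = \frac{A_2}{A_1}, \]

where the second equality comes from competitive pricing with linear technologies $y_i = A_i \ell_i$ and a common wage. If $\sigma < 1$ and $A_2$ grows faster than $A_1$, the relative price of sector 1 rises, and so does its income share: the technologically lagging sector absorbs a growing share of income.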

The financial sector, however, is not just any type of sector: it does not produce a special variety of goods; instead, it provides intermediation services between different parts of the economy. Baumol's standard explanation for the apparent "superiority" of finance (relative technological progress, elasticity of substitution) does not apply. So Philippon sets out to construct variations of the neoclassical growth model that capture what he deems are the two most important roles of the financial sector: 1) it transfers funds from households to other households, corporate borrowers and the government, and in the process provides monitoring services; 2) it provides liquidity services to households, i.e. it holds their cash and makes it available to them.

He then asks: what path of intermediation costs does the model need in order to match the observed path of finance's income share? This is the same type of approach that Mehra and Prescott took with the equity premium. They asked: given the large observed equity premium, and the comovement of equity values with consumption, what risk aversion is consistent with optimization of rational agents? The answer was: a really high risk aversion. Philippon's answer to his question (what intermediation cost is consistent with the rise in finance's income share in an efficient model of financial intermediation?) has the same flavor: you need pretty steeply rising costs.

How does he get to that answer? He focuses on the balanced growth path of his neoclassical economy with financial intermediation, on which he shows that the following relationship has to hold:

\[ \phi = \psi_m \frac{m}{y} + \psi_c \frac{b_c}{y} + \psi_k \frac{b_k}{y} + \psi_e\frac{e}{y} + \psi_g \frac{b_g}{y} \]

This relationship links the income share of finance in GDP ($\phi$) to the "output to GDP" ratios corresponding to the various functions of the financial sector. For example, $\psi_m$ is the unit cost of intermediation for liquidity services, and $\frac{m}{y}$ is the ratio of total "liquidity services" (in his measure, bank deposits and assets of money market mutual funds) to nominal GDP. The following terms correspond, respectively, to household debt, corporate debt, corporate equity, and government debt. Note that this is a pretty standard decomposition: in a standard neoclassical growth model, labor's income share ($\alpha$) is equal to the unit cost of labor ($w$) times the output share of labor ($h/y$).

The key idea of the paper is to use this relationship to back out something akin to the "intermediation cost" of finance - in much the same way Mehra-Prescott backed out risk aversion from the equity premium. There are some measurement problems here, related to flows and funds data, which I do not fully understand, but the idea is to rewrite the relationship above as:

\[ \phi_t = (\gamma_m \frac{m_t}{y_t} + \gamma_b \frac{b_t}{y_t} + \gamma_e \frac{e_t}{y_t} ) \psi_t \]

where now the terms are grouped by, respectively, liquidity services, debt (household, government and corporate), and equity. The term $\psi_t$ represents the average unit cost of financial intermediation, while $\gamma_m$, $\gamma_e$ and $\gamma_b$ represent the relative cost for a particular type of service. The implicit assumption here is that these relative costs are constant over time. (It's very unclear to me why he cannot just estimate the linear relationship above, with "output-specific" costs; it seems like he has all the data he needs).

Anyhow, one can now go ahead and compute a series for $\psi_t$, in what is probably the most straightforward estimation procedure ever: divide finance's share of income by what is, at its core, just a weighted measure of the output of the financial sector.
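In code, the whole procedure boils down to something like this sketch (the column names and the constant $\gamma$ weights are mine, assuming annual series stored as ratios to GDP):

```python
import pandas as pd

def intermediation_cost(df: pd.DataFrame, gamma_m: float, gamma_b: float,
                        gamma_e: float) -> pd.Series:
    """Back out psi_t = phi_t / (gamma_m * m/y + gamma_b * b/y + gamma_e * e/y).

    `df` is assumed to hold finance's income share ('phi') and the
    output-to-GDP ratios ('m_over_y', 'b_over_y', 'e_over_y').
    """
    weighted_output = (gamma_m * df["m_over_y"]
                       + gamma_b * df["b_over_y"]
                       + gamma_e * df["e_over_y"])
    return df["phi"] / weighted_output
```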
What he gets from this exercise is the picture below.
What do we learn from this? There are two points worth emphasizing.

First, intermediation costs have been moving in a stable range over the very long run - somewhere between 1.5 and 2.5 percent, i.e. a cost of 1.5 to 2.5 cents per dollar of financial "stuff" produced. That is remarkable, given how variable the underlying series that go into the construction are.

Second, the cost series has been trending upwards for the better part of the last forty years.

This second fact is the main finding of the paper; and it is puzzling, for two reasons. The first one is that we tend to think of the early 1900s as a time when the finance sector was highly monopolistic, and could have extracted higher rents (read: higher income per unit of financial stuff produced) than the current, probably slightly more competitive financial sector. Yet if you look at the graph, intermediation costs were lower then than they are now. The second reason why this rise in costs is puzzling is the IT boom. The financial sector invested heavily in IT; today most trading is conducted electronically; even the floor of the NYSE was closed down and replaced by computers. All of this suggests large productivity gains in intermediation, yet according to Philippon's measure, none of these gains were passed through in the form of lower intermediation costs.
So what happened? Philippon sees two possible types of answers. The first type is "efficient" answers. It is possible that the financial sector is providing services that he forgot to account for, and the remuneration of which increased in the past 40 years. Another way to put it is that there is an omitted variable in the decomposition above, which biases the cost estimates upwards - because the financial sector actually produced more stuff. This other stuff could be services such as providing better information about financial assets; think about portfolio managers, for example. This should still count as an increase in output - only one that the neoclassical model fails to capture. The second type of explanation is that the financial sector is doing something that would not contribute to an increase in output in any type of model, but for which financial intermediaries are still being rewarded. And indeed, the volume of asset trading has been booming in the past 20 years, suggesting that the increase in the financial sector's share of income may be linked to a surge in transactions, some (or a lot) of which may not be creating any value added.
To be fair, this is a confusing paper. It sets out to do what Mehra and Prescott did, but in some sense, it only does half of it. The strength of Mehra-Prescott was that they could compare their high estimates of risk aversion to the low micro-founded estimates and say: ahah! there is an order of magnitude of difference; this is a puzzle. But Philippon does not really do this. He has, for now, no direct evidence that contradicts his model-based measure of the cost of financial intermediation. All he can say is that we have a hunch costs shouldn't have increased, especially given the IT revolution. But it's just a hunch. So this paper really needs further exploration of the "other half" of the puzzle: can we directly measure intermediation costs, and if so, have they gone up?

But imagine we do find stable or falling micro intermediation costs. Then we have a major puzzle. Where might the wedge between macro and micro costs come from? Does it reflect some deep inefficiency in terms of resource allocation? Now, those questions are all fuzzy and predicated on the idea that there actually is a puzzle. But it looks like there might well be one, and making progress on accounting for it seems like a better use of our time than, say, setting up tents and clashing with police in public parks. Although it's probably a bit more austere.

Thursday, November 17, 2011

Guest post: Incremental Technological Growth

This is a guest post written by someone under the pseudonym Nicolas Anelka. This guest is really bad at choosing pseudonyms.

The recent passing of Steve Jobs has generated a lot of discussion of the "Jobs" factor, that special something that could explain why the former Apple and Pixar CEO seemed to revolutionize every market and every product segment he laid his eyes on.


An interesting dissenting voice among the chorus hailing the "genius innovator" in Jobs is New Yorker staff writer Malcolm Gladwell. In a recent article, he makes the point that Jobs did not really innovate so much as tweak other people's innovations. He didn't invent the mouse and the windows-based UI for PCs; Xerox did, in the 70's. He didn't engineer the first cellphones, or imagine the first 3D movies, or produce the first tablet. The technology underlying all of Jobs' breakthrough products was already there before he came in. His genius, Gladwell argues, consisted in:
[T]aking what was in front of him —the tablet with stylus— and ruthlessly refining it
This view has echoes in the recent growth literature. Gladwell is actually taking his cue from research by Meisenzahl and Mokyr. The authors look at a sample of mechanics and engineers from the British Industrial Revolution; they find that a majority of the sample did not contribute "macro" innovations, that is, the invention of new products and techniques. Instead, they produced "micro" innovations, that is, tweaks and iterations of others' groundbreaking but imperfect "macro" innovations. Interestingly, 40% of the "tweakers" in their sample did not attend school, but were instead trained through apprenticeship at workshops, whose techniques they eventually went on to improve in their own workshops. The widespread tweaking, and the presence of a large workforce with the technical skills to carry it out, was crucial in Britain's take-off, the authors argue.

Now, for the million-dollar question: is tweaking enough to generate aggregate productivity growth? Yes, argue two recent growth theory papers, one by Lucas (yes, that Lucas) and Moll and another by NYU grad students Tonetti and Perla. Both papers argue, though in different setups, that growth in aggregate labor productivity can be generated just from "unproductive" entrepreneurs replicating the processes and ideas of the "frontier" entrepreneurs, the most productive ones. One can think of it as "beggar-thy-neighbour" or "copycat" growth; another way to view it is through the lens of Gladwell's and Meisenzahl-Mokyr's "tweaking" concept.

The paper by Lucas and Moll, in particular, is fascinating. In a very simple setup, where firms' only decision is how to allocate labor between searching for ideas by observing others and producing with their own technology, they find that the copycat mechanism is sufficient to generate endogenous growth exactly when the initial productivity distribution has fat tails - that is, when the stock of ideas to discover is, in some sense, infinite. Because finding and adopting better ideas increases the number of firms using good ideas, and in turn the probability of finding a better idea for those lower down on the productivity scale, there is a positive search externality in their economy: a planner would be keen on introducing a tax that encourages unproductive firms to search, rather than use their own bad ideas to produce stuff.
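To get a feel for the mechanism, here is a minimal simulation sketch of copycat dynamics - my own toy version, not the Lucas-Moll model itself: productivities start out Pareto-distributed (fat tails), and each period a fraction of firms meets a randomly drawn firm and adopts its productivity if it is higher.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_copycat(n_firms=100_000, periods=50, search_frac=0.3, tail=1.5):
    """Imitation dynamics on a fat-tailed (Pareto) productivity distribution."""
    z = rng.pareto(tail, n_firms) + 1.0  # initial productivities
    mean_log = []
    for _ in range(periods):
        searchers = rng.random(n_firms) < search_frac
        met = rng.integers(0, n_firms, n_firms)            # random meetings
        z = np.where(searchers, np.maximum(z, z[met]), z)  # adopt if better
        mean_log.append(np.log(z).mean())
    return np.diff(mean_log)  # per-period growth of average log productivity

print(simulate_copycat()[:5])  # positive growth from pure imitation
```

In a finite simulation, growth eventually dies out as everyone converges to the sample maximum; the fat-tail condition matters in the limit, where there is always a thick mass of better ideas left to find.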


That is a stunning conclusion. The growth literature, both empirical and theoretical, has argued for quite a while that patents and, more generally, property rights are crucial in promoting long-run productivity growth, because they guarantee the right incentives for entrepreneurs to invest in Research and Development. Endogenous growth in Romer's (1990) model relies on the assumption that innovators are monopolists in the varieties they introduce; otherwise, under perfect competition, the present value of innovation benefits would be eroded down to zero, and any small research cost would drive entrepreneurs away from innovation. In that sense, copying is unambiguously bad for growth.


Lucas and Moll, on the other hand, argue that copying is good. Of course, these are very different models: R&D models of growth focus on what happens at the margin, while copycat models of growth focus on the average shift in productivity, but totally abstract from thinking about "where" the better ideas come from. Still, it raises the possibility that, after all, patenting and protection of innovation may not always be good things. This is something I always thought was implicit in the law on, for example, industrial patents in France, which guarantees exclusivity only for a limited number of years. But it also seems to have mattered during the British Industrial Revolution: in the sample of Meisenzahl and Mokyr, 54% of the "tweakers" never deposited a patent. What's more, the authors show evidence that the tweakers actively engaged in the exchange and diffusion of their ideas. That raises the question of what the optimal amount of protection of innovators is in an economy where R&D costs coexist with "copycat" growth. That question, to the best of my knowledge, is still open. (But my knowledge of this field is not very good.)


One point, still: models of "copycat" growth are not really models of tweaking, in that there is no marginal contribution of the copiers to the innovation they replicate. Steve Jobs did, after all, improve on the dismally cluttered tablets of Microsoft, Lenovo and the like when he introduced the iPad and its blinding simplicity of use. It's unclear whether allowing for tweaking in those recent "copycat" models would significantly alter their predictions - other than making the planner even more willing to incentivize copying. Still, the "Jobs" growth model hasn't yet been (completely) written. Yes, that was a terrible pun; but then again, it was just a tweak on the far better millions of jokes that Jobs' passing spawned.

Friday, November 11, 2011

A post about inequalities

Raghuram Rajan has a piece on Project Syndicate where he echoes David Brooks's column explaining that the inequality we should worry about is the inequality in education, and concludes that
the broken educational and skills-building system is responsible for much of the growing inequality that ordinary people experience
As he says, this relies on the following observation:
the single biggest difference between those at or above the top tenth percentile of the income distribution and those below the 50th percentile is that the former have a degree or two while the latter, typically, do not.

For now, let us forget about the affordability of education in the US in particular, which could explain the consequences of income inequality for education inequality (see Mike Konczal for some graphs on the rising importance of student debt, and the rise of student loans versus auto loans or mortgages in the last 10 years). The fact is, the inequalities fought by the 99% movement and the ones Rajan is talking about are... not the same thing. Autor, Katz and Kearney show that the rise in income inequality has been a constant trend since the sixties, and its movement was sometimes uncorrelated with the college premium (especially in the seventies):

This might seem trivial, but they conclude that

These divergent patterns suggest that the growth of inequality is unlikely to be adequately explained by any single factor
Paul Krugman shows that the magnitude is also quite different. There has also not been a significant rise in the college premium relative to high school in the last 30 years (though the previously mentioned paper by Autor et al. seemed to show a bigger change than the graph below):



It is also clear that the very top of the distribution has done far better than the rest. The relative income share of the top thousandth has increased dramatically compared to that of their fellow members of the top 1% club:


I was lucky to attend the INET conference, which looked, among other things, at trends in inequality. What struck me about the explanation of rising inequality by skill-biased technological change (SBTC) is that it cannot explain a lot of the movement in the income share of the top 0.1% in Anglo-Saxon countries. Institutions must play a role. Courtesy of Emmanuel Saez: how can you explain the difference in movements between those two graphs with a technological explanation?



It is interesting that at the conference, Arin Dube made the point that the main issue we do not understand is really what is going on in the top 1% (he mostly stressed compensation in the financial sector, but as we'll see below, it is not clear to me that this should be the main point of interest, given the job composition of the top thousandth).

In his piece, Rajan also makes the point that
[M]any of the truly rich are entrepreneurs. [M]any of the wealthy are sports stars and entertainers, and (...) their ranks include professionals such as doctors, lawyers, consultants, and even some of our favorite progressive economists. In other words, the rich today are more likely to be working than idle.
I sympathize with that, looking at the distribution of the top 0.1% in the US, via this awesome paper (pdf) by Bakija, Cole and Heim:

However, the US does not have a good record on income mobility, as measured by, for example, the correlation between parents' income and their children's.
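The usual summary statistic here is the intergenerational income elasticity: the slope from regressing children's log income on their parents' log income, with higher values meaning less mobility. A minimal sketch, assuming matched parent/child income arrays (the data and function name are mine):

```python
import numpy as np
import statsmodels.api as sm

def intergenerational_elasticity(parent_income, child_income):
    """OLS slope of log(child income) on log(parent income).

    A slope near 0 means high mobility; near 1, incomes are inherited.
    """
    x = sm.add_constant(np.log(parent_income))
    result = sm.OLS(np.log(child_income), x).fit()
    return result.params[1]
```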


(This TED talk made the joke: if an American wants to live the American dream, he should go to Denmark.)

The bottom line is that I don't think we can dismiss income inequality, or claim that most inequality comes from differences in education: not only is this not true, but education reform might not change anything if the problem of income mobility is not tackled in parallel. Finally, Institutions (with a big I, because that makes the word vague enough that I don't have to commit) clearly have an impact.

As a final point, this is not to say that SBTC does not matter. As the FT series on the international middle-class squeeze shows, there clearly is a worrying dearth of middle-income jobs in OECD countries, leading to median income stagnation and growing inequality between the highest-paid and lowest-paid jobs.

The idea that different explanations apply to different parts of the income distribution seems to be where research is converging. Dube made the point that

  • The minimum wage and other institutional factors are important to explain the divergence in the bottom half of the distribution, say at the 10th percentile
  • The problems at the median are better explained by institutional factors such as de-unionization (I am a bit confused that this can explain the changes internationally, though. Globalization seems more reasonable, as Autor mentions here about US job polarization: "Key contributors to job polarization are the automation of routine work and the international integration of labor markets")

So reforming the education system and job-training programs should definitely be an objective. However, income inequality at the top of the income scale seems like a big, and easily solvable, problem compared to the huge issue of adapting skills to technology and globalization.

Monday, November 7, 2011

The diversification of global supply chains

Today's New York Times on the impact of Thailand's floods on hard drive suppliers:
Until the floodwaters came, a single facility in Bang Pa-In owned by Western Digital produced one-quarter of the world’s supply of “sliders,” an integral part of hard-disk drives. 
Last March, in Japan:
Shin-Etsu’s Shirakawa plant is responsible for 20 percent of global silicon semiconductor wafer supply. The plant is located in Nishigo Village, Fukushima Prefecture. Shin-Etsu reported that there has been damage to the plant’s production facilities and equipment.
It was surprising to me that, for specific components, concentration seems to be the norm. In particular, Japan was a big object of analysis after last March's earthquake.


However, the trend seems to be general. Shin et al. discuss the trend towards a single-supplier chain and evoke explanations that are all, from what I can see, related to economies of scale (reputation building, communication, coordination, efficiency in the use of transportation/containers). Kekre et al. stress the benefits in terms of quality control: they argue that GM and Ford's adoption of single sourcing led to quality improvements, and show that firms with high quality restrict their number of suppliers.

An article in the Financial Times last April made a good summary of the fact that we are still in a transition phase.

First, the degree of diversification varies by sector, depending notably on complexity:

It is in sectors such as carmaking, and the manufacture of construction equipment and electronics, that the repercussions of last month’s disaster have been most marked. Suppliers in Japan (...) specialise in making parts hard for other businesses to create.
Though it is not clear that complexity is the main factor:
In electronics, about 80 per cent of basic component production, along with a great deal of final assembly, is based in China. The situation is similar for clothing and footwear. In such industries, there are few opportunities to mitigate the consequences of a disaster in south China of the type that gives Mr Cox nightmares. But in other sectors, particularly in engineering, where expertise in production is spread more widely and pricing pressures are less intense, many companies are instituting strategies to insulate themselves at least partially.

Interestingly, rising labor costs make diversification easier, with high-cost countries becoming attractive again, which reduces the incentive to outsource production:
The trend towards localism in manufacturing is embodied in a gleaming new $6.8bn semiconductor manufacturing complex near Albany, New York state.
Finally, the diversification of the supply chain might be the final objective, but it still has a long way to go:

Companies at the cutting edge of supply-chain planning have set up data systems to complement their multiple networks. These enable them to remain abreast of problems in various locations, using spare capacity from plants elsewhere to provide extra parts. Swiss-Swedish industrial group ABB has 5,500 suppliers linked via data networks and transport connections to assembly factories spread globally. Control of the flow of parts is devolved to 450 supply chain experts based in 40 countries, who ABB feels are best placed to match supply to fluctuations in local demand.

Wednesday, November 2, 2011

Emerging countries and US financial markets

FT Alphaville reads a UBS report showing that emerging markets and the G3 (US, Europe, Japan) have highly correlated financial data. The first graph shows the long-term bond yield spread over short-term interbank rates for both groups:

They point out that the macro data are highly correlated in terms of rates of growth (e.g. inflation or real GDP growth).

I am currently writing a paper with an awesome coauthor on the impact of some political shocks on emerging markets' bond spreads over US Treasuries. One part of the paper looks at the determinants of emerging market spreads. When you consider "global" variables, such as the 3-month swap rate or the rate on corporate BAA bonds (sometimes seen as good substitutes for emerging market bonds), the fit is quite amazing. Below, I show the results of some regressions of EMBI spreads (the emerging bond market index by country, from JPMorgan) on long-term and short-term Treasury yields, corporate BAA bond spreads, the 5-year swap rate, a commodity index and a volatility index. Each column is a country at a different date (note the small number of observations in the first column). The fit is quite impressive.
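For the curious, the regressions are plain OLS, something like this sketch (the column names are mine, assuming a monthly DataFrame for one country):

```python
import pandas as pd
import statsmodels.api as sm

def fit_embi_regression(df: pd.DataFrame):
    """OLS of a country's EMBI spread on 'global' US financial variables."""
    x = sm.add_constant(df[["t10y", "t3m", "baa_spread",
                            "swap5y", "commodity_idx", "vix"]])
    return sm.OLS(df["embi"], x, missing="drop").fit()

# result = fit_embi_regression(df); result.rsquared is the "fit" I refer to.
```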



The high explanatory power of global/US variables (yeah, global = American, haven't you heard of the World Series?) is definitely not new. Uribe and Yue (2006) find that variations in the US spread, along with innovations to country spreads (i.e. shocks to local bond yields), explain 85% of the variation in individual country spreads. They argue:
Most of this fraction, about 60% points, is attributed to country-spread shocks. This last result concurs with Eichengreen and Mody (2000), who interpret this finding as suggesting that arbitrary revisions in investors sentiments play a significant role in explaining the behavior of country spreads.
Hund and Lesmond (2008) report that "Conversations with emerging market bond dealers and hedge fund managers confirms that it is not uncommon for them to hedge their risk in US equity markets, most usually the liquid S&P 500 futures market, but occasionally in the more volatile NASDAQ market", which might explain the correlation.

I feel it is quite understandable why bond spreads or stock markets are highly correlated between the two groups, if we assume that investors have an international strategy and capital can move easily from one investment to another. It is more puzzling to see the high aggregate correlation in GDP growth in the UBS report: