If you are a resident of a rich country and travel to a poor one, you are continually amazed at how inexpensive life is there. One rationalizes this with the lower local wages, which make domestic goods (and price-discriminating imports) cheaper. Is this anecdotal evidence true in general? Does it hold across all countries?
Fadi Hassan finds that rich countries do indeed have higher price levels. But as you move further down the development ladder, the statistical evidence is not that clear, and at the lowest rungs the cost of things may be increasing again. The analysis uses the ratio of purchasing power parity to the exchange rate, as measured in the Penn World Tables, and finds that the best non-linear fit of the price-income relationship is not increasing for 40% of the countries. The challenge now is to understand why this is so.
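The price level measure discussed above can be sketched in a few lines. This is not Hassan's actual code or data: it builds a synthetic cross-section where the price level is the PPP-to-exchange-rate ratio, fits a quadratic to its relationship with log income, and checks over what share of the sample the fitted curve is not increasing.

```python
# Sketch (synthetic data, not Hassan's): the price level is the ratio of the
# PPP conversion factor to the market exchange rate, as in the Penn World Tables.
import numpy as np

rng = np.random.default_rng(0)
n = 150
log_income = rng.uniform(6, 11, n)                            # log GDP per capita
ppp = np.exp(0.5 * log_income) * rng.lognormal(0, 0.1, n)     # PPP conversion factor
xr = np.exp(0.5 * log_income - 0.05 * (log_income - 8.5)**2)  # market exchange rate
price_level = ppp / xr                                        # the measure in question

# non-linear (quadratic) fit of the log price level on log income
coefs = np.polyfit(log_income, np.log(price_level), 2)

# slope of the fitted curve at each observation: d(log P)/d(log Y)
slope = 2 * coefs[0] * log_income + coefs[1]
share_non_increasing = np.mean(slope <= 0)
print(f"share of countries where the fit is not increasing: {share_non_increasing:.0%}")
```

The synthetic exchange-rate equation is rigged so that the price-income relationship is U-shaped, which is one way the fit can fail to be increasing at the bottom of the income distribution.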
Showing posts with label measurement.
Wednesday, July 20, 2011
Thursday, June 30, 2011
Do not waste degrees of freedom with macro data
Dealing with microdata is relatively easy, as you have plenty of data points and can freely add explanatory variables without running the risk of running out of degrees of freedom. The story is different for macrodata: series are much shorter, and one can quickly eat up degrees of freedom by using lagged variables. The prime example is the often-abused vector autoregression (VAR), which keeps getting larger, and faster than new data points accumulate. The latest fad is to run regressions with time-varying parameters, including in VARs, which is deadly for degrees of freedom, as it is roughly equivalent to adding a boatload of dummy variables to the mix. Hence the need to be more parsimonious.
How parsimonious should one be? Joshua Chan, Gary Koop, Roberto Leon-Gonzalez and Rodney Strachan think the solution is time-varying parsimony. The idea is that sometimes one needs a more complex model, and sometimes a few variables are sufficient. While this spares degrees of freedom when one can do with few variables, this gain on paper is lost, and probably more than lost, through the implicit degrees of freedom used in selecting the right model. This is an old problem that is swept under the rug in many empirical applications, but here it becomes even more apparent because so many parameters and models are involved.
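Some back-of-the-envelope arithmetic (my own illustrative numbers, not from the paper) shows how fast VAR coefficients eat into a typical macro sample:

```python
# Degrees-of-freedom arithmetic for a VAR(p) with k variables.
# Each equation has k*p lag coefficients plus an intercept.
def var_params(k, p, intercept=True):
    """Number of estimated coefficients per equation in a VAR(p) with k variables."""
    return k * p + (1 if intercept else 0)

T = 200  # roughly 50 years of quarterly data

for k, p in [(3, 2), (7, 4), (20, 4)]:
    per_eq = var_params(k, p)
    dof = T - p - per_eq  # usable observations minus regressors, per equation
    print(f"k={k:2d}, p={p}: {per_eq:3d} coefficients per equation, "
          f"{dof:4d} degrees of freedom left")
```

With time-varying parameters, each of those coefficients effectively becomes a path of up to T values, which is why the post likens the exercise to adding a boatload of dummies.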
Friday, March 11, 2011
On the decline of the US manufacturing wage
It is always interesting to see how real wages evolve, as they show how much a worker can buy with his income. Usually, this is done by dividing the nominal wage by a price index, typically one based on the commodities said worker would buy. The results may vary considerably, as different workers call for different price indexes, and the basket of goods may also change over time. The latter is particularly important when the sample period is long. It also depends on whether you look at hourly, weekly or annual income, and how benefits are included.
John Pencavel reviews a centuries-old literature on the topic, which came to the conclusion that, except for some periods of stagnation, real wages generally trended upward. He then comes up with his own indexing procedure and finds that real wages in the US manufacturing sector have declined by 40% since 1960. Wow, this seems like a really big result, and it requires understanding how it was computed. Indeed, Pencavel does not measure the real wage in the conventional way, but rather as the ratio of what workers get to what they could get if the firm made no profit. This does not necessarily mean that the buying power of the worker has decreased by 40%, but rather that a smaller share of firm income goes to labor. With the increased mechanization of manufacturing, this evolution should not surprise many people. But it is not necessarily a 40% fall in real wages, as advertised in the paper's abstract.
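The distinction between the two measures is easy to make concrete. The numbers below are made up for illustration, not Pencavel's series:

```python
# Two wage measures on made-up numbers (illustrative, not Pencavel's data):
# (1) the conventional real wage: nominal wage deflated by a price index;
# (2) a Pencavel-style measure: the wage relative to the zero-profit wage,
#     i.e. labor's share of value added per hour worked.
nominal_wage = 30.0          # dollars per hour
cpi = 2.5                    # price index, base year = 1.0
value_added_per_hour = 90.0  # what the worker could get if the firm made no profit

real_wage = nominal_wage / cpi
labor_share_wage = nominal_wage / value_added_per_hour

print(f"conventional real wage: {real_wage:.1f} base-year dollars per hour")
print(f"wage relative to zero-profit wage: {labor_share_wage:.2f}")
```

The first measure can rise over time while the second falls: buying power can grow even as labor's share of firm income shrinks, which is exactly the ambiguity discussed above.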
Friday, March 4, 2011
Seasonal adjustment is difficult
As undergraduates, we are taught to make sure the macroeconomic data we are dealing with is seasonally adjusted. We are told that statistical offices remove the seasonal factors in a way that is close to regressing the data on seasonal dummies and taking moving averages. If you really look into this, as so often, it turns out things are much more complex than that, and subtleties matter.
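The textbook procedure just described can be sketched on synthetic quarterly data: regress on a trend and seasonal dummies, then subtract the estimated seasonal factors. (Statistical offices use far more elaborate procedures, as the paper below makes clear.)

```python
# Naive seasonal adjustment on synthetic quarterly data: seasonal-dummy
# regression, then subtraction of the estimated (centered) seasonal factors.
import numpy as np

rng = np.random.default_rng(1)
T = 80                                               # 20 years of quarterly data
t = np.arange(T)
season = np.tile([2.0, -1.0, 0.5, -1.5], T // 4)     # true seasonal pattern
y = 100 + 0.3 * t + season + rng.normal(0, 0.5, T)   # trend + season + noise

# regress y on an intercept, a linear trend, and quarterly dummies
# (one quarter dropped to avoid collinearity with the intercept)
X = np.column_stack([np.ones(T), t] +
                    [(t % 4 == q).astype(float) for q in (1, 2, 3)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# seasonal factors, centered so they sum to zero over the year
factors = np.array([0.0, beta[2], beta[3], beta[4]])
factors -= factors.mean()
y_sa = y - factors[t % 4]                            # seasonally adjusted series

print("estimated seasonal factors:", factors.round(2))
```

Even here, note that the adjustment only works cleanly because the trend is linear by construction; with a non-loglinear trend, the pre-treatment the paper discusses becomes necessary.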
Stephen Pollock and Emi Mise offer a technical review of the various methods and look at some alternatives. Broadly speaking, there are three strands of techniques: the first is based on ARIMA models, the second removes seasonal frequencies found in a periodogram, and the third relies on a clear distinction between fundamental and seasonal components in spectral analysis. The difficulties are compounded by the fact that data usually have a trend, which may not be loglinear, so the data require pre- and post-treatment. And as Pollock and Mise show, the choice of method matters, even for dating turning points. I can imagine this becomes even more important when one throws data into a regression, especially if the series have been detrended in different ways. And it is rare to see statistical offices disclose which method was used.
Thursday, February 17, 2011
Price points, good diversity and price rigidity
Much of the real impact of monetary policy hinges on some sort of rigidity in some prices. As regular readers must have noticed, I am not convinced that prices are rigid to the point that it matters, and I am particularly appalled by how price rigidity is introduced in theoretical models. Let us have a look at some of the latest research on price rigidity.
Edward Knotek uses supermarket scanner data to find that price points are much more important than menu costs in determining prices. Price points are, for example, prices ending in 9, which make up 60% of retail prices. He also finds that in all but 10% of cases, prices return to their previous level after a sale. These two facts cannot be reconciled with menu costs being relevant. Yet menu costs are the foundation, explicitly or implicitly, of almost all models of price rigidity.
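The price-point notion is simple enough to spell out in code. This is a minimal sketch with made-up prices, not the scanner data used in the paper:

```python
# A minimal illustration of "price points": flag prices whose cents
# component ends in 9 (e.g. $1.99, $2.49). Sample prices are made up.
def ends_in_nine(price_cents: int) -> bool:
    """True if the price, in cents, ends in the digit 9."""
    return price_cents % 10 == 9

prices = [199, 249, 300, 99, 450, 1099]  # prices in cents
share = sum(ends_in_nine(p) for p in prices) / len(prices)
print(f"{share:.0%} of these prices end in 9")
```

With menu costs, the size of a price change should matter; with price points, it is the ending digit that does, which is why the two explanations can be pulled apart in the data.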
Saroj Bhattarai and Raphael Schoenle use producer prices and establish interesting patterns in decisions to change prices. They find that firms with a large variety of goods change prices more frequently, but by smaller amounts. If they change a price, they are more likely to decrease it, and the variance of positive price changes is larger. They also find that for a model to replicate such facts, one needs firm-specific menu costs and state-dependent pricing. This is definitely not Calvo pricing.