
Monday, September 26, 2011

Ethnic heterogeneity and natural disasters

Some countries seem to be very poorly located, sitting in the path of all sorts of natural disasters. But some cope with their perilous situation better than others. In particular, death tolls from cataclysms are, in general, an order of magnitude larger in developing countries. What else could influence such numbers?

Eiji Yamamura, from a rich and homogeneous country that does quite well with earthquakes, tsunamis and typhoons, studies the role of ethnic heterogeneity measured in two different ways. The first is ethnic polarization, which describes how close the distribution of ethnic groups is to a fifty-fifty split; the second is ethnic fractionalization, which can be interpreted as the probability that two randomly drawn people are from different ethnic groups. Once one adds the miracle instrument of cross-country regressions, legal origins, the first indicator shows that heterogeneity has a positive impact on natural disaster deaths (meaning more of them), while the second shows none.
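
For reference, both indices have simple closed forms; here is a minimal sketch, using the standard Reynal-Querol formulation of polarization, which may differ in detail from the exact one in the paper:

```python
# shares: population shares of each ethnic group, summing to one.

def fractionalization(shares):
    """Probability that two randomly drawn people belong to
    different groups: F = 1 - sum(pi_i^2)."""
    return 1 - sum(p ** 2 for p in shares)

def polarization(shares):
    """Reynal-Querol polarization, maximal at a 50-50 split:
    RQ = 4 * sum(pi_i^2 * (1 - pi_i))."""
    return 4 * sum(p ** 2 * (1 - p) for p in shares)

# A 50-50 society maximizes both; many tiny groups push
# fractionalization toward one but polarization toward zero.
print(fractionalization([0.5, 0.5]), polarization([0.5, 0.5]))  # 0.5, 1.0
print(fractionalization([0.1] * 10), polarization([0.1] * 10))  # 0.9, 0.36
```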

Now Yamamura takes this as a sign that ethnic polarization is a better indicator of ethnic heterogeneity than ethnic fractionalization. This is seriously flawed reasoning, and it is repeated several times in the paper: the fact that an indicator tests favorably for some hypothesis does not necessarily mean that it measures what the hypothesis says it measures. What if the hypothesis is in fact false? And in any case, on what theory would this hypothesis be based? I can easily imagine good reasons why homogeneity would lead to fewer deaths: better social cohesion leads to institutions that cope better with disasters, as in Japan.

Saturday, September 24, 2011

How to publish prolifically

I dedicated several posts to Bruno Frey and his chronic self-plagiarism. In retrospect, one should have seen that something was fishy from the mere fact that he was publishing too much for it to be normal: 600 articles by his own count. It is not possible for an academic, at least in Economics, to be that productive. Yet, there are some who seem to be on a similar path.

Take, for example, Michael McAleer. He is an Australian econometrician who had a very respectable career in the 1980's, publishing in the AER with Adrian Pagan (and a homophone of Paul A. Volcker), four articles in the Review of Economics and Statistics, a Review of Economic Studies, an Economic Journal and plenty of other decent publications. McAleer got elected to the Academy of the Social Sciences in Australia in 1996. Then the quality of the publications dips, as he must have been facing the same loss in productivity so many in the profession suffer in their forties. Still, a good stream of publications.

Then suddenly, a burst of historic proportions.

Let us first look at working papers. According to his RePEc page (that is all I could find; a 2004 CV runs 32 pages despite listing no publications): 12 in 2008, 45 in 2009, 39 in 2010, and so far 15 in 2011. And these are, according to their titles at least, distinct papers. How can one do this? First, McAleer has many co-authors, but he is no Paul Erdős, as he has a small set of regular collaborators. Second, many of the papers are about the same theme, with small variations: journal impact, with applications to neuroscience, tourism studies, econometrics, and economics in general, including one that I discussed. There would be nothing wrong with this, except that entire sections are copy-and-pasted from one paper to the next. His other papers, for example on tourism demand in the Far East, are incredibly thin slices of research.

But these are all working papers, and he is free to write all this as long as he does not pretend it is all original and substantially new work when submitting to journals that have such requirements. McAleer is, however, also publishing avidly, although luckily few of the papers mentioned above get placed, and then only poorly. In terms of publishing, he has found another niche, the Journal of Economic Surveys:
  • 2011, issue 2: 1 article
  • 2011, issue 1: 2 articles
  • 2010, issue 1: 2 articles
  • 2009, issue 5: 2 articles
  • 2007, issue 5: 1 article
  • 2006, issue 4: 3 articles
  • 2005, issue 5: 1 article

The journal has 5 issues a year, averaging 7 articles per issue. That is a remarkable publishing success in a generalist journal. It turns out frequent co-author Les Oxley is the editor, and Oxley himself does not hesitate to publish frequently in his own journal: I counted 17 articles of a non-editorial nature, several over 60 pages long, as well as 7 reports on conferences he attended.

A good number of those articles are titled "The Ten Commandments of ...", which I find rather pretentious. I was curious about The Ten Commandments for Academics, which could reveal some of McAleer's motivations. They are:
  1. choose intellectual reward over money;
  2. seek wisdom over tenure;
  3. protect freedom of speech and thought vigorously;
  4. defend and respect intellectual quests passionately;
  5. embrace the challenge of teaching undergraduate students;
  6. acknowledge the enjoyment in supervising graduate students;
  7. be generous with office hours;
  8. use vacation time wisely;
  9. attend excellent conferences at great locations;
  10. age gracefully like great wine.


What I find interesting here is what was not considered. I think a better alternative, and one that would condemn much of what McAleer is doing, is due to Wesley Shrum:
  1. Thou shalt not work for deadlines;
  2. Thou shalt not accept prizes or awards;
  3. Honor thy forebears and colleagues regardless of status;
  4. Thou shalt not compete for recognition;
  5. Thou shalt not concern thyself with money;
  6. Thou shalt not seek to influence students but to convey your understandings and be honest about your ignorance;
  7. Thou shalt not require class attendance or emphasize testing;
  8. Thou shalt not worry about thy own intelligence or aspire to display it;
  9. Thou shalt not condemn those with different perspectives;
  10. SEEK TO UNDERSTAND THE WORLD.


These are principles about integrity, about changing the world and putting the scientific interest ahead of oneself. McAleer, rather, seems keen on clogging journals and working paper series with useless drivel, showing off and self-plagiarizing. At least for the latter part of his career, I do not see a positive externality from his efforts.

To come back to my initial question: to be prolific, find willing co-authors and editors, slice thinly, copy-and-paste, and do not think too hard about what academia is about.

Thursday, August 11, 2011

Has US economic policy been Keynesian for centuries?

Suppose the abstract of a paper starts with "It is demonstrated that the US economy has on the long-term in reality been governed by the Keynesian approach to economics independent of the current official economical policy." My first reaction is puzzlement, as I would not have thought of the US as particularly keen on Keynesian policy, except in recent years (which are not considered in the quoted study). But then, data may speak differently from policy intentions, so let us dig deeper.

A. (Agung?) Johansen and Ingve Simonsen come to this stunning conclusion by looking at the correlation between (nominal? federal?) public debt and the Dow Jones Industrial Average. One can first question whether public debt is a good indicator of Keynesian policy; public deficits or even public expenditures would be better. And does the DJIA represent the US economy? It is certainly not an indicator of current activity, but rather of the expected present value of future profits of a particular class of firms.

Whatever. Let us go with that. The analysis computes, over the 1791-2000 sample, a sliding correlation between these two indicators over a five-year window. Surprise: the correlation is zero most of the time, except during some wars when it is strongly positive (and strongly negative during the second war with the Seminole Indians). From this they conclude that Keynesian policy was mostly pursued during wars. Now let us take a step back: the authors show that there is, by their definition, no Keynesian policy during peacetime. But during wartime, the government is credited with a policy geared towards expansion of the DJIA. Then, one may ask, if this is the government's overarching policy, as the authors seem to believe, why did the US wait so long to get into the two World Wars when the opportunity was there? I cannot make sense of all this.
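
For the curious, the mechanics of such a sliding correlation are trivial; a hedged sketch, where the file and column names are made up:

```python
# Reproduces the described exercise in spirit only: a five-year
# rolling correlation between public debt and the DJIA.
import pandas as pd

df = pd.read_csv("debt_djia_annual.csv", index_col="year")  # hypothetical file
window = 5  # five-year window, as in the paper

rolling_corr = df["public_debt"].rolling(window).corr(df["djia"])

# Near-zero most of the time, spiking in wartime, is the headline finding.
print(rolling_corr.dropna().describe())
```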

Monday, July 25, 2011

How not to think about class struggles

Depending on the research question being asked, some degree of heterogeneity is required in a model. Sometimes this modeling requires distinguishing between those who provide capital and those who work. This is obviously an abstraction, because in reality these "capitalists" may just be shareholders who also work on the labor market. In fact, they often are, and they save and invest for various purposes, like self-insurance, retirement or bequests. But making households pure capitalists or pure workers can sometimes prove useful in making a result emerge more clearly, as long as one is conscious of the abstraction. In some circumstances, it is useful to explain why these "classes" emerge, through differences in access to credit or in subjective discount rates. But again, these are abstractions useful for modeling.

Alberto Russo takes this abstraction very seriously. In his model, people are born capitalists or workers, which translates into households either investing in an activity with multiplicative risk or working for a wage with additive risk. Why that is so, and what is to be achieved with it, is left unexplained. Households face an additional risk: they randomly switch between classes, with a probability that depends on wealth. The model is "closed" with exogenous and distinct propensities to consume for the two classes. The model is then calibrated with parameter values not related to anything observable. The simulations reveal that if one starts with everyone having the same wealth, wealth heterogeneity then emerges. Well, that was unexpected... Even back in 1993, Mark Huggett had a much better model to explain heterogeneity in wealth.
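
To see how little is needed for heterogeneity to "emerge," here is a toy re-creation of the mechanism with invented parameter values (the paper's own calibration is not tied to anything observable either):

```python
import numpy as np

rng = np.random.default_rng(0)
n, periods = 1000, 500
wealth = np.ones(n)                          # everyone starts with equal wealth
capitalist = rng.random(n) < 0.2             # assumed initial class split

for _ in range(periods):
    ret = rng.normal(0.03, 0.10, n)          # multiplicative (investment) risk
    wage = rng.normal(1.00, 0.10, n)         # additive (labor) risk
    income = np.where(capitalist, wealth * ret, wage)
    save = np.where(capitalist, 0.4, 0.1)    # one minus class-specific MPC
    wealth = np.maximum(wealth + save * income, 0.0)
    # switching probability increases with wealth, as in the paper
    p_cap = wealth / (wealth + np.median(wealth) + 1e-9)
    capitalist = rng.random(n) < p_cap

print("coefficient of variation of wealth:", wealth.std() / wealth.mean())
```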

Wednesday, July 13, 2011

The welfare gain from inflation targeting

It is rather well accepted that transparency is preferable in policy: it anchors expectations better, people generally do not like uncertainty, and discretion can lead to adverse biases compared to set policy rules. Yet the United States exhibits little transparency, with the Federal Reserve being one of the few western central banks not to declare an explicit policy target, and fiscal policy being as uncertain as ever. That would not be a big deal if the welfare costs were low, but one can think they are high and, in the case of fiscal policy, are currently holding back the recovery.

Giorgio Di Giorgio and Guido Traficante take a closer look at the welfare benefits of inflation targeting. For this they use a model where households observe policy interest rates but do not know whether their changes are due to reactions to output gaps or to shocks to the inflation target. Households are sophisticated: they use a signal extraction device to estimate the latent, unobservable variable. Yet they still face substantial costs from the uncertainty. Money is not neutral because of Rotemberg pricing, an alternative to Calvo pricing. Oh well, I guess this is what you need to do to get a result with some bite.
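
For readers unfamiliar with signal extraction, this is the kind of filtering involved; a minimal scalar Kalman filter sketch, with all parameter values assumed for illustration (the paper's model is of course richer):

```python
# Households see a noisy signal of a persistent inflation target and
# filter out their best estimate of it, period by period.
import numpy as np

rng = np.random.default_rng(1)
T, rho = 200, 0.9
sig_eta, sig_v = 0.1, 0.3          # target shock vs. observation noise

# Simulate: target follows an AR(1); observed signal = target + noise
target = np.zeros(T)
for t in range(1, T):
    target[t] = rho * target[t - 1] + sig_eta * rng.standard_normal()
signal = target + sig_v * rng.standard_normal(T)

# Kalman filter for x_t = rho * x_{t-1} + eta_t,  y_t = x_t + v_t
est, P = 0.0, 1.0
estimates = []
for y in signal:
    est, P = rho * est, rho ** 2 * P + sig_eta ** 2     # predict
    K = P / (P + sig_v ** 2)                            # Kalman gain
    est, P = est + K * (y - est), (1 - K) * P           # update
    estimates.append(est)

rmse = float(np.sqrt(np.mean((np.array(estimates) - target) ** 2)))
print("RMSE of filtered estimate:", rmse)
```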

Households know there is a policy rule that determines the interest rate from the output gap (unobservable) and the inflation target (stochastic and persistent), as well as known preference and cost-push shocks. In other words, households know a lot about the structure of the economy and the shocks, except for the policy shock, but then somehow cannot figure out what the output gap is. The central bank can, though, but then, for obscure reasons, has a trembling hand when it comes to setting its inflation target. That seems to be quite the opposite of what I would have thought: that everyone is confused about the output gap, and only the central bank knows what the inflation target is. Instead of a story of households trying to disentangle the output gap and the inflation target from the interest rate signal, one would then have a story of a central banker not quite sure what to do given the circumstances. Too bad, this could have been an interesting paper.

Friday, June 10, 2011

What is the value of research?

What is the value of the research we do? The typical way we evaluate the impact of research is to count citations, and possibly weigh them in some way, in Economics as in other sciences (except maybe where patents are relevant). But this only evaluates how the research output is viewed within a narrowly defined scientific community. The contribution to social welfare is an entirely different beast to evaluate.

Robert Hofmeister tries to give research a value. The approach is to consider the scientific process through cohorts, where each wave provides fundamental research as well as end-applications based on previous fundamental research. A particular research result can thus have a return over many generations. It is an interesting way to properly attribute the intellectual source of a new product or process, but the exercise is of little value if it is not possible to quantify the social value of the end-application. Indeed, Hofmeister goes back to using citations in Economics for a data application, which is equivalent to evaluating research only within the scientific community. In terms of the stated goal of the paper, we are back to square one. In terms of getting a better measure of citation impact, this is an interesting application of an old idea. And the resulting rankings of journals and articles look very much like those that are already available.
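
The attribution idea itself is simple enough; a stylized sketch, with an invented citation graph and an assumed per-generation discount, not the paper's actual algorithm:

```python
# Each article's total value = its direct value plus a discounted
# share of the value of the later work that builds on it.
papers = ["A", "B", "C", "D"]                 # chronological order of cohorts
cites = {"A": [], "B": ["A"], "C": ["A", "B"], "D": ["C"]}
direct = {"A": 1.0, "B": 0.5, "C": 2.0, "D": 1.0}
delta = 0.5                                   # assumed per-generation discount

total = dict(direct)
for p in reversed(papers):                    # latest cohort first
    for ancestor in cites[p]:                 # pass value back to sources
        total[ancestor] += delta * total[p] / len(cites[p])

print(total)  # early fundamental work inherits value from later applications
```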

Wednesday, May 18, 2011

Is chocolate milk hip?

Sports and energy drinks have become popular in the past decades on rather dubious grounds, as the recovery and boost effects claimed in ads are in many cases false. See for example Vitamin Water and Gatorade. In fact, plain water has much better recuperating properties than most of these sports drinks. And so, apparently, does chocolate milk, which has prompted marketing campaigns in the US with many athletes as spokespersons.

Senarath Dharmasena and Oral Capps, Jr. try to find the determinants of the demand for chocolate milk using a Heckman two-step demand model. Unfortunately, no regression results are presented, but the authors hint at a few interesting results. A quarter of all US households consume chocolate milk, with an average of 12 liters a year per household. They then claim a number of household characteristics are significant, but with no indication of in which direction. For example, education of the household head is significant, and a Hispanic household head as well. It would be interesting to know whether the relationship is positive or not. And as the authors ask in their title whether chocolate milk is the new-age sports drink, I am intrigued as to how they could have answered this question. There is nothing in the paper itself about it.
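
For reference, the Heckman two-step procedure works roughly as follows; a hedged sketch with hypothetical file and variable names, since the data are not available here:

```python
# Step 1: probit for whether a household buys at all; step 2: OLS
# quantity equation on buyers, augmented with the inverse Mills ratio.
import pandas as pd
import statsmodels.api as sm
from scipy.stats import norm

df = pd.read_csv("household_purchases.csv")       # hypothetical data file
X = sm.add_constant(df[["educ", "hispanic", "income", "kids"]])

# Step 1: participation equation
buys = (df["choc_milk_liters"] > 0).astype(int)
probit = sm.Probit(buys, X).fit()
xb = probit.fittedvalues                          # linear predictor
mills = pd.Series(norm.pdf(xb) / norm.cdf(xb), index=df.index)

# Step 2: quantity equation on buyers, correcting for selection
sel = buys == 1
X2 = X[sel].assign(mills=mills[sel])
ols = sm.OLS(df.loc[sel, "choc_milk_liters"], X2).fit()
print(ols.params)                                 # the signs the paper omits
```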

Why am I reporting on such a thin paper? Because it has always struck me how Europeans view adults drinking milk, and especially chocolate milk, as childish, while it is perfectly accepted in the US. I was wondering whether this difference could be seen in an empirical demand equation. Not in that one, though.

Friday, April 22, 2011

The key to understanding money: vacations

Monetary theorists have struggled for decades, if not centuries, to explain why we use and value money. Modern theory, which needs to be more explicit about its assumptions, has highlighted how silly some axioms of monetary theory are. For example, why would money make any sense in a utility function when future consumption is already taken into account? Or what about cash-in-advance constraints in quarterly models of the business cycle? Money search models bring progress to the table, as they model the problem of the absence of a double coincidence of wants, although still with some rather crude assumptions. But at least they are going in the right direction.

Andrew Clausen and Carlo Strub come up with a new motivation for money. Suppose that there is a fixed cost in production. Unless you want to produce at full capacity every period, you will then choose to close all operations from time to time and take a vacation. But you must live on something when you do not work, and you have no savings technology. This is where money comes to the rescue. Without it, it would be impossible to smooth consumption across periods; thus money is valued and welfare enhancing. But beyond the possible elegance of the model, does anybody actually believe this story? I do not think it makes sense to discuss the intertemporal allocation of resources in a world without assets, especially if you want to apply it to anything modern.
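
The mechanism itself is easy to illustrate; a toy sketch with invented numbers, where money is the only way to carry purchasing power into vacation periods:

```python
# A fixed cost makes production come in bursts; money bridges the
# gap between earning periods and vacation periods.
work_income, fixed_cost = 3.0, 1.0
cycle = ["work", "vacation", "vacation"]          # produce once, rest twice

earnings = work_income - fixed_cost               # net output of a work period
target_c = earnings / len(cycle)                  # perfectly smoothed consumption

money = 0.0
for period in cycle * 4:
    income = earnings if period == "work" else 0.0
    money += income - target_c                    # money absorbs the gap
    print(f"{period:9s} consume {target_c:.2f}  money {money:.2f}")
```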

Tuesday, April 19, 2011

Crime on the job and the business cycle

The cyclical behavior of work effort is rather puzzling. One would expect people to work harder during a recession to avoid getting laid off, yet measures of labor productivity (per worker or per hour) are consistently positively correlated with GDP. This also runs counter to the argument that the least productive workers are laid off first in a recession, which should improve the productivity of the remaining ones through a composition effect. Survey data are more mixed, though, but those are often based on perceptions rather than facts.

One reason why labor productivity varies could also be counterproductive efforts by the workforce: stealing, sabotaging, annoying co-workers. Aniruddha Bagchi and Siddhartha Bandyopadhyay fold all these activities under the crime label and ask whether this is linked to the business cycle. There is no data about this, unless you think, like the authors, that this is the only dimension that makes labor productivity vary. So you are left with purely theoretical exercises. The authors highlight two contradictory effects. First, pretty much everyone gets a job in a boom, including those "criminals," which would lead to a negative correlation of labor productivity with output. But this effect could go the other way if labor market prospects are likely to weaken and jeopardize re-employment. Second, they assume that deviancy requires a setup cost, which one is less likely to bear when the labor market weakens. This would reinforce the negative correlation.

This possible ambiguity would need to be sorted out with a tight calibration exercise at least, or with some structural estimation with hidden variables. But the authors just wave their hands and claim things can go either way. In any case, they are probably right not to pursue this: using a two-period model to study business cycles is silly anyway.

Thursday, April 14, 2011

On banning Youtube at work

While a strong case can be made that the information technology revolution has markedly improved productivity at the workplace, it is not that obvious that Internet access at the workplace has such a positive impact. Indeed, it is very tempting to get distracted, and Youtube has certainly contributed to a shorter attention span in offices around the world (not to mention that these flash applications are huge resource hogs that require better and better computer equipment). And I cannot deny the Internet is providing me with a distraction that prevents me from pursuing my regular duties: this blog, for which my employer is not getting any credit whatsoever. Is the solution, then, to ban the Internet from work?

Alessandro Bucciol, Daniel Houser and Marco Piovesan run an experiment where some people get to see a funny video while others do not. The "frustrated" ones then turn out to be less productive thereafter. One should thus weigh whether to forbid the Internet: yes, it wastes time, but you do not want to create this frustration effect. The authors conclude, on some basis that eludes me, that the second effect is stronger.

But wait a moment. The experiment they perform is based on the fact that the frustrated ones hear a video but cannot see it. How would this relate to the Internet being banned from work? If that were the case, no one would hear the video and no one would get frustrated. And no time would be wasted. I cannot follow the authors' reasoning here. Maybe I am too distracted by the Internet.

Wednesday, March 23, 2011

Modelling without theory

In Economics, we have adopted the scientific method much like the other sciences. As we teach our students, it consists of the following steps:
  1. Observe regularities in the data.
  2. Formulate a theory.
  3. Generate predictions from the theory (hypotheses).
  4. Test your theory (is it consistent with data?)
In the context of Economics, the goal of the procedure is not only to explain the regularities observed in the data, but also to build a theory that is useful in predicting the consequences of particular policies or institutional designs.

David Hendry just published a paper about the scientific method in Economics that appears to fly in the face of what I just described. Here is an attempt to summarize his stand, and I apologize for quoting quite liberally:
  1. Specify the object for modeling, usually based on a prior theoretical analysis in Economics. An example of such an object is y=f(z).
  2. Define the target for modeling by the choice of the variables to analyze, y and z, again usually based on prior theory. This amounts to deriving the data-generating process of the variables of interest, or fitting an equation with some statistical procedure.
  3. Embed that target in a general unrestricted model (GUM), to attenuate the unrealistic assumptions that the initial theory is correct and complete. The idea is to add other variables, lags, dummies, shift variables and functional forms to improve the empirical accuracy of the initial model.
  4. Search for the simplest acceptable representation of the information in that GUM. Or, now that the model has become huge (and may contain more variables than data points), let us get rid of some of them without losing too much in accuracy.
  5. Rigorously evaluate the final selection: (a) by going outside the initial GUM in step three, using standard mis-specification tests for the ‘goodness’ of its specification; (b) applying tests not used during the selection process; and (c) by testing the underlying theory in terms of which of its features remained significant after selection.
In other words, this amounts to taking some linearized version of some theory, throwing data at it, whether theoretically relevant or not, then massaging it until it fits the data.
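
A caricature of steps 3 and 4, to make the point concrete; this is naive backward elimination on simulated data, not Hendry's actual selection algorithm:

```python
# Start from a deliberately overfitted "general" model, then drop the
# least significant regressor until everything left clears a threshold.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
T, k = 100, 30                                # many candidate regressors
X = rng.standard_normal((T, k))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.standard_normal(T)  # only 2 matter

cols = list(range(k))
while True:
    fit = sm.OLS(y, sm.add_constant(X[:, cols])).fit()
    pvals = fit.pvalues[1:]                   # skip the constant
    worst = int(pvals.argmax())
    if pvals[worst] < 0.05 or len(cols) == 1:
        break
    cols.pop(worst)                           # prune and refit

print("surviving regressors:", cols)          # expect occasional false positives
```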

Apart from the fact that this is really the blueprint for an automated data mining exercise that is not geared in any way towards answering a particular policy question, this procedure disregards not only the scientific method, but also Occam's Razor and the Lucas Critique. What use is it to learn that the CPI follows a polynomial of degree five with three lags on exports of cabbage, the number of sunny days, 25 other variables and three structural breaks (not an actual example used by Hendry, but it could be)? If you want to make some very short term forecasts, that may be accurate, and this method is abundantly used in the City or on Wall Street by neural network "experts." But when it comes to advising policymakers, you need to have some Economics, and by that I mean economic theory, to explain why economic agents behave in such a way and what an intervention would lead to.

The scientific method starts with the observation of the data. Hendry dismisses this with a sleight of hand, stating that stylized facts are "an oxymoron in the non-constant world of economic data." What if there are constants in economic data? In fact there are plenty, and this is what theories are trying to explain. Has Hendry never observed something in his surroundings that he then tried to explain? Or does he really spend his days feeding linear equations into his computer to see what it can come up with from his database?

Such papers, especially by people who enjoy respect like Hendry does in the UK, deeply upset me. To top it off, there are 33 self-citations.

Tuesday, March 22, 2011

The spaceship problem

Suppose you have to plan a very long-term mission in space. It will last many years, and you need to provide a group of people with the means to live in a hermetic environment. You do not have access to Star Trek technologies like warp speed, replication and teleportation. Your population can reproduce, but length of life and quality of life depend on resources and population density. How many people should be on such a mission? This is known as the spaceship problem. Of course, economists have something to say about this.

Pierre-André Jouvet and Grégory Ponthière are not going to solve the problem, as there are too many biological and physical constraints, but they point out that the solution will contradict utilitarianism. They focus on the trade-off between the number of people and their length of life. Indeed, longevity impacts population size and thus density. They assume that a social planner uses the sum of residents' utilities as a criterion and, unfortunately, that resources are unlimited, which makes the paper stray away from Economics.

What Jouvet and Ponthière really want to do is compare different social welfare criteria in this environment. The Classical Utilitarian criterion, for example, sums the utility of all individuals, the Average Utilitarian only that of the living ones. In a model without reproduction and with a finite mission time, Classical Utilitarianism yields a small population living very long, while the second may want a large population that lives for a short time. Add reproduction to the mix and anything can happen, depending on parameter values and initial population size. Make the mission life infinite, and the authors run into problems and need to define additional social welfare parameters. That is mainly due to the fact that there is no discounting, and infinitely lived economies are then ill-defined.
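
For concreteness, the two criteria in their simplest static form (my notation, not the paper's):

```latex
% u_i is the lifetime utility of individual i,
% N the number of people ever alive on the ship.
W_{C} = \sum_{i=1}^{N} u_i
\qquad \text{(Classical)}
\qquad
W_{A} = \frac{1}{N} \sum_{i=1}^{N} u_i
\qquad \text{(Average)}
```

Classical utilitarianism values adding any person whose utility is positive; average utilitarianism only values raising per-capita utility, which is where the two criteria part ways.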

What do I learn from this exercise? It is not very clear, except that social welfare criteria matter, that adding utilities gives us a lot of trouble, and that discounting is essential. But we knew all that already, even when the spaceship is called Earth.

Friday, March 11, 2011

On the decline of the US manufacturing wage

It is always interesting to see how real wages evolve, as they tell us how much a worker can buy with his income. Usually, this is done by dividing the nominal wage by a price index, typically over the commodities said worker would buy. The results may vary considerably, as a different price index is needed for different workers, and the basket of goods may also vary over time. The latter is particularly important when the sample period is long. It also depends on whether you look at hourly, weekly or annual income, and on how benefits are included.

John Pencavel reviews a centuries-old literature on the topic that came to the conclusion that, except for some periods of stagnation, real wages generally trended upward. He then comes up with his own indexing procedure, and finds that real wages in the US manufacturing sector have declined by 40% since 1960. Wow, this seems to be a really big result, and it requires understanding how it was computed. Indeed, Pencavel does not measure the real wage in the conventional way, but rather as the ratio of what workers get to what they could get if the firm made no profit. This does not necessarily mean that the buying power of the worker has decreased by 40%, but rather that a smaller share of firm income goes to labor. With the increased mechanization of manufacturing, this evolution should not surprise many people. But it is not necessarily a 40% fall in real wages, as advertised in the paper's abstract.
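
Schematically, the contrast is the following (my notation, not Pencavel's exact formula):

```latex
% w: nominal wage, p: consumer price index, wL: the wage bill,
% "value added": what the firm could pay workers if profits were zero.
\text{conventional real wage} = \frac{w}{p}
\qquad
\text{Pencavel's index} \approx \frac{wL}{\text{value added}}
```

So the index can fall by 40% while w/p rises, as long as value added per worker grows faster than the wage.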

Tuesday, February 15, 2011

If anything goes wrong, it has to be the central banks

Some institutions are excellent scapegoats for imposing necessary reforms in a country, such as the International Monetary Fund or the World Bank: local governments can always blame them if they have to put their fiscal house in order. Other institutions seem to attract conspiracy theorists in large numbers, because someone needs to be blamed for some condition and the institution is poorly understood. The prime candidate here is the central bank. Indeed, the best central banks are those that act independently from the government, but people see this apparent lack of accountability as the origin of all trouble in the economy.

A very good example of this sorry confusion is a recent paper by Subhendu Das, with the following abstract:
In each country the central bank is a privately owned bank with no transparency and accountability to the government of that country. It is also the only bank that can print the money for that country and does it so out of thin air. At the same time this bank wants that the government returns the money with interest. We show that this structure creates deficit, introduces tax, and causes poverty around the globe. This paper shows how central banks control the economy by manipulating the financial system it has designed. The paper explains how easily the central banks can control the unemployment, create recessions, and transfer wealth from the lower economic group to higher economic group and perpetuate the poverty. The paper also proposes three methods of eliminating central banks.


Where to begin? Central bank governors are typically appointed by the government and are accountable to it (just see how frequently Bernanke is on Capitol Hill). The central banker's decisions are independent from the government, and for good reason: you want to avoid policy actions being taken for short-term political gain that are detrimental in the longer run through higher inflation. Central banks that are not independent from the government typically preside over much higher inflation, as the government ends up relying on seigniorage for its expenses instead of taxation. The presence of an independent central bank also discourages deficits, because governments know they cannot inflate debt away.

Then, central banks most often make profits. These profits are transferred to the government, which reduces the need for taxes. But this does not absolve the government from raising taxes: if it is taking real resources from the economy (either labor or goods), that needs to be paid for in real terms one way or the other. For a typical government, the central bank could only cover this with excessive inflation.

Do central banks create poverty? A central bank that is independent from the government delivers low inflation, which is good for poor people, who tend to be cash-based. To repeat myself, a government-controlled central bank will create more inflation, and that is when there is a transfer of wealth from poor to rich, as the rich can shield their assets from inflation.

Do central banks have an impact on unemployment and recessions? Honestly, it is open to debate whether they have any significant impact here. They can certainly mess things up, for example when influenced by the government, but a well-run independent central bank will just make sure the economy is well greased. It cannot fix structural problems; that is up to the government. Central banks cannot control the economy, and in many countries they do not even have regulatory authority over the financial sector.

All in all, this paper has everything backwards, except for the fact that money is created out of thin air. But keep in mind that most money is not created by the central bank but by private banks, with the central bank making sure not too much of it is created. So the author got even that partly wrong.

Monday, February 14, 2011

Heterodox money

Heterodox economics is frustrating because it keeps working in a self-referential vacuum, consistently ignoring advances in orthodox economics. This is not how progress can be made, and in particular this is not how you get results from heterodox economics accepted, or at least considered, in the mainstream. It is now as if there were two parallel universes and no portal between them.

A good example is a recent paper by Randall Wray, grandiosely entitled "Money", that is supposed to teach us what money is. It rests on three principles (I quote):
  1. Money buys goods and goods buy money, but goods do not buy goods.
  2. Money is always debt; it cannot be a commodity from the first proposition because if it were that would mean that a particular good is buying goods.
  3. Default on debt is possible.
It then proceeds to talk about how to understand this and how this defines money, with references to Keynes, Marx, Sraffa and Kaldor, and their disciples.

But have we not made some progress since? The only mention of the mainstream in the paper is a criticism of representative agent models where agents pay money to themselves and thus never default. Really? Really? Modern models that try to rationalize the use of money have heterogeneous agents and explicitly take into account that some may refuse money for payment. And this is not exactly an obscure or recent literature: Kiyotaki and Wright, for example, dates back to 1989 and already has all these ingredients. This money search literature is mentioned nowhere. The same applies to the literature on trading posts, which has rationalized the emergence of particular commodities as money (see a recent post).

Also, why this reluctance to use formulas to make arguments and assumptions explicit? In this paper, there is implicit talk of budget constraints and accounting identities, but they are never explicitly laid out, which can make it easy for an author to sweep something under the carpet (I am not saying Wray does, though). But there is something essential about writing an equation: it forces you to define variables precisely, and it forces you to use a logical proof for your arguments. Only then will your arguments be watertight.
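
To illustrate what such discipline buys, here is the kind of one-line identity that is only implicit in the paper (my example, not Wray's):

```latex
% Every symbol must be defined: m_t is money held at the start of
% period t, p_t the price level, y_t income, c_t consumption.
% An accounting identity like this leaves no room for hand-waving.
m_{t+1} = m_t + p_t \left( y_t - c_t \right)
```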

Monday, January 24, 2011

How not to distribute research funds

Citation counts are often used to proxy for the quality of an article, researcher or journal. They are not a perfect measure, everybody agrees on that, but they have proven to be a useful starting point for evaluation. Sometimes they are taken very seriously, too seriously, for the distribution of funds and pay. But at least this is done within a field, as it is obvious that citing conventions, and in particular citation frequencies, differ from field to field.

Javier Ruiz-Castillo goes further, trying to infer how budget priorities should be allocated across research fields using citation counts. Of course, for this one first needs a good understanding of how citations are distributed. Roughly, citations follow power laws within fields and subfields. This means that a few articles garner a lot of citations, while many go empty-handed (especially in Business, Management and Political Science). And if I understand the paper right, one can then readily apply a multiplier to compare citation frequencies across fields. These multipliers make it possible to compare researchers or research units across fields within, say, a country, as long as one assumes that an adjusted citation is worth the same everywhere. For example, is political science worth the same support as biomedical engineering after applying these multipliers, to take two random fields? And the "size" of the field matters as well. Here the author makes an attempt at some definitions of size which I frankly did not understand.
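
The simplest version of such a multiplier is mean normalization; a sketch with invented numbers:

```python
import pandas as pd

# Invented citation counts for six articles in three fields.
df = pd.DataFrame({
    "field": ["econ", "econ", "polisci", "polisci", "biomed", "biomed"],
    "cites": [40, 0, 10, 0, 200, 30],
})
# Multiplier = inverse of the field mean, so an "adjusted citation"
# is worth the same across fields -- by assumption, which is the
# very assumption questioned below.
df["adjusted"] = df["cites"] / df.groupby("field")["cites"].transform("mean")
print(df)
```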

That said, I wonder why I forced myself to read this paper. First, it is indigestible, because it is poorly written and uses very bad analogies. Second, trying to compare fields and use citations for the allocation of funds or prizes across them is impossible because you have no identification: in statistical speak, the fixed effects capture all the variance. You can only compare how well a field does in a country relative to the rest of the world, but this cannot measure how important the field is. You need more information than just citations.