
Tuesday, September 27, 2011

Bruno Frey's academic utopia

Bruno Frey has fallen into disgrace these days, having been shown to play dangerous games of self-plagiarism and multiple submissions to journals while hypocritically describing the perverse incentives facing researchers. I have argued recently that he is living in a bubble that has now popped.

Apparently not. With his wife Margit Osterloh, he has just authored a paper about the impact of rankings on academic publishing. Their argument is that the current emphasis on rankings pushes academics to privilege publication over science. They want to decouple funding, tenure and promotion from any evaluation metric. Rather, scientists should be carefully selected at their initial appointment, then be given guaranteed funding and only be asked to evaluate themselves. This strikes me as a very utopian view of academia, and in particular one that surprisingly ignores the impact of incentives on motivation. While the Frey and Osterloh utopia may in a few cases yield the hoped-for highly innovative researchers willing to take substantial risks along new paths, most would free-ride to a large degree. The best example of this is the French system of "research associates" at the national research foundation, who are appointed right after their doctorate to a lifetime position as researchers with no other significant duties. While this system has yielded some success stories, the research impact of these associates is rather dismal compared to researchers elsewhere who are subject to regular evaluations using publication metrics.

Reading through the paper, I could not help seeing the irony in many of the arguments, where Frey could really be writing about himself. A few quotes:
In academia, examples can be found (e.g. the ‘slicing strategy’) whereby scholars divide their research results into a ‘least publishable unit’ by breaking the results into as many papers as possible in order to enlarge their publication list.

Of course, Bruno Frey and his students are big specialists in slicing.
there is evidence showing that academics with the highest score in publication rankings score only modestly in a ranking based on their contributions in editorial boards (Rost and Frey, 2011).

Note that Frey was kicked off an editorial board for resubmitting a published paper elsewhere.
For example, a broad and international selection of reviewing peers is necessary in order to avoid cronyism.

Frey's colleague Ernst Fehr recently anointed one of his students with the most prestigious prize for a European economist. Frey and Osterloh argue that awards cannot be manipulated, while citation metrics can. I do not think this is true; in fact, awards are easier to manipulate because they are determined by small committees, whereas citations come from the whole research community.
Another example is editors who encourage authors to cite their respective journals in order to raise their impact rankings

Frey requires this for acceptance in the journal he co-edits, Kyklos. At least, the paper does not mention self-plagiarism this time.

Is this paper a mea culpa? It certainly does not read that way. As in his previous writings on the publishing game, he comes across as someone above it all, who tells everyone how things should be while claiming the moral high ground. The paper was completed on August 24, 2011, thus well after the firestorm about the hypocrisy of Bruno Frey, yet it shows no trace of it. Bruno Frey has not learned a thing. And do you want to bet that he is submitting this to a journal, taking away space that young researchers need to get publications for tenure, as he has argued before? In fact, this particular paper would fit much better as a blog post than in a journal.

Addendum: And they are doubling up with another paper, entitled "Input Control and Random Choice - Improving the Selection Process for Journal Articles" with the following abstract:
The process by which scholarly papers are selected for publication in a journal is faced with serious problems. The referees rarely agree and often are biased. This paper discusses two alternative measures to evaluate scholars. The first alternative suggests input control. The second one proposes that the referees should decide only whether a paper reaches a minimal level of quality. Within the resulting set, each paper should be chosen randomly. This procedure has advantages but also disadvantages. The more weight that is given to input control and random mechanism, the more likely it is that unconventional and innovative articles are published.

You read it right: instead of peer review, they advocate complete tyranny by editors who randomize over a select set or, alternatively, who decide early who is worthy of publishing and then let the chosen few do as they wish. Which is how Bruno Frey and his lackeys have been operating. But at least Frey and Osterloh have had the decency to withdraw that last paper (which is why I reproduce the abstract above).
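For what it is worth, the selection mechanism described in the abstract is simple enough to sketch in a few lines of Python. Everything below (the referee scores, the quality threshold, the number of slots) is made up for illustration; the paper proposes no implementation.

```python
import random

def select_papers(referee_scores, quality_threshold, n_slots, seed=None):
    """Input-control-plus-lottery selection: referees only screen for a
    minimal level of quality; acceptances are then drawn at random from
    the set of papers that cleared the bar."""
    rng = random.Random(seed)
    qualifying = [p for p, s in referee_scores.items() if s >= quality_threshold]
    # If fewer papers qualify than there are slots, accept them all.
    return rng.sample(qualifying, min(n_slots, len(qualifying)))

# Hypothetical referee scores on a 0-10 scale.
scores = {"paper A": 8.5, "paper B": 6.1, "paper C": 3.0, "paper D": 7.2}
print(select_papers(scores, quality_threshold=6.0, n_slots=2, seed=1))
```

The randomization is supposed to give unconventional papers above the bar the same chance as fashionable ones; the objection remains that whoever sets the bar, and screens the "inputs," holds all the power.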

Saturday, September 24, 2011

How to publish prolifically

I have dedicated several posts to Bruno Frey and his chronic self-plagiarism. In retrospect, one should have seen that something was fishy from the mere fact that he was publishing too much for it to be normal: 600 articles by his own count. It is not possible for an academic, at least in Economics, to be that productive. Yet, there are some who seem to be on a similar path.

Take, for example, Michael McAleer. He is an Australian econometrician who had a very respectable career in the 1980's, publishing in the AER with Adrian Pagan (and a near-namesake of Paul A. Volcker), four articles in the Review of Economics and Statistics, a Review of Economic Studies, an Economic Journal and plenty of other decent publications. McAleer was elected to the Academy of the Social Sciences in Australia in 1996. Then the quality of the publications dips, as he must have faced the same loss in productivity so many in the profession suffer in their forties. Still a good stream of publications.

Then suddenly, a burst of historic proportions.

Let us first look at working papers. According to his RePEc page (that is all I could find; a 2004 CV runs 32 pages despite listing no publications): 12 in 2008, 45 in 2009, 39 in 2010, and so far 15 in 2011. And these are, according to their titles at least, distinct papers. How can one do this? First, McAleer has many co-authors, but he is no Paul Erdős, as he has a small set of regular collaborators. Second, many of the papers are about the same theme, with small variations: journal impact, with applications to neuroscience, tourism studies, econometrics, and economics in general, including one that I discussed. There is nothing wrong with this, except that entire sections are copy-and-pasted from one paper to the next. His other papers, for example on tourism demand in the Far East, are incredibly thin slices of research.

But these are all working papers, and he is free to write all this as long as he does not pretend this is all original and substantially new work when submitting to journals that have such requirements. McAleer is, however, also publishing avidly, although luckily few of the papers mentioned above get placed, and then only poorly. In terms of publishing, he has found another niche, the Journal of Economic Surveys:
  • 2011, issue 2: 1 article
  • 2011, issue 1: 2 articles
  • 2010, issue 1: 2 articles
  • 2009, issue 5: 2 articles
  • 2007, issue 5: 1 article
  • 2006, issue 4: 3 articles
  • 2005, issue 5: 1 article

The journal has 5 issues a year, averaging 7 articles per issue. That is a remarkable publishing success in a generalist journal. It turns out that frequent co-author Les Oxley is the editor, and he himself does not hesitate to publish frequently in his own journal: I counted 17 articles of a non-editorial nature, several over 60 pages long, as well as 7 reports on conferences he attended.

A good number of those articles are titled "The Ten Commandments of ...", which I find rather pretentious. I was curious about The Ten Commandments for Academics, which could reveal some of the motivations of McAleer. They are:
  1. choose intellectual reward over money;
  2. seek wisdom over tenure;
  3. protect freedom of speech and thought vigorously;
  4. defend and respect intellectual quests passionately;
  5. embrace the challenge of teaching undergraduate students;
  6. acknowledge the enjoyment in supervising graduate students;
  7. be generous with office hours;
  8. use vacation time wisely;
  9. attend excellent conferences at great locations;
  10. age gracefully like great wine.


What I find interesting here is what was not considered. I think a better alternative, and one that would condemn much of what McAleer is doing, is the set of commandments due to Wesley Shrum:
  1. Thou shalt not work for deadlines;
  2. Thou shalt not accept prizes or awards;
  3. Honor thy forebears and colleagues regardless of status;
  4. Thou shalt not compete for recognition;
  5. Thou shalt not concern thyself with money;
  6. Thou shalt not seek to influence students but to convey your understandings and be honest about your ignorance;
  7. Thou shalt not require class attendance or emphasize testing;
  8. Thou shalt not worry about thy own intelligence or aspire to display it;
  9. Thou shalt not condemn those with different perspectives;
  10. SEEK TO UNDERSTAND THE WORLD.


These are principles about integrity, about changing the world and putting the scientific interest ahead of oneself. McAleer, rather, seems keen on clogging journals and working paper series with useless drivel, showing off and self-plagiarizing. At least for the latter part of his career, I do not see a positive externality from his efforts.

To come back to my initial question: to be prolific, find willing co-authors and editors, slice thinly, copy-and-paste, and do not think too hard about what academia is about.

Friday, June 10, 2011

What is the value of research?

What is the value of the research we do? The typical way we evaluate the impact of research is to count citations, and possibly weight them in some way, in Economics as in other sciences (except maybe where patents are relevant). But this only measures how the research output is viewed within a narrowly defined scientific community. The contribution to social welfare is an entirely different beast to evaluate.

Robert Hofmeister tries to give research some value. The approach is to consider the scientific process through cohorts, where each wave provides fundamental research as well as end-applications based on previous fundamental research. A particular research result can thus have a return over many generations. It is an interesting way to properly attribute the intellectual source of a new product or process, but the exercise is of little value if it is not possible to quantify the social value of the end-application. Indeed, Hofmeister goes back to using citations in Economics for a data application, which is equivalent to evaluating research only within the scientific community. In terms of the stated goal of the paper, we are back to square one. In terms of getting a better measure of citation impact, this is an interesting application of an old idea. And the resulting rankings of journals and articles look very much like those that are already available.
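To make the accounting idea concrete, here is a minimal sketch of how value might be passed back through generations of research. This is only my reading of the cohort setup, not Hofmeister's actual formula; the papers, links, decay rate and numbers below are invented.

```python
def generational_value(direct_value, builds_on, generation, decay=0.5):
    """Credit each paper with its own direct value plus a discounted share of
    the value of the later work that builds on it. A sketch of the cohort
    accounting idea only; the paper's actual procedure may differ.

    direct_value: paper -> stand-alone value (e.g. value of an end-application)
    builds_on:    paper -> list of earlier papers it draws on
    generation:   paper -> cohort index (0 = oldest)
    decay:        share of a paper's value passed back to each of its sources
    """
    value = dict(direct_value)
    # Walk from the newest cohort to the oldest, so each paper's value is
    # final before its credit is passed back to the papers it builds on.
    for paper in sorted(value, key=lambda p: generation[p], reverse=True):
        for source in builds_on.get(paper, []):
            value[source] += decay * value[paper]
    return value

# Hypothetical three-generation example: one fundamental result, one
# follow-up, one end-application.
direct = {"fundamental": 0.0, "follow-up": 1.0, "application": 4.0}
links = {"follow-up": ["fundamental"], "application": ["follow-up"]}
gen = {"fundamental": 0, "follow-up": 1, "application": 2}
print(generational_value(direct, links, gen))
# The fundamental result earns credit through both later generations.
```

If the only measurable "direct value" is citations, the recursion above just produces a recursively weighted citation count, which is exactly the retreat to square one described above.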

Monday, January 24, 2011

How not to distribute research funds

Citation counts are often used as a proxy for the quality of an article, researcher or journal. They are not a perfect measure, everybody agrees on that, but they have proven to be a useful starting point for evaluation. Sometimes they are taken very seriously, too seriously, for the distribution of funds and pay. But at least this is done within a field, as it is obvious that citation conventions, and in particular citation frequencies, differ from field to field.

Javier Ruiz-Castillo goes further, trying to infer how budget priorities should be allocated across research fields by using citation counts. Of course, for this one first needs a good understanding of how citations are distributed. Roughly, citations follow power laws within fields and subfields. This means that a few articles garner a lot of citations, while many receive none (especially in Business, Management and Political Science). And if I understand the paper right, one can readily apply a multiplier to compare citation frequencies across fields. These multipliers then make it possible to compare researchers or research units across fields within, say, a country, as long as one assumes that an adjusted citation is worth the same in every field. For example, is political science worth the same support as biomedical engineering after applying these multipliers, to take two random fields? And the "size" of the field matters as well. Here the author makes an attempt at some definitions of size which I frankly did not understand.
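As a concrete, entirely made-up illustration of the normalization step, here is a minimal sketch of how such field multipliers could be computed from average per-article citation counts; the paper's actual procedure is more involved, and the function, fields and counts below are my own assumptions.

```python
from statistics import mean

def field_multipliers(citations_by_field, reference_field):
    """Compute a multiplier per field so that an 'adjusted citation' is
    comparable across fields: a field whose articles are cited half as
    often gets a multiplier of 2. A sketch of the normalization idea only.

    citations_by_field: field -> list of per-article citation counts
    """
    averages = {f: mean(c) for f, c in citations_by_field.items()}
    ref = averages[reference_field]
    return {f: ref / avg for f, avg in averages.items()}

# Hypothetical counts illustrating the skewed, power-law-like distributions:
# a few highly cited articles, many with none.
data = {
    "biomedical engineering": [120, 40, 15, 8, 3, 0, 0],
    "political science": [30, 9, 4, 1, 0, 0, 0],
}
print(field_multipliers(data, reference_field="biomedical engineering"))
```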

That said, I wonder why I forced myself to read this paper. First, it is indigestible because it is poorly written and uses very bad analogies. Second, trying to compare fields and use citations to allocate funds or prizes across them is impossible because you have no identification: in statistical speak, the fixed effects capture all the variance. You can only compare how well a field does in a country relative to the rest of the world, but this cannot measure how important the field is. You need more information than just citations.
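To see the identification problem in the smallest possible example (with invented numbers): once each field's counts are scaled by that field's own average, the differences between fields vanish by construction, so adjusted citations cannot say which field deserves more support.

```python
from statistics import mean

# Two fields with very different raw citation levels...
raw = {"field A": [10, 5, 1, 0], "field B": [100, 50, 10, 0]}

# ...but after dividing by each field's own average, both fields have an
# adjusted mean of exactly 1: only within-field comparisons survive.
adjusted_means = {}
for field, counts in raw.items():
    field_avg = mean(counts)
    adjusted_means[field] = mean(c / field_avg for c in counts)
print(adjusted_means)  # {'field A': 1.0, 'field B': 1.0}
```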