Friday, June 10, 2011
What is the value of research?
What is the value of the research we do? The typical way to evaluate the impact of research, in Economics as in most other sciences (except perhaps where patents are relevant), is to count citations, possibly weighted in some way. But this only measures how the research output is viewed within a narrowly defined scientific community. The contribution to social welfare is an entirely different beast to evaluate.
Robert Hofmeister tries to assign a value to research. The approach is to consider the scientific process through cohorts, where each wave provides fundamental research as well as end-applications based on previous fundamental research. A particular research result can thus have a return over many generations. It is an interesting way to properly attribute the intellectual source of a new product or process, but the exercise is of little value if it is not possible to quantify the social value of the end-application. Indeed, Hofmeister goes back to using citations in Economics for a data application, which is equivalent to evaluating research only within the scientific community. In terms of the stated goal of the paper, we are back to square one. In terms of getting a better measure of citation impact, this is an interesting application of an old idea. And the resulting rankings of journals and articles look very much like those that are already available.
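Hofmeister's cohort idea can be sketched numerically: a fundamental result is credited with a share of the value of every later end-application that builds on it, discounted by generation. All numbers below (the payoff stream, the upstream share, the discount factor) are invented for illustration and are not taken from the paper.

```python
# Toy attribution of value across research generations, in the spirit of
# Hofmeister's cohort approach. The payoff stream, the upstream share and
# the discount factor are all invented for illustration.
discount = 0.9        # per-generation discount factor (assumed)
upstream_share = 0.3  # share of each application's value credited upstream (assumed)

# Value of the end-applications that appear in each later generation and
# build on one fundamental result.
application_values = [10.0, 8.0, 5.0, 2.0]

# The fundamental result's attributed value: its share of every downstream
# application, discounted back to its own generation.
attributed = sum(
    upstream_share * value * discount ** (generation + 1)
    for generation, value in enumerate(application_values)
)
```

The hard part, as the post notes, is not this accounting but putting a credible social value on the end-applications in the first place.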
Wednesday, May 11, 2011
Are PhD dissertations lagging the research frontier?
A doctoral or PhD dissertation is supposed to be work that pushes the research frontier further. Obviously, not all dissertations are created equal and it is to be expected that some will push more than others. But they are all supposed to push. Well, do they? This is something that is quite difficult to measure, as one needs to know where the research frontier lies and what the contribution of a dissertation is. For each dissertation, only a few people can do this, and it is thus impossible to have an aggregate picture, unless you use a clever trick.
Sheng Guo and Jungmin Lee take publications in top Economics journals as the research frontier and look at which JEL categories they fall into. They compare this to the JEL codes of US Economics dissertations and find a strong correlation after controlling for the number of jobs available in the field. If you lag the publications by two years, the regression fits just as well, which hints that dissertations react to the research frontier rather than the opposite (which is unfortunately undocumented), especially when you consider that, with the long publication delays in Economics, the journals are in fact a few years behind the research frontier.
My interpretation here is not quite that of the authors, who really want to understand how students choose their field of study, given that they want to be on the job market with research on a hot topic. But when they start working on it, they do not yet know what will be hot. I am not sure this is quite such a conundrum, as seminars, conferences and working papers already give quite a good picture. But it still looks like dissertations follow the trends instead of creating them.
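The Guo and Lee exercise boils down to correlating the distribution of dissertations across fields with the distribution of frontier publications a couple of years earlier. A minimal sketch, with made-up field shares standing in for actual JEL-coded counts:

```python
# Toy version of the correlation exercise: field shares of dissertations in
# one year against field shares of top-journal publications two years
# earlier. All shares are invented and stand in for actual JEL-coded counts.
pub_shares = {
    2005: [0.30, 0.25, 0.20, 0.15, 0.10],
    2006: [0.28, 0.27, 0.19, 0.16, 0.10],
}
diss_shares = {
    2007: [0.29, 0.26, 0.20, 0.15, 0.10],
    2008: [0.27, 0.28, 0.18, 0.17, 0.10],
}

def corr(x, y):
    """Pearson correlation between two equal-length lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)

# Correlate each dissertation year with publications lagged by two years.
lag = 2
lagged_corr = {
    year: corr(shares, pub_shares[year - lag])
    for year, shares in diss_shares.items()
}
```

A high correlation at a positive lag is consistent with dissertations following publications, though, as noted above, the reverse regression would be needed to rule out the opposite direction.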
Saturday, April 30, 2011
On the ethics of research cloning
Even though the Journal of Economic Perspectives recently went open access, a move for which the American Economic Association should be applauded, I am still receiving physical copies. It is a nice journal to read while lounging in the garden or on a plane ride. The last issue has, as usual, a good set of interesting articles, including one I had reported on earlier when it was still a working paper. But while checking what I had said about it, I noticed something rather odd: the paper I discussed was ultimately published in the Journal of Economic Behavior and Organization. I had to investigate.
The two papers are by Bruno Frey, David Savage and Benno Torgler. They both report on the sinking of the Titanic and discuss the characteristics of the passengers who survived versus those who perished. Both papers come to the same conclusions. The texts are different, though, and the published regressions are slightly different, with no explanation why, because neither paper references the other. One therefore has to read both in much detail to understand what the contribution of each paper is, if there is any.
All this is very fishy. It really looks like the authors are playing games here, trying to get multiple publications out of the same work. They do not mention the other work to fool editors and referees into thinking these are original contributions, as required for any submission to those journals. They tweak the results and rewrite the text so that they cannot be accused of blatant self-plagiarism. This is unethical behavior, but it is not unheard of in the profession.
But like a late-night infomercial, there is a bonus. Looking at the authors' CVs, I notice that they have a third publication with the same topic and results, in the Proceedings of the National Academy of Sciences. Bruno Frey has also published two short pieces in German in magazines prior to the academic publications: 1, 2, both pdf.
Now, who are the authors? David Savage is a PhD student at Queensland University of Technology. He must have been following the orders of the more senior authors, either without realizing their behavior was unethical or watching in horror without being able to do anything about it. Let us give him the benefit of the doubt. His adviser is Benno Torgler, who already has an impressive track record for someone whose first refereed publication was in 2002. His RePEc profile lists 105 working papers and 52 journal articles. Looking at the published works, he seems to like revisiting previous papers by adding new twists to them. There is nothing wrong with that, but it may explain why there is no major hit among the publications: there is simply too much slicing, and no single slice is a major contribution worth a good publication. Early in his career, he published a series of articles on tax morale using the World Values Survey. Using the same data and the same methodology, he managed to publish several articles whose only distinguishing feature is that they look at a different set of countries: Asia, transition countries, Canada, Latin America, and possibly more. While I must confess that I have not read the papers in detail (there is simply too much material) and Benno Torgler may be innocent, I still find these patterns very disturbing.
It took me some time to figure out where Benno Torgler earned his doctorate: at the University of Basel, under the supervision of René Frey (Basel) and Bruno Frey (Zurich), who are brothers, after undergraduate studies at the University of Zurich. Which brings us to Bruno Frey. He is a researcher of international recognition, mostly for his work on welfare economics, happiness research, and critiques of fundamental assumptions in economic models. He credits himself with over 600 published articles and books, an astounding number in Economics. Of course, if this number comes about by slicing papers or republishing known results as described above, it is less surprising. Looking at his list of major articles, one can surely suspect something is not quite right. I do not have the time (or the will) to go through all of it, but there is indeed a lot of rehashing of the same themes, which is fine when one uses new data sets or new approaches. But given those quantities, that seems unlikely.
Another aspect of Bruno Frey's record that I find disturbing is that his recent work has been railing against the tendency of academics (and especially their administrators and grant makers) to look for quantifiable evidence of their productivity, what he calls "evaluitis." He writes against the pressure to publish and the prominence of rankings of research output. I have reported on some of this writing myself (1, 2, 3). But again, he seems to be repeating himself a lot, even in published articles, essentially criticizing a game he seems to be excelling at. Either he is being sarcastic or he is a hypocrite; I cannot decide.
I realize the accusations I am making here can have severe consequences. But I am only accusing, not condemning. I leave the reader the opportunity to form her own opinion, as I have linked to plenty of evidence. I hope to be proven wrong, that these three individuals are indeed extremely innovative and productive. But from what I have seen so far, my prejudice is strongly negative in this regard.
Update (Sunday): I have been alerted that there is a fourth publication about the same Titanic study, in Rationality and Society.
Further update: A follow-up post.
Wednesday, April 20, 2011
Should universities focus on teaching or research?
The UK is going through a rather traumatic reform of the finances of its higher education system. There is much debate, both in public and among academics, on how much of the burden should be carried by the students and how. Even if we take as given that universities are funded by public grants, how to allocate them is still complex, in part due to the dual goals of universities: teaching and research. Research grants are now quality driven, while teaching grants are mostly quantity driven. Given that there is a budget constraint, either at the government or at the university level, any economist would tell you there is a trade-off and choices need to be made.
John Beath, Joanna Poyago-Theotoky and David Ulph discuss how universities choose whether to focus on research or teaching. Universities are funded according to a formula depending on the number of students taught, the number of academics and their research quality, which itself depends on the academic/student ratio. Every academic gets the same pay, and universities maximize an objective based on the quality-weighted volume of research and the quality-adjusted number of students, where the quality of teaching is a consequence of the academic/student ratio.
It is clear there is not going to be a free lunch. A university cannot be good at both teaching and research. As one plays around with the funding formula, all sorts of equilibria can emerge. For example, if research is well rewarded and a minimum quality of research is required for it to be funded, a bifurcation occurs where a few universities concentrate on research and others on teaching. Which funding formula is best for society remains open, though.
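The flavor of this trade-off can be illustrated with a toy objective function. Everything below (functional forms, parameter values) is invented for illustration and does not come from the paper; the point is only that when returns to research are convex, the optimum sits at a corner, which is the flavor of the bifurcation result.

```python
# Toy objective for a university choosing how many academics do research.
# All functional forms and parameters are invented; none come from the paper.

def payoff(researchers, total_academics=100, students=1000):
    """Quality-weighted research plus quality-adjusted teaching."""
    teachers = total_academics - researchers
    teaching_quality = teachers / students            # academic/student ratio
    research_quality = researchers / total_academics  # more researchers, higher quality
    research_value = research_quality * researchers   # quality-weighted volume
    teaching_value = teaching_quality * students      # quality-adjusted students
    return research_value + teaching_value

# With these convex returns to research, the payoff is maximized at the
# corners: specialize fully in teaching (0) or fully in research (100),
# while a 50/50 split does strictly worse.
values = {a: payoff(a) for a in range(0, 101, 10)}
```

In this toy version, a mixed university is dominated by a specialized one, which is the mechanism behind some universities concentrating on research and others on teaching.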
Tuesday, March 29, 2011
Which are the most efficient universities?
University rankings are not particularly useful because they only measure output and hardly any input. And when they do measure input, having more inputs is better. This means that rankings reflect size and resources, not how well resources are used or how much students improve from when they started their studies. This implies that any university that does not have a medical school or an engineering school starts with a disadvantage. Internationally, university rankings have become very important, to the point that, for example, France is now reversing the splitting of its large universities into field-specific institutions. The new monster universities, once again covering all fields, will rank much better thanks mostly to their sheer size.
To counteract all this, you need to measure the efficiency of universities. Thomas Bolli does this for 273 universities across the world by estimating a production possibilities frontier. Unfortunately, the sole measured input is full-time equivalents (FTE) of staff, while the outputs are FTE of undergraduate and graduate students, and citation numbers. But it is a start. Universities in Switzerland and Israel appear to be very efficient (and indeed they are small and generate a good amount of research) while those in the UK seem particularly inefficient. That should fan some flames in the debate on university financing there.
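A frontier-style comparison of this kind can be sketched very crudely: score each university by its outputs per unit of staff input, relative to the best performer on each output dimension. The universities and numbers below are invented, and this simple ratio approach is far cruder than the production possibilities frontier estimated in the paper.

```python
# Crude outputs-per-input efficiency comparison across universities, much
# simpler than the estimated production possibilities frontier in the paper.
# Universities and numbers are invented.
universities = {
    # name: (staff FTE input, student FTE output, citations output)
    "A": (1000, 12000, 30000),
    "B": (2000, 20000, 40000),
    "C": (500, 7000, 20000),
}

# Outputs per unit of staff input.
ratios = {
    name: (students / staff, citations / staff)
    for name, (staff, students, citations) in universities.items()
}

# Relative efficiency: the best ratio on either output dimension defines
# the frontier; a score of 1.0 means the university is on the frontier.
best_students = max(r[0] for r in ratios.values())
best_citations = max(r[1] for r in ratios.values())
efficiency = {
    name: max(r[0] / best_students, r[1] / best_citations)
    for name, r in ratios.items()
}
```

Note how the small university C ends up on the frontier here, echoing the finding that small, research-productive institutions score as efficient.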
Friday, February 25, 2011
Are hot teachers better teachers?
It is well known that beautiful and tall people have better lives and are better paid. This is especially thought to be true in activities where skills are relatively unimportant. What about economics professors?
Anindya Sen, Marcel Voia and Frances Woolley use the hotness indicators from student evaluations at ratemyprofessor.com in Ontario and find that hot economics university professors are paid a whopping 10% more than their less attractive counterparts. Not only is this a large number, it also runs counter to previous results that such effects are limited to unskilled professions. The effect is especially strong for men and not present for women; indeed, women who negotiate hard are not deemed attractive.
In addition, hotter teachers also get better student evaluations, even after controlling for everything the authors could put their hands on. As for other indicators of professor productivity, it turns out that hotness positively affects citations for women, although this could be due to a few highly cited women (citation counts are always very skewed). Neither men nor women publish significantly more when hot, but they do tend to attract more co-authors. I really need to be careful with my appearance.
Sunday, February 6, 2011
Why should I write grant applications?
My administrators insist I should go for grants. They say it raises prestige and my research would benefit from it. I have not applied for a significant grant for quite some time for a reason: it is a horrible waste of time. I do not need grants. I do my research very well without support money. All I need is a pencil, paper and a computer. I can even do without a printer. I do not need to pay for data, software and subscriptions, as all this is available for free (thanks to open source and open access). I do not need a research assistant, as I do the work faster and better myself. And I do not need summer money, as I am already well paid. In other words, I am doing just fine without grants, so why should I put time and effort into maybe getting a little money I do not need, money that comes with all sorts of strings attached?
My administrators do not care about the impact on my research, or my welfare for that matter. They want the overhead. They are begging for money to justify their existence. I already bring lots of money to the college by teaching many, many tuition-paying and public-funding-attracting undergraduates. In fact, from a back-of-the-envelope calculation, my pay should double just for that. I am already subsidizing the administrators, so why would they need grant overhead? To feed a machinery that deals with those grants: the office of research, which manages the grants, is twenty people strong. And if I hire a research assistant among the graduate students, I have to pay his or her full tuition before anything can be assigned. I cannot hire outside the university. So why would I want to hire anyone?
In some way, the administration wants me to pay for my salary through grants, a salary I have already more than earned with teaching to overflowing classrooms. To be honest, if I were successful in obtaining grants, I would leave the university and keep everything for myself. I would then be able to concentrate on research instead of putting up with all the red tape. But most funding agencies do not accept submissions from independent researchers, so I continue doing my research without grants and try to ignore these administrators. Let them show their self-importance elsewhere.
Monday, January 24, 2011
How not to distribute research funds
Citation counts are often used to proxy for the quality of an article, researcher or journal. They are not a perfect measure, everybody agrees on that, but they have proven to be a useful starting point for evaluation. Sometimes they are taken very seriously, too seriously, for the distribution of funds and pay. But at least this is done within a field, as it is obvious that citing conventions and in particular frequencies differ from field to field.
Javier Ruiz-Castillo goes further, trying to infer how budget priorities should be allocated across research fields using citation counts. Of course, for this one first needs a good understanding of how citations are distributed. Roughly, citations follow power laws within fields and subfields. This means that a few articles garner a lot of citations, while many go empty (especially in Business, Management and Political Science). And if I understand the paper right, one can readily apply a multiplier to compare citation frequencies across fields. These multipliers then make it possible to compare researchers or research units across fields within, say, a country, as long as one assumes that an adjusted citation is worth the same everywhere. For example, is political science worth the same support as biomedical engineering after using these multipliers, to take two random fields? The "size" of the field is important as well. Here the author makes an attempt at some definitions of size which I frankly did not understand.
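The multiplier idea can be illustrated with a simple mean normalization: express each paper's citations in units of its field's average, so a paper cited at the field mean scores 1.0 in any field. The fields and counts below are invented, and dividing by the field mean is only one possible choice of multiplier.

```python
# Field-normalized citation counts: one possible multiplier is the inverse
# of the field's mean citation count. Fields and counts are invented, and
# the mean is only one of several normalizations one could assume.
field_citations = {
    "political science": [0, 0, 1, 2, 5, 40],         # skewed, power-law-like
    "biomedical engineering": [2, 5, 10, 20, 50, 300],
}

field_mean = {field: sum(c) / len(c) for field, c in field_citations.items()}

def normalized(field, citations):
    """Citations expressed in units of the field's average citation count."""
    return citations / field_mean[field]
```

Any cross-field comparison built on this rests entirely on the assumption that a mean-normalized citation is worth the same everywhere, which is exactly the assumption at issue here.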
That said, I wonder why I forced myself into reading this paper. First, it is indigestible because it is poorly written and uses very bad analogies. Second, trying to compare fields and use citations to allocate funds or prizes across them is impossible because you have no identification: in statistical speak, the fixed effects capture all the variance. You can only compare how well a field does in a country relative to the rest of the world, but this cannot measure how important the field is. You need more information than just citations.