And I wonder whether the Economist article on the quality of science is well researched itself.
It mentions the Science study that sent out fake, error-laden manuscripts to scientific journals. It claims that half of the journals accepted the paper, but without mentioning that a large part of the manuscripts were sent to a list of journals known to be bad. That has quite an influence on the number.
It mentions the recent spreadsheet problem in economics. “He tried to replicate results on growth and austerity by two economists, Carmen Reinhart and Kenneth Rogoff, and found that their paper contained various errors, including one in the use of a spreadsheet.” For balance one would expect the article to mention that the results were only quantitatively wrong, but, as far as I know, not qualitatively.
And their calculation example with the number of false positives does not convince me. Yes, scientists would like to find relationships that are “unlikely”, much less likely even than just 10%, but you find these theories not by random testing, but guided by theory. Purely empirical shotgun research is very unproductive.
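The point can be made concrete with a little arithmetic. The sketch below (the numbers are illustrative assumptions, not taken from the Economist article) shows how the share of false positives among “significant” findings depends heavily on the prior probability that a tested hypothesis is true, which is exactly what theory-guided testing improves:

```python
def false_positive_share(n_hypotheses, prior_true, power, alpha):
    """Fraction of 'significant' results that are actually false positives."""
    n_true = n_hypotheses * prior_true
    n_false = n_hypotheses - n_true
    true_positives = n_true * power       # real effects correctly detected
    false_positives = n_false * alpha     # null effects wrongly flagged
    return false_positives / (true_positives + false_positives)

# Shotgun testing: only 10% of tested hypotheses are true
# (80% power, 5% significance level assumed throughout).
shotgun = false_positive_share(1000, 0.10, 0.8, 0.05)   # = 45/125 = 0.36

# Theory-guided testing: say half the tested hypotheses are true.
guided = false_positive_share(1000, 0.50, 0.8, 0.05)    # = 25/425 ≈ 0.059
```

With the same statistical machinery, the false-positive share drops from 36% to about 6% purely because the hypotheses were better chosen, which is why purely empirical shotgun research performs so poorly.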
When I see three problems, even if they are not that big, I wonder how many others there are. I would not let this article pass review. 🙂
The link of item 4 is dead.
Fixed. Thanks.
u made me jew! :0)
So I did. Fixed.