The rise and fall of a physics fraudster
May 2, 2009 1:21 PM Subscribe
The rise and fall of a physics fraudster. In the spring of 2002, the world’s most productive young scientist was a 31-year-old physicist at Bell Labs in New Jersey in the US. With eight papers published in Nature and Science in 2001 alone, Jan Hendrik Schön was emerging with breathtaking speed as a star researcher in physics, materials science and nanotechnology...But in September 2002, managers at Bell Labs released a report [pdf] that...made clear that much of Schön’s data were fake. His discoveries were lies. Many of his devices had probably never existed...On the day of the report’s release, Schön was fired and fled the US to an unknown location. In all, 21 of Schön's papers were withdrawn from Nature, Science and the Physical Review journals.
In a sense this is an example of science working correctly. Others tried to replicate his work, and failed, and in fairly short order they determined that he was a liar. And his work was then repudiated and withdrawn.
It's happened many times in the history of science, and science has not been crippled by it.
The cases where science has stumbled badly and been harmed were those in which skeptics were demonized and punished without regard to the merits of their claims.
posted by Chocolate Pickle at 1:51 PM on May 2, 2009 [27 favorites]
What's interesting is just how brittle science can be. It seems like you can get published before people replicate your work, which is probably important (after all, how would people find out about your work without being published?) but once a "bad" paper gets into the system, it takes a while to get back out.
Another example was the case in which researchers published a paper claiming that MDMA caused brain damage. But it turns out they had been giving their subjects the wrong drug (they were actually given meth), which had been mislabeled! The research was widely publicized at the time in order to promote "the war on drugs". When it turned out to be false, most people didn't hear about it, and as a result a widespread misconception remained.
posted by delmoi at 2:00 PM on May 2, 2009 [7 favorites]
An important story. But this:
This dramatic end to Schön’s case brings us back to the question of whether science is, or is not, self-correcting. Science was corrected in the Schön case, but not by itself — only because individual scientists made corrections.
What a ridiculous thing to say. When people say that science is self-correcting, they don't mean that there is this thing, Science, and it looks itself over in a mirror now and again. They mean that individual scientists catch each other's errors (or fabricated results) as part of their work.
If anything, this story just highlights the dependence of the error-correcting process on both time and economic constraints. Great. This just in: Science takes time and money!
posted by voltairemodern at 2:03 PM on May 2, 2009 [35 favorites]
That was my reaction to that statement, too, voltairemodern. What a strange thing to say.
I'm sure I can't even imagine how frustrating it must be to have spent precious years trying to replicate these results. I'm glad that was one focus of the article.
posted by Durn Bronzefist at 2:05 PM on May 2, 2009
Vaguely reminds me of the Bogdanov affair.
(Which was also an example of science working correctly.)
posted by Dumsnill at 2:09 PM on May 2, 2009
Remember the Horizon doc about this guy... see, it's on YouTube: 1, 2, 3, 4, 5 (transcript)
Though to be honest, I don't remember it as being a great doc, just another example of the sad, dumbed-down trash that Horizon has become.
posted by fearfulsymmetry at 2:53 PM on May 2, 2009 [1 favorite]
His history of fraud seems to have lasted about four years, with some scientists raising doubts during that period. Many criminal trials and government enquiries into wrongdoing have lasted much longer.
posted by binturong at 2:54 PM on May 2, 2009 [1 favorite]
Not that it's an uninteresting story in itself, just the woeful spin Horizon put on it.
posted by fearfulsymmetry at 2:55 PM on May 2, 2009
The story is not really surprising. Having worked in several labs in the EU and the US, I find it easy to see how scientific fraud happens. There are labs where a fraudulent paper could never happen, and there are places where you wonder why it has not happened yet.
posted by yoyo_nyc at 3:00 PM on May 2, 2009 [1 favorite]
Well, it's not like people would sit around believing a certain type of nanomachine was possible. People would want to use these technologies. And if they didn't work, well, what would you be able to say?
I think if you planned the fraud more carefully, such that the results would be interesting, but not all that awesome, not something that people would really want to build on and extend, you might be able to get away with it.
I wonder what this guy was thinking. You'd have to be really damn smart just to come up with the fake material and theories; presumably you could do something useful.
But I think it also highlights one of the problems with science: there's a bit of a lottery aspect to it. If you pick one thing to study, and it turns out that you don't learn anything or prove the null hypothesis, then that really doesn't do you any good, even though it's just as important.
Say for example people had thought up 10 possible cures for AIDS, but only one could actually work. In the end, only the team that picked the 'winning' cure would earn all the plaudits, win all the Nobel prizes, get all the fame, etc. But each of the teams that turned up bust would have been just as important in the effort.
posted by delmoi at 3:09 PM on May 2, 2009 [6 favorites]
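(A toy calculation makes delmoi's lottery point concrete. This is only an illustrative sketch, and the ten-candidate setup and uniform prior are assumptions rather than anything from the article: if exactly one of ten candidates works, every team that definitively rules one out concentrates the remaining probability on the survivors, so the "bust" results carry real information even though they win no prizes.)

```python
# Toy model: exactly one of ten candidate cures works, and we start
# with a uniform prior over which one it is (both are assumptions).
candidates = 10

for ruled_out in range(candidates):
    remaining = candidates - ruled_out
    # Each definitive negative result spreads the same total
    # probability over fewer surviving candidates.
    print(f"{ruled_out} ruled out -> each survivor now has "
          f"a {1 / remaining:.0%} chance of being the cure")
```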
But in September 2002, managers at Bell Labs released a report that laid out a series of shocking revelations. Written by a panel of outside scientists (chaired by Stanford University physicist Malcolm Beasley), the report made clear that much of Schön’s data were fake....
After talking to another researcher at a conference in 2003, Chesterfield went home, took one of his old crystals out, and made some final measurements. “Then I concluded Schön was lying,” he said.
Now that's a scientist.
posted by dhartung at 3:35 PM on May 2, 2009 [2 favorites]
If you pick one thing to study, and it turns out that you don't learn anything or prove the null hypothesis, then that really doesn't do you any good, even though it's just as important.
Yes, that was one of the sad aspects to the story, imagining those poor grad students who were set on reproducing Schon's results...their work was in some sense important, but it didn't even lead directly to the eventual revelation that Schon was a fraud. It made me furious to imagine Schon corresponding with these poor guys, urging them on.
posted by voltairemodern at 3:39 PM on May 2, 2009
results would be interesting, but not all that awesome, not something that people would really want to build on and extend, you might be able to get away with it
I'm working out the final details of a contract with the defense department for providing Heisenberg nanobot invisibility suits. They work great, so long as nobody looks at them.
posted by It's Raining Florence Henderson at 3:44 PM on May 2, 2009 [4 favorites]
Interesting article, but this part made me shake my head:
This result, called ambipolarity, would have made it easier to wire the transistors into inverters (logical circuits that are able to reverse the direction of an incoming signal).
posted by Tapioca at 3:54 PM on May 2, 2009
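(A quick aside on that quoted parenthetical, since it's what prompted the head-shake: an inverter flips a logic level, high in and low out; it doesn't "reverse the direction" of a signal. Below is a minimal truth-table sketch, plus, as I understand it, why ambipolarity would matter: complementary logic normally needs both n-type and p-type transistors, so a device that conducts both electrons and holes could play either role.)

```python
# An inverter complements a logic level; it does not send a signal
# backwards down a wire.
def inverter(v_in: bool) -> bool:
    return not v_in

for v in (False, True):
    print(f"in={v!s:5} -> out={inverter(v)}")  # False->True, True->False
```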
How could it have gotten this far? 21 papers were withdrawn?! Hadn't anyone duplicated any of his experiments by that time? Was anyone checking up on him? How is this not cold fusion all over again?
It's enough to make me suspect that scientists generally don't duplicate experiments to check if they're reproducible. Perhaps what we need is some kind of obsessive experiment-reproducing infrastructure to weed out frauds.
HEY OBAMA ARE YOU LISTENING?
posted by JHarris at 4:06 PM on May 2, 2009
It's possible that there are major differences in the studies and journals of different fields, but I don't find any of this to be all that surprising.
For one, if you've come across a paper that you want to investigate, you will have to do your own study. This might involve applying for and getting grants, which are often only awarded at certain times of year. This also might involve scheduling the use of high tech equipment to run samples or other such things, or in this scenario, building these things. Don't know how to build it? Oh well. And if you somehow manage to build it and it doesn't work, it could just mean you fucked up. Your study then finds the previous results to be non-repeatable. This is something you're going to want to check to make sure you haven't fucked up before you go on an international callout, because it's your name, reputation, and likely your career at stake. This is not something you do lightly. None of this takes into consideration publishing your results, which is a whole 'nother mess in itself.
At any rate, yes, the science world is rife with controversy, missteps, and the occasional blatant liar. There are hundreds of stories. A few years back a similar thing happened in dealing with dating desert varnish on rocks; I can't remember any of the details, but in short it was likely the samples were spiked to show the expected ages. It all got settled so that the guy kept his job and no one involved could ever discuss it. Then there are the people who steal papers: a reviewer for your article who rejects it and then turns around and publishes his own version, for example.
If I knew anything about the quality of the journals he got published in I would comment, but it's possible what he was working on was just plausible enough that the scientists selected for peer-review didn't catch it. And thus it came to experimentation and reproducibility, which take time.
posted by six-or-six-thirty at 4:47 PM on May 2, 2009 [1 favorite]
JHarris: For one thing, I imagine that funding cycles have something to do with it. Money to duplicate this year's interesting result probably doesn't get into budgets until NEXT year. It could be even longer -- remember that a good chunk of science is funded by the government, and requisition cycles in government are famously prolonged.
Basically, verification gets done, but it's somewhat delayed in many cases. Time and funding are limited. It does still self-correct -- the fact that it doesn't do so instantly usually isn't that important. There just aren't that many fraudulent scientists, because they get caught.
Trying to design the whole system to prevent some weird corner case will tend to slow down and foul up the common cases.... always optimize for the routine scenario, as long as the failure modes don't break the system. This was simply an annoyance, not an outright failure. The corruption was detected, investigated, and ultimately rejected.
posted by Malor at 4:51 PM on May 2, 2009 [1 favorite]
It's enough to make me suspect that scientists generally don't duplicate experiments to check if they're reproducible. Perhaps what we need is some kind of obsessive experiment-reproducing infrastructure to weed out frauds.
Huh? This guy was caught. Generally, you would only have to check up on a certain percentage of papers to verify that they are real.
On the other hand, if you fake something really interesting, people are going to want to build off what you've done, and they'll need to replicate the original experiment in some form to do it.
posted by delmoi at 4:58 PM on May 2, 2009
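(delmoi's spot-checking intuition can be put in rough numbers. This is a back-of-the-envelope sketch; the 10% audit rate is invented, and only the count of 21 papers comes from the post.)

```python
# If each published paper independently has probability p of being
# seriously replication-checked, a fraudster with n fabricated papers
# escapes detection with probability (1 - p) ** n.
# p = 0.10 is an invented illustrative audit rate; n = 21 matches the
# number of withdrawn Schön papers mentioned in the post.
p, n = 0.10, 21
escape = (1 - p) ** n
print(f"chance that all {n} papers escape checking: {escape:.1%}")  # ~10.9%
```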
You'd have to be really damn smart just to come up with the fake material and theories; presumably you could do something useful.
Looks like not. I mean, sure, smart enough to get into the program, but it's a big leap from there to actually producing the goods.
Long term, of course he was going to get found out. But Long Term is a long time away, and huge numbers of people don't really think about it all too seriously. No surprise there.
His big mistake was not going into the humanities.
(Then again....)
posted by IndigoJones at 5:14 PM on May 2, 2009
What's interesting is just how brittle science can be. It seems like you can get published before people replicate your work … but once a "bad" paper gets into the system, it takes a while to get back out.
Brittle in comparison to what? In several other fields, it takes hundreds of years for change to occur. Heck, in some cases it's taken over two thousand years and the bugs still haven't been ironed out.
A four-year flash in the pan shows the flexibility of a Slinky, not brittleness.
posted by five fresh fish at 5:26 PM on May 2, 2009 [3 favorites]
His big mistake was not going into the humanities.
It is extremely unlikely he would have been very successful there, either. In any academic profession, you are expected to act in good faith. Moreover, this guy wasn't producing new theories that he knew weren't correct -- that would at least suggest that he might have the soul of a writer -- he was claiming to have constructed new pieces of technology that he knew didn't exist.
I don't think he missed a calling in the humanities; his fraud doesn't translate. Would he have pretended to have written a great treatise or history that he wouldn't let anyone read?
posted by voltairemodern at 5:39 PM on May 2, 2009
How could it have gotten this far? 21 papers were withdrawn?! Hadn't anyone duplicated any of his experiments by that time? Was anyone checking up on him?
The physicsworld article (first link) is long but goes into this in some detail. It took a long time for people to notice because a lot of labs were working competitively and in secret, trying in vain to reproduce the results but keeping their activities hidden for fear of being scooped. The general subtext of the article is that these frauds illustrate that science has been compromised by greed and the quest for glory. If science were more pure, Schon would not have been tempted to fudge his results for personal glory; his co-authors would have examined the data more critically; other scientists trying to reproduce his results would have trusted their eyes more than the reputations of the prestigious journals they were comparing against; and negative results from failed experiments, the key to unraveling a fraud like this, would be disseminated as widely as positive results.
posted by PercussivePaul at 6:00 PM on May 2, 2009 [1 favorite]
This would benefit from a "where is he now" followup.
Seriously-- when you commit fraud at this level within your highly specialized field--30+ years before you can expect to retire--where do you go from there?
posted by availablelight at 6:32 PM on May 2, 2009 [1 favorite]
From the excerpt of Plastic Fantastic:
Two years later, a colleague forwarded to me a press release that reminded me of Hendrik Schön. It announced the invention of transistors similar to some of those that Schön had claimed to make. This time, the devices were real, built by researchers at Rutgers University, a few dozen miles from Schön’s former lab at Bell Laboratories’ site in New Jersey. The Rutgers group had used techniques different from Schön’s and obtained results that were in many ways more modest, but the same physical principle of turning carbon-based materials into electrical switches was demonstrated by their experiments. The investigators of Schön had considered this possibility, writing in their report that the finding of scientific misconduct against Schön would remain valid even if the science of Schön’s claims was validated in the future. Even so, it was interesting to think about what might have happened if this research, or other work similar to Schön’s, had been completed sooner. Schön might have earned the credit for being one of the first to jump into a novel area, while the scientists who did the work to test his claims appeared to come in second. Paul McEuen and Lydia Sohn might never have searched for evidence of fraud, and Schön might have gotten away with it. Suddenly, the self-correcting nature of science looked as if it could slice both ways.
Hooked by this idea, I couldn’t help but wonder whether it had occurred to Schön too. Had Schön been banking on the possibility that his false but plausible scientific claims might one day be validated through the honest work of others? At the same time, the appearance of similarity between Schön’s work and other genuine results helped to explain why other scientists had been so willing to believe Schön in the first place. Schön had apparently imitated the outline of real scientific breakthroughs well enough that his data seemed both groundbreaking and plausible at the same time. [...] From other scientists I learned an enormous amount about the ideas, experimental suggestions, and feedback provided to Schön during his time in science. Comparing this to the claims he made, I found a pattern of compelling resonance. Schön was apparently working furiously to integrate the research ideas of the scientific community around him into his publications. No wonder, then, that scientists were so thrilled by his papers. Schön had turned their best ideas into fabricated data that were bound to seem appealing. This helped to explain both why his claims got a good reception and why those claims had something in common with results later achieved in reality by other scientists. Schön hadn’t guessed the onward course of science as much as listened to colleagues who had.
posted by you're a kitty! at 6:32 PM on May 2, 2009 [2 favorites]
His big mistake was not going into the humanities.
Well, as I understood it, the humanities were entirely discredited by that one fake article that Alan Sokal published in Social Text back in 1996.
A bit worrying though to discover now that science is also just a load of bunkum.
posted by washburn at 6:49 PM on May 2, 2009 [3 favorites]
This is reminiscent of this recent episode. For those who didn't see the linked post, an anesthesiologist who published extensively on the use of COX-2 inhibitors (drugs like Celebrex) was recently found to have committed a similar type of fraud by simply making up the patients in some of his studies, leading to the withdrawal of more than 20 papers with his name on them from the leading anesthesia journals. I was recently at a meeting where an editor at one of those journals talked about the incident, and he (and others) made some good points. One is that the peer review system, which is the scientific community's mechanism for ensuring the quality of research, is not designed to weed out blatant fraud. It is good at detecting plagiarism and shoddy work, but if you just make shit up and it is plausible, it is possible to get it published. One reason the current system has this weakness is that negative results are hard to get published. The editor went on to make the point that we will never know how many people tried to replicate the fraudulent results and couldn't; it was only through carelessness in dealing with the IRB that the fraud was detected. Eventually the system does work, and when enough people are unable to replicate an experiment the results are questioned. It can take years, but then again, it can take years for positive results to become widely accepted if they challenge the conventional wisdom (see Warren and Marshall). Science is not meant to be quick, just ever so slightly more accurate over time.
posted by TedW at 7:24 PM on May 2, 2009 [1 favorite]
His big mistake was not going into the humanities.
XKCD!!!
posted by Chocolate Pickle at 7:57 PM on May 2, 2009 [2 favorites]
Some other scientific fraud cases full of intrigue:
Sames (at Columbia) publicly retracts several papers by his own grad student, Sezen, after other students can't repeat the work. Except that he apparently does so without her knowledge, and she claims she can repeat the results just fine and he shouldn't have publicly shamed her out of a postdoc job. And, just for some good measure, some speculation that they may have been sleeping together, just for good measure. Also, for good measure, some of these results claim faster-than-diffusion catalyst turnovers, which Sames had been bragging about. As a result of this case, everybody with their name on a paper is now required to certify that they actually know what's going on in it (Sames claimed ignorance of the fraud).
Meanwhile, if you write federal grant proposals based on fraudulent data, the feds will jail your sorry ass. Especially if basically half of what we thought we knew about the biochemistry of aging turns out to be made up. If you look up the backstory, it was a tech in his group who figured it out and turned him in, and Poehlman went after this guy and set out to destroy him.
So, the moral? Well, trust your gut on new results and know that there are plenty of good, hardworking scientists out there, and that eventually it will get sorted out.
posted by Dr.Enormous at 8:12 PM on May 2, 2009
What a ridiculous thing to say. When people say that science is self-correcting, they don't mean that there is this thing, Science, and it looks itself over in a mirror now and again. They mean that individual scientists catch each other's errors (or fabricated results) as part of their work.
I think some people would have the expectation that the structure of scientific evaluation might work universally, i.e., that it would not depend on individual scientists happening across a fraudulent claim, but that every claim would be reviewed as a matter of course. I'm not saying that's a knowledgeable thing to assume, but I don't think it's ridiculous to make the distinction. Yes, science probably gets around to most claims eventually - but there's no guarantee, certainly not that the data claimed was actually found, since people can always claim results they want and then take credit when someone else is able to actually pull it off (that is, even if it's believable data, that could just mean the scientist made lucky conjectures).
Well, as I understood it, the humanities were entirely discredited by that one fake article that Alan Sokal published in Social Text back in 1996.
It wasn't a "fake" article; it's just pretty simplistic and poorly written - but it has a clear thesis and claim, and follows through on it. The journal editors weren't impressed by it as a humanities article, but they were interested that a world-renowned scientist wanted to argue for the social implications of quantum theory. It was published in an edition of the journal devoted to "science wars", looking at issues of 'objective' science vs social/cultural paradigms, so it was fitting.
In any case, that's one limited area of conversation - there is more to the humanities than issues of social imaginaries, etc.
posted by mdn at 9:18 PM on May 2, 2009
And, just for some good measure, some speculation that they may have been sleeping together, just for good measure. Also, for good measure...
Holy repetition repetition, Batman!
(See, sometimes scientists also make honest mistakes too--i.e. writing while drinking. Which don't always get caught on peer review/preview)
posted by Dr.Enormous at 9:36 PM on May 2, 2009
This kind of thing does happen in the humanities, too. Fraud happens, and the checking happens too. For instance, Michael Bellesiles.
posted by Chocolate Pickle at 11:25 PM on May 2, 2009
You'd have to be really damn smart just to come up with the fake material and theories; presumably you could do something useful.
If you're Hwang Woo-Suk, you can do both!
If you pick one thing to study, and it turns out that you don't learn anything or prove the null hypothesis, then that really doesn't do you any good, even though it's just as important.
And usually it's not even considered publishable, which I think does a disservice to both the researchers and the field as a whole. There are attempts to rectify this, e.g. in biomedicine and in mathematics. But who's going to take the time to read those? (Actually, Rejecta Mathematica sounds like a pretty interesting journal, if I were a mathematician…)
posted by hattifattener at 12:43 AM on May 3, 2009 [1 favorite]
When people say that science is self-correcting, they don't mean that there is this thing, Science, and it looks itself over in a mirror now and again. They mean that individual scientists catch each other's errors (or fabricated results) as part of their work.
No time to read the linked articles right now, but what surprised me back when this guy got his pants pulled down was that several of his papers had tons of co-authors, many of whom were well respected. If the fact that you can get (and want to have) your name on a paper despite having no fucking clue what the main author has actually done isn't a problem with today's scientific process, I'm not sure what is.
posted by effbot at 4:24 AM on May 3, 2009 [1 favorite]
no time to read the linked articles
No time to read this post either, it seems. The guy I was thinking of was the physician with the same initials and roughly the same number of fraudulent articles, Jon Sudbø: The commission deemed much of Sudbø's work invalid because of manipulation and fabrication of raw data: of the 38 articles he had published since 1993, 15 were condemned as fraudulent, including his doctoral dissertation. [...] The commission could not rule out that Sudbø's false conclusions could have had an impact on cancer patients around the world, because his findings were used by other scientists and incorporated into cancer treatments [...].
Sudbø had a total of 60 co-authors on his papers. None of them noticed anything, despite things like using the same birth date for 20% of the population in one random sample in one of the papers, using the same sample set to prove different things in different papers, citing data from patient registers that didn't exist, etc.
posted by effbot at 4:44 AM on May 3, 2009
Several years ago a friend of mine was working in a lab and got results indicating that if you added X to Y, there was an increase in Z. She then took a job in another lab, mainly for personal reasons having nothing to do with her original workplace. Her work was taken over by a new graduate student who repeated her experiments and got the opposite results: adding X to Y caused a decrease in Z. Her former boss was very upset and accused her of falsifying the data. As it turned out, the technique she was using at the time, quantitative RT-PCR, had been supplanted by a much better technique, real-time quantitative RT-PCR. She had just been getting a single data point at an unfortunate point on a curve, which became clear when using a technique that supplies all the points of the curve. Had they published her data, they might have had to retract it later.
Another friend working in industry discovered that the head of her lab had falsified the initial piece of data that was the raison d'etre of their project.
A nearby lab published a high-profile journal article whose data other labs had a great deal of difficulty reproducing.
A nearby lab at another place I worked had to retract everything involving one postdoc who had been faking all of her data.
I want to point something out here: falsification of data happens a lot. Sometimes data appears to be falsified when it isn't. I find this stuff completely terrifying - not trusting a colleague on this level is horrible.
effbot:
There are a couple of ways of getting your name on a paper where you don't know much about the main research of a paper - you provide a reagent or a service to the people doing the main part of the paper. For instance, let's say I make a very fancy chemical that Bob really needs to further his research. He calls me and asks for the chemical and I give it to him in exchange for my name on the paper. Theoretically he could pay me, but it is worth more to me to publish it. I write the part of the Materials and Methods section of the paper which pertains to the fancy chemical. If I'm not the PI of the lab giving the chemical, depending on the importance of my contribution, my PI might even get his name on the paper.
Another scenario is that the people doing the main work on the paper need a knockout mouse to show more clearly the effects that they've been seeing in tissue culture. I make them a mouse or give them one that I've previously created. I'm credited for my work but I have little to do with the other research done in the paper.
An additional scenario: I have a really cool piece of equipment that no one else has; it can provide a really nice way of proving a point for some guys putting together a paper. They send me some samples and I do some experiments; I send them a figure, a few paragraphs of the results section and the appropriate Materials and Methods write up.
posted by sciencegeek at 5:01 AM on May 3, 2009 [1 favorite]
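(sciencegeek's RT-PCR anecdote shows neatly how a single endpoint reading can mislead. Here is a toy simulation of the idea; the logistic model and every number in it are invented for illustration. An endpoint measurement taken after the reaction saturates erases a real difference in starting material, while comparing threshold-crossing cycles during the exponential phase, as real-time qPCR does, recovers it.)

```python
# Toy logistic model of PCR amplification: exponential growth of the
# target that saturates at a plateau. All numbers here are invented.
def signal(cycle, n0, efficiency=0.95, plateau=1e12):
    raw = n0 * (1 + efficiency) ** cycle
    return plateau * raw / (plateau + raw)

control, treated = 1e5, 2e4  # treated truly has 5x LESS starting target

# Endpoint measurement at one late cycle: both reactions have hit the
# plateau, the real 5x difference has all but vanished, and well-to-well
# noise can even flip the apparent direction of the effect.
print(signal(40, control), signal(40, treated))

# Real-time qPCR instead asks at which cycle each curve crosses a
# threshold while still in the exponential phase.
def threshold_cycle(n0, threshold=1e9):
    cycle = 0.0
    while signal(cycle, n0) < threshold:
        cycle += 0.01
    return round(cycle, 2)

# The treated sample crosses later, correctly revealing less input.
print(threshold_cycle(control), threshold_cycle(treated))
```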
If I'm not the PI of the lab giving the chemical, depending on the importance of my contribution, my PI might even get his name on the paper.
Well, if he can get his name on a paper without even reading it, your field definitely needs better ethics guidelines.
(The biomedical folks have pretty sane authorship guidelines. The problem here is that the system is asymmetrical - as a scientist, being listed as an author in a prestigious journal is worth a small fortune, even if your contributions are marginal, but there are hardly any consequences whatsoever for anyone but the main author if the paper turns out to be explicit fraud. Who cares about ethics when the whole point of the system is to distribute funding?)
posted by effbot at 5:34 AM on May 3, 2009
My point was that someone could easily make a substantial contribution to a paper without being intimately involved in other parts of it. Think of someone manufacturing a car - you could make the engine but not know that the brakes would fail under specific conditions even if you drove the car before selling it to a consumer. Falsified data isn't necessarily easy to spot unless you yourself repeat the experiments. Reading a paper before submission is only one component of a QC process. The article talks about this in some detail - someone manufacturing data can make it believable. The real QC happens in the lab where the data is being generated. Using my analogy, the guys designing and manufacturing the brakes are more likely to know that there's something wrong with them than the guy who works with engines. You need both to make the car, but they're not going to be perfect at critiquing the other guy's contribution.
posted by sciencegeek at 6:15 AM on May 3, 2009 [1 favorite]
Think of someone manufacturing a car - you could make the engine but not know that the brakes would fail under specific conditions even if you drove the car before selling it to a consumer.
In industry, component providers are usually not mentioned at all, or at most under "acknowledgements". Not sure your analogy is that good. I'd rather see you argue for why the current "maximize the number of authors so that everyone can get a share of the cred points for funding purposes but don't ever distribute the responsibility if something turns out to be fraud" approach is the best way to do science.
posted by effbot at 6:56 AM on May 3, 2009
"maximize the number of authors so that everyone can get a share of the cred points for funding purposes but don't ever distribute the responsibility if something turns out to be fraud"
The people listed as authors should have contributed scientifically to the research; the best work involves collaborations between top people in different fields, e.g. a materials lab to make the stuff, a spectroscopy lab to characterize the structure and a biology lab to test it in vivo. These groups are brought together exactly because no one person is likely to be an expert in all these fields.
posted by 445supermag at 7:16 AM on May 3, 2009
Would you prefer that someone whose work is necessary to a publication be denied credit? I am talking about credit for things that fit the criteria mentioned in the link. There are credits in the acknowledgment section - but these are usually previously published reagents or services provided by core facilities. The work I'm talking about generally deserves credit. There are written as well as unwritten rules for what you get credit for.
These are not merely components - perhaps I should improve the analogy - we're talking the design of components. Surely you don't think that the guy who designed the engine doesn't deserve credit for its creation even when the brake design guy screws up? Distributing fault isn't a good idea. Yeah, the engine guys are pissed off because they were associated with a crap piece of machinery - there is a taint that extends to others involved, even if they had no reason to know that the brake guy was evil.
I want it understood that I'm not an apologist for falsification of data or people not reading the paper that has their name on it. You have to read the paper before it goes out.
I'm pointing out that you can read a paper and still not know that data was falsified. The reviewers of the paper can read it and not know that the data was falsified. You can read "A Million Little Pieces" and not know that it is a work of semi-fiction. I'll say it again, the only way to know if someone has falsified data is to do the experiments yourself. This is an impossible QC in science. The people best qualified to know if data is crap are the ones who work in the lab with the person who does the experiments. Many of the studies involved in many fields do not have paper trails other than what the experimenter writes down him/herself. If you don't work with humans or animals, your lab notebook and your computer may be the only record of what you're doing.
posted by sciencegeek at 8:07 AM on May 3, 2009 [2 favorites]
And then there's Merck, which published a completely bogus "peer reviewed journal" for use in advertising.
posted by five fresh fish at 8:47 AM on May 3, 2009 [1 favorite]
The people listed as authors should have contributed scientifically to the research.
In Sudbo's case, he published papers that contained no actual research at all. Yet he had no trouble getting a dozen scientists to sign on as co-authors, many of whom reappeared on one paper after another.
Would you prefer that someone whose work is necessary to a publication be denied credit?
Who's denying them credit? If their work has any scientific value, let them publish their own paper.
(and before you flame away, try to figure out why you react the way you do when I say "get cred points for your own work by writing your own papers" and what that tells you about the system you work under :-)
posted by effbot at 9:12 AM on May 3, 2009
It's enough to make me suspect that scientists generally don't duplicate experiments to check if they're reproducible.
Well, as others said above, the reason this guy was caught faking data is precisely that his experiments were duplicated. Second, it's not as if everyone is sitting around with nothing to do: people have their own projects, grants and so on. And you're not very likely to get many papers published just confirming that someone else's research is indeed valid.
posted by c13 at 10:34 AM on May 3, 2009 [2 favorites]
JHarris: It's enough to make me suspect that scientists generally don't duplicate experiments to check if they're reproducible.
In the fields I'm familiar with, one of the major components of any research project is typically to reproduce the previously reported experimental findings that yours builds on. But there is a methodological difficulty: if you have trouble replicating a result yourself, that experience alone can't tell you the result is irreproducible. That seems to have been a major factor here: many research groups had trouble replicating Schön's work, but not in a way that told them it wasn't possible in principle. This is closely analogous to significance testing: if you find a significant effect, you have learned something positive, but if you fail to find one, you haven't learned that the effect is absent (the true effect might simply be too small for your methodology to detect). A very harsh criticism you sometimes hear of a scientist is that "their work isn't replicable", but it takes quite a bit of failure, across many attempts and a long stretch of time, before that criticism gets made.
article: Science was corrected in the Schön case, but not by itself — only because individual scientists made corrections.
Huh? I don't even know what this sentence (from the first link) is supposed to mean. Science doesn't happen unless individual scientists are doing it.
posted by advil at 1:46 PM on May 3, 2009
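A quick way to see advil's point about non-significant results is to simulate an underpowered replication attempt. The Python sketch below is purely illustrative; the effect size, sample size, and trial count are arbitrary assumptions, not figures from the thread or from the Schön case.

import numpy as np
from scipy import stats

# Illustrative assumptions: a real but small effect, and a small sample of
# the kind a quick replication attempt might use.
rng = np.random.default_rng(0)
true_effect = 0.3    # standardized effect size (Cohen's d), assumed
n = 20               # per-group sample size, assumed
trials = 10_000      # number of simulated replication attempts

significant = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)           # control group: no effect
    treated = rng.normal(true_effect, 1.0, n)   # treated group: real effect
    _, p = stats.ttest_ind(treated, control)    # two-sample t-test
    if p < 0.05:
        significant += 1

print(f"Estimated power: {significant / trials:.2f}")

With these numbers the estimated power comes out around 0.15, so roughly 85% of honest replication attempts would come up non-significant even though the effect is entirely real. That is exactly why a string of failed replications, on its own, couldn't tell Schön's colleagues that his devices never existed.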
Another example: researchers wrote a paper claiming that MDMA caused brain damage. But it turned out they had been giving their subjects the wrong drug (they were actually given meth, which had been mislabeled)! The research was widely publicized at the time to promote "the war on drugs". When it turned out to be false, most people didn't hear about it, and as a result a widespread misconception remained.
The thing is, that was one study about one neurotransmitter, dopamine, and there are dozens of others [review] about serotonergic damage that *have not* been retracted.
Admittedly, the early work on serotonin was done by the Ricaurte lab, but not the later work by other groups. In other words, the serotonin findings (which made a lot more sense, given that MDMA acts directly on serotonin and not on dopamine) have been repeatedly replicated by independent labs.
Given that many users do report depressive symptoms in the days following use, it's not surprising that this would be the case. Of course, what those findings mean, and whether there are non-transient effects not due to pre-existing differences in users, is still not known. But it's unfair to say that all research on the negative effects of MDMA is drug-war propaganda.
posted by Maias at 6:34 PM on May 3, 2009
My first thought when reading this was of Margaret Yang in Rushmore.
Yeah, I don't know much about science. This has been very enlightening.
posted by grapefruitmoon at 8:07 AM on May 4, 2009