Blots on a field
July 21, 2022 9:28 PM

A neuroscience image sleuth finds signs of fabrication in scores of Alzheimer’s articles, threatening a reigning theory of the disease
posted by latkes (50 comments total) 33 users marked this as a favorite
 
This is shocking for multiple reasons. I'll jump to one people might not otherwise focus on: if you're a leading scientist, your response to suspected fraud in your field should not be to quietly short the company involved!!!!!!!
posted by praemunire at 9:48 PM on July 21 [33 favorites]


Worth noting, the alleged fraud described in this article is attributed to a single scientist. It appears to include a number of important papers supporting the amyloid hypothesis of Alzheimer's disease (i.e., the idea that the disease is caused by an accumulation of amyloid protein plaques), and this is definitely a big deal. But the amyloid hypothesis predates this work, and this isn't the only work supporting that hypothesis.

However, I also believe there have been very good reasons to seriously doubt the amyloid hypothesis for quite some time. I'm not an expert in AD, but even when I was a graduate student ten years ago the case for the amyloid hypothesis was looking pretty weak, and as far as I can tell it's only gotten weaker since. If at least some of the highest-profile research directly supporting the amyloid hypothesis turns out to be fraudulent, as this article alleges, then it's really starting to look like the field has been on the wrong track for the last 15 years.

If true, this fraud may have set the field back by over a decade. That's a lot of dead and senile people who we might have been able to save had we known we should be looking elsewhere for treatments. There are a lot of pressures favoring cheating in science, and few checks to prevent it in the short-term. But in the long run, you will get caught, and if you're working on a disease that impacts millions, you will rightly be remembered as a monster.
posted by biogeo at 10:02 PM on July 21 [29 favorites]


Worth noting, the alleged fraud described in this article is attributed to a single scientist.

No, it isn't. The contracted investigative work performed by Schrag identified a number of scientists likely guilty of using similar methods. There's a Related Story at the bottom that describes the fallout, but in brief this does not seem like a One Bad Egg situation.
posted by ZaphodB at 10:11 PM on July 21 [6 favorites]


Actually, let me clarify that: the main article here describes only fraud associated with one scientist. However there's also an associated article also available at the link discussing potential fraud by scientists associated with the company Cassava, which is developing a drug, Simufilam, to treat Alzheimer's. As far as I can tell these are two distinct, though related, allegations. The Cassava case is the one for which some scientists shorted the company and paid Schrag to investigate, meaning there is a financial conflict of interest on both sides (conflict of interest does not necessarily mean invalid, just that a greater level of scrutiny should be applied). As far as I can tell, Schrag's investigation of Lesne's work on the amyloid hypothesis was done in his role as an academic, so there does not appear to be any financial conflict of interest involved.

On preview: yes, ZaphodB, you're right, you beat me to my correction.
posted by biogeo at 10:13 PM on July 21 [15 favorites]


This is shocking for multiple reasons. I'll jump to one people might not otherwise focus on: if you're a leading scientist, your response to suspected fraud in your field should not be to quietly short the company involved!!!!!!!

This might be a misunderstanding - as I read it, the neuroscientists loudly shorted the stock and sponsored the investigation that led to this disclosure. They didn't quietly sit and wait for things to collapse - they actively worked to uncover the fraud and earned an (IMHO) justified payout for their work.
posted by bbuda at 10:13 PM on July 21 [5 favorites]


Short first, then publicize. As long as you don't sit on the info for an inordinate length of time, this seems fine to me.

(But I'm not a medical scientist, maybe they have their own weird rules?)
posted by ryanrs at 10:16 PM on July 21


As biogeo mentioned, there is indeed quite a controversy around the amyloid model itself, which has been jealously guarded by a cabal in the field.
posted by Dashy at 10:23 PM on July 21 [12 favorites]


the neuroscientists loudly shorted the stock and sponsored the investigation that led to this disclosure

Can't be that loud--they didn't even name them (unless I missed something).

Running a private investigation for financial benefit, rather than immediately raising the issue with the appropriate authorities while people may actually be in trials, is not cool. By shorting the company, they are creating their own conflict of interest. (I suspect also, from the involvement of Labaton Sucharow, that they may be doing SEC whistleblowing as well.)
posted by praemunire at 11:39 PM on July 21 [4 favorites]


By the timeline in the articles, they contacted Schrag to do an expert analysis in August of 2021 and he petitioned the FDA to stop the trials . . . in August 2021. It's not like this points to a long delay before going public.

The FDA, incidentally, has not stopped the trials, so I'm not sure what you mean by "appropriate authorities" here. The ORI, maybe, but they are neither fast nor able to directly enforce anything. In practice, most journal fraud gets reported to the journal itself, or (especially if the offenders are academic) to the institution the researchers are affiliated with. And it takes ages to resolve.

I get that short selling the stock feels icky, but it's not at all clear what other impact there is, other than that it probably incentivized getting this public faster than it otherwise would have been. It's not like they were going to pay for the investigation otherwise, and going public with vague suspicions you don't have the expertise to vet directly isn't "cool" either.

It's also not clear if they detected fraud and shorted the stock, or shorted the stock and then noticed the fraud. In today's climate, shorting any company whose only drug is an Alzheimer's candidate in clinical trials is kind of a no-brainer. I would not be surprised if these were industrial researchers who thought this was a crap drug for all sorts of reasons before they found the fraud; this would also mean it would have been tough for them to go public directly (and they were already conflicted).
posted by mark k at 1:00 AM on July 22 [3 favorites]


As someone who has been and continues to be a caregiver for family members with Alzheimer’s, this is beyond enraging. So much money and time misspent that might have led to viable treatments.
posted by leslies at 1:05 AM on July 22 [15 favorites]


I feel someone should make a map of all of the fields of research that have gone bad, and then look for commonalities among them. Because those are the fields where fraud is most likely to happen.
I feel I can recognize some patterns in the article Dashy posted, but it's just a sense. It might be useful to have a set of things to look for if you are reviewing stuff, or if you find yourself in a situation where it seems common sense has left the building.

It's really hard to find fraud in an article or an application if you are not working in the exact same corner of the exact same field, and then you most likely know the fraudster well and may have good or bad reasons to not expose the fraud.
posted by mumimor at 1:06 AM on July 22 [5 favorites]


I think if you are going to attack such a foundational paper with thousands of citations, it makes sense to do plenty of your own investigating first. The journals and universities probably pay more attention to you when you have solid evidence and externally vetted analysis.
posted by ryanrs at 1:09 AM on July 22 [2 favorites]


mumimor, for what it's worth, I think fraudulent science is most likely in social psychology, medicine, and nutrition. The research is intrinsically hard, there's money to be made, and there are results people want to believe.
posted by Nancy Lebovitz at 3:14 AM on July 22 [3 favorites]


mumimor, for what it's worth, I think fraudulent science is most likely in social psychology, medicine, and nutrition.
There is much more out there than you think, but what I was thinking about was more detailed: fields within fields. Within the wider field I work in, some researchers are absolutely solid, locally as well as internationally, and the standard is reliably high across all institutions. And then there are sub-fields where the work is always shady, even when it isn't fraudulent. It is often like this case, where a basic/foundational theory remains uncontested for decades, even though it never provides results that are useful in practice.
Another marker could be over-funding: in this country, a substantial part of the surplus from the gambling industry has to go to research on the effects of gambling. Since there is a lot of money and a very narrow field, those researchers have no checks on their work. It might be great, for all I know, but in general over-funded fields don't do well in terms of integrity, methodology, and theoretical consistency.
A third indicator that might point to less rigorous research, and I think this might be hinted at in the article, is a strong emphasis on quantity rather than quality in the community. Some people publish more than is physically possible -- there are only so many hours in the day. I am aware that a lot of fields have a tradition where the professor co-signs the work of their graduate students, and that is all fine. But I once found a professor listing hundreds of articles in a field where I literally knew every single graduate student (because there were fewer than 50), and they were not his co-authors. So who were the co-authors? I'm guessing undergraduates, maybe from other fields, maybe just publishing the results of a seminar.
Finally, journals and conferences are not guarantors of quality. A shady field can build its own system of peer review and declare it the "highest international standard". Obviously, they know at some level that this is corrupting the whole system, and this is why I think these fields are the most prone to actual fraud: they have already stepped over a threshold. But looking at this from an adjacent field, where you don't have the time or specialist knowledge to go through all the articles and papers, you won't be able to see it.
posted by mumimor at 4:59 AM on July 22 [13 favorites]


I was at a day-long presentation of researchers (in the Boston area) presenting to a lay audience; all my sci-spidey senses said these were top folks talking about what is known right now. And basically they got nothing. Amyloid is not correlating like they had hoped; "tau" is a smidge better but very expensive (multiple MRIs at a minimum), so it's just really hard to run big studies. And the studies are not years long but decades. They really need to study a large population in their 40s, say several thousand (10Ks would be better, but hello budget) with MRIs, and the results will be a small subset in their 80s/90s.

But Amyloid has not been ruled out, perhaps it has not been looked at in the right way.

This call was a bit before the FDA/aducanumab controversy, but there was no suggestion that any treatment was to be expected. These were primarily core researchers, not drug developers, and they are still struggling to identify basic mechanisms, let alone think about what could work as a treatment.

And it seemed pretty clear that the treatment needs to start early, "fixing" in the brain when symptoms appear late in life is way too late, but they have no solid indicators to identify who would need treatment early.

Just think about it: they need thousands of folks in middle age to subscribe to a decades-long, ambiguous research program that has annoying memory tests and day-long MRIs. And the funding is not there.

Wonder drug? Not this year decade century.
posted by sammyo at 5:04 AM on July 22 [5 favorites]


I suppose that since we've decided it's okay to go long on your research despite the perverse incentives that provides to fudge your science - something we've seen again and again with many classes of drug from all levels of the pharmaceutical industry - going short is a logical next step.
posted by clawsoon at 5:22 AM on July 22 [4 favorites]


Proving fraud or misconduct is hard. And, the techniques at issue in these papers (Western blots) are particularly fraught. As a PhD student in biochemistry, I tried (and tried) to duplicate the results of a particular published experiment and failed again and again. It could be my inadequacy as a bench chemist, variation in reagents or sources of biologic material, or accumulation of small, subtle differences in a multi-step process of isolation, assay, and recombination of biologic components.

Detecting small differences in the amount or activity of a protein or enzymatic reaction using at best semi-quantitative methods such as various ‘blots’ is difficult and error-prone, particularly when quantitation involves scanning the ‘blot’ then normalizing the density of a band of interest against a standard band in the same or an adjacent lane. Then there is defining the boundaries of the blot and integrating the total density of the area, considering that you are looking at an area of varying density within which are darker areas that may exceed the linear (or quasi-linear) range of your assay. Further, bands are often lumpy or smeared, particularly if there is any precipitate in the applied sample. Large differences are easy to see, but proving smaller changes, particularly against some baseline background activity, can be very tough. Inter- and intra-experimental variation on Western blots in particular can be 50%+.

Perhaps this is too much inside baseball, but similar considerations of image analysis are found in astronomy and biomedical science. There are algorithms for boundary delineation, normalization, and summing of density. Occasionally, you see these cited but many labs and researchers use ad hoc methods. I would always be very skeptical of the conclusions of papers like these, even my own.
posted by sudogeek at 5:43 AM on July 22 [19 favorites]
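A toy sketch of the densitometry sudogeek describes: integrate a band's density above a local background estimate, then normalize against a loading-control band in the same lane. All band positions and intensities below are invented for illustration; this is not any lab's actual pipeline.

```python
# Toy densitometry: integrate a band's density above a flat background
# estimate, then normalize to a loading-control band in the same lane.

def band_density(profile, start, end):
    """Sum intensity over a band, minus a flat background estimated
    from the pixels flanking the band boundaries."""
    band = profile[start:end]
    flank = profile[max(0, start - 3):start] + profile[end:end + 3]
    background = sum(flank) / len(flank) if flank else 0.0
    return sum(v - background for v in band)

def normalized_signal(lane, target, control):
    """Ratio of target-band density to loading-control density; the
    ratio, not the raw sum, is what gets compared across lanes."""
    return band_density(lane, *target) / band_density(lane, *control)

# One fake lane profile: target band at pixels 10-20, control at 30-40.
lane = [10.0] * 50
for i in range(10, 20):
    lane[i] = 80.0   # target band
for i in range(30, 40):
    lane[i] = 60.0   # loading control

print(normalized_signal(lane, target=(10, 20), control=(30, 40)))
```

Even this toy makes the point visible: shift the band boundaries a few pixels or change the background window and the "quantitative" ratio moves, which is exactly the wiggle room that ad hoc methods leave.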


So… my main collaborator is pursuing the inflammation angle with AD and a few other disease states. Mouse models. We both know Karen Ashe; in fact she was our keynote speaker for a recent local conference (in which she did say for the record that the FDA/aducanumab kerfuffle was not good for the field and not a promising treatment avenue). This article is not a good look for the UMN neuroscience program. I don’t know what to think here. I can’t believe Ashe would straight up be OK with falsifying data. There is a lot that goes on in a larger lab, and the lead PI quite often just implicitly trusts that their postdocs or junior collaborators are working in good faith and with solid ethics. But a busy, active lab also means the lead PI has very little time to do more than superficial spot checks of the work produced. We’ve all seen shady stuff slip past the senior scientists.

On the flip side maybe my colleague’s grant resubmission with the inflammation focus will be reviewed more kindly? There is some sort of a plus there I guess.

The “cabal” described above is not unique to this field. Any field dominated by strong personalities is likely to have a cabal. Try getting a grant for Gulf War Illness, for example. If you are not spouting the same type of “oh it must have been the vaccines” search for the smoking gun BS that has been going on, fruitlessly, for the past 30 years, you’re not going to get funded. Why? Not because your science is bad, but because the grant will be reviewed by experts in the field, most of whom are the same people who espoused the leading theory to begin with. They don’t want to be told they are wrong and they don’t want the competition. You get even one of those persons on your review panel, and the chance of any contradictory or novel thinking getting funded is near zero.
posted by caution live frogs at 5:44 AM on July 22 [19 favorites]


Also, I drill into the trainees - never, never, EVER do you delete your raw source images. Manipulate the levels, crop as needed, pick the very prettiest shot out of all the samples, but you ALWAYS retain the starting point, unaltered, because if there is ever a question you need to be able to produce it. And with western blots specifically, my colleague has often pointed out “look how closely they cropped this one, you know for sure the rest of the blot is a mess if this is all they show”

(You don’t capitalize western blot. You do capitalize Southern blot, because it was named for Edwin Southern, who invented the technique. Other types of blots use directional names like “eastern” and “western” as a play on his name. FUN FACTS.)
posted by caution live frogs at 5:50 AM on July 22 [48 favorites]
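One cheap way to make "never, never, EVER delete your raw source images" auditable is to checksum the raw files at acquisition time and re-verify them later. A generic sketch using only Python's standard library; the file names and directory layout are hypothetical, not any lab's actual workflow.

```python
# Record SHA-256 checksums of raw data files when they are acquired.
# Re-running verification later proves the archived originals are
# still byte-identical to what came off the instrument.
import hashlib
import json
from pathlib import Path

def checksum(path):
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(raw_dir, manifest="manifest.json"):
    """Hash every file under raw_dir and save a manifest alongside it."""
    digests = {p.name: checksum(p)
               for p in sorted(Path(raw_dir).iterdir())
               if p.is_file() and p.name != manifest}
    Path(raw_dir, manifest).write_text(json.dumps(digests, indent=2))
    return digests

def verify_manifest(raw_dir, manifest="manifest.json"):
    """Return the names of files whose current hash no longer matches."""
    recorded = json.loads(Path(raw_dir, manifest).read_text())
    return [name for name, digest in recorded.items()
            if checksum(Path(raw_dir, name)) != digest]
```

Adjusted levels and cropped figures then live wherever you like, but the manifest pins down the unaltered starting point you may one day need to produce.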


Metafilter: you know for sure the rest of the blot is a mess
posted by sammyo at 6:05 AM on July 22 [3 favorites]


never, never, EVER do you delete your raw source images.

Thanks for being the good practices advocate, caution live frogs.
posted by sammyo at 6:06 AM on July 22 [5 favorites]


Omg, and then while thinking about this I get a new age spam email:

Your Alzheimer’s prevention toolkit has arrived - Watch Episode 3 Here!

Save yourself by buying our "special" organic vitamins and supplements. (scream out loud)
posted by sammyo at 6:51 AM on July 22


This has me wondering, how much of modern science is “we took this raw data into Photoshop and then used filters until we could see what we were looking for”? I do this sort of thing all day long with data, ad hoc. I discard my methods and results. But in my imaginary role as Science-Man, I would do everything with repeatable scripts committed to a git repository: every step, from “get the data from the IEEE-488 interface on the spectrum analyzer” to “apply an 8 percent unsharp mask and then export as CMYK TIFF with this colorspace mapping for publication”. You build your paper for publication by generating it from source. The entire process might take days if you need to re-run a big map-reduce on a cluster. But you could do it. This is probably why I am not Science-Man.
posted by bigbigdog at 7:22 AM on July 22 [2 favorites]
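The "generate the paper from source" idea above can be reduced to a minimal sketch: every transformation is a named step in an ordered pipeline, so re-running it on the same raw data reproduces the same published numbers and leaves a record of what ran. The step names and data here are invented for illustration.

```python
# Minimal reproducible-pipeline sketch: an ordered list of pure
# functions, applied in sequence and logged by name.

def subtract_baseline(samples):
    """Shift the trace so its minimum sits at zero."""
    base = min(samples)
    return [s - base for s in samples]

def normalize(samples):
    """Scale the trace to a peak of 1.0."""
    peak = max(samples)
    return [s / peak for s in samples]

PIPELINE = [subtract_baseline, normalize]

def run(raw, steps=PIPELINE, log=None):
    """Apply each step in order; optionally record which steps ran."""
    data = raw
    for step in steps:
        data = step(data)
        if log is not None:
            log.append(step.__name__)
    return data

steps_run = []
result = run([12.0, 15.0, 30.0, 18.0], log=steps_run)
print(steps_run)   # the provenance trail behind the published figure
print(result)
```

Commit the script and `git log` becomes the history of every analysis decision, which is precisely the audit trail the ad hoc Photoshop workflow throws away.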


This has me wondering, how much of modern science is 'we took this raw data into photoshop and then used filters until we could see what we were looking for.'

In broad strokes, more than most people think.
posted by biogeo at 7:46 AM on July 22 [6 favorites]


this is beyond enraging

For whatever it might be worth, there’s a drug gearing up for initial trials right now (like, as we speak) that uses a completely different model, driven by folks who never believed the amyloid data was worth much and thought Aduhelm was a borderline atrocity. My next-door neighbor was doing some of their administrative paperwork for them (he moved or I’d go over and have him remind me wtf their drug is). Some heavy-hitting structural genomics guys iirc.

(Having random neighbors like this is one reason why Silicon Valley is kinda weird.)
posted by aramaic at 8:14 AM on July 22 [2 favorites]


Short first, then publicize. As long as you don't sit on the info for an inordinate length of time, this seems fine to me.

That would be insider trading in virtually any jurisdiction and would surely be severely penalized.
posted by sjswitzer at 8:38 AM on July 22 [1 favorite]


What are the chances of charges being pressed somewhere down the line? If the researcher has been doctoring his data for years and using that data to receive grants, it's really just as fraudulent as getting grant money and then using it for Las Vegas weekends while completely fabricating his data in Photoshop.

I am thinking the chances are close to nil.
posted by Jane the Brown at 8:41 AM on July 22


True story: I've intentionally avoided learning Photoshop for exactly that reason.

Which means that I also don't know how to use Illustrator, but whatever.
posted by Dashy at 8:41 AM on July 22 [4 favorites]


That would be insider trading

I should add that what they appear to have done, if I understand correctly, is to have suspected fraudulent research with material impact on the stock value, shorted the stock, then attempted to prove their suspicion, then went public in the hopes of tanking the stock to their benefit. That is arguably different from insider trading, but it seems almost worse.
posted by sjswitzer at 9:00 AM on July 22 [1 favorite]


That would be insider trading in virtually any jurisdiction and would surely be severely penalized.

If we're talking about scientists without a commercial relationship with the company who are simply reaching their own conclusions based on published data to date, it wouldn't be in the U.S. U.S. insider trading laws do not work the way you'd think they might. What they appear to be doing (gathering public data, doing their own analysis) is classic modern short behavior, and legal within certain bounds. It's just, in my opinion, very inappropriate for scientists in health fields in particular, and more generally in science because it inevitably muddies the waters. If you look at the comments on their citizen petition, many of them are variants on "these guys are just evil shorts" rather than any real substantive analysis. Bad enough that, as pointed out above, in pharmaceuticals "going long on your own work" is routine (I think that's less problematic for certain reasons, and I know at least one highly successful chemotherapy drug that only got developed because of the availability of this route, but the risks are obvious).

I worked for a little while in a biology lab in an administrative support position back in the day and it was obvious even then that the PI would have to rely to a substantial degree on the integrity of the postdocs. Though you'd hope that the more unexpected/breakthrough the results were, the more intense the scrutiny would be.
posted by praemunire at 9:07 AM on July 22 [5 favorites]


[eponysterical] I mostly stopped doing neuroscience 25 years ago. I practice geriatric medicine where the care of people with Alzheimer's disease is my daily bread and butter.

I'm here to say: do not use aducanumab (Aduhelm) except maybe as part of a clinical trial.
posted by neuron at 9:11 AM on July 22 [7 favorites]


That would be insider trading in virtually any jurisdiction and would surely be severely penalized.

No, it's not insider information. It's outsider information collected from peer reviewed, published journal articles.
posted by ryanrs at 9:14 AM on July 22 [5 favorites]


My favorite refrain about aducanumab: this is a drug that doesn't help and that kills people.

Ironically, I am supposed to give a research integrity talk for medical students in a couple of days. I might just scrap my slides and have them read this article instead.
posted by basalganglia at 9:30 AM on July 22 [6 favorites]


(That was foolish of me, I flagged myself already. Sorry.)
posted by biogeo at 9:39 AM on July 22 [1 favorite]


Amyloid is not correlating like they had hoped, "Tau" is a smidge better but very expensive (multiple MRI's at a minimum) so just really hard to run big studies.

I'm not in the field, but that was my initial thought: the accepted view for a long time has been that tau misfolding is the primary driver of Alzheimer's.

Fraud in science is a very delicate subject. It's easy to start with a preconceived notion and if the data doesn't back it up, to fudge the results until it does, because your theory is right, right?

Then you have people like all those Stanford scientists who just did sloppy science, and concluded that Covid was nothing more than a head cold.

It's even easier to go on a data fishing trip (and there's lots more data than fish - I'm looking at you, GWAS and systems biology) and find something that superficially looks important but is just random noise. It's also easy to abuse dimension reduction techniques to make plots that look profound. If I wanted to, I could use t-SNE or UMAP to turn word counts from Ulysses into a picture of Scooby Doo.

Then there's full-blown fraud. Science is a little bit like cycling in the '00s - unless you're at the very top, you're on short-term contracts and under huge pressure to deliver; you know that some other people are cheating, but if someone is caught cheating it'll end their career. Someone I know was reading a paper and realised very quickly that the x-ray crystallography results were faked. The PI on the paper lost his job.

More seriously, like cycling, some people accused will go full Lance Armstrong and the accusers get hounded. I know two scientists, brothers, whose dad was also a scientist. He (correctly) accused someone he worked with of blatant fraud and, in the spirit of "attack is the best defence", was (falsely) accused of fraud in return. He didn't have the same connections as the fraudster, he was ostracised, and it contributed to him taking his own life.

Sorry to end it on a downer, but ultimately, if you give people a huge incentive to cheat, then a certain proportion of people will cheat.
posted by kersplunk at 9:41 AM on July 22 [3 favorites]


I work in biomedical research, in a support role, so nothing exciting. The worst case scenario I can remember from the ethics class I took ages ago was someone falsified data for their PhD, and that data was used to treat patients, and they died. The fraudster lost their PhD, and their advisor also lost theirs as well. One of my friends is involved in a lawsuit against a researcher (as a witness), but that was just for a hideously bad analysis, rather than fraud.
The same friend did an analysis for a study, and turned it in to the PI (Principal Investigator), who replied, "This is great! I can explain exactly what's going on!" Later, he found a mistake, fixed it, and gave the PI the updated analysis. The PI replied, "This is great! I can explain exactly what's going on!" That did not inspire confidence.
posted by Spike Glee at 9:44 AM on July 22 [1 favorite]


The “cabal” described above is not unique to this field. Any field dominated by strong personalities is likely to have a cabal.

I want to underscore this here because it's really true. I've bounced around a number of different fields at this point, and this is one of the throughlines I keep running into: you will often find fields, study topics, and organisms dominated by a single strong personality or small group of personalities. Sometimes that's a function of interest; sometimes it's a function of interpersonal dynamics; often it's a mixture of both.

Also, I drill into the trainees - never, never, EVER do you delete your raw source images. Manipulate the levels, crop as needed, pick the very prettiest shot out of all the samples, but you ALWAYS retain the starting point, unaltered, because if there is ever a question you need to be able to produce it. And with western blots specifically, my colleague has often pointed out “look how closely they cropped this one, you know for sure the rest of the blot is a mess if this is all they show”

Oh yeah. I just wrapped up my commentary in the lab slack before toddling over here--we're in UMN Psychology, so there is a certain level of stunned-it's-personal-now reeling going on--and I had pretty much the same reaction with respect to the lab's grad students: this is why I make all of you keep all the raw data files the behavior chambers output, this is why we always keep raw video and image files, this is why we document every single thing that happens to our data with detailed pipeline readme files so that you can follow every step of the analysis along the way to the conclusion. Document every piece of the analysis and if you can publicly supplement it into your papers, do so.

I'm reeling... a little less, because the Jonathan Pruitt case was much closer to my own networks and I knew both people who had worked with him and people who were intimately involved in the data analysis that documented his history of fraud. (By the way, Pruitt resigned from his McMaster University post last fucking week. The equivalent journalistic reporting of fraud in his work came out two years ago. Nothing about any of this process, or the public assessment of the honesty and careers of the people involved, is going to be fast.) So I did a fair bit of processing "shit, this is real" when other people in my networks were working through his stuff.

I suppose the last thing I want to point out, thinking about it, is that the statistical and image-manipulation analyses that have been used to expose both frauds here are really groundbreaking in their own right. I hate that so many talented people have had to spend so much time and energy identifying fraud while the careers of people engaging in it have been allowed to flower and grow in the short term, though. The opportunity costs are incredible.
posted by sciatrix at 10:18 AM on July 22 [23 favorites]


I did not look at TFA so WTF do I know, but I did scan over the thread and saw relatively little discussion of the more general problem of the reliability of scientific findings.

In short, the guarantees for that are practically all the honor system, once you get out of the most intensively-examined things. It's not really known how much BS is in the books as scientific knowledge. We as a society are terrible at either incentivizing people to stay honest or policing them for dishonesty. That is to say, science-as-a-social-construct falls way short of the ideal of science-as-a-self-correcting-way-of-knowing. It would cost a lot of money to arrange for the social construct to more closely approach the ideal, and also probably a lot of people would have to admit to having made mistakes.
posted by Aardvark Cheeselog at 10:57 AM on July 22 [1 favorite]


I think the breakthrough which could lead to effective prevention of Alzheimer's, and quite possibly halt progression as well, may have already taken place a few years ago.

I found the linked article yesterday, a few hours before this post went up, by clicking on a link in a comment on Derek Lowe’s blog In the Pipeline. But a couple of years ago another commenter there linked to a video (the link starts at 3:07, but the whole thing is well worth watching) featuring Professor Jerold Chun of Stanford, in which he says we now have a population of more than 100K people over 65 who have been taking reverse transcriptase inhibitors for years to treat HIV, and that in this large group there is exactly one person reported to have developed Alzheimer’s, where thousands would normally have been expected by now. Here is a medicalxpress link discussing another facet of this work, also from several years ago.

I’ve been waiting for this to hit the headlines for years, but it hasn’t! Maybe aramaic's somewhat cryptic allusion to a Silicon Valley study in progress has something to do with this.

If I had reason to believe I was in the process of developing Alzheimer’s, whether from symptoms or because of familial connections, and with the complete absence of effective treatment we’re all lamenting — and I had the resources — I don’t think I’d be waiting around at this point. I'd look for someone who'd be willing to give me an off-label prescription for reverse transcriptase inhibitors.

I’m kind of surprised there aren’t clinics just over the border in Mexico that specialize in such treatment, in fact.
posted by jamjam at 11:16 AM on July 22 [20 favorites]


The article about the amyloid cabal mentions neglected antiviral research. Seems like a promising avenue worth funding! Likely there are others. What a garbage system.
posted by latkes at 1:24 PM on July 22


Fraud in science is a very delicate subject. It's easy to start with a preconceived notion and, if the data doesn't back it up, to fudge the results until it does, because your theory is right, right?

One of the ways that we're trying to fight that is through preregistration. The basic idea is that you publicly "define the research questions and analysis plan before observing the research outcomes." That is supposed to reduce the likelihood that someone can "fudge" results in some ways, e.g., p-hacking. I don't know how widespread preregistration is in different fields, but it appears to be slowly growing in the social sciences, although it originated in the physical sciences. I don't recall which ones, but I have seen some journals that will review preregistration information and commit to publishing your results regardless of whether you are able to support your hypothesis; the idea is that this helps to combat the publication bias against negative results and ensures that high quality research is published regardless of the findings.
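A toy simulation makes the p-hacking concern concrete (this sketch is my own illustration, not from the article or any linked study; it relies only on the fact that a well-calibrated p-value is uniform on [0, 1] when there is no true effect):

```python
import random

random.seed(0)

ALPHA = 0.05
N_OUTCOMES = 20      # outcomes measured per simulated study
N_STUDIES = 10_000   # simulated studies, none with a real effect

# Under the null hypothesis a valid p-value is uniformly distributed,
# so each outcome's p-value can be modeled as a uniform random draw.
def study_finds_something() -> bool:
    """True if at least one of the outcomes comes up 'significant'."""
    return any(random.random() < ALPHA for _ in range(N_OUTCOMES))

hits = sum(study_finds_something() for _ in range(N_STUDIES))
rate = hits / N_STUDIES
print(f"Studies reporting a 'significant' result: {rate:.1%}")
```

With 20 outcomes tested at the 0.05 level, the chance of at least one spurious "hit" per study is 1 − 0.95^20, roughly 64% rather than the nominal 5%. Preregistration blocks exactly this move: if the single primary outcome is declared in advance, the other 19 can't be quietly promoted after the fact.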
posted by ElKevbo at 1:41 PM on July 22 [10 favorites]


we're in UMN Psychology

Congratulations and welcome to yankeeland!
posted by GCU Sweet and Full of Grace at 1:44 PM on July 22 [3 favorites]


Re: cabals, Planck’s “science advances one funeral at a time” remains grim and true; but even then folks can leave a long shadow of former students and collaborators.
posted by theclaw at 10:59 AM on July 23 [2 favorites]


The Daily Kos performs their own rewrite of the Science.org article, getting some details slightly wrong.
posted by heatherlogan at 6:17 PM on July 23


Astounded. Nothing more to say than astounded. Well, there is more: many thanks to all the inside-baseball commenters. I had no idea what it was like to work in a lab, beyond junior high chemistry, or what work goes on in scientific research.
posted by oldnumberseven at 9:20 PM on July 24


And it seemed pretty clear that the treatment needs to start early; "fixing" the brain once symptoms appear late in life is way too late, but they have no solid indicators to identify who would need treatment early.

We do know a population likely to develop it: people with Down syndrome, i.e., the exact population they refuse to let into studies.
posted by pelvicsorcery at 4:20 PM on July 25


Charles Piller: Following my investigation into her work with @Lsylvain, Karen Ashe says her own Nature paper should be retracted.
posted by cendawanita at 10:50 PM on July 25 [4 favorites]


In the Pipeline: Faked Beta-Amyloid Data. What Does It Mean?

Gives a good intro to the history of Alzheimer's research, discussion of the current fraud and its reach, and a few words about scientists who short pharma companies.
posted by ryanrs at 11:39 AM on July 27 [2 favorites]


From Lowe:
What About Peer Review, Damn It All?

Yeah, there’s that. The Lesné stuff should have been caught at the publication stage, but you can say that about every faked paper and every jiggered Western blot. When I review a paper, I freely admit that I am generally not thinking “What if all of this is based on lies and fakery?” It’s not the way that we tend to approach scientific manuscripts.
Fraud detection, in terms of image manipulation and the like, isn't really a peer review thing. The skills needed to detect manipulated images aren't things scientists train for. This is absolutely something that should be done by the editorial staff of the journal. It would be at least one way they could justify their existence.

Scientists can definitely have an intuition about other anomalies, like suspiciously large effect sizes or correlations that are inconsistent with measurement errors or the other literature, and ask about that. But the real forensic work is not usually especially connected to the scientific discipline. (Or easy to do for an unpaid reviewer!)

I think one problem is that journals don't seem to take a big reputational hit when they publish stuff like this. The investigators do; the journals don't.
posted by mark k at 12:10 PM on July 27 [1 favorite]




This thread has been archived and is closed to new comments