Irreproducible Results (2011)

(kk.org)

35 points | by fsagx 4 days ago

16 comments

  • ChadNauseam 8 hours ago

    I like a suggestion I read from Eliezer Yudkowsky: journals should accept or reject papers based on the experiment's preregistration, not based on the results.
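
    To illustrate why (a toy simulation of my own; every number in it is made up): if journals accept based on results, the published record systematically overstates effect sizes, which result-blind, preregistration-based acceptance avoids.

        import numpy as np
        from scipy import stats

        # Toy simulation (all numbers made up): many small studies of one
        # weak true effect. Compare the mean observed effect across all
        # studies with the mean across only the "significant" ones that a
        # results-based journal would publish.
        rng = np.random.default_rng(0)
        true_effect, n, n_studies = 0.2, 20, 5000

        all_effects, published = [], []
        for _ in range(n_studies):
            treated = rng.normal(true_effect, 1.0, n)
            control = rng.normal(0.0, 1.0, n)
            effect = treated.mean() - control.mean()
            _, p = stats.ttest_ind(treated, control)
            all_effects.append(effect)
            if p < 0.05:                     # results-based acceptance
                published.append(effect)

        print(f"true effect:              {true_effect}")
        print(f"mean over all studies:    {np.mean(all_effects):+.2f}")
        print(f"mean over published only: {np.mean(published):+.2f}")  # inflated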

  • NeuroCoder 6 hours ago

    I had a neuroscience professor in undergrad who did a bunch of experiments where the only variables were things like the material of the cage, the bedding, the feeder, etc., systematically testing variations in each one separately. Outcomes in the mice varied no matter what was changed. I would love to tell you what outcomes he measured, but the work convinced me not to go into mouse research, so it's all just a distant memory.

    On the other hand, I've worked with people since then who have their own mouse studies going on, and we are always learning new ways to improve the situation. It just doesn't make for an impressive front page, so it goes unnoticed by those not into mouse research methods.

    • tomcam 2 hours ago

      The implications of the work done by your former professor are so profound I can hardly get my arms around them.

    • stonethrowaway 5 hours ago

      Funny, considering the majority of trials posted on the front page end up being studies done on mice.

  • nextos 5 hours ago

    You can see this is a problem if you mine the distribution of p-values from published articles (rough sketch at the end of this comment).

    Andrew Gelman had a great post on this topic I can't find now.

    Pre-registration could be a great solution. Negative results are also important.
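
    Roughly, that mining could look like this (a minimal sketch; the regex and the binning are my own assumptions, not a standard tool). A pile-up of reported p-values just under 0.05 is the classic warning sign:

        import re
        from collections import Counter

        # Pull reported p-values out of article text, then count them in
        # 0.01-wide bins below 0.10.
        P_VALUE = re.compile(r"p\s*[=<]\s*(0?\.\d+)", re.IGNORECASE)

        def p_values(text):
            return [float(m) for m in P_VALUE.findall(text)]

        def bin_counts(ps):
            # int(p * 100) / 100 floors each value into its 0.01-wide bin.
            return Counter(int(p * 100) / 100 for p in ps if p < 0.10)

        sample = "A vs B: p = 0.049. C vs D: p = 0.21. Overall p < .04."
        print(bin_counts(p_values(sample)))   # Counter({0.04: 2})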

  • necovek 4 days ago

    This is extremely interesting.

    On top of keeping and publishing "negative outcomes", could we also move to actually requiring verification and validation by another "lab" (or really, an experiment done under different conditions)?

    • tomcam 2 hours ago

      I love that idea, but it would never work in practice. Some thoughts:

      * Funding for any experiment would have to include 100% extra, because presumably every experiment done would also have to duplicate another, randomly chosen experiment. The situation would become something akin to lawyers being required to do pro bono work. It would also mean that the randomly chosen experiment to be duplicated could require a different set of skills than the team's primary experiment.

      * Assuming the above, there would be extremely high impedance in communication between any two of these teams, because no one can really describe their experiment in a way that would allow independent recreation of it.

      * Smaller institutions would struggle to re-create experiments from better funded institutions.

      * Getting the second experiment funded would always be difficult because you probably wouldn’t be able to go to the same sources.

  • pazimzadeh 8 hours ago

    >> [John Crabbe] performed a series of experiments on mouse behavior in three different science labs: in Albany, New York; Edmonton, Alberta; and Portland, Oregon. Before he conducted the experiments, he tried to standardize every variable he could think of. The same strains of mice were used in each lab, shipped on the same day from the same supplier. The animals were raised in the same kind of enclosure, with the same brand of sawdust bedding. They had been exposed to the same amount of incandescent light, were living with the same number of littermates, and were fed the exact same type of chow pellets. When the mice were handled, it was with the same kind of surgical glove, and when they were tested it was on the same equipment, at the same time in the morning.

    >> The premise of this test of replicability, of course, is that each of the labs should have generated the same pattern of results. “If any set of experiments should have passed the test, it should have been ours,” Crabbe says. “But that’s not the way it turned out.” In one experiment, Crabbe injected a particular strain of mouse with cocaine. In Portland the mice given the drug moved, on average, six hundred centimetres more than they normally did; in Albany they moved seven hundred and one additional centimetres. But in the Edmonton lab they moved more than five thousand additional centimetres. Similar deviations were observed in a test of anxiety. Furthermore, these inconsistencies didn’t follow any detectable pattern. In Portland one strain of mouse proved most anxious, while in Albany another strain won that distinction.

    >> The disturbing implication of the Crabbe study is that a lot of extraordinary scientific data are nothing but noise.

    This wasn't established when the post was written, but mice are sensitive to magnetic fields and can align themselves with them, so if the output is movement the result is not thaaaat surprising. There are a lot of things that can affect mouse behavior, including possibly pheromones/the smell of the experimenter. I am guessing that behavior patterns such as anxiety behavior can be socially reinforced as well, which could affect results. I could come up with another dozen factors if I had to. Were mice tested one at a time? How many mice were tested? Time of day? Gut microbiota? If the effect isn't reproducible without the sun and moon lining up, then it could just be a 'weak' effect that can be masked or enhanced by other factors. That doesn't mean it's not real, but that the underlying mechanism is unclear. Their experiment reminds me of the rat park experiment, which apparently did not always reproduce, but that doesn't mean the effect isn't real in some conditions: https://en.wikipedia.org/wiki/Rat_Park.

    I think the idea of publishing negative results is a great one. There are already "journals of negative results". However, for each negative result you could also make the case that some small but important experimental detail is the reason why the result is negative. So negative results have to be repeatable too. Otherwise, no one would have time to read all of the negative results that are being generated. And it would probably be a bad idea to not try an experiment just because someone else tried it before and got a negative result once.

    Either way, researchers aren't incentivized to do that. You don't get more points on your grant submission for publishing negative results, unless you also found some neat positive results in the process.

    • lmm 8 hours ago

      > There are a lot of things that can affect mouse behavior, including possibly pheromones/the smell of the experimenter. I am guessing that behavior patterns such as anxiety behavior can be socially reinforced as well, which could affect results. I could come up with another dozen factors if I had to. Were mice tested one at a time? How many mice were tested? Time of day? Gut microbiota? If the effect isn't reproducible without the sun and moon lining up, then it could just be a 'weak' effect that can be masked or enhanced by other factors. That doesn't mean it's not real, but that the underlying mechanism is unclear.

      I think it does mean the claimed causal link is not real, or at least not proven. Certainly if the error bars from two "reproductions" of the same experiment do not overlap, you really can't say that the experiment found anything.
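
      As a rough sketch of that check (the sd and n values below are invented, and non-overlapping 95% intervals are a conservative signal of disagreement, not a formal test):

          import math

          # Do the 95% confidence intervals from two labs measuring the
          # same quantity even overlap?
          def ci95(mean, sd, n):
              half = 1.96 * sd / math.sqrt(n)   # normal approximation
              return (mean - half, mean + half)

          lab_a = ci95(mean=600.0, sd=150.0, n=30)    # Portland-like mean, invented spread
          lab_b = ci95(mean=5000.0, sd=900.0, n=30)   # Edmonton-like mean, invented spread

          overlap = lab_a[0] <= lab_b[1] and lab_b[0] <= lab_a[1]
          print(f"lab A CI: {lab_a}, lab B CI: {lab_b}, overlap: {overlap}")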

  • emmelaich 7 hours ago

    (2011)

    • dang 4 hours ago

      Added. Thanks!

  • stonethrowaway 8 hours ago

    [flagged]

    • dang 4 hours ago

      Can you please not break the site guidelines like this? We want curious conversation here.

      https://news.ycombinator.com/newsguidelines.html

    • emmelaich 7 hours ago

      It's not clear to me what this 'current' refers to. The old or the new?

      Agreed that nothing has changed, though. Unreproduced experiments have always been dubious.