In a quick skim, I couldn't actually find the complaint.
The freakonomics guys have been proven wrong before, and update their texts when they can to reflect those things. I'm sure if some contrary evidence were presented to them, they would gladly consider it, and maybe even have this person on their podcast to refute it!
> In a quick skim, I couldn't actually find the complaint.
I agree (I think) with what you're getting at, but then I reread TFA and I've come to see it from the author's point of view.
That is, reading between the lines of what you've written, I think you're saying that TFA doesn't actually talk in specifics about (a) what Ellen Langer's research purports to say, and (b) what the particular objections to the conclusions of her research actually are. And I totally agree: talking in specifics, giving real examples (beyond just links to dense, 18-page studies where the point is buried somewhere within), etc. makes it much easier for the reader to figure out why they should care about this in the first place.
But on the other hand, I think the author of TFA is purposefully trying to refrain from "getting into the weeds" as he puts it because his main point is really along the lines of "extraordinary results should require extraordinary evidence". That is, he gives quotes from Steven Levitt about how Langer's research is completely contrary to what he would expect, but then Levitt barely even challenges how she got such unusual results to begin with. I.e. his point is rather than give an unfiltered bullhorn to a researcher with questionable (or at least controversial) results, why aren't you pushing back by at least asking how she would respond to her critics?
So the article is really about "How and why you should be skeptical of unexpected results", and much less so about any singular instance of unexpected results. That said, again I agree that going into more detail of a singular instance would have helped the author's argument immensely.
This is Steven Levitt's whole schtick IMO. He finds contrarian results, strips out any nuance, and presents them as if they cover a much broader domain than they really do. After listening to Freakonomics and his personal podcast for years, I have come to the conclusion that he is more misleading than educational unless people follow up and read the actual papers.
> Steven Levitt's whole schtick IMO. He finds contrarian results, strips out any nuance, and presents them as if they cover a much broader domain than they really do
Isn't that pretty similar to what you just did? Reduced his entire oeuvre to a snappy criticism without nuance and then used it to dismiss him.
> Isn't that pretty similar to what you just did? Reduced his entire oeuvre to a snappy criticism without nuance and then used it to dismiss him.
We don't hold short HN discussion comments to the same standard as full articles.
Perhaps we should? Isn't this a place where people come for thoughtful, intelligent discussion?
No, we shouldn't. Article-writing is not discussion. It's monologuing, and requires more rigour.
I'm presenting a personal opinion on an internet forum and I'm OK with that. If someone wants more substance than that from their podcasts, I suggest they look elsewhere.
>Freakonomics team has never backed down on many ridiculous causes they have promoted, including the innumerate claim that beautiful parents are 36% more likely to have girls and some climate change denial.
>And, as I’ve said many times before, Freakonomics has so much good stuff. That’s why I’m disappointed, first when they lower their standards and second when they don’t acknowledge or wrestle with their past mistakes. It’s not too late! They could still do a few shows—or even write a book!—on various erroneous claims they’ve promoted over the years. It would be interesting, it would fit their brand, it could be educational and also lots of fun.
https://statmodeling.stat.columbia.edu/2024/09/14/freakonomi...
The author has a long-standing beef with the statistically insignificant and irreproducible claims of the subject Langer. See for example this latest paper: https://stat.columbia.edu/~gelman/research/unpublished/heali...
That paper is actually worth the read (and if you don't want/have time to read the whole thing, ChatGPT does a great job of summarizing it IMO). Langer's research appears to generally be in the "mind over matter" genre, and this genre seems to be especially rife with the misuse of statistics. It actually seems very similar to me to what happened with Amy Cuddy's "power posing" research (made famous by this TED talk, https://www.ted.com/talks/amy_cuddy_your_body_language_may_s...), which was eventually pretty thoroughly disproven and even repudiated by some of the original coauthors of Cuddy's papers.
The other question I have is that the paper you linked makes some very clear, "no gray area" arguments about why some t-values that Langer calculated for one of her papers are just flat-out wrong. He's saying "I'm a statistician, and you did the statistics wrong." I'm very curious whether Langer ever responded to this, because the argument seems pretty black-and-white.
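For readers wondering what "checking the statistics" looks like in practice, here is a minimal Python sketch (the function and the numbers are purely illustrative, not Langer's data or Gelman's actual method) of recomputing a two-sample t-statistic from the group summaries a paper reports; a large mismatch with the published value is the kind of red flag the critique is about:

    import math

    def welch_t(mean1, sd1, n1, mean2, sd2, n2):
        """Welch's two-sample t-statistic from reported group summaries."""
        se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
        return (mean1 - mean2) / se

    # Hypothetical reported summaries (illustrative only):
    # treatment: mean 4.1, sd 1.2, n 20; control: mean 3.4, sd 1.3, n 20
    t = welch_t(4.1, 1.2, 20, 3.4, 1.3, 20)
    print(f"recomputed t = {t:.2f}")  # compare against the t-value the paper reports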
Freakonomics has a history (IMO) of being way too uncritical of the podcast's guests and PIMA seems worse about it. Personally, I think the article is a bit too harsh, but it's in the same direction as why I stopped listening.
> First, Levitt starts out by accepting that a certain suspect claim “actually has been replicated a number of times.” Going with your interviewee can make sense in a podcast, but, again, it’s counter to Levitt’s earlier goal of asking, “How do you know whether you should believe surprising results?”
I haven't listened to the episode, but Levitt really should have pushed back a bit more. The conversation went (paraphrased) "Here's a surprising result" "Really? Has that been replicated?" "Yes, many times sort of, now here's another surprising result from myself". She really should have been asked about replications done by other groups at least. The replication question kind of gets dodged and then they drop it for the rest of the hour-long discussion (according to the transcript: https://freakonomics.com/podcast/pay-attention-your-body-wil... ), which is a bit odd when he made “How do you know whether you should believe surprising results?” a theme of the episode. If her work had good replications, it would be more believable.
I stopped listening over a decade ago for this exact reason. Dr. Oz (I prefer to think of him as Mr. Oz) has been a friend of the show for years, but this is the episode that put me over the edge:
https://freakonomics.com/podcast/the-power-of-poop
This guy claims he can cure Parkinson's with a fecal transplant, and they just let him frame the conversation and the medical community's skepticism however he wants, like it's a given that he is a misunderstood genius. It was so obvious the hosts lacked the basic tools of skepticism. Now if you look up that doctor 13 years later, he's been promoting ivermectin as a cure for COVID, which is totally on brand.
https://en.m.wikipedia.org/wiki/Thomas_Borody
Cure is a strong word, but there's plenty of research which supports ivermectin being effective against COVID.
https://c19ivm.org/
The medical community will almost always be skeptical of new theories; sometimes this is well placed, but sometimes it isn't.
That's neither here nor there though, the main takeaway is you probably shouldn't go to economists for medical advice.
If you click through for the meta-analysis of most of the drugs listed, it claims a statistically significant reduction in mortality. That seems incredible.
I'm not an expert, but gut-checking against this chart https://c19early.org/plot/bpall.png :
1. The exercise, diet, sunlight, zinc, and vitamin D results all seem plausible. They are well known to improve your immune system.
2. Remdesivir was originally recommended for COVID-19 but then they stopped using it. I'm surprised the plasma has such a poor result because that was heavily hailed originally too.
3. Paxlovid and HCQ are both widely recommended, in line with the results presented.
So as far as I can tell (which isn't far!) it seems like a reasonable analysis.
The main point I wanted to make is Borody is far from alone in being interested in c19 and ivermectin.
Please don’t spread ivermectin conspiracy theories on HN.
I'm mostly just trying to provide Thomas Borody the benefit of the doubt. I'm sure he would do a much better job of defending himself.
The author disagrees with Harvard psychologist Ellen Langer, whom Levitt interviewed on his podcast, People I (Mostly) Admire, which is part of the Freakonomics radio network but not the main show, Freakonomics. The author thinks Levitt should have been more critical of his guest. Perhaps, but this is a podcast, not peer review.
In my opinion, Levitt didn't even say he agreed with Langer, although he did compliment her work.
Disclaimer: I'm a huge fan of all the Freakonomics shows. I appreciate the author pointing out some opposing views and think the post is well-written, although exaggerated and overly emotional.
> The author disagrees with Harvard psychologist Ellen Langer,
He might, he might not. What he definitely does think is that there have been several in-depth critiques of Langer's work, and that the podcast does the listener of Freakonomics a disservice by apparently not taking them into account in any way (certainly not mentioning them).
The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
> What he definitely does think is that there have been several in-depth critiques of Langer's work
And one of those in-depth critiques, which is linked to in the post, is by the author of the article himself.
> The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
This seems like a distinction that's not really worth making. The author of the post is a statistician, and he's published a detailed critique (see https://news.ycombinator.com/item?id=41974050 ) that says that Langer did the stats wrong. So sure, he is saying "the experimental design, sample size and statistical analysis do not support the claims Langer is making", which seems equivalent to "she is wrong" when you're a statistician.
Gelman leaves the door reasonably ajar on the possibility that Langer is right about effects in the world, but firmly closes it on the possibility that the statistical analysis Langer presents supports this belief.
Well, we'll just have to reasonably disagree with the final interpretation, then. I will say that, from reading this closing section of Gelman's paper, it's about as harsh a condemnation as I've ever seen in an academic paper - he essentially says it's not science, just something masquerading as science. Written from one academic to another, that's basically the equivalent of "you're full of shit":
> 4.4. Statistical and conceptual problems go together
> We have focused our inquiry on the Aungle and Langer (2023) paper, which, despite the evident care that went into it, has many problems that we have often seen elsewhere in the human sciences: weak theory, noisy data, a data structure necessitating a complicated statistical analysis that was done wrong, uncontrolled researcher degrees of freedom, lack of preregistration or replication, and an uncritical reliance on a literature that also has all these problems.
> Any one or two of these problems would raise a concern, but we argue that it is no coincidence that they all have happened together in one paper, and, as we noted earlier, this was by no means the only example we could have chosen to illustrate these issues. Weak theory often goes with noisy data: it is hard to know to collect relevant data to test a theory that is not well specified. Such studies often have a scattershot flavor with many different predictors and outcomes being measured in the hope that something will come up, thus yielding difficult data structures requiring complicated analyses with many researcher degrees of freedom. When underlying effects are small and highly variable, direct replications are often unsuccessful, leading to literatures that are full of unreplicated studies that continue to get cited without qualification. This seems to be a particular problem with claims about the potentially beneficial effects of emotional states on physical health outcomes; indeed, one of us found enough material for an entire Ph.D. dissertation on this topic (N. J. L. Brown, 2019).
> Finally, all of this occurs in the context of what we believe is a sincere and highly motivated research program. The work being done in this literature can feel like science: a continual refinement of hypotheses in light of data, theory, and previous knowledge. It is through a combination of statistics (recognizing the biases and uncertainty in estimates in the context of variation and selection effects) and reality checks (including direct replications) that we have learned that this work, which looks and feels so much like science, can be missing some crucial components. This is why we believe there is general value in the effort taken in the present article to look carefully at the details of what went wrong in this one study and in the literature on which it is based.
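As a toy illustration of the "many predictors and outcomes... in the hope that something will come up" problem the quote describes, here is a minimal simulation (my own sketch, with made-up sample sizes and nothing to do with Langer's actual studies): even when every outcome is pure noise, testing enough of them will usually turn up a few "significant" results by chance.

    import random
    import statistics as stats
    from math import sqrt, erf

    def two_sided_p(xs, ys):
        """Approximate two-sided p-value for a difference in means (normal approximation)."""
        se = sqrt(stats.variance(xs) / len(xs) + stats.variance(ys) / len(ys))
        z = (stats.mean(xs) - stats.mean(ys)) / se
        return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

    random.seed(0)
    n, n_outcomes = 25, 20
    hits = 0
    for _ in range(n_outcomes):
        treatment = [random.gauss(0, 1) for _ in range(n)]  # no real effect anywhere
        control = [random.gauss(0, 1) for _ in range(n)]
        if two_sided_p(treatment, control) < 0.05:
            hits += 1
    print(f"{hits} of {n_outcomes} pure-noise outcomes came out 'significant' at p < 0.05")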
I'm no fan of Freakonomics and their whole genre of "wow isn't that surprising" pop science (Ariely, Gladwell et al.) but I agree with you here.
He goes through several paragraphs of criticising Levitt for believing his interviewee without mentioning what claim the interviewee is making. So some chambermaids were told their work is exercise, didn't change their behaviour and then.... What?
Isn't most science in the "wow isn't that surprising" genre? I feel like my whole life, science has pretty much been "hmm... wouldn't have guessed that". Basic facts like fire consuming oxygen -- I remember literally thinking as a kid: of all things, it consumes what we also breathe? Or gravity being a function of mass -- that one still trips me up a little bit.
I think what these authors do is apply science to human interactions -- a tilt toward social science -- but science to me is usually surprising. (Or I'm just really bad at science).
Freakonomics is less “check out this surprising science!” and more “economist makes spurious but cute connections based on way too little information over and over—the book!”
Have to push back on this. When you are learning a subject, you learn a lot of surprising things. But once you've learned enough to have a good model in your head and good intuition, things that surprise you remain improbable.
The complaint is that they're consistently irresponsibly gullible.
The complaint seems to be that Freakonomics and related podcasts, which are created for the sake of entertainment, are not the peer-reviewed journals he wishes they were.
This is an utter cop-out that is a hair's breadth away from "this interviewer was combative and biased". In fact, it is entirely possible to thread the needle of being engaging without also being a total rube.
It’s not a cop-out to complain, but one has to remember that complaints only mean so much. McDonald’s isn’t becoming a Michelin Star restaurant because you complain its food isn’t fancy enough either.
It’s not like there can only be one podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they would, you already wouldn’t have found it lacking.
The cop-out in question is not the complaint but the rote and glib response that a podcast isn't an academic venue. The two things are not at cross purposes, and there is quite a lot of popular media, including outlets in which Gelman has relayed his concerns about Langer, where the goal is quite specifically to critique dubious claims. And even otherwise, people form their view of the world based on the media they consume. Levitt knows this and leverages it, along with being an academic, to boost his platform, which in turn is used to boost the profile of his guests. The argument that he bears no responsibility at all for what and how he presents to his audience is dubious.
>It’s not like there can only be one podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they would, you already wouldn’t have found it lacking.
I would have thought the past decade would have put this marketplace of ideas hokum to rest, but here we are.
Yep, it seems like a rant fitting of Hacker News, where they break apart someone's text and pinpoint tiny sections to pick at without actually making a point in general.
Not sure why this rant was posted to HN.
Also, the podcast wasn't Freakonomics; it was an offshoot where they don't critique a guest's work, they just interview them as a friendly conversation.
Getting upset like the author did indicates that the author doesn't know the difference between a podcast and an academic paper.
I understand the complaints, but I don't think the Freakonomics podcast should necessarily be expected to meet the high standard of peer review. Is it really that bad for Levitt to take published research at face value, trusting that gatekeepers upriver have done their due diligence?
> Is it really that bad for Levitt to take published research at face value, trusting that gatekeepers upriver have done their due diligence?
Yes, it is. There is a reason the reproducibility/replication crisis, especially in the social sciences, is such a hot topic. The podcast doesn't need to "meet the high standard of peer review", but there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up before essentially saying "Wow, your results are so unexpected! OK I'm pretty sold!"
>there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up
Is that expected of Freakonomics? I don't know how much rigor they apply with their interview subjects, nor how much of a subject matter expert they are when it comes to pushing back.
They like to entertain crazy theories, but there’s a cost, as has been observed multiple times in the past. I do still like to listen to Steven.
I think the whole problem is that he presents the podcast as being very factual, data-driven and scientific, and on the other hand he just lacks rigour in some cases - like this one.
Basic research has become rare in journalism, but they should either stop pretending to be data-driven or do their homework.
The Freakonomics brand leans more into the info side of infotainment. Having listened to the show, they also lean into their academic backgrounds, so yes. This isn't WTF with Marc Maron, but even he famously excused himself to do some research when he found out he was interviewing the "other" Kevin McDonald.
> Is that expected of Freakonomics?
Umm, of course? Shouldn't that be expected of any interviewer? I mean, they invited a guest onto their show specifically because they keep coming up with unexpected results - shouldn't they have done at least a little bit of their homework to see why a gaggle of people are condemning their results as non-reproducible?
> Shouldn't that be expected of any interviewer?
No? Imagine how ridiculous that would become if interviewers actually followed that logic. "Great gameplay out there, <insert professional sports star>, but nevermind the sport we are all watching, my research identified that you erroneously wrote 1+1=3 in Kindergarten. What was your thought process?"
The podcast in question is known as "People I (Mostly) Admire" from the Freakonomics podcast network. The name should tell you that it is going to be about the people, not diving deep into their work. Perhaps there is room for a Podcast that scrutinizes the work of scientists, but one that literally tells you right in its name that it is going to be about people is probably not it.
Your example completely and ridiculously mischaracterizes my point.
A better example, to piggyback off your sports analogy: Suppose a podcast titled "People I (Mostly) Admire" decided to interview Barry Bonds, and the interviewer asked "Wow, how did you get to be so good in the second half of your career?" and Bonds responded "Just a lot of hard work!" Yeah, I would totally expect the interviewer to push back at that point and say "So, your steroid use didn't have anything to do with it?"
Point being, I'm not asking the interviewer to be knowledgeable about the subject's kindergarten grades. I do think they should do some basic, cursory research about the specific topic and subject they brought the guest on to talk about in the first place.
> I would totally expect the interviewer to push back
Are you confusing expectation with desire? I can understand why you might prefer to listen to a podcast like that – and nothing says you can't – but that isn't necessarily on brand with the specific product in question.
In the same vein, you might prefer fine dining, but you wouldn't expect McDonalds to offer you fine dining. It is quite clearly not the product they sell.
So, I guess the question is: What is it about "People I (Mostly) Admire" that has given you the impression that it is normally the metaphorical fine dining restaurant and not the McDonalds it turned out to be here?
Are you like the king of awful, straw-man analogies or something? I'll just say I think your attempt to redefine this podcast and the Freakonomics brand as just "light, fluffy entertainment" is BS. These other comments put it better:
https://news.ycombinator.com/item?id=41975615
https://news.ycombinator.com/item?id=41975342
> Are you like the king of awful, straw-man analogies or something?
Yes...? Comes with not understanding the subject very well. I mean, logically, if I were an expert I wouldn't be here wasting my time talking about what I already know, would I? That would be a pointless waste of time. Obviously if I am going to talk about something I am going to struggle to talk about it in an effort to learn.
> These other comments put it better:
These other comments don't even try to answer the question...? Wrong links? Perhaps I didn't explain myself well enough? I can try again: What is it about this particular podcast that has given you the impression that it normally asks the hard hitting questions? Be specific.
The type of journalism that involves saying "This person makes an incredible claim" and then goes on to allow the person to present said claims uncritically is called "tabloid journalism[1]." Yes, I would expect a podcast hosted by an NYT journalist and a University of Chicago economist to have higher standards, particularly in the field of academic research.
1: https://en.wikipedia.org/wiki/Tabloid_journalism
That's a fun tangent, but doesn't answer the question. What in particular about this podcast has indicated that it is not "tabloid journalism"? You clearly recognize that tabloid journalism exists, so you know that this podcast could theoretically intend to be. But what, specifically, has indicated that it normally isn't?
The background of the people involved is irrelevant to the nature of the product. Someone who works on developing a cure for cancer by day can very well go home and build a fart app at night. There is no reason why you have to constrain yourself to just one thing.
Great comedy show
There's a lot of ground between "the high standard of peer review" and "tak[ing] published research at face value."
The former is impractical for a lot of formats (i.e. podcasts), but the latter is clearly harmful in the context of a popular podcast or some other medium that amplifies the dubious message.
What is the value in listening to an educational podcast if I cannot be certain that the material is factual?
What is the value of reading a journal if I cannot be certain that the material is reliably peer reviewed?
I'm not sure why the podcast host is being held to a standard that should be applied to other subject-matter experts, who come way before he ever reaches out for an interview.
This is my main point. Seems like gripes about the quality of published research should be directed toward the publisher.
It is factual that Langer performed a study in which X was done, Y was measured and Z was concluded.
What is less clear is whether X was good experimental design, whether the measurements of Y were appropriate, relevant and correct, and thus whether or not Z can be concluded.
> "I’ve got a model in my head of how the world works — a broad framework for making sense of the world around me. I’m sure you’ve got one, too."
Anyone with scientific training should know that you should have multiple working hypotheses; you shouldn't wed yourself to one preferred model (which leads to idée fixe, rejecting evidence that doesn't fit your model and even inventing evidence which does). People who fall into this trap start seeing their mental model in the world around them, thinking they're engaging in pattern recognition when they're really doing pattern projection. Their emotions, ego and pride all converge at this point - there are dozens of examples throughout scientific history of people falling into this trap, who end up shaking their fists at experimental data that upsets their apple cart.
It's not that hard to hold two conflicting models in your mind at the same time, or more, without ending up emotionally attached to any of them.
The "model" under discussion here is much more fundamental than I think you're taking into consideration.
Specifically, Langer has suggested that merely thinking about things can lead to physical changes in the world. This is at odds with not just some specific model of something, but with the broadest conception of post-Renaissance science.
> merely thinking about things can lead to physical changes in the world
Her claims are hardly akin to moving objects through telekinesis. Your brain is part of your nervous system, which has massive control over your body. What happens in your brain obviously leads to changes in your body. Beyond the obvious "I think about moving my arm and then my arm moves", there is a ton of hard research to back up the ability of thoughts and moods to influence the autonomic systems of the body. Why are the things Langer is suggesting fundamentally different?
Clearly more evidence is needed to prove many of her specific claims, and many of them may turn out to be noise, but the basic premise hardly seems worth dismissing. Descartes was 400 years ago.
They are indeed fascinating ideas. But they are so fascinating and so challenging that they deserve (much) better investigation than what Langer appears to have managed so far. Extraordinary claims require extraordinary evidence, and all that.
Your brain does burn more glucose when it's working hard and this should cause a noticeable if slight temperature increase in the surrounding environment - so a physical change does take place, just by thinking. Other than that, some extraordinary evidence would be required.
These problems in popular science communication seem to come down to a disconnect between what gets audiences excited (unexpected results! surprising data!) and what the process of science actually looks like (you need to take extra care with unexpected results and surprising data). The process of doing good science and the process of getting audiences excited about new science seem to be fundamentally at odds. This isn't new - the challenges were the same when I was reading Scientific American as a kid 30 years ago.
It's tough, because communicating science in all its depth and uncertainty is tough. You want to communicate the beauty and excitement, but don't want to mislead people, and the balance there just seems super hard to find.
Yet his latest book on free will exclusively depended on a reductionist viewpoint.
While I don't know his motivations for those changes, the fact that the paper he mentioned was so extremely unpopular that I was one of only a handful who read it surely provided some incentive:
> "REDUCTIONISM AND VARIABILITY IN DATA: A META-ANALYSIS ROBERT SAPOLSKY and STEVEN BALT"
Or you can go back to math and look at the Brouwer–Hilbert controversy, which was purely about whether we should, universally, accept PEM a priori, which Church, Post, Gödel, and others proved wasn't safe in many problems.
Luckily ZFC helped with some of that, but Hilbert won that war of words, to the point where even suggesting a constructivist approach produces so much cognitive dissonance that it is often branded as heresy.
Fortunately with the Curry–Howard–Lambek correspondence you can shift to types or categories with someone who understands them to avoid that land mine, but even on here people get frustrated when people say something is 'undecidable' and then go silent. It is not that labeling it as 'undecidable' wins an argument, but that it is so painful to move on because from Plato onward PEM was part of the trinity of thought that is sacrosanct.
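(For readers who haven't met PEM in this form, here is a minimal Lean 4 sketch of the distinction; this is my own illustration, not anything from the discussion above: double-negation introduction is constructively provable, while double-negation elimination only goes through once you reach for the classical axiom Classical.em.)

    -- Constructively fine: no classical axioms needed.
    theorem dni (p : Prop) : p → ¬¬p :=
      fun hp hnp => hnp hp

    -- Requires PEM: we invoke Classical.em to eliminate the double negation.
    theorem dne (p : Prop) : ¬¬p → p :=
      fun hnnp => (Classical.em p).elim id (fun hnp => absurd hnp hnnp)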
To be clear, I am not a strict constructivist, but view this as horses for courses, with the reductionist view being insanely useful for many needs.
If you look at the link that jeffbee shared, the mention of "garden of forking paths" is a way of stepping on eggshells around the above.
Overfitting and underfitting are often explained as symptoms of the bias-variance trade-off, and even with PhDs it is hard to invoke indecomposability, decidability, or non-triviality; all of which should be easy to explain as cases where PEM doesn't hold for some reason.
While mistaking the map for the territory is an easy way for the Freakonomics authors to make a living, it can be viewed as an unfortunate outcome due to the assumption of PEM and abuse of the principle of sufficient reason.
While there are most certainly other approaches, and obviously not everything can be proven or even found with the constructivist approach, whenever something is found that is surprising, there should be an attempt to not accept PEM before making a claim that something is not just epistemically possible but epistemically necessary.
To me this is just checking your assumptions; obviously the staunchly anti-constructivist viewpoint has members that are far smarter and more knowledgeable than I will ever be.
IMHO, for-profit or donation-based pop science will always look for the man-bites-dog stories... I do agree that sharing the beauty while avoiding misleading is challenging and important.
But the false premise that you either do or do not accept constructive mathematics also blocks the ease with which you could show that these types of farcical claims the authors make are false.
That simply doesn't exist today where the many worlds ideas are popular in the press, but pointing out that many efforts appear to be an attempt to maintain the illusion of Laplacian determinism, which we know has counterexamples, is so counter to the Platonic zeitgeist that most people bite their tongues when they should be providing counterexamples to help find a better theory.
I know that the true believers in any camp help drive things forward, and they need to be encouraged too.
But the point is that there is a real deeper problem that is helping drive this particular communication problem and something needs to change so that we can move forward with the majority of individuals having larger toolboxes vs dogmatic schools of thought.
One of the commenters on that article says that they are supposed to be entertaining. But I’ll add one more: they’re shitposters. It’s entertaining to a certain audience who don’t take them seriously, but it causes outrage in others who are trying to do proper work.
I appreciate this is intellectually lazy but is there a tl;dr? It’s a clickbait headline followed by a conversation that’s, at least initially, not good at conveying context.
They had a guest on who has a history of surprising results published from studies with flaws in methodology (although the author of the post is clearly a little biased). The complaint is about the podcast not being very critical of her, while framing the discussion with the question “How do you know whether you should believe surprising results?”.
Can anyone suggest a good [1] title that is more specific? I've taken a crack at it above but it seems lame.
A specific title is important because otherwise specific discussion (i.e. about what's different in this article) is preferable to generic discussion. We get plenty of the latter in any case, but it's best if it doesn't dominate the thread.
[1] in this context 'good' := accurate, neutral, and preferably using representative language from the article
> If the findings consistently surprise you, and they seriously challenge the beliefs of mainstream science, then maybe you should more seriously consider the possibility that these findings are wrong!
It seems that "You should seriously consider surprising findings" could be a good title. Maybe "results" instead of "findings" but the article doesn't actually use that word outside quotes.
It's more or less the conclusion of the critique that's mentioned in the current title.
"How do you know whether you should believe surprising [scientific] results?"
The article comes back around to that question a couple of times, in trying to describe why Levitt should not trust Langer's results: either there's not enough evidence, or the evidence doesn't support the conclusion.
Economics is boring. What if we could instead apply simplistic econometric models to cute problems and explain how crime is actually caused by the adoption of the metric system?
That's basically the Freakonomics approach. It's bad science, but it appeals to "skeptics" and "contrarian" midwit liberals.
Pandering to the over-estimated intelligence of contrarians is the problem. People who self-describe as contrarians, independents, or skeptics are consistently associated with lower education, lower information, and lower analytical skills than others without these self descriptions. This extends to politics where people who say they are moderates or independents also have these markers of lower information compared to people who will describe themselves as having a position, no matter what side that position is on. But in popular communications we have for some reason decided to exalt and praise the skeptic and the independent.
> People who self-describe as contrarians, independents, or skeptics are consistently associated with lower education, lower information, and lower analytical skills than others without these self descriptions.
This is an awfully large claim to make without any backing research. Some quick googling hasn't led me to anything backing this up, but I would be curious to read anything that says so.
> Pandering to the over-estimated intelligence of contrarians is the problem.
It makes them feel smart. Which I'm sure feels good, but often the signal takes precedence over being correct, which leads to the obvious issues. It's the same mechanism that makes conspiracy theories so appealing to another group of people - knowing how things really work is seductive.
HN has its own form of this that, if you've been around long enough, I'm sure you can identify too.
In a quick skim, I couldn't actually find the complaint.
The freakonomics guys have been proven wrong before, and update their texts when they can to reflect those things. I'm sure if some contrary evidence were presented to them, they would gladly consider it, and maybe even have this person on their podcast to refute it!
> In a quick skim, I couldn't actually find the complaint.
I agree (I think) with what you're getting at, but then I reread TFA and I've come to see it from the author's point of view.
That is, reading between the lines of what you've written, I think you're saying that TFA doesn't actually talk in specifics about (a) what Ellen Langer's research purports to say, and (b) what the particular objections to the conclusions of her research actually are. And I totally agree that I think talking in specifics, giving real examples (and beyond just links to dense, 18-page studies and research where the point is buried somewhere within), etc. makes it much easier for the reader to actually figure out why I should care about this in the first place.
But on the other hand, I think the author of TFA is purposefully trying to refrain from "getting into the weeds" as he puts it because his main point is really along the lines of "extraordinary results should require extraordinary evidence". That is, he gives quotes from Steven Levitt about how Langer's research is completely contrary to what he would expect, but then Levitt barely even challenges how she got such unusual results to begin with. I.e. his point is rather than give an unfiltered bullhorn to a researcher with questionable (or at least controversial) results, why aren't you pushing back by at least asking how she would respond to her critics?
So the article is really about "How and why you should be skeptical of unexpected results", and much less so about any singular instance of unexpected results. That said, again I agree that going into more detail of a singular instance would have helped the author's argument immensely.
This is Steven Levitt's whole schtick IMO. He finds contrarian results, strips out any nuance, and presents them as if they cover a much broader domain than they really do. After listening to freakenomics and his personal podcast for years, I have come to the conclusion he is more misleading than educational unless people follow up and read the actual papers.
> Steven Levitt's whole schtick IMO. He finds contrarian results, strips out any nuance, and presents them as if they cover a much broader domain than they really do
Isn't that pretty similar to what you just did? Reduced his entire ouvre to a snappy criticism without nuance and then used it to dismiss him.
> Isn't that pretty similar to what you just did? Reduced his entire ouvre to a snappy criticism without nuance and then used it to dismiss him.
We don't hold short HN discussion comments to the same standard as full articles.
Perhaps we should? Isn't this a place where people come for thoughtful, intelligent discussion?
No, we shouldn't. Article-writing is not discussion. It's monologuing, and requires more rigour.
im presenting a personal opinion on an internet forum and OK with that. If someone wants more substance than that from their podcasts, i suggest they look elsewhere
>Freakonomics team has never backed down on many ridiculous causes they have promoted, including the innumerate claim that beautiful parents are 36% more likely to have girls and some climate change denial.
>And, as I’ve said many times before, Freakonomics has so much good stuff. That’s why I’m disappointed, first when they lower their standards and second when they don’t acknowledge or wrestle with their past mistakes. It’s not too late! They could still do a few shows—or even write a book!—on various erroneous claims they’ve promoted over the years. It would be interesting, it would fit their brand, it could be educational and also lots of fun.
https://statmodeling.stat.columbia.edu/2024/09/14/freakonomi...
The author has a long-standing beef with the statistically insignificant and irreproducible claims of the subject Langer. See for example this latest paper: https://stat.columbia.edu/~gelman/research/unpublished/heali...
That paper is actually worth the read (and if you don't want/have time to read the whole thing, ChatGPT does a great job of summarizing it IMO). Langer's research appears to generally be in the "mind over matter" genre, and this genre seems to be especially rife with the misuse of statistics. It actually seems very similar to me to what happened with Amy Cuddy's "power posing" research (made famous by this TED talk, https://www.ted.com/talks/amy_cuddy_your_body_language_may_s...), which was eventually pretty thoroughly disproven and even repudiated by some of the original coauthors of Cuddy's papers.
The other question I have is that the paper you linked gives makes some very clear, "no gray area" arguments about why some T-values that Langer calculated for one of her papers is just flat out wrong. He's saying "I'm a statistician, and you did the statistics wrong." I'm very curious if Langer ever responded to this, because the argument seems pretty black-and-white.
Freakonomics has a history (IMO) of being way too uncritical of the podcast's guests and PIMA seems worse about it. Personally, I think the article is a bit too harsh, but it's in the same direction as why I stopped listening.
> First, Levitt starts out by accepting that a certain suspect claim “actually has been replicated a number of times.” Going with your interviewee can make sense in a podcast, but, again, it’s counter to Levitt’s earlier goal of asking, “How do you know whether you should believe surprising results?”
I haven't listened to the episode, but Levitt really should have pushed back a bit more. The conversation went (paraphrased) "Here's a surprising result" "Really? Has that been replicated?" "Yes, many times sort of, now here's another surprising result from myself". She really should have been asked about replications done by other groups at least. The replication question kind of gets dodged and then they drop it for the rest of the hour-long discussion (according to the transcript: https://freakonomics.com/podcast/pay-attention-your-body-wil... ), which is a bit odd when he made “How do you know whether you should believe surprising results?” a theme of the episode. If her work had good replications, it would be more believable.
I stopped listening over a decade ago for this exact reason. Dr. Oz (I prefer to think of him as Mr. Oz) has been a friend of the show for years, but this is the episode that put me over the edge:
https://freakonomics.com/podcast/the-power-of-poop
This guy claims he can cure Parkinson's with a fecal transplant and they just let him frame the conversion and the medical community's skepticism however he wants like it's a given that he is a misunderstood genius. It was so obvious the hosts lacked the basic tools of skepticism. Now if you look up that doctor 13 years later he's been promoting ivermectin as a cure for COVID which is totally on brand.
https://en.m.wikipedia.org/wiki/Thomas_Borody
Cure is a strong word but there's plenty of research which supports ivermectin being effective against COVID.
https://c19ivm.org/
The medical commmunity will almost always be skeptical of new theories, sometimes this is well placed but sometimes it isn't.
That's neither here nor there though, the main takeaway is you probably shouldn't go to economists for medical advice.
If if you click through for meta analysis of most of the drugs listed, it claims a statistically significant reduction in mortality. That seems incredible.
I'm not an expert but gut checking against this chart https://c19early.org/plot/bpall.png
So as far as I can tell (which isn't far!) it seems like a reasonable analysis.The main point I wanted to make is Borody is far from alone in being interested in c19 and ivermectin.
Please don’t spread ivermectin conspiracy theories on HN.
I'm mostly just trying to provide Thomas Borody the benefit of the doubt. I'm sure he would do a much better job of defending himself.
The author disagrees with Harvard psychologist Ellen Langer, whom Levitt interviewed on his podcast, People I Mostly Admire, which is part of the Freakonomics radio network but not the main show, Freakonomics. The author thinks Levitt should have been more critical of his guest. Perhaps, but this is a podcast and not a peer review.
In my opinion, Levitt didn't even say he agreed with Langer, although he did compliment her work.
Disclaimer: I'm a hug fan on all the Freakonomics shows. I appreciate the author pointing out some opposing views and think the post is well-written, although exaggerated and overly emotional.
> The author disagrees with Harvard psychologist Ellen Langer,
He might, he might not. What he definitely does think is that there have been several in-depth critiques of Langer's work, and that it does the listener of Freakonomics a disservice by apparently not taking them into account in an way (certainly not mentioning them).
The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
> What he definitely does think is that there have been several in-depth critiques of Langer's work
And one of those in-depth critiques, which is linked to in the post, is by the author of the article himself.
> The critiques are not of the form "Langer is wrong". They are of the form "the experimental design, sample size and statistical analysis do not support the claims Langer is making".
This seems like a distinction that's not really worth making. The author of the post is a statistician, and he's published a detailed critique (see https://news.ycombinator.com/item?id=41974050 ) that says that Langer did the stats wrong. So sure, he is saying "the experimental design, sample size and statistical analysis do not support the claims Langer is making", which seems equivalent to "she is wrong" when you're a statistician.
Gelman leaves the door reasonably ajar on the possibility that Langer is right about effects in the world, but firmly closes it on the possibility that the statistical analysis Langer presents supports this belief.
Well, we'll just have to reasonably disagree with the final interpretation, then. I will say that from reading this closing section of Gelman's paper, it's about as harsh a condemnation as I've ever seen in an academic paper - he essentially says it's not science that's masquerading as science. Written from one academic to another, that's basically the equivalent of "you're full of shit":
> 4.4. Statistical and conceptual problems go together
> We have focused our inquiry on the Aungle and Langer (2023) paper, which, despite the evident care that went into it, has many problems that we have often seen elsewhere in the human sciences: weak theory, noisy data, a data structure necessitating a complicated statistical analysis that was done wrong, uncontrolled researcher degrees of freedom, lack of preregistration or replication, and an uncritical reliance on a literature that also has all these problems.
> Any one or two of these problems would raise a concern, but we argue that it is no coincidence that they all have happened together in one paper, and, as we noted earlier, this was by no means the only example we could have chosen to illustrate these issues. Weak theory often goes with noisy data: it is hard to know to collect relevant data to test a theory that is not well specified. Such studies often have a scattershot flavor with many different predictors and outcomes being measured in the hope that something will come up, thus yielding difficult data structures requiring complicated analyses with many researcher degrees of freedom. When underlying effects are small and highly variable, direct replications are often unsuccessful, leading to literatures that are full of unreplicated studies that continue to get cited without qualification. This seems to be a particular problem with claims about the potentially beneficial effects of emotional states on physical health outcomes; indeed, one of us found enough material for an entire Ph.D. dissertation on this topic (N. J. L. Brown, 2019).
> Finally, all of this occurs in the context of what we believe is a sincere and highly motivated research program. The work being done in this literature can feel like science: a continual refinement of hypotheses in light of data, theory, and previous knowledge. It is through a combination of statistics (recognizing the biases and uncertainty in estimates in the context of variation and selection effects) and reality checks (including direct replications) that we have learned that this work, which looks and feels so much like science, can be missing some crucial components. This is why we believe there is general value in the effort taken in the present article to look carefully at the details of what went wrong in this one study and in the literature on which it is based.
I'm no fan of Freakonomics and their whole genre of "wow isn't that surprising" pop science (Ariely, Gladwell et. al.) but I agree with you here.
He goes through several paragraphs of criticising Levitt for believing his interviewee without mentioning what claim the interviewee is making. So some chambermaids were told their work is exercise, didn't change their behaviour and then.... What?
Isn't most science in the "wow isn't that surprising" genre? I feel like my whole life science has pretty much been "hmm... wouldn't have guessed that". Basic facts like fire burning oxygen -- I remember literally as a kid thinking -- of all the things it consumes what we also breathe? Or gravity being a function of mass -- that one still trips me a little bit.
I think what these authors do is apply science to human interactions -- a tilt toward social science -- but science to me is usually surprising. (Or I'm just really bad at science).
Freakonomics is less “check out this surprising science!” and more “economist makes spurious but cute connections based on way too little information over and over—the book!”
Have to push back on this. When you are learning a subject, you learn a lot of surprising things. But you learn enough that you have a good model in your head and a good intuition, then things that are surprising to you remain improbable.
The complaint is that they're consistently irresponsibly gullible.
The complaint seems to be that Freaknomics and related Podcasts, which are created for the sake of entertainment, are not the peer-reviewed journals he wishes they were.
This is an utter cop-out that is a hair's length away from "this interviewer was combative and biased". In fact, it is entirely possible to thread the needle of being engaging without also being a total rube.
It’s not a cop-out to complain, but one has to remember that complaints only mean so much. McDonald’s isn’t becoming a Michelin Star restaurant because you complain its food isn’t fancy enough either.
It not like there can only be one Podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they would, you already wouldn’t have found it lacking.
The cop-out in question is not the complaint but the rote and glib response that a podcast isn't an academic venue. The two things are not at cross purposes, and quite a lot of popular media, including outlets in which Gelman has relayed his concerns about Langer, where the goal is quite specifically to critique dubious claims. And even otherwise, people form their view of the world based on the media they consume. Levitt knows this and leverages this, along with being an academic, to boost his platform, which in turn is used to boost the profile of his guests. The argument that he bears no responsibility at all for what and how he presents to his audience is dubious.
>It not like there can only be one Podcast. If you see that the world is missing something, create it! Nobody else is going to do it. If they would, you already wouldn’t have found it lacking.
I would have thought the past decade would have put this marketplace of ideas hokum to rest, but here we are.
Yep, it seems like a rant fitting of Hacker news where they break apart someone's text and pinpoint tiny sections to pick at without actually making a point in general.
Not sure why this rant was posted to HN.
Also the postcast wasn't freakonomics, it was an offshoot one where they don't critique a users work, they just interview them as a friendly conversation.
Getting upset like the author did indicates that the author doesn't know the difference between a podcast and an academic paper.
I understand the complaints, but I don't think the Freakonomics podcast should necessarily be expected to meet the high standard of peer review. Is it really that bad for Levitt to take published research at face value, trusting that gatekeepers upriver have done their due diligence?
> Is it really that bad for Levitt to take published research at face value, trusting that gatekeepers upriver have done their due diligence?
Yes, it is. There is a reason the reproducibility/replication crisis, especially in the social sciences, is such a hot topic. The podcast doesn't need to "meet the high standard of peer review", but there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up before essentially saying "Wow, your results are so unexpected! OK I'm pretty sold!"
>there are plenty of published objections and discussions about Langer's unexpected results, and Levitt should have reviewed that and brought that up
Is that expected of Freakonomics? I don't know how much rigor they do with their interview subjects, nor how much of a subect matter expert they are when it comes to pushing back.
They like to entertain crazy theories, but there’s a cost, as has been observed multiple times in the past. I do still like to listen to Steven.
I think the whole problem is how he presents the podcast as being very factual, data driven and scientific and on the other end he just lack rigour in some cases - like this one.
Basic research has become rare in journalism, but they either should stop pretending to be data driven or should do their homework.
The Frakonomics brand leans more into the info side of infotainment. Having listened to the show, they also lean into their academic backgrounds, so yes. This isn't WTF with Marc Maron, but even he famously excused himself to do some research when he found out he was interviewing the "other" Kevin McDonald.
> Is that expected of Freakonomics?
Umm, of course? Shouldn't that be expected of any interviewer? I mean, they invited a guest onto their show specifically because they keep coming up with unexpected results - shouldn't they have done at least a little bit of their homework to see why a gaggle of people are condemning their results as non-reproducible?
> Shouldn't that be expected of any interviewer?
No? Imagine how ridiculous that would become if interviewers actually followed that logic. "Great gameplay out there, <insert professional sports star>, but nevermind the sport we are all watching, my research identified that you erroneously wrote 1+1=3 in Kindergarten. What was your thought process?"
The podcast in question is known as "People I (Mostly) Admire" from the Freakonomics podcast network. The name should tell you that it is going to be about the people, not diving deep into their work. Perhaps there is room for a Podcast that scrutinizes the work of scientists, but one that literally tells you right in its name that it is going to be about people is probably not it.
Your example completely and ridiculously mischaracterizes my point.
A better example, to piggyback off your sports analogy: Suppose a podcast titled "People I (Mostly) Admire" decided to interview Barry Bonds, and the interviewer asked "Wow, how did you get to be so good in the second half of your career?" and Bonds responded "Just a lot of hard work!" Yeah, I would totally expect the interviewer to push back at that point and say "So, your steroid use didn't have anything to do with it?"
Point being, I'm not asking the interviewer to be knowledgeable about the subject's kindergarten grades. I do think they should do some basic, cursory research about the specific topic and subject they brought the interviewer on to talk about in the first place.
> I would totally expect the interviewer to push back
Are you confusing expectation with desire? I can understand why you might prefer to listen to a podcast like that – and nothing says you can't – but that isn't necessarily on brand with the specific product in question.
In the same vein, you might prefer fine dining, but you wouldn't expect McDonalds to offer you fine dining. It is quite clearly not the product they sell.
So, I guess the question is: What is it about "People I (Mostly) Admire" that has given you the impression that it is normally the metaphorical fine dining restaurant and not the McDonalds it turned out to be here?
Are you like the king of awful, straw-man analogies or something? Will just say I think your attempt to redefine this podcast and the Freakonomics brand to just "light, fluffy entertainment" is BS. These other comments put it better:
https://news.ycombinator.com/item?id=41975615
https://news.ycombinator.com/item?id=41975342
> Are you like the king of awful, straw-man analogies or something?
Yes...? Comes with not understanding the subject very well. I mean, logically, if I were an expert I wouldn't be here wasting my time talking about what I already know, would I? That would be a pointless waste of time. Obviously if I am going to talk about something I am going to struggle to talk about it in an effort to learn.
> These other comments put it better:
These other comments don't even try to answer the question...? Wrong links? Perhaps I didn't explain myself well enough? I can try again: What is it about this particular podcast that has given you the impression that it normally asks the hard hitting questions? Be specific.
The type of journalism that involves saying "This person makes an incredible claim" and then goes on to allow the person to present said claims uncritically is called "tabloid journalism[1]." Yes, I would expect a podcast hosted by a NYT Journalist and University of Chicago Economist to have higher standards, particularly in the field of academic research.
1: https://en.wikipedia.org/wiki/Tabloid_journalism
That's a fun tangent, but doesn't answer the question. What in particular about this podcast has indicated that it is not "tabloid journalism"? You clearly recognize that tabloid journalism exists, so you know that this podcast could theoretically intend to be. But what, specifically, has indicated that it normally isn't?
The background of the people involved is irrelevant to the nature of the product. Someone who works on developing a cure for cancer by day can very well go home and build a fart app at night. There is no reason why you have to constrain yourself to just one thing.
Great comedy show
There's a lot of ground between "the high standard of peer review" and "tak[ing] published research at face value."
The former is impractical for a lot of formats (e.g. podcasts), but the latter is clearly harmful in the context of a popular podcast or some other medium that amplifies the dubious message.
What is the value in listening to an educational podcast if I cannot be certain that the material is factual?
What use is the value of reading a journal if I cannot be certain that the material is reliably peer reviewed?
I'm not sure why the podcast host is being held to a standard that should be applied to the subject-matter experts, who come into the picture well before he ever reaches out for an interview.
This is my main point. Seems like gripes about the quality of published research should be directed toward the publisher.
It is factual that Langer performed a study in which X was done, Y was measured and Z was concluded.
What is less clear is whether X was good experimental design, whether the measurements of Y were appropriate, relevant and correct, and thus whether or not Z can be concluded.
Certainty is too high of a bar.
There is a lot of space between that and the high standards of peer review. Some would call what they are doing spreading misinformation lol.
Warning sign right here:
> "I’ve got a model in my head of how the world works — a broad framework for making sense of the world around me. I’m sure you’ve got one, too."
Anyone with scientific training should know that you should have multiple working hypotheses; you shouldn't wed yourself to one preferred model (which leads to idée fixe, rejecting evidence that doesn't fit your model and even inventing evidence that does). People who fall into this trap start seeing their mental model in the world around them, thinking they're engaging in pattern recognition when they're really doing pattern projection. Their emotions, ego and pride all converge at this point - there are dozens of examples throughout scientific history of people falling into this trap, who end up shaking their fists at experimental data that upsets their apple cart.
It's not that hard to hold two conflicting models in your mind at the same time, or more, without ending up emotionally attached to any of them.
The "model" under discussion here is much more fundamental than I think you're taking into consideration.
Specifically, Langer has suggested that merely thinking about things can lead to physical changes in the world. This is at odds with not just some specific model of something, but with the broadest conception of post-Renaissance science.
> merely thinking about things can lead to physical changes in the world
Her claims are hardly akin to moving objects through telekinesis. Your brain is part of your nervous system, which has massive control over your body. What happens in your brain obviously leads to changes in your body. Beyond the obvious "I think about moving my arm and then my arm moves," there is a ton of hard research to back up the ability of thoughts and moods to influence the autonomic systems of the body. Why are the things Langer is suggesting fundamentally different?
Clearly more evidence is needed to prove many of her specific claims, and many of them may turn out to be noise, but the basic premise hardly seems worth dismissing. Descartes was 400 years ago.
They are indeed fascinating ideas. But they are so fascinating and so challenging that they deserve (much) better investigation than what Langer appears to have managed so far. Extraordinary claims require extraordinary evidence, and all that.
Your brain does burn more glucose when it's working hard and this should cause a noticeable if slight temperature increase in the surrounding environment - so a physical change does take place, just by thinking. Other than that, some extraordinary evidence would be required.
Could a person cause the physical change in the world of a slightly different configuration of myelin in their body, just with their thoughts?
These problems in popular science communication seem to come down to a disconnect between what gets audiences excited (unexpected results! surprising data!) and what the process of science actually looks like (you need to take extra care with unexpected results and surprising data). The process of doing good science and the process of getting audiences excited about new science seem to be fundamentally at odds. This isn't new - the challenges were the same when I was reading Scientific American as a kid 30 years ago.
It's tough, because communicating science in all its depth and uncertainty is tough. You want to communicate the beauty and excitement, but you don't want to mislead people, and the balance there just seems super hard to find.
Nit: It goes a bit deeper than that.
Note this talk by another pop-sci personality, Robert Sapolsky, where he talks about the limitations of western reductionism.
https://www.youtube.com/watch?v=_njf8jwEGRo
Yet his latest book on free will depended exclusively on a reductionist viewpoint.
While I don't know his motivations for that change, the fact that the paper he mentioned was so unpopular that I was one of only a handful who read it surely provided some incentive:
> "REDUCTIONISM AND VARIABILITY IN DATA: A META-ANALYSIS ROBERT SAPOLSKY and STEVEN BALT"
Or you can go back to math and look at the Brouwer–Hilbert controversy, which was purely about whether we should universally accept PEM (the principle of the excluded middle) a priori, which Church, Post, Gödel, and others proved wasn't safe for many problems.
Luckily ZFC helped with some of that, but Hilbert won that war of words, to the point where even suggesting a constructivist approach produces so much cognitive dissonance that it is often branded as heresy.
Fortunately, with the Curry–Howard–Lambek correspondence you can shift to types or categories with someone who understands them to avoid that land mine, but even on here people get frustrated when someone says something is 'undecidable' and then goes silent. It is not that labeling something 'undecidable' wins an argument, but that it is so painful to move on, because from Plato onward PEM has been part of a trinity of thought treated as sacrosanct.
To be clear, I am not a strict constructivist, but view this as horses for courses, with the reductionist view being insanely useful for many needs.
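For readers who want the PEM point above made concrete, here is a minimal sketch in Lean 4 (my own illustration, not something from the thread): in the constructive core you can only prove the double negation of excluded middle, while classically PEM is supplied as an axiom via Classical.em.

    -- Minimal sketch (Lean 4). In constructive logic, PEM (p ∨ ¬p) is not
    -- derivable; only its double negation is provable:
    theorem not_not_em (p : Prop) : ¬¬(p ∨ ¬p) :=
      fun h => h (Or.inr (fun hp => h (Or.inl hp)))

    -- Classically, PEM is available, but only as an axiom:
    example (p : Prop) : p ∨ ¬p := Classical.em p

In other words, the constructivist isn't claiming PEM is false, only that it shouldn't be assumed for free; that is exactly the "check your assumptions" stance described above.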
If you look at the link that jeffbee posted, the mention of the "garden of forking paths" is a way of walking on eggshells around the above.
https://stat.columbia.edu/~gelman/research/unpublished/heali...
Overfitting and underfitting are often explained as symptoms of the bias-variance trade-off, and even with PhDs it is hard to invoke indecomposability, decidability, or non-triviality; all of which should be easy to explain as cases where PEM doesn't hold for some reason.
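To ground the textbook sense in which overfitting and underfitting get framed as bias versus variance, here is a minimal sketch of my own (the target function, noise level, and polynomial degrees are illustrative choices, not anything from the thread): fit the same noisy samples with polynomials of increasing degree and compare training error against error on held-out points.

    # Minimal sketch: degree-1 underfits (high bias), degree-15 chases the
    # noise (high variance); a middle degree usually tracks the held-out
    # curve best. All choices here are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    x_train = np.sort(rng.uniform(0, 2 * np.pi, 20))
    y_train = np.sin(x_train) + rng.normal(0, 0.2, x_train.size)
    x_test = np.linspace(0, 2 * np.pi, 200)
    y_test = np.sin(x_test)

    for degree in (1, 3, 15):
        coeffs = np.polyfit(x_train, y_train, degree)   # least-squares fit
        train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")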
While mistaking the map for the territory is an easy way for the Freakonomics authors to make a living, it can be viewed as an unfortunate outcome due to the assumption of PEM and abuse of the principle of sufficient reason.
While there are most certainly other approaches, and obviously not everything can be proven or even found with the constructivist approach, whenever something is found that is surprising, there should be an attempt to not accept PEM before making a claim that something is not just epistemically possible but epistemically necessary.
To me this is just checking your assumptions; obviously the staunchly anti-constructivist viewpoint has members who are far smarter and more knowledgeable than I will ever be.
IMHO, for-profit or donation-based pop science will always look for the man-bites-dog stories... I do agree that sharing the beauty while avoiding misleading people is challenging and important.
But the false premise that you must either wholly accept or wholly reject constructive mathematics also blocks the ease with which you could show that the kind of farcical claims the authors make are false.
That simply doesn't exist today, when many-worlds ideas are popular in the press; but pointing out that many of those efforts appear to be attempts to maintain the illusion of Laplacian determinism, which we know has counterexamples, is so counter to the Platonic zeitgeist that most people bite their tongues when they should be providing counterexamples to help find a better theory.
I know that the true believers in any camp help drive things forward, and they need to be encouraged too.
But the point is that there is a real, deeper issue helping drive this particular communication problem, and something needs to change so that we can move forward with the majority of individuals having larger toolboxes rather than dogmatic schools of thought.
</rant>
Recent and (coincidentally) related:
The Mindlessness of Ostensibly Thoughtful Action (1978) [pdf] - https://news.ycombinator.com/item?id=41947985 - Oct 2024 (7 comments)
For those interested in hearing more about the criticism behind Freakonomics, there is an If Books Could Kill podcast episode going over it: https://open.spotify.com/episode/5wHpooGMRsSBrUHhQZbOZp
I don't agree with all of their criticism but it contains many valid points
Funny podcast in general and great if you are looking for one that is highly critical of whatever book they are reviewing.
One of the commenters on that article says that they are supposed to be entertaining. But I'll add one more: they're shitposters. It's entertaining to a certain audience who don't take them seriously, but it causes outrage in others who are trying to do proper work.
I wouldn't really call the podcast "shitpost-y". It's not hyper-serious, but it's not a big joke typically.
I appreciate this is intellectually lazy but is there a tl;dr? It’s a clickbait headline followed by a conversation that’s, at least initially, not good at conveying context.
They had a guest on who has a history of surprising results published from studies with flaws in methodology (although the author of the post is clearly a little biased). The complaint is about the podcast not being very critical of her, while framing the discussion with the question “How do you know whether you should believe surprising results?”.
The tl;dr
You should be skeptical of surprising results, and seek to disconfirm them rather than accepting and repeating them at face value.
not only is it intellectually lazy, it's just plain lazy. this is hackernews, surely you've heard of artificial intelligence?
Can anyone suggest a good [1] title that is more specific? I've taken a crack at it above but it seems lame.
A specific title is important because otherwise specific discussion (i.e. about what's different in this article) is preferable to generic discussion. We get plenty of the latter in any case, but it's best if it doesn't dominate the thread.
[1] in this context 'good' := accurate, neutral, and preferably using representative language from the article
Using a quote from the article:
> If the findings consistently surprise you, and they seriously challenge the beliefs of mainstream science, then maybe you should more seriously consider the possibility that these findings are wrong!
It seems that "You should seriously consider that surprising findings may be wrong" could be a good title. Maybe "results" instead of "findings," but the article doesn't actually use that word outside quotes.
It's more or less the conclusion of the critique that's mentioned in the current title.
A quote of the article quoting the interview:
"How do you know whether you should believe surprising [scientific] results?"
The article comes back around to that question a couple of times, in trying to explain why Levitt should not trust Langer's results: either there's not enough evidence, or the evidence doesn't support the conclusion.
It's a good quote but too generic for the title of this thread.
Economics is boring. What if we could instead apply simplistic econometric models to cute problems and explain how crime is actually caused by the adoption of the metric system?
That's basically the Freakonomics approach. It's bad science, but it appeals to "skeptics" and "contrarian" midwit liberals.
I mean, they were right to be fans of sulfur dioxide injection into the atmosphere. And so far ahead of their time!
Contrarianism for the sake of contrarianism is the ultimate bad smell in my book but it sure will get you on Rogan’s show.
Pandering to the over-estimated intelligence of contrarians is the problem. People who self-describe as contrarians, independents, or skeptics are consistently associated with lower education, lower information, and lower analytical skills than others without these self descriptions. This extends to politics where people who say they are moderates or independents also have these markers of lower information compared to people who will describe themselves as having a position, no matter what side that position is on. But in popular communications we have for some reason decided to exalt and praise the skeptic and the independent.
> People who self-describe as contrarians, independents, or skeptics are consistently associated with lower education, lower information, and lower analytical skills than others without these self descriptions.
This is an awfully large claim to make without any backing research. Some quick googling hasn't led me to anything backing this up, but I would be curious to read anything that says so.
> Pandering to the over-estimated intelligence of contrarians is the problem.
It makes them feel smart. Which I'm sure feels good, but often the signal takes precedence over being correct, which leads to the obvious issues. It's the same mechanism that makes conspiracy theories so appealing to another group of people - knowing how things really work is seductive.
HN has its own form that, if you've been around long enough, I'm sure you can identify too.
Yeah I'm going to need to see some citations for these claims.