> It’s exactly the sort of community which can potentially outperform whole large fields, because of the median researcher problem. On the other hand, that does not mean that those fields are going to recognize LessWrong as a thought-leader or whatever.
Oooooh, now I see what's going on here. The good ol' "the reason no one listens to us is that we're too smart for them".
Unfortunately, the fact that some smart people were ignored by their peers doesn't mean that "being ignored" suddenly becomes evidence for your thoughts being "beyond the median". It could also be that they're just not that groundbreaking.
(Disclaimer: I've only read a few LessWrong articles over years and I don't have that strong opinion of their community. Mostly basing this comment on just this post.)
> We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, we have fuzzier community norms explicitly intended to avoid various forms of predictable stupidity, and we definitely have our own internal meme population. It’s exactly the sort of community which can potentially outperform whole large fields, because of the median researcher problem.
Citation definitely needed.
Also, the problem of bad research isn’t an IQ problem. The corporate university model creates terrible incentives. Science has the problem too, but metrics gaming does less damage because it’s harder to get away with publishing actual wrong answers.
The reasons there’s more shitty research in “soft” fields are not a problem with the IQs of researchers but:
* more bikeshedding at all levels, from creators, peers, and the public. High-IQ people can be horrific bikeshedders, and tend to be just as oppressive in their mediocrity when they go into territory they know nothing about.
* lack of external options for hangers-on. Mediocre CS researchers can easily get jobs at FAANG and earn 5x more than actual good ones who stay in, so the non-serious people get pulled away. That doesn’t happen as much in the social sciences.
My highly opinionated take from working in a field with replication issues -- the people publishing unreproducible results simply want to either establish or reinforce high social standing.
They are highly intelligent and skilled in the sense that they can progress their career through complex political moves within funding agencies, journal editorial boards, conference organization, and university departments. Proper statistical analysis and experimental design are absent because it's a nuisance in the way of success, not due to lack of understanding or low intelligence. There's still room for rigorous scientists to succeed, but it's becoming untenable for many to stay.
However well intentioned, I can't figure out what this blog post is attempting to do.
On one hand, the author seems not up to date with how standards have changed at flagship journals post replication crisis. There are new editorial teams, new standards, and a new culture of avoiding past mistakes. Does this mean things are perfect? No, but that's any human endeavor.
The author also doesn't define "memeticity" but implies that it is bad. However, science is a slow-moving conversation in which outdated ideas are jettisoned and current ideas are explored and investigated, so some amount of "memeticity" is to be expected. It seems like the author's issue is that some papers aren't ambitious enough?
At the same time, the author is also saying that the median person in any endeavor is not as skilled as the most skilled people. This is true by definition. A small dedicated team of high-performing people, with the right leadership and clarity of vision, can indeed have an outsized impact; that's what most of us know, or at least suspect, in startup land.
> On one hand, the author seems not up to date with how standards have changed at flagship journals post replication crisis. There are new editorial teams, new standards, and a new culture of avoiding past mistakes. Does this mean things are perfect? No, but that's any human endeavor.
But are the replication rates actually improving? That's all that matters. There are no points for effort here.
I thought the second part of this comment was particularly insightful. https://www.lesswrong.com/posts/vZcXAc6txvJDanQ4F/the-median... It's pretty easy to punch down when you're writing a lot of loosely connected blog posts that don't require real evidence or falsifiable claims, and certainly nobody will ever go back to check.
I mean no offense to the Big Yud cult, but the whole community basically believes that they have 'way higher IQ' than people like Yann LeCun because he does not agree that a super-AGI will take over the galaxy in the next 20 years and that we should just focus on dying with dignity.
(The weird obsession with IQ is really grating to anyone who knows anything about humans)
Is "memetic" a word that is widely known, or is it like a shibboleth for a specific subgroup of very online libertarian types?
It's a word, like many, with a double life.
It's well known in a narrow field as an atomic unit of {environmental | cultural} information. It's not so popular now, but it saw serious debate from when Dawkins published The Selfish Gene through the following decades.
Outside of the academic context, it was picked up by narrow groups as mass forums such as Reddit grew in size and exchanging memes became a thing.
IMHO it's not widely known, middling at best, but not exclusive to one subgroup or libertarians.
There's a kind of bait-and-switch going on here.
> A small research community of unusually smart/competent/well-informed people can relatively-easily outperform a whole field, by having better internal memetic selection pressures.
Sure, that's true.
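For what it's worth, the abstract claim is easy to demonstrate with a toy model (entirely made-up parameters, purely to illustrate the mechanism, not a measurement of any real community): claims have independent "truth" and "catchiness", evaluators score them with a competence-weighted mix of the two plus noise, and the top-scoring quartile "goes memetic". A more competent filter selects for truer claims:

```python
import random

def noisy_vote(truth, catchiness, competence, rng):
    # An evaluator's judgment: a mix of the claim's actual truth and its
    # catchiness, plus noise; higher competence weights truth more heavily.
    return competence * truth + (1 - competence) * catchiness + rng.gauss(0, 0.3)

def surviving_truth(competence, rng, n_claims=5000):
    """Mean truth of the claims that 'go memetic' (top-quartile judged
    score) when filtered by evaluators of the given competence."""
    claims = [(rng.random(), rng.random()) for _ in range(n_claims)]  # (truth, catchiness)
    ranked = sorted(claims,
                    key=lambda c: noisy_vote(c[0], c[1], competence, rng),
                    reverse=True)
    top = ranked[: n_claims // 4]
    return sum(t for t, _ in top) / len(top)

rng = random.Random(1)
median_field = surviving_truth(competence=0.3, rng=rng)  # catchiness dominates selection
small_sharp = surviving_truth(competence=0.8, rng=rng)   # truth dominates selection
print(median_field, small_sharp)
```

This only shows the mechanism is coherent; it says nothing about whether any particular community actually has the higher competence parameter, which is exactly the disputed part.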
> In particular, LessWrong sure seems like such a community.
HOLD ON THERE. The first thing you said: easily true. The second thing: show your proof!
Here it is:
> We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, we have fuzzier community norms explicitly intended to avoid various forms of predictable stupidity, and we definitely have our own internal meme population.
Uh, let's rework that into a more plausible, pliable form - "steelman" if you will.
> We have a user base with probably-unusually-high intelligence, community norms which require basically everyone to be familiar with statistics and economics, and fuzzier community norms explicitly intended to avoid various forms of predictable stupidity.
Line by line:
> We have a user base with probably-unusually-high intelligence
What, for like... all fields? This needs to be compared with specific fields, and it needs to be shown, in a less hand-wavy way, how LessWrong scores meaningfully better than academic researchers with more subject-matter expertise.
> community norms which require basically everyone to be familiar with statistics and economics
The problem is right there in the text you cited: memeticity [on lesswrong] is mostly determined, not by the most competent researchers, but instead by roughly-median researchers. Also "familiar" != learned well.
> fuzzier community norms explicitly intended to avoid various forms of predictable stupidity
I award zero points for this, especially when compared to an academic community with similar (plausibly better) training and a better understanding of the pitfalls of data collection in its field.
Is your steelman intentionally just removing "we definitely have our own internal meme population"? Because that part is true for every community I have ever encountered, including such groups as "me and a friend". I don't see how removing it makes the statement stronger or more plausible.
I didn't think it made the case any stronger, but reasonable people can disagree on this, and in retrospect I should've included it. There is probably more backstory & context to meme population than I realized.
I am gobsmacked by the unintentional irony in this post. The claim is that LessWrong is full of smarty-pants researchers who can outperform entire fields because they value statistics and scientific rigor. Yet the post itself is ludicrously unscientific!
- The author acknowledges that they don't provide good evidence for their claim, relying instead on intuition. I mean... come on, man. I don't think the claim is actually true!
- The idea that median researchers are not intelligent enough to understand p-hacking is just absurd: it is not a sophisticated topic. I imagine the median researcher in fact has a robust and cynical understanding of p-hacking because they can do it to their own data. Such a researcher may be cowardly and dishonest, but their intelligence is not the problem. This is the crux of my disagreement with the post: the replication crisis is a social problem, not a cognitive problem.
- They badly misstated the results of that IQ study, ignoring outliers like philosophy and economics which have poor reproducibility. The correlation between IQ and major is much better understood as indicating which undergrads will go on to academia, versus fields like biology and psychology where most students plan to enter the workforce after college. Replicability is incidental. (They also ignored that the study itself is probably not replicable! I believe the root cause of the replication crisis is motivated reasoning and laziness, both of which are certainly on display here.)
In general this post is the combination of undeserved arrogance and jaw-dropping ignorance that I expect from LessWrong. It is a community for narcissistic blowhards.
> The idea that median researchers are not intelligent enough to understand p-hacking is just absurd: it is not a sophisticated topic. I imagine the median researcher in fact has a robust and cynical understanding of p-hacking because they can do it to their own data. Such a researcher may be cowardly and dishonest, but their intelligence is not the problem. This is the crux of my disagreement with the post: the replication crisis is a social problem, not a cognitive problem.
That doesn't seem true: see Figure 1 of https://www.sciencedirect.com/science/article/pii/S105353570... and the original results associated with the Linda problem.
Statistics is difficult and unintuitive.
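That said, the mechanics of p-hacking itself are simple enough to show in a few lines. A minimal sketch (pure-noise data, stdlib only, a z-test with known unit variance so no stats library is needed): run twenty independent significance tests on a null effect and report whether any one comes up "significant". In theory, about 1 - 0.95**20 ≈ 64% of such studies find a publishable "result" in noise:

```python
import math
import random

def z_test_p(a, b):
    """Two-sided p-value for a z-test comparing two sample means,
    assuming unit variance (true here, since we draw from N(0, 1))."""
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # Phi(|z|)
    return 2.0 * (1.0 - phi)

def experiment(rng, n_outcomes=20, n_per_group=30):
    """One 'study' on pure noise: test n_outcomes independent measures
    and report whether ANY of them reaches p < 0.05."""
    for _ in range(n_outcomes):
        a = [rng.gauss(0, 1) for _ in range(n_per_group)]
        b = [rng.gauss(0, 1) for _ in range(n_per_group)]
        if z_test_p(a, b) < 0.05:
            return True  # stop at the first 'significant' outcome and publish
    return False

rng = random.Random(0)
trials = 2000
false_positive_rate = sum(experiment(rng) for _ in range(trials)) / trials
print(f"{false_positive_rate:.2f}")
```

The trap isn't that any single step is hard to understand; it's that the temptation to test "just one more outcome" is invisible in the final paper, which is why this reads as an incentive problem rather than an intelligence problem.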
It's 100% pure navel gazing. I didn't realize it was possible to generate that level of purity, but those geniuses at LessWrong managed to overcome the azeotrope maximum!
I don't understand how Less Wrong can be a small researcher community of any kind whatsoever. Either the community members are actually researchers, which means they're part of the real research communities, or they're possibly quite intelligent, well-read people who speculate about causal hypotheses related to publicly known, current-day science but don't conduct any research.
“Defending that claim isn’t really the main focus of this post, but a couple pieces of legible evidence which are weakly in favor”
Putting forth an idea to an audience without rigorous proof does have intrinsic value.
Further, if said idea implies that the audience is exceptionally good, you don't really need the proof, do you? The value for the audience is obvious :)
Yes, but it seemed a little ironic in context. But now you mention it, the piece does stroke its core audience nicely ;-)