there are a lot of contexts where i'd be pretty bummed to find out most of the content was written by a computer, or feel like i lost something tangible or meaningful because of that change
in the case of linkedin, i lose nothing. before AI the posts all seemed like they were written by weird robots anyway. it actually reassures me that a human didn't write some of the stuff i read, because i pray that no self-aware human would have written that thing into the internet.
I was asked by an employer to post about how excited I was for the new role, and they pointed me to some examples.
Not AI generated, but template text isn't exactly human generated either.
Yikes. That’s a pretty uncomfortable ask.
LLMs shine in places where low-effort slop was already acceptable. It's undeniable that the biggest impact the current AI wave is having on society is the amount of slop everyone has to contend with.
Hahaha that’s so true. Your employer wants you to act like a robot anyway so give them what they want.
It’s a sad race to the bottom but at least it’s kinda funny.
> Post Length Has Increased by 107% Since Chat-GPT Launched
Oh, joy.
LinkedIn lists their "highest performing posts".[1] #1 is "Marketers, stop making these 4 measurement mistakes". All ten of them read like they were generated by a program. Not even an LLM, something dumber like a template spam generator.
My own LinkedIn entry says "See my Github." Haven't updated LinkedIn in years. Hadn't looked in months. If anybody wants to talk to me, my email address is available.
[1] https://www.linkedin.com/business/marketing/blog/social-medi...
What was the baseline for AI generated posts prior to ChatGPT in 2022? Isn’t that a suspicious stat, that it only went up by 107%?
I have a hard time reading any article with AI generated images these days. Especially of robots. Please no.
images in articles were used to call out an important point or to fill space on printed pages, making life easier for the paginator (the webmaster equivalent of a layout machine operator in pre-desktop-publishing days) when filling pages with text columns.
using images in the header of online articles is literally a cargo cult: people do it just because they saw it in magazines growing up.
don't even get me started on the use of "eyes" (the larger text repeating a part of the article out of place) in digital media...
There's a literal built-in feature; no one is surprised it's AI.
Why is LinkedIn trying to drive their userbase away from engaging on the site?
Because LinkedIn's value proposition is to make you look good (superficially) to future employers.
Nobody wants to interact on LinkedIn; they want to look impressive. This accomplishes that.
Exactly. Those churning out such posts on LinkedIn would very much prefer if other people did not even carefully read the actual content, but rather simply assumed “Wow, this person is capable of generating a wall of text day in and day out, he/she must be a subject-matter expert and have great English skills”.
If I were a LinkedIn employee, just to sabotage the site I’d propose features like this, which may seem appealing to management but create an awful user experience for everyone.
Because games will keep them haha
“The release of the popular AI chatbot, ChatGPT at the end of 2022 likely led to a 189% surge in AI usage in LinkedIn posts.”
189%, eh? This stat makes me believe the entire article is made up.
LinkedIn is by far the most useless repository of written drivel in the history of humanity. It's pretty much baked in - all social media is performative, but for LinkedIn it's performative on a site specifically designed to connect people who want to sell their labor for money with people willing to pay for it.
The only good thing to come out of the LinkedIn feed was r/LinkedInLunatics.
Now I wonder: could you use LinkedIn posts to train an AI to identify content like that and use it as a negative filter for, well, absolutely anything? Any content that matches it should probably be flagged and ignored...
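A minimal sketch of what that negative filter could look like, assuming scikit-learn and two hypothetical corpora (one of LinkedIn-style posts, one of ordinary text); the file names, labels, and threshold are illustrative, not something anyone in the thread has built:

    # Minimal sketch (assumption, not a tested product): train a "LinkedIn-slop"
    # classifier and use it as a negative filter. Assumes two hypothetical files
    # with one post per line:
    #   linkedin_posts.txt -> examples of the content to filter out (label 1)
    #   normal_text.txt    -> examples of content to keep (label 0)
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def load_lines(path):
        with open(path, encoding="utf-8") as f:
            return [line.strip() for line in f if line.strip()]

    slop = load_lines("linkedin_posts.txt")
    normal = load_lines("normal_text.txt")

    # Bag-of-words TF-IDF plus logistic regression is enough for a rough style filter.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    model.fit(slop + normal, [1] * len(slop) + [0] * len(normal))

    def looks_like_slop(text, threshold=0.8):
        """Flag text whose style resembles the LinkedIn-style training examples."""
        return model.predict_proba([text])[0][1] >= threshold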
Sounds like a fun project I’d like to contribute to
it's a filter for finding people willing to be performative
Nothing on Reddit can be good.
Hm. I was anticipating AI slop to take over, but this actually has me thinking. If I were to write a long post for LinkedIn (which I would never do), I would probably ask ChatGPT to proofread it. I've never actively checked this, but I already have a sort-of mental filter for low-effort LinkedIn posts. For the posts I actually look at, I'm not sure I would mind if ChatGPT had a part in proofreading them. I wonder what the specificity of the AI detector is for detecting AI-written posts vis-a-vis AI-edited posts.
This comment is all organic, no AI ingredients :)
And the other half is written by insane people
this claims that over 5% of LinkedIn posts in 2018 (pre GPT-2) were AI-generated
I find that hard to believe
So-called content farming has been a thing since the early '10s. There are 15 years of history of (successful) companies selling filler auto-generated blog and post content.
Dead internet theory is coming true
... and 100 percent of long posts on LinkedIn are useless drivel from idiots engaged in clumsy self-promotion. So?
Or the way people talk on LinkedIn isn't sufficiently different than what an AI randomly spews out.
Trying it out, it's completely wrong, as we know all AI detectors are. This is just an advertisement for their poor AI detector, confusing people into believing this stuff works.
Shitpost in, shitpost out.
100% of Upwork proposals are too, removing any differentiation, which means I’ll probably never use the platform again to find people
These are likely from the 99%+ you wouldn't have wanted to find anyway. Part of the issue might be that so many of them are ESL and cast too wide a net. The more general the market you're going after, the less specific your proposals will be.
I mean if you’re out there reading long posts on LinkedIn, idk what you expect. It’s not like the human written ones were overflowing with knowledge.
...according to this company's AI detector, so not validated or validatable by anyone else.
Even ignoring the AI detection, their simple graph of average word count over time is incredibly suspicious. I can't think of any explanation for that other than rampant AI usage.
What about the algorithm changing over time to favor longer posts and content creators on the platform adapting to the change? I suspect you’d see the same pattern with the average length of popular non-music YouTube videos over time.
Good point, that's a good explanation. I think the timing with ChatGPT and how consistent it was for 5+ years before that make for very strong circumstantial evidence, but you're right that there is at least one other good possibility.
I feel like we must eventually reach an age where people have to pay (significant money) just to post.
Why are social networks allowing people to just broadcast to massive audiences for free?
I’m curious if content would be more satisfying if only the most motivated people were publishing content and not just spammers spewing AI drivel to grow their brand.
Because content used to be expensive. User-generated content was free and drove engagement, that is, people returning to the site.
Now that content generation is very cheap, this might change... See the marketing and moves by Twitter...
The social networks already sort of work like this though, right? The algorithms constrain your reach considerably. And then you can pay to boost your content. I live in a country where people still use FB heavily and you need to use it to get local info. Most items on my feed creep toward things I don't follow or from people I'm not friends with. However, my feed reverts to showing items I opted into if I "hide" anything I don't recognize and then refresh. That seems to reset the algorithm for my feed.
This only works if people have a limited amount of money. Since some people have pretty much infinite money, money is not the way to limit things.