> Produced by AI investor
I'll pass
good to know on the front page though, thanks
Yeah, even without taking that into account, the bias is obvious and extreme.
They bought a cool URL, gotta give 'em that.
looking up stateof.lol
A lot of fair criticisms of the splash page here. But, I'll say the slide deck has a nice comprehensive review of research headlines over the year, at least.
And their highlights conveniently ignored any and all negatives except for the one they could spin into a positive. It's almost like it's in their best interest to sell you a future so bright you gotta wear million-dollar shades.
I find the deck remarkably comprehensive and in-depth for what it is.
The negativity here is a bit shocking. I mean, we're talking about a deck!
The state of AI as an end user is that despite language being its primary tool, AI is a pretty terrible writer
Even the best models write like mediocre fiction writers at best
I love how delusional users get once they're normalized to it. Three years ago, the idea of an AI writing like a mediocre writer was world-changing.
It still is.
Quite a comprehensive report. But things look too rosy when the phenomenon is at the peak of hype cycle. So I was curious to see what the Predictions tab has to say. It has disappointed me with the "current-state" news again, not really any predictions.
Meta observation: this has to be the third most hated rally I have seen, only topped by Tesla and EVs in 2020 and crude oil in 2007
When one's job is potentially on the line one becomes a Luddite rather quickly.
lol this. No one wants to admit this when it comes to explaining why AI adoption is slow
Everyone wants to use AI themselves so they can work less. No one wants their boss to use AI to replace them.
Exactly. Which is why businesses want to get rid of middle management, since middle managers thrive on having HC
But then upper management becomes middle management. Or headcount is consolidated and still quantifies a mega-middle-manager's worth.
Gotta start at some tree depth..
Butlerian, surely.
What profession do you have that made you a Luddite based on the current state of LLMs and AI?
I'm an artist, programmer and musician, and am no closer to being a Luddite today than five years ago; not sure why others would be either. Anti-capitalist or anti-fascist I'd understand, considering the state of the world and the current direction.
The state of AI: Perplexity replaced Google.
Anecdotally, Perplexity scaled back their plans to sell ads because no one was interested.
Opening with "the most widely read and trusted analysis of key developments in AI."
Automatic and instant reject.
The headline is flawed; nothing that exists today is remotely close to "AI", unfortunately.
We are reading this report in the best tech circles of IIT Mumbai. E = MC2+ AI
Context: https://old.reddit.com/r/LinkedInLunatics/comments/13tbfqm/w...
It does hold for AI==0. I like it!
> E = MC2+ AI
this is so cringy
On the contrary; it is prominent to leverage the synergy of ontological orthogonalities to maximize a diverse approach.
This is peak pseudo-intellectualism.
da hype is real
This is fantastic.
Highly recommended reading for anyone here interested in the state of AI.
It covers multiple fronts, including research, applications, politics, and safety.
Thank you for sharing this on HN!
Worth keeping in mind this is made by an "AI investor", so obviously it comes with a lot of bias. It's also a relatively tiny survey; it seems only 1.2K people answered.
An example of the bias:
> shows that 95% of professionals now use AI at work or home
Obviously 95% of professionals don't use AI at work or home, and these results are heavily skewed.
And what does it mean to "use AI at home or work"? Firing off the occasional ChatGPT prompt? Using one of the many chatbots that are integrated everywhere?
There's a big difference between using it like Google and really enhancing your workflow with it by automating parts of your work.
The question just says "Do you use generative AI tools in your work?", which would probably include 100% of office workers today, directly or indirectly.
Maybe the 33 people who said "No" don't know the implementation details, so they assume it's not used anywhere in their daily professional life.
Okay I do toy with local image generation when I get extremely bored...
But other than that, my only AI use is when Google forces it on me. And then it gets things wrong... which is easily found out by comparing its output against the synopses of the links it gives...
I agree there is some implicit bias in this reporting, particularly because Nathan is a colleague (or at the very least a previous colleague) of Ian Hogarth, who is currently the chair of the UK AI Safety Institute, recently renamed to the "AI Security Institute".
So, I would have to take reporting on safety with a grain of salt. That said, I do think there are a lot of other interesting insights throughout the presentation.
Just a quick point here: 1.2K is highly statistically significant, even for a national-level poll/survey. The issue here is the potential for selection bias, which seems primarily to be driven by people who want to do the survey. Not sure how this ultimately skews the results, but 1.2K is easily an adequate sample size.
> not sure how this ultimately skews the results but 1.2K is easily an adequate sample size
I'd wager at least 90% of the survey respondents are Americans or live in the US, so that already skews the data a ton!
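For what it's worth, the "1.2K is an adequate sample size" point checks out under simple-random-sampling assumptions, even though it does nothing to fix selection bias. A minimal sketch of the standard margin-of-error calculation (function name and worst-case p = 0.5 are my assumptions, not numbers from the report):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of n respondents.
    Uses the normal approximation: z * sqrt(p * (1 - p) / n).
    p = 0.5 is the worst case (widest interval)."""
    return z * math.sqrt(p * (1 - p) / n)

# For the survey's ~1,200 respondents:
print(f"±{margin_of_error(1200) * 100:.1f} percentage points")
```

So a clean random sample of 1,200 gives roughly a ±3 point margin of error, which is why pollsters consider it adequate; the catch, as noted above, is that a self-selected respondent pool can be off by far more than that no matter the sample size.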
In my circles it is obviously 100%.
I mean, does a Google search count as "using AI"?