I just wrote a reply to a comment talking about the AI tells this writing has, but it got flagged so my comment disappeared when I hit post. I'll rephrase out of spite:
My first thought upon reading this was that an LLM had been instructed to add a pithy meme joke to each paragraph. They don't make sense in context, and while some terminally online people do speak in memes, those people aren't quoting doge in 2025.
There's also a sense of incoherence in the whole piece. For instance, this section:
"- after: 22 million videos + 1 million images (now we're talking)
they basically hoovered up everything: something-something v2, kinetics, howto100m, and a billion youtube videos"
Was it a billion vids or 22m? It turns out the latter sentence is just rephrasing the list of sources in a cool casual way, and the last one is called YT-Temporal-1B. That's a billion frames of video, not a billion videos.
This was a bit hard to read. It would benefit from a narrative structure and a clearer explanation of the concepts.
Very intentional. Their response would be: “if you need narrative structure and clear explanation of concepts, yngmi”.
This article contains so many falsehoods and history rewrites that it's pretty painful to read.
IMO, VideoMimic is a better proof-of-concept
https://www.videomimic.net/
https://www.videomimic.net/page1.html
Looks like it was trained on Shaolin Drunken Fist videos. Does it look drunk because of the videos, or because of discrepancies between videos and its failure to account for gravity and physics in general?
Someone watched 'Devs'?
If you haven't - highly recommended.
Do you have a link or a less generic search term?
It's a TV show made by Alex Garland: https://m.imdb.com/title/tt8134186/ It's pretty good sci-fi, IMHO.
Bro, ChatGPT exists.
Do we have a “let me ChatGPT that for you..” site yet?
Friendly unit conversion man at your service: 114 years.
How much is that in football fields?
If you accept 30 years as the average lifespan of an NFL stadium, 3.8.
So half a Zoom meeting... or a third of a Teams one.
I genuinely wish there was a cost estimation feature built into them. It wouldn't even have to be remotely close to the true cost; if it's anything like the meetings I attend, there will be enough people and it will go on long enough to make up for it.
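A minimal sketch of the feature I have in mind, with made-up numbers (the function and rates here are hypothetical, not any real Zoom/Teams API):

```python
# Hypothetical meeting-cost ticker: people x blended rate x duration.
# All numbers are invented; the point is the running total, not precision.

def meeting_cost(attendees: int, hourly_rate: float, minutes: float) -> float:
    """Rough burn rate of a meeting in dollars."""
    return attendees * hourly_rate * (minutes / 60.0)

# e.g. 8 attendees at a blended $90/h, drifting to the full hour:
print(f"${meeting_cost(8, 90.0, 60):,.2f}")  # -> $720.00
```

Even a deliberately wrong constant for the rate would make the point, as long as the counter stays on screen.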
I worked as a consultant and started billing my normal hourly rate for meetings. You'd be surprised how fast the company's desire for my participation in them decreased.
Why would you do anything but that? If you want to just chat with me forever, the rate is the rate.
Does YouTube allow massive scraping like this in their ToS?
My "lawyer" (gpt4o) claims that since YouTube is merely a non-exclusive licensee of the user content upload to their service, even if they have such restrictions in their ToS (they do), they likely would not hold up in court, citing [0]. Something about that non-exclusivity meaning they cannot constrain the copyright further on their own terms. Which I guess makes sense?
And since scraping of publicly available data is not illegal (in the US, according to the aforementioned "lawyer"), it seems like it's okay?
Not legal advice.
[0] https://www.skadden.com/insights/publications/2024/05/distri...
I don't think they can legally prevent it
They don't, and neither do I allow my site - whose content I found on Gemini - to be scraped.
Probably not.
Who cares at this point? No one is stopping ML datasets from being primarily pirated. The current power is effectively dismantling copyright for AI-related work.
> The current power is effectively dismantling copyright for AI related work.
Out of the loop apparently, could you elaborate? By "the current power" I take you mean the current US administration?
Trump fired the head of the copyright office:
https://www.heise.de/en/news/After-criticism-of-AI-training-...
The "Big Beautiful Bill" contains a clause that prohibits state "AI" legislation.
Trump has a "Crypto and AI czar" who is very active in promoting "AI" on his YouTube propaganda outlet. The same czar also promoted, pre-election of course, accelerated peace with Russia and then stopped talking about the subject altogether.
Oh wow okay, genuinely missed these. Thanks.
> Who cares at this point
Anyone who has a shred of integrity. I'm not a fan of overreaching copyright laws, but they've been strictly enforced for years now. Decades, even. They've ruined many lives, like how they killed Aaron Swartz.
But now, suddenly, violating copyright is totally okay and carries no consequences whatsoever because the billionaires decided that's how they can get richer now?
If you want to even try to pretend that you don't live in a plutocracy and that the rule of law matters at all, these developments should concern you.
What ToS
https://www.youtube.com/static?template=terms ?
I wonder how much language this model understands. If we pan across text, will it fill in a sensible next word? How good would it be?
This is interesting for generalized problems ("make me a sandwich") but not useful for most real world functions ("perform x within y space at z cost/speed"). I think the number of people on the humanoid bandwagon trying to implement generalized applications is staggering right now. The physics tells you they will never be as fast as purpose-built devices, nor as small, nor as cheap. That's not to say there's zero value there, but really we're - uh - grasping at straws...
Very good point! This area faces a similar misalignment of goals in that it tries to be the kind of generic one-size-fits-all solution that is rampant among today's LLMs.
We made a sandwich, but it cost 10x more than a human would charge and was slower. Slower might gradually become faster and more efficient, but by the time you get really good at it, the skill simply isn't transferable unless the model can genuinely make the leap into other domains the way humans naturally do.
I'm afraid this is where the barrier between general intelligence and human intelligence lies. With enough of these geospatial motor-skill databases, we might get something that mimics humans very well but still runs into problems at the edges, and this last-mile problem really is a hindrance in so many domains where we come close but never finish.
I wonder if this will change with some shift in computing, as well as in how we interface with digital systems (without a mouse or keyboard); maybe that could close the 'last mile' gap.
Note that the username here is a Korean derogatory term for Chinese people.
Well, there's a middle ground, kinda: using more specialized hardware (e.g. cobots) but deploying state-of-the-art physical AI (ML/computer vision) on it. We're building one such startup at ko-br (https://ko-br.com/) :))
Quite a few startups in your space. Many deployed with customers. Good luck finding a USP!
Analogy: a CPU is more expensive, more complicated, and more energy-demanding than custom-made circuitry, in most cases.
I wonder if a generalized machine would have an advantage from scale, and then putting all the specialized stuff into software. We have seen this play out before.
Solved??? Where?
https://news.ycombinator.com/item?id=44073183
Extremely oversold article.
> the core insight: predict in representation space, not pixels
We've been doing this since 2014? Not only that, others have been doing it at a similar scale, e.g. Nvidia's world foundation models (though those are generative).
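For anyone unfamiliar, here's a toy sketch of what "predict in representation space" means in practice (every module name, size, and the frozen-target shortcut below is my own invention, not the paper's architecture):

```python
import torch
import torch.nn as nn

# Toy latent-prediction setup: the loss lives in embedding space, not pixel space.
enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))  # online encoder
tgt = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256))  # target encoder
tgt.load_state_dict(enc.state_dict())
for p in tgt.parameters():
    p.requires_grad_(False)  # in real systems this is typically an EMA copy

pred = nn.Linear(256, 256)  # predicts the future frame's *representation*

ctx_frames = torch.randn(8, 3, 64, 64)  # stand-in for context frames
fut_frames = torch.randn(8, 3, 64, 64)  # stand-in for future frames

z_pred = pred(enc(ctx_frames))
with torch.no_grad():
    z_true = tgt(fut_frames)

loss = nn.functional.mse_loss(z_pred, z_true)  # no pixel reconstruction anywhere
loss.backward()
```

The idea isn't new in kind; the open question is scale and execution.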
> zero-shot generalization (aka the money shot)
This is easily beaten by flow-matching imitation learning models like what Pi has.
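For context, the flow-matching recipe behind those models, reduced to a toy; every dimension and network here is schematic, not Pi's actual stack:

```python
import torch
import torch.nn as nn

# Toy conditional flow matching for imitation: learn a velocity field that
# carries Gaussian noise to expert actions, conditioned on the observation.
net = nn.Sequential(nn.Linear(32 + 8 + 1, 64), nn.ReLU(), nn.Linear(64, 8))

obs = torch.randn(16, 32)     # observations from demonstrations (made up)
actions = torch.randn(16, 8)  # paired expert actions (made up)

t = torch.rand(16, 1)                # random time along the path
noise = torch.randn_like(actions)    # x_0 ~ N(0, I)
x_t = (1 - t) * noise + t * actions  # straight-line interpolant
v_target = actions - noise           # constant velocity of that path

v_pred = net(torch.cat([obs, x_t, t], dim=-1))
loss = nn.functional.mse_loss(v_pred, v_target)
loss.backward()
# Sampling: start from noise and integrate v_pred from t=0 to t=1.
```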
> accidentally solved robotics
They're doing 65% success on very simple tasks.
The research is good. This article however misses a lot of other work in the literature. I would recommend you don't read it as an authoritative source.
I have never seen "ngmi" before, I wonder in which subculture it is common
It's the second most common four-letter acronym in crypto hype threads right after hfsp.
Saw it a lot in Ivy League hacker subculture 15 years ago when I was there.
Not sure, but my college friend group uses it occasionally.
> gen z douchebag
Hello there! As a fellow gen-z douchebag, the article looks authentic, albeit a bit slim on Discord screencaps. Will be fun(?) to be proven wrong though.