Cool, but if only Apple could get their video thumbnails on macOS to use the same frame as their video thumbnails on iOS, it would be a huge step forward. Should not even be hard, no AI needed. It's absolutely nuts that this has been broken for so many years.
Another crazy thing I’ve seen in the past: in Photos on Mac, if you have a video and use Set as Poster Frame (or whatever the command to set the thumbnail is called), it duplicates the whole video. So if you have a 20GB video in Photos and just want to change the thumbnail, it’s gonna cost you another 20GB of disk space.
It's fixed in macOS Sequoia and iOS 18.
This is a CLI Rust tool for that: https://github.com/yury/cidre/blob/main/cidre/examples/vn-th...
Maybe somebody will find it useful.
That’s based on similarity. While that is useful, this is based on aesthetic quality, which is quite different, and it’s ML-based.
This is quite cool. I believe they use the same system to choose photos for memories and wallpaper suggestions. And it works well - what they choose really looks great!
Would be interested in the tech behind that (surely machine learning will be involved somehow).
Does this do anything ffmpeg can’t do?
It seems that this solution selects the most "visually pleasing" frames, which ffmpeg on its own doesn't do. You could probably hack around it to arrive at a decent solution[1], but something like this would require an advanced filter or an external tool.
[1]: https://superuser.com/q/538112
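For reference, the hack in [1] boils down to ffmpeg's `thumbnail` filter (e.g. `ffmpeg -i in.mp4 -vf thumbnail=300 -frames:v 1 out.png`), which picks the most *representative* frame of a batch by comparing frame histograms, not the most aesthetically pleasing one. A toy Python sketch of that representativeness idea (the histograms below are made up for illustration; real ones would come from decoded frames):

```python
# Toy version of the idea behind ffmpeg's `thumbnail` filter:
# among a batch of frames, pick the one whose color histogram
# is closest to the batch-average histogram.

def pick_representative(histograms):
    """Return the index of the histogram closest to the batch mean."""
    n = len(histograms)
    bins = len(histograms[0])
    # Per-bin mean over all frames in the batch.
    mean = [sum(h[b] for h in histograms) / n for b in range(bins)]

    def dist(h):
        # Squared Euclidean distance to the mean histogram.
        return sum((v - m) ** 2 for v, m in zip(h, mean))

    return min(range(n), key=lambda i: dist(histograms[i]))

# Three fake 4-bin histograms: the middle frame sits between the
# two extremes, so it is the most "representative" of the batch.
frames = [
    [10, 0, 0, 0],   # near-black frame (e.g. mid-fade)
    [4, 3, 2, 1],    # typical scene content
    [0, 0, 0, 10],   # blown-out frame
]
print(pick_representative(frames))  # -> 1
```

Note this says nothing about whether the chosen frame *looks good*; it just avoids outliers like fades and flashes, which is exactly the gap an aesthetics model fills.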
Anyone know if there is something similar in the cloud? Like an API-based thing? Seems like a nice feature for a CMS.
Great gift for YouTubers
I wonder if this will work on a mac that can’t ever phone home to Apple to download models.
Apple has this horrible tendency these days of not actually shipping things inside the OS, instead downloading them at runtime. Examples include Rosetta, the command-line tools, local speech recognition models, dictionaries, and USB device support for new iOS devices.
I want to be able to use all of the features without iCloud and without HTTP requests to the mothership (or without internet at all).