I'm thankful that Meta still contributes to open source and shares models like this. I know there are several reasons to not like the company, but actions like this are much appreciated and benefit everyone.
Does everyone forget 2023, when someone leaked the LLaMA weights to 4chan? Then Meta started issuing takedowns on the leaks, trying to stop it.
Meta took the open path because their initial foray into AI was compromised so they have been doing their best to kneecap everyone else since then.
I like the result but let’s not pretend it’s for gracious intent.
There is so much malice in the world, let’s just pretend for once it is gracious intent. Feels better.
Not a fan of the company because of the social media, but I have to appreciate all the open sourcing. None of the other top labs release their models like Meta does.
They're not doing it out of the goodness of their heart, they're deploying a classic strategy known as "Commoditize Your Complement"[1], to ward off threats from OpenAI and Anthropic. It's only a happy accident that the little guy benefits in this instance.
Facebook is a deeply scummy company[2] and their stranglehold on online advertising spend (along with Google) allows them to pour enormous funds into side bets like this.
[1] https://gwern.net/complement
[2] https://en.wikipedia.org/wiki/Careless_People
Not even close to OK with Facebook, but none of the other companies do this. And Mark has been open about it; I remember him saying as much very openly in an interview. There's something oddly respectable about NOT sugar-coating it with good PR and marketing. Unlike OpenAI.
Well, when your incentives happen to align with those of a faceless mega-corporation, you gotta take what you can get.
You don't have to thank them for it, though.
Among the top 10 tech companies and beyond, they have the most successful open source program.
These projects come to my mind:
SAM (Segment Anything)
PyTorch
Llama
...
Open source datacenters and server blueprints.
The following, instead, comes from grok.com:
Meta's open-source hall of fame (Nov 2025):
- Llama family (2 → 3.3) – 2023-2025 · >500k total stars · powers ~80% of models on Hugging Face · single-handedly killed the closed frontier model monopoly
- PyTorch – 2017 · 85k+ stars · the #1 ML framework in research · TensorFlow is basically dead in academia now
- React + React Native – 2013/2015 · 230k + 120k stars · still the de-facto UI standard for web & mobile
- FAISS – 2017 · 32k stars · used literally everywhere (even inside OpenAI) · the vector similarity search library
- Segment Anything (SAM 1 & 2) – 2023-2024 · 55k stars · revolutionized image segmentation overnight
- Open Compute Project – 2011 · entire open-source datacenter designs (servers, racks, networking, power) · Google, Microsoft, Apple, and basically the whole hyperscaler industry build on OCP blueprints
- Zstandard (zstd) – 2016 · faster than gzip · now in the Linux kernel, NVIDIA drivers, Cloudflare, etc. · the new compression king
- Buck2 – 2023 · Rust build system, 3-5× faster than Buck1 · handles Meta's insane monorepo without dying
- Prophet – 2017 · 20k stars · go-to time-series forecasting library for business
- Hydra – 2020 · 9k stars · config management that saved the sanity of ML researchers
- Docusaurus – 2017 · 55k stars · powers docs for React, Jest, Babel, etc.
- Velox – 2022 · C++ query engine · backbone of next-gen Presto/Trino
- Sapling – 2023 · Git replacement that actually works at 10M+ file scale
Meta's GitHub org is now >3 million stars total — more than Google + Microsoft + Amazon combined.
Bottom line: if you're using modern AI in 2025, there's a ~90% chance you're running on something Meta open-sourced for free.
I don't think it's open source. It says "SAM license"; most likely source-available.
Agreed. The community orientation is great now. I had mixed feelings about them after finding and reporting a live vuln (medium-severity) back in 2005 or so.[1] I'm not really into social media but it does seem like they've changed their culture for the better.
[1] I didn't take them up on the offer to interview in the wake of that and so it will be forever known as "I've made a huge mistake."
First impressions are that this model is extremely good - the "zero-shot" text prompted detection is a huge step ahead of what we've seen before (both compared to older zero-shot detection models and to recent general purpose VLMs like Gemini and Qwen). With human supervision I think it's even at the point of being a useful teacher model.
I put together a YOLO tune for climbing hold detection a while back (trained on 10k labels) and this is 90% as good out of the box - just misses some foot chips and low contrast wood holds, and can't handle as many instances. It would've saved me a huge amount of manual annotation though.
As someone who works on a platform users have used to label 1B images, I'm bullish that SAM 3 can automate at least 90% of the work. Data prep is flipped to models being human-assisted instead of humans being model-assisted (see "autolabel" https://blog.roboflow.com/sam3/). I'm optimistic the majority of users can now start by deploying a model and then curating data, instead of the inverse.
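To make that "autolabel" flip concrete, here's a minimal sketch of the loop: a big promptable model proposes masks for unlabeled images, and a human only reviews the low-confidence ones. The segment_with_text function, prompts, and confidence threshold are placeholders, not the actual Roboflow implementation; wire in whichever SAM 3 interface you actually use.

    from pathlib import Path
    import json

    # Placeholder teacher call: image path + text prompt -> list of
    # {"polygon": [[x, y], ...], "score": float} proposals. Swap in a real SAM 3 call.
    def segment_with_text(image_path: str, prompt: str) -> list[dict]:
        raise NotImplementedError("plug in SAM 3 here")

    PROMPTS = ["climbing hold", "foot chip"]   # example concepts to auto-label
    REVIEW_BELOW = 0.5                         # low-confidence proposals go to a human

    def autolabel(image_dir: str, out_dir: str) -> None:
        out = Path(out_dir)
        out.mkdir(parents=True, exist_ok=True)
        for image_path in sorted(Path(image_dir).glob("*.jpg")):
            record = {"image": image_path.name, "annotations": [], "needs_review": []}
            for prompt in PROMPTS:
                for p in segment_with_text(str(image_path), prompt):
                    bucket = "annotations" if p["score"] >= REVIEW_BELOW else "needs_review"
                    record[bucket].append({"label": prompt, **p})
            # One JSON sidecar per image; convert to COCO/YOLO later to train a small model.
            (out / f"{image_path.stem}.json").write_text(json.dumps(record))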
I'm guessing you worked on the Stokt app or something similar! It's certainly become one of the best established apps in climbing.
The 3D mesh generator is really cool too: https://ai.meta.com/sam3d/ It's not perfect, but it seems to handle occlusion very well (e.g. a person in a chair can be separated into a person mesh and a chair mesh) and it's very fast.
It's very impressive. Do they let you export a 3D mesh, though? I was only able to export a video. Do you have to buy tokens or something to export?
I couldn't download it. The model appears to be comparable to Sparc3D, Hunyuan, etc., but without a download, who can say? It is much faster, though.
You can download it at https://github.com/facebookresearch/sam3. For 3D: https://github.com/facebookresearch/sam-3d-objects
I actually found the easiest way was to run it for free to see if it works for my use case of person deidentification https://chat.vlm.run/chat/63953adb-a89a-4c85-ae8f-2d501d30a4...
The model is open weights, so you can run it yourself.
The models it creates are Gaussian splats, so if you are looking for traditional meshes you'd need a tool that can create meshes from splats.
Are you sure about that? They say "full 3D shape geometry, texture, and layout" which doesn't preclude it being a splat but maybe they just use splats for visualization?
In their paper they mention using a "latent 3D grid" internally, which can be converted to a mesh or Gaussian splats using a decoder. The spatial layout of the points shown in the demo doesn't resemble a Gaussian splat either.
The article the grandparent linked says "mesh or splats" a bunch, and as you said, their examples wouldn't work if they were splats. I feel they are clearly illustrating its ability to export meshes.
Like the models before it, it struggles with my use case of tracing circuit-board features. It's great with a pony on the beach but really isn't made for more rote, industrial-type applications. With proper fine-tuning it would probably work much better, but I haven't tried that yet. There are good examples online, though.
I would try to take DINO v3 [1] for a spin, for that specific use-case. Or, don't laugh, the Nano Banana [2]
[1]: https://github.com/facebookresearch/dinov3 [2]: https://imgeditor.co/
Wow that sounds like a really interesting use-case for this. Can you link to some of those examples?
I don't have anything specific to link to, but you could try it yourself with line art. Try something like a mandala or a coloring-book type image. The model is trying to capture something that encompasses an entity; it isn't interested in the subfeatures of the thing. With a mandala it wants to segment the symbol in its entirety. It will segment some subfeatures, like a leaf-shaped piece, but it doesn't want to segment just the lines such that the result is a stencil.
I hope this makes sense and I'm using terms loosely. It is an amazing model but it doesn't work for my use case, that's all!
Actually, a combination of LLMs and VLMs could work in such cases. I just tested it on some circuit boards. https://chat.vlm.run/c/f0418b26-af20-4b3d-a873-ff954f5117af
Thanks for taking the time to try that out and sharing it! Our problem is with defects on the order of 50 to 100 microns on bare boards. Defects that only a trained tech with a microscope can see - even then it's very difficult.
Seems like an exciting problem. Hope you get tools in the future that can solve it well.
It looks like there could be a lot of potential here for learning/repair/debugging/reverse engineering. Really cool application of this stuff!
Generally this is called automated anomaly detection.
Have you found any models that work better for your use case?
To answer your question: no, but we haven't looked, because SAM is SOTA. We trained our own model with limited success (I'm no expert). We are pursuing a classical computer vision approach; at some level, segmenting a monochrome image resembles (or actually is) an old-fashioned flood fill, very generally. This fantastic SAM model is maybe not the right fit for our application.
Edit: answered the question
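For anyone curious what "classical" looks like here, a generic sketch of that flood-fill/connected-components idea in OpenCV is below; the threshold and blur settings are assumptions you'd tune per imaging setup, and this is not our actual pipeline.

    import cv2
    import numpy as np

    def segment_traces(path: str, thresh: int = 128) -> list[np.ndarray]:
        """Rough classical segmentation of a monochrome board image:
        threshold, then split the foreground into connected regions."""
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        gray = cv2.medianBlur(gray, 3)  # light denoise so tiny specks don't become regions
        _, binary = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
        num_labels, labels = cv2.connectedComponents(binary)
        return [(labels == i) for i in range(1, num_labels)]  # label 0 is background

    masks = segment_traces("board.png")
    print(f"found {len(masks)} connected regions")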
This is a "classic" machine vision task that has traditionally been solved with non-learning algorithms. (That in part enabled the large volume, zero defect productions in electronics we have today.) There are several off-the-shelf commercial MV tools for that.
Deep Learning-based methods will absolutely have a place in this in the future, but today's machines are usually classic methods. Advantages are that the hardware is much cheaper and requires less electric and thermal management. This changes these days with cheaper NPUs, but with machine lifetimes measured in decades, it will take a while.
My initial thought on hearing about this was it being used for learning. It would be cool to be able to talk to an LLM about how a circuit works, what the different components are, etc.
For background removal (at least my niche use case of background removal of kids drawings — https://breaka.club/blog/why-were-building-clubs-for-kids) I think birefnet v2 is still working slightly better.
SAM3 seems to trace the images less precisely: it'll discard bits where kids draw outside the lines, which is okay, but it also seems to struggle around sharp corners and includes a bit of the white page that I'd like cut out.
Of course, SAM3 is significantly more powerful in that it does much more than simply cut out images. It seems to be able to identify what these kids' drawings represent. That's very impressive; AI models are typically trained on photos and adult illustrations, and they struggle with children's drawings. So I could perhaps still use this for identifying content, giving kids more freedom to draw what they like, and then, unprompted, attach appropriate behavior to their drawings in-game.
I know it may not be what you are looking for, but most such models generate multi-scale image features through an image encoder, and those can be fine-tuned very easily for a particular task, like polygon prediction for your use case. I understand the main benefit of promptable models is to reduce/remove this kind of work in the first place, but it could be worthwhile and much more accurate if you have a specific high-load task!
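As a concrete illustration, a minimal PyTorch sketch: freeze whatever image encoder you have (the encoder and its feature shape here are assumptions) and train only a small convolutional head to predict a task-specific binary mask.

    import torch
    import torch.nn as nn

    class FrozenEncoderSegmenter(nn.Module):
        """Frozen backbone + tiny trainable head for a task-specific binary mask."""
        def __init__(self, encoder: nn.Module, feat_channels: int):
            super().__init__()
            self.encoder = encoder.eval()
            for p in self.encoder.parameters():
                p.requires_grad = False          # only the head is trained
            self.head = nn.Sequential(
                nn.Conv2d(feat_channels, 64, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1),             # 1-channel mask logits
            )

        def forward(self, images: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():
                feats = self.encoder(images)     # (B, C, H/16, W/16) assumed
            logits = self.head(feats)
            return nn.functional.interpolate(    # upsample back to input resolution
                logits, size=images.shape[-2:], mode="bilinear", align_corners=False)

    # Training skeleton: only head parameters get gradients.
    # model = FrozenEncoderSegmenter(my_backbone, feat_channels=768)
    # opt = torch.optim.AdamW(model.head.parameters(), lr=1e-4)
    # loss = nn.functional.binary_cross_entropy_with_logits(model(imgs), target_masks)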
Curious about background removal with BiRefNet. Would you consider it the best model currently available? What other options exist that are popular but not as good?
I'm far from an expert in this area. I've also tried Bria RMBG 1.4, Bria RMBG 2.0, older BiRefNet versions, and I think another whose name I forget. The fact that I'm removing backgrounds that are predominantly white (a sheet of paper) in the first place probably changes things significantly, so it's hard to extrapolate my results to general background removal.
BiRefNet 2 seems to do a much better job of correctly removing background inside the content's outline: think hands on hips, where the region is fully enclosed but you want it removed. It's not just that, though. Some other models will remove this, but they'll be overly aggressive and also remove white areas where kids haven't coloured in perfectly, or the intentionally blank whites of eyes, for example.
I'm putting these images in a game world once they're cut out, so if things are too transparent, they look very odd.
For my use case, segmentation is all about 3D segmentation of volumes in medical imaging. SAM 2 was tried, mostly using a 2D slice approach, but I don't think it was competitive with the current gold standard, nnU-Net [1]. [1] https://github.com/MIC-DKFZ/nnUNet
Same. My use case is ultrasound segmentation. These models struggle, understandably so, with medical imaging.
Agreed that U-Net has been the most-used model for medical imaging in the 10 years since the initial U-Net paper. I think a combination of LLMs + VLMs could be a way forward for medical imaging. I tried it out here and it works great. https://chat.vlm.run/c/e062aa6d-41bb-4fc2-b3e4-7e70b45562cf
SAM3 is cool - you can already do this more interactively on chat.vlm.run [1], and do much more. It's built on our new Orion [2] model; we've been able to integrate with SAM and several other computer-vision models in a truly composable manner. Video segmentation and tracking is also coming soon!
[1] https://chat.vlm.run
[2] https://vlm.run/orion
Wow this is actually pretty cool, I was able to segment out the people and dog in the same chat. https://chat.vlm.run/chat/cba92d77-36cf-4f7e-b5ea-b703e612ea...
Even works with long range shots. https://chat.vlm.run/chat/e8bd5a29-a789-40aa-ae31-a510dc6478...
Nice, that's pretty neat.
With an avg latency of 4 seconds, this still couldn't be used for real-time video, correct?
[Update: I should have mentioned I got the 4 seconds from the roboflow.com links in this thread]
Didn't see where you got those numbers, but surely that's just a problem of throwing more compute at it? From the blog post:
> This excellent performance comes with fast inference — SAM 3 runs in 30 milliseconds for a single image with more than 100 detected objects on an H200 GPU.
For the first SAM model, you needed to encode the input image, which took about 2 seconds (on a consumer GPU), but then any detection you did on the image was on the order of milliseconds. The blog post doesn't seem too clear on this, but I'm assuming the 30 ms is for the encoder plus 100 runs of the detector.
Even if it were 4 s, you can always parallelize across frames to do it "realtime"; just the latency of the output will be 4 s, provided you can get a cluster with 120 or 240 GPUs to process 4 s worth of frames in parallel. (If it's 30 ms per image, then you only need 2 GPUs to do 60 fps on a video stream.)
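A sketch of that pipelining, assuming per-frame inference is independent (no cross-frame tracking state): keep roughly latency × fps frames in flight so throughput matches the stream even though each result arrives a few seconds late. run_model is a placeholder for whatever per-frame inference call (a GPU worker, a remote API) you have.

    from concurrent.futures import ThreadPoolExecutor
    from collections import deque

    def run_model(frame):
        """Placeholder per-frame inference call (e.g. one GPU worker or a remote API)."""
        raise NotImplementedError

    def pipelined(frames, fps=60, per_frame_latency_s=4.0):
        workers = int(per_frame_latency_s * fps)   # frames in flight needed to keep up
        in_flight = deque()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for frame in frames:
                in_flight.append(pool.submit(run_model, frame))
                if len(in_flight) >= workers:
                    yield in_flight.popleft().result()   # arrives ~latency seconds late
            while in_flight:
                yield in_flight.popleft().result()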
We (Roboflow) have had early access to this model for the past few weeks. It's really, really good. This feels like a seminal moment for computer vision. I think there's a real possibility this launch goes down in history as "the GPT Moment" for vision. The two areas I think this model is going to be transformative in the immediate term are for rapid prototyping and distillation.
Two years ago we released autodistill[1], an open source framework that uses large foundation models to create training data for training small realtime models. I'm convinced the idea was right, but too early; there wasn't a big model good enough to be worth distilling from back then. SAM3 is finally that model (and will be available in Autodistill today).
We are also taking a big bet on SAM3 and have built it into Roboflow as an integral part of the entire build and deploy pipeline[2], including a brand new product called Rapid[3], which reimagines the computer vision pipeline in a SAM3 world. It feels really magical to go from an unlabeled video to a fine-tuned realtime segmentation model with minimal human intervention in just a few minutes (and we rushed the release of our new SOTA realtime segmentation model[4] last week because it's the perfect lightweight complement to the large & powerful SAM3).
We also have a playground[5] up where you can play with the model and compare it to other VLMs.
[1] https://github.com/autodistill/autodistill
[2] https://blog.roboflow.com/sam3/
[3] https://rapid.roboflow.com
[4] https://github.com/roboflow/rf-detr
[5] https://playground.roboflow.com
SAM3 is probably a great model to distill from when training smaller segmentation models, but isn't their DINOv2 a better example of a large foundation model to distill from for various computer vision tasks? I've seen it used as a starting point for models doing segmentation and depth estimation. Maybe there's a v3 coming soon?
https://dinov2.metademolab.com/
DINOv3 was released earlier this year: https://ai.meta.com/dinov3/
I'm not sure if the work they did with DINOv3 went into SAM3. I don't see any mention of it in the paper, though I just skimmed it.
We used DINOv2 as the backbone of our RF-DETR model, which is SOTA on realtime object detection and segmentation: https://github.com/roboflow/rf-detr
It makes a great target to distill SAM3 to.
I was trying to figure out from their examples, but how are you breaking up the different "things" that you can detect in the image? Are you just running it with each prompt individually?
The model supports batch inference, so all prompts are sent to the model, and we parse the results.
Thanks for the links! Can we run RF-DETR in the browser for background removal? This wasn't clear to me from the docs.
We have a JS SDK that supports RF-DETR: https://docs.roboflow.com/deploy/sdks/web-browser
This is an incredible model. But once again, we find an announcement for a new AI model with highly misleading graphs. That SA-Co Gold graph is particularly bad. Looks like I have another bad graph example for my introductory stats course...
Check out the new grok 4.1 graphs. They're even worse
> *Core contributor (Alphabetical, Equal Contribution), Intern, †Project leads, §Equal Contribution
I like seeing this
I wonder if we'll get an updated DeepSeek-OCR that incorporates this. Would be very cool!
For document layout! Did you have success understanding document layout using SAM?
OK, I tried converting a body to 3D, which it seems to do well, but it just gives me the image; I see no way to export or use it. I can rotate it, but that's it.
Is there some functionality I'm missing? I've tried Safari and Firefox.
If you open inspect element you can download the blob there. It is a .ply file and you can view it in any splat viewer.
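If you want to check what's actually in that blob (plain mesh vertices vs. Gaussian-splat attributes like scales/rotations/opacities), the plyfile package will dump its structure; small sketch below, with the filename being whatever you saved the blob as.

    from plyfile import PlyData  # pip install plyfile

    ply = PlyData.read("sam3d_export.ply")
    for element in ply.elements:
        # A splat export typically has only a "vertex" element with extra properties
        # (opacity, scale_*, rot_*); a mesh export also has a "face" element.
        print(element.name, [p.name for p in element.properties], len(element.data))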
I didn't look too closely, but it wouldn't surprise me if this was intentional. Many of these Meta/Facebook projects don't have open licenses, so they never graduate from web demos. Their voice cloning model was the same.
Is it possible to prompt this model with two or more texts for each image and get masks for each? Something like this: inputs = processor(images=images, text=["cat", "dog"], return_tensors="pt").to(device)?
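In case the processor only accepts a single text per call (I haven't verified either way), a simple fallback is one pass per prompt, collecting results keyed by concept. Sketch below, assuming processor and model are the SAM 3 checkpoint loaded the usual Hugging Face way; treat the exact argument names as an assumption.

    import torch

    def masks_per_concept(images, prompts, processor, model, device="cuda"):
        results = {}
        for prompt in prompts:
            inputs = processor(images=images, text=prompt, return_tensors="pt").to(device)
            with torch.no_grad():
                outputs = model(**inputs)
            results[prompt] = outputs   # post-process into masks per your pipeline
        return results

    # masks = masks_per_concept(images, ["cat", "dog"], processor, model)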
There has been slow progress in computer vision in the last ~5 years. We are still not close to human performance. This is in contrast to language understanding, which has been solved - LLMs understand text at a human level (even if they have other limitations). But vision isn't solved. Foundation models struggle to segment some objects, they don't generalize to domains such as scientific images, etc. I wonder what's missing from these models. We have enough data in videos. Is it compute? Is the task not informative enough? Do we need agency in 3D?
I'm not an expert in the field, but intuitively, from my own experience, I'd say what's missing is a world model. By trying to be more conscious of my own vision, I've started to notice how common it is that I fail to recognize a shape and then use additional knowledge, context, and extrapolation to deduce what it could be.
A few examples I encountered recently: if I take a picture of my living room, many random objects would be impossible for a stranger to identify but easy for the household members. Or when driving at night, say I see a big dark shape coming from the side of the road. If I'm a local, I'll know there are horses in that field and that it is fenced, or I might have read a warning sign earlier that lets me deduce what I'm seeing a few minutes later.
People are usually not conscious of this, but you can try to block out that additional information and process only what's really coming from your eyes, and realize how quickly it becomes insufficient.
> LLMs understand text on a human level (even if they have other limitations).
Limitations like understanding...
The problem is the data. LLM data is self-supervised. Vision data is very sparsely annotated in the real world. Going a step further, robotics data is much sparser. So getting these models to improve on this long-tail distribution will take time.
I can't wait until it is easy to rotoscope / greenscreen / mask this stuff out accessibly for videos. I had tried Runway ML, but it was... lacking, and the web UI for fixing parts of it had similar issues.
I'm curious how this works for hair and transparent/translucent things. Probably not the best, but it does not seem to be mentioned anywhere? Presumably the output is just a hard boundary or polygon rather than an alpha matte?
I tried it on transparent glass mugs, and it does pretty well. At least better than other available models: https://i.imgur.com/OBfx9JY.png
Curious if you find interesting results - https://playground.roboflow.com
I'm pretty sure DaVinci Resolve does this already; you can even track it. Idk if it's available in the free version.
The SAM models are great. I used the latest version when building VideoVanish ( https://github.com/calledit/VideoVanish ) a video-editing GUI for removing or making objects vanish from videos.
That used SAM 2, and in my experience SAM 2 was more or less perfect—I didn’t really see the need for a SAM 3. Maybe it could have been better at segmenting without input.
But the new text prompt input seems nice; much easier to automate stuff using text input.
Promising-looking tool. It would be useful to add a performance section to the README with some ballpark of what to expect, even if it is just a reference point from one GPU.
I've been considering building something similar but focused on static stuff like watermarks, so just single masks. From that DiffuEraser page, it seems performance is brutally slow, with less than 1 fps at 720p.
For watermarks you can use an ffmpeg blur, which will of course be super fast and looks good on certain kinds of content that are mostly uniform, like a sky, but terrible and very obvious for most backgrounds. I've gotten really good results with videos shot on static cameras by generating a single inpainted frame and then just using that as the "cover", cropped and blurred, over the watermark (or any object, really). Even better results come from completely stabilizing the video and balancing the color if it is changing slightly over time. This of course only works if nothing moving intersects with the removed target; if the camera is moving, then you need every frame inpainted.
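For the blur-only variant, the whole thing is one ffmpeg filtergraph: crop the watermark region, blur it, and overlay it back in place. Sketch below, driven from Python; the coordinates and blur radius are placeholders. The static "cover" trick is the same overlay with a pre-inpainted PNG as a second input instead of the blurred crop.

    import subprocess

    # Blur a 200x80 region with its top-left corner at (20, 20); numbers are placeholders.
    x, y, w, h = 20, 20, 200, 80
    filtergraph = (
        f"[0:v]crop={w}:{h}:{x}:{y},boxblur=10[patch];"
        f"[0:v][patch]overlay={x}:{y}"
    )
    subprocess.run([
        "ffmpeg", "-i", "in.mp4",
        "-filter_complex", filtergraph,
        "-c:a", "copy",          # leave the audio untouched
        "out.mp4",
    ], check=True)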
Thus far, all full-video inpainting like this has been so slow as to not be practically useful; casually removing watermarks from a video is measured in tens of minutes instead of seconds, where I would really want processing to be close to realtime. I've wondered what knobs can be turned, if any, to sacrifice quality in order to boost performance. My main ideas are to try to automate detecting and applying that single-frame technique to as much of the video as possible, then separately process all the other chunks with diffusion scaled to some really small size like 240p, and then use AI-based upscaling on those chunks, which seems to be fairly fast these days compared to diffusion.
These models have been super cool and it'd be nice if they made it into some editing program. Is there anything consumer focused that has this tech?
https://news.ycombinator.com/item?id=44736202
"Krita plugin Smart Segments lets you easily select objects using Meta’s Segment Anything Model (SAM v2). Just run the tool, and it automatically finds everything on the current layer. You can click or shift-click to choose one or more segments, and it converts them into a selection."
I think DaVinci Resolve probably has the best professional-grade usage of ML models today, but they're not "AI Features Galore" about it. They might mention it as "Paint Out Unwanted Objects" or similar. From the latest release (https://www.blackmagicdesign.com/products/davinciresolve/wha...), I could spot at least 3-4 features that use ML underneath but aren't highlighted as "AI" at all. Still very useful stuff.
Here are two plugins for After Effects - https://aescripts.com/mask-prompter/ https://aescripts.com/depth-scanner-lite/
ComfyUI addon for Krita is pretty close I think.
Couple of questions for people in-the-know:
* Does Adobe have their version of this for use within Photoshop, with all of the new AI features they're releasing? Or are they using this behind the scenes?
* If so, how does this compare?
* What's the best-in-class segmentation model on the market?
Does the license allow for commercial purposes?
Yes. It's a custom license with an Acceptable Use Policy that prohibits military use, plus export restrictions. The custom license permits commercial use.
If this is what's in the consumer space, I'd imagine the government has something much more advanced. It's probably a foregone conclusion that they are recording the entire country (maybe the world) and storing everyone's movements, or are getting close to it.
I just checked, and it seems commercial use is permissible. Companies like vlm.run and Roboflow are using it commercially, as shown by their comments below. So I guess it can be used for commercial purposes.
Yes. But also note that redistribution of SAM 3 requires using the same SAM 3 license downstream. So libraries that attempt to, e.g., relicense the model as AGPL are non-compliant.
Yes, the license allows you to grift for your “AI startup”
This model is incredibly impressive. Text is definitely the right modality, and now the ability to intertwine it with an LLM creates insane unlocks - my mind is already storming with ideas of projects that are now not only possible, but trivial.
Dang, that seems like it would work great for 3D game asset generation.
Curious if anyone has done anything meaningful with SAM2 and streaming. SAM3 has built-in streaming support which is very exciting.
I've seen versions where people use an in-memory FS to write frames of the stream with SAM2. Maybe that is good enough?
The native support for streaming in SAM3 is awesome. Especially since it should also remove some of the memory accumulation for long sequences.
I used SAM2 for tracking tumors in real-time MRI images. With the default SAM2 and loading images from the da, we could only process videos with 10^2 - 10^3 frames before running out of memory.
By developing/adapting a custom version (1) based on a modified implementation with (almost) stateless streaming (2), we were able to increase that to 10^5 frames. While this was enough for our purposes, I spent way too much time debugging/investigating tiny differences between SAM2 versions. So it's great that the canonical version now supports streaming as well.
(Side note: I also know of people using SAM2 for real-time ultrasound imaging.)
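Roughly, the bounded-memory streaming pattern looks like this (illustrative only, not the actual code in either repo): process frames as they arrive and keep a fixed-size window of past state instead of accumulating state for the whole video.

    from collections import deque
    import cv2

    def track_frame(frame, memory):
        """Placeholder for per-frame mask propagation that conditions on a small
        memory of recent (features, mask) state; swap in the real tracker."""
        raise NotImplementedError

    def stream_masks(video_path, memory_size=8):
        memory = deque(maxlen=memory_size)   # old entries fall off automatically
        cap = cv2.VideoCapture(video_path)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask, state = track_frame(frame, memory)
            memory.append(state)             # memory stays O(memory_size), not O(frames)
            yield mask
        cap.release()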
1 https://github.com/LMUK-RADONC-PHYS-RES/mrgrt-target-localiz...
2 https://github.com/Gy920/segment-anything-2-real-time
Probably still can't get past a Google Captcha when on a VPN. Do I click the square with the shoe of the person who's riding the motorcycle?
There are services you can get that will bypass those with a browser extension for you.
This thing rocks. I can imagine so many uses for it. I really like the 3D pose estimation especially.
This would be good for a video editor.
Can it detect the speed of a vehicle in any video, unsupervised?
Can anyone confirm whether this fits on a 3090? The files look to be about 3.5 GB, but I can't work out what the overall memory needs will be.
Yes, it should.
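A back-of-envelope, assuming the 3.5 GB checkpoint is roughly the fp16/bf16 weight footprint (all numbers below are guesses, not measurements):

    weights_gb = 3.5        # checkpoint size on disk ~ resident weights at fp16
    activations_gb = 2.0    # rough allowance for activations at ~1k resolution
    cuda_overhead_gb = 0.8  # CUDA context / allocator slack, roughly
    print(f"~{weights_gb + activations_gb + cuda_overhead_gb:.1f} GB vs 24 GB on a 3090")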
thanks!
Obligatory xkcd: https://xkcd.com/1425/
That comic doesn't appear to be dated but I'm sure it's been at least 5 years, so that checks out.
It's from 2014, over a decade old.
Relevant to that comic specifically: https://www.reddit.com/r/xkcd/comments/mi725t/yeardate_a_com...
A brief history:
SAM 1 - visual prompts to create pixel-perfect masks in an image. No video. No class names. No open vocabulary.
SAM 2 - visual prompting for tracking on images and video. No open vocab.
SAM 3 - open-vocabulary concept segmentation on images and video.
Roboflow has been long on zero / few shot concept segmentation. We've opened up a research preview exploring a SAM 3 native direction for creating your own model: https://rapid.roboflow.com/