Turn a single image into a navigable 3D Gaussian Splat with depth

(lab.revelium.studio)

79 points | by ytpete a day ago

40 comments

  • smusamashah 20 hours ago

    If this model is so good at estimating depth from a single image, shouldn't it also be able to take multiple images as input and estimate even better? But searching a bit, it looks like this is supposed to be single-image-to-3D only. I don't understand why it does not (cannot?) work with multiple images.

    • milleramp 19 hours ago

      It's using Apple's SHARP method, which is monocular. https://apple.github.io/ml-sharp/
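
      Monocular here just means it predicts a per-pixel depth map from one frame and then has to place everything in 3D from that. The geometric part is the standard pinhole back-projection that single-image depth methods rely on; a rough sketch (not SHARP's actual code, and the intrinsics are made up):

          import numpy as np

          def backproject(depth, fx, fy, cx, cy):
              """Lift an HxW depth map to an (H*W, 3) point cloud in camera space:
              X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth."""
              h, w = depth.shape
              u, v = np.meshgrid(np.arange(w), np.arange(h))
              x = (u - cx) * depth / fx
              y = (v - cy) * depth / fy
              return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

          # Toy example: a flat 640x480 scene 2 m away, with guessed intrinsics.
          depth = np.full((480, 640), 2.0)
          points = backproject(depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
          print(points.shape)  # (307200, 3)

      Something like those back-projected points is where splats could be anchored; the model's job is making the depth (and each splat's color, scale, and opacity) plausible from a single view.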

    • MillionOClock 19 hours ago

      I also feel like a heavily multimodal model could be very nice for this: allow multiple images from various angles, optionally some true depth data even if imperfect (like what a basic phone LiDAR would output), and why not even photos of the same place from other sources at other times (just to gather more data), and based on all that generate a 3D scene you can explore, using generative AI to fill in plausible content for whatever is missing.
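
      The "true depth data even if imperfect" part is fairly tractable even without a new model: a common trick is to align the network's relative depth to whatever sparse metric depth you have (phone LiDAR, etc.) by solving for a scale and shift. Rough numpy sketch with made-up data, the helper name is mine:

          import numpy as np

          def align_depth(pred, metric, valid):
              """Least-squares scale/shift mapping predicted (relative) depth
              onto sparse metric depth wherever the sensor mask is valid."""
              p, m = pred[valid], metric[valid]
              A = np.stack([p, np.ones_like(p)], axis=1)
              (s, t), *_ = np.linalg.lstsq(A, m, rcond=None)
              return s * pred + t

          # Toy example: dense prediction, sparse "LiDAR" covering a few pixels.
          pred = np.random.rand(4, 4)
          metric = 2.5 * pred + 0.3              # pretend truth is an affine warp of the prediction
          valid = np.zeros((4, 4), dtype=bool)
          valid.ravel()[[0, 3, 7, 9, 14]] = True
          aligned = align_depth(pred, metric, valid)

      Fusing multiple views and hallucinating the parts nothing ever saw is the harder bit, which is presumably where the generative side would come in.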

    • voodooEntity 20 hours ago

      If you have multiple images you could use photogrammetry.

      In the end, if you want to "fill in the blanks", an LLM will always "make up" stuff based on all of its training data.

      With a technology like photogrammetry you can get much better results, therefore if you have multiple angled images and don't really need to make up stuff, it's better to use that.

      • TeMPOraL 19 hours ago

        You could use both. Photogrammetry requires you to have a lot of additional information, and/or to make a lot of assumptions (e.g. about the camera, specific lens properties, medium properties, material composition and properties, etc. - and what reasonable ranges of values are in context), if you want it to work well for general cases, as otherwise the problem you're solving is underspecified. In practice, even enumerating those assumptions is a huge task, much less defending them. That's why photogrammetry applications tend to be used for solving very specific problems in select domains.

        ML models, on the other hand, are in a big way intuitive assumption machines. Through training, they learn what's likely and what's not, given both the input measurements and the state of the world. They bake in knowledge of what kinds of cameras exist, what kinds of measurements are being made, and what results make sense in the real world.

        In the past I'd say that for best results, we should combine the two approaches - have AI supply assumptions and estimates for otherwise explicitly formal, photogrammetric approach. Today, I'm no longer convinced it's the case - because relative to the fuzzy world modeling part, the actual math seems trivial and well within capabilities of ML models to do correctly. The last few years demonstrated that ML models are capable of internally modeling calculations and executing them, so I now feel it's more likely that a sufficiently trained model will just do photogrammetry calculations internally. See also: the Bitter Lesson.

      • esafak 20 hours ago

        Surely this is not an LLM?

    • shrinks99 20 hours ago

      I'm going to guess this is because the image-to-depth data, while good, is not perfectly accurate and therefore cannot be a shared ground truth between multiple images. At that point what you want is a more traditional structure-from-motion workflow, which already exists and does a decent job.

    • SequoiaHope 18 hours ago

      Multi-view approaches tend to have a very different pipeline.

    • echelon 20 hours ago

      Also, are we allowed to use this model? Apple had a very restrictive licence, IIRC?

  • brk 21 hours ago

    Tried a few random images and scenes; overall it wasn't that impressive. Maybe I'm using the wrong kinds of input images or something, but for the most part, once I moved more than a small amount, the rendering was mostly noise. To be fair, I didn't really expect much more.

    Neat demo, but feels like things need to come quite a ways to make this interesting.

  • mawadev 21 hours ago

    Stuck at 90% forever..

    • verytrivial 19 hours ago

      My understanding of JavaScript is cursory, but my reading of that webpage is that the UI is just smoke and mirrors, and it is just waiting for the whole thing to be processed in a single remote API call to some back-end system. If the back-end is down, it will always stop at 90%. The crawling progress bar is fake, with canned messages updated on Math.random() delays. Gives you something to look at, I guess, but seems a little misleading. Might be wrong ...

    • someguyiguess 20 hours ago

      Same for me as well. Probably ran out of API token credits when everyone on HN started loading it.

    • lastdong 19 hours ago

      I was wondering if it was running locally… 90% stuck

    • eps 20 hours ago

      Yup, same here.

    • bigtones 17 hours ago

      Same here. It just times out.

    • colordrops 18 hours ago

      Fails for me with:

          '_Function' object has no attribute '_snapshotted'

    • M4R5H4LL 20 hours ago

      Same for me

    • tripplyons 20 hours ago

      Same here

  • Johnny_Bonk 21 hours ago

    Cool, is there a way to upload several photos of a room from different angles and fuse them all together? Is there an API?

    • riotnrrd 21 hours ago

      That's a pretty well-solved problem at this point, if you want to do it yourself. You'll want some kind of NeRF tool and a way to calculate the camera poses of the photos you took. COLMAP is the tool most people use for the latter.

      I'd recommend trying Instant Neural Graphics Primitives (https://github.com/NVlabs/instant-ngp) from NVIDIA. It's a couple years old, so not state-of-the-art, but it runs on just about anything and is extremely fast.
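
      The COLMAP part is just a few CLI calls, roughly like this (paths are placeholders), and instant-ngp ships a script (scripts/colmap2nerf.py, IIRC) to turn the resulting poses into the transforms.json it expects:

          import subprocess

          def run(*args):
              subprocess.run(args, check=True)

          # Sparse reconstruction: features -> matches -> camera poses.
          run("colmap", "feature_extractor",
              "--database_path", "db.db", "--image_path", "images")
          run("colmap", "exhaustive_matcher", "--database_path", "db.db")
          run("colmap", "mapper",
              "--database_path", "db.db", "--image_path", "images",
              "--output_path", "sparse")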

      • Johnny_Bonk 19 hours ago

        Sweet, thank you for sharing. In my case, I need an API I can call because I only have a MacBook Air, which is essentially worthless for development lol. Also I am bootstrapping a startup and one of the features is essentially turning rooms into 3D space. I know there's Matterport 3D and some others, but I'm still looking for something simple where I could pay a couple cents per API call with x amount of images. Does that make sense?

    • carlosjobim 20 hours ago

      That is the entire science of photogrammetry, which has made tremendous progress in the past 10 years. There are many tools that will do it for you.

  • j2kun 20 hours ago

    Would be useful to have the website say something, _anything_ about what this is doing besides asking you to upload an image.

  • personjerry 20 hours ago

    This is just Apple's tool plus a splat viewing library? Perhaps disingenuous to call it "our web app".

    This is the heavy lifting: https://github.com/apple/ml-sharp

    Previous discussion: https://news.ycombinator.com/item?id=46284658

    • vunderba 20 hours ago

      Yeah I think you're right. It does call out (in really tiny footer text) that it's leveraging ml-sharp.

      It's pretty trivial to get running locally and generate the PLY files. Spark's a pretty good renderer once you've generated the Gaussian splats.

      https://github.com/sparkjsdev/spark
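
      If you want to sanity-check a generated file before pointing a viewer at it, the plyfile package is enough; the property names below are the usual 3DGS PLY convention, so the actual output may name things a bit differently:

          from plyfile import PlyData

          # Each vertex in a splat PLY is one Gaussian: position, opacity,
          # per-axis scale, rotation quaternion, and SH color coefficients.
          ply = PlyData.read("scene.ply")
          verts = ply["vertex"]
          print(len(verts.data), "gaussians")
          print([p.name for p in verts.properties])   # e.g. x, y, z, opacity, scale_0..2, rot_0..3, f_dc_0..2
          print(verts.data[:3][["x", "y", "z"]])      # first few positions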

  • nmstoker 19 hours ago

    Gets stuck at 84% each time - seems wasteful to let it get that far!

  • methuselah_in 18 hours ago

    Threw 2 images at it; it did nothing, just an error.

  • voodooEntity 20 hours ago

    It's funny, it always gets stuck at 90% until it fails with the error that another big image may be keeping the server busy.

    I mean, OK, it's a "demo", though the funny thing is that if you actually check the CLI and requests, you can clearly see that the 3 stages the image walks through during "processing" are fake. It's just doing 1 POST request in the backend that runs while the UI traverses the stages, and at 90% it stops until (in theory) the request ends.

    • fenwick67 19 hours ago

      When I saw the progress bar moving so smoothly I knew it was BS lol

    • hahahahhaah 20 hours ago

      Oh it's an IE6 progress bar then.

  • mightysashiman 11 hours ago

    Conveniently fails to start processing

  • xnx 21 hours ago
    • causal 21 hours ago

      What is Pinokio? The website just says "Your PC is the Cloud" - what?

      • xnx 21 hours ago

        That website tries too hard to write clever marketing copy and does a bad job of describing what it actually is.

        Better description: Pinokio is a free, open-source "AI browser" that simplifies installing, running, and managing complex, open-source AI applications and creative tools (like Stable Diffusion, ComfyUI) with one-click scripts, removing the need for coding or complex command-line setup.

        • causal 21 hours ago

          Huh. That doesn't sound like a browser at all. But okay, thanks for the summary!

          • Cieric 21 hours ago

            I think in this case browser is meant as a place to browse, e.g. the Google Play store is an app browser. I don't hear it used that way often anymore, but it at least sounds familiar.

      • shermantanktop 20 hours ago

        Not sure I would name a product after a legendary liar...

        But sure, click that download link, what's the worst that could happen? Get turned into a donkey and swallowed by a whale?