7 comments

  • avaer 32 minutes ago

    If you haven't tried AI modeling pipelines in the last year, you'll be surprised.

    The star of the show here is https://platform.worldlabs.ai/ (the author works there; I don't), which is really good. There's also Meshy.ai (which this repo doesn't seem to use?) for non-scene assets, and it's right up there in quality. There's texturing, auto-rigging, etc.

    The latest VLMs have true pixel-level image grounding, which means you can ask your AI for the pixel coordinates of things in an image, so you get 3D perception for edits and anything else you need.
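
    For context on what pixel grounding buys you, here's a minimal sketch. The JSON shape and field names are assumptions for illustration, not any particular model's API; the idea is just that once a model returns pixel-space boxes, turning them into normalized coordinates is the step that makes them usable for 3D edits:

```python
# Hypothetical sketch: assume a pixel-grounded VLM replies with JSON
# detections like {"label": "...", "box": [x1, y1, x2, y2]} in image
# pixel coordinates. We normalize box centers to [0, 1] so they can be
# unprojected into the 3D scene regardless of image resolution.
import json

def parse_detections(vlm_json: str, width: int, height: int):
    """Convert pixel-space boxes from a (hypothetical) VLM reply into
    normalized [0, 1] center points suitable for unprojection."""
    out = []
    for det in json.loads(vlm_json):
        x1, y1, x2, y2 = det["box"]
        cx = (x1 + x2) / 2 / width   # normalized horizontal center
        cy = (y1 + y2) / 2 / height  # normalized vertical center
        out.append({"label": det["label"], "center": (cx, cy)})
    return out

reply = '[{"label": "lamp", "box": [100, 50, 140, 130]}]'
print(parse_detections(reply, width=640, height=480))
# → [{'label': 'lamp', 'center': (0.1875, 0.1875)}]
```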

    I'm actually surprised I don't see this stuff used more; I think it's because most pipelines are built around the assumption that your 3D assets are files you get from an artist, not something you can conjure up in minutes from a script. The technology is moving faster than the industry can keep up with.

  • tombert 33 minutes ago

    This is cool as hell.

    I remember that about seventeen years ago, Microsoft had "PhotoSynth", which would build 3D environments from a bunch of images, and seventeen-year-old tombert thought it was one of the most amazing things ever done on a computer.

    Doing this with just one image makes this at least an order of magnitude cooler. I will be playing with this over the weekend.

    • taffydavid 22 minutes ago

      Photosynth was awesome, and I really miss it, but it was more of a panorama tool than a 3D environment builder.

      My Pixel 6 has a Photo Sphere mode in the camera app, which is much the same thing.

      • tombert 21 minutes ago

        You could actually get a rough 3D environment out of it as well. Their demo had a model of Piazza San Marco with a point cloud approximating the actual buildings and the like.

        • taffydavid 18 minutes ago

          Oh yes, I remember that now!

  • ZiiS 34 minutes ago

    So Blade Runner's Esper photo analysis went from ruining the suspension of disbelief to reality quicker than most magic.

    • taffydavid 19 minutes ago

      Well, in Blade Runner he looks around a corner and zooms in to microscopic detail on something not visible in the photo.

      But the Esper interface is all voice-activated and doesn't talk back, which I think is very prescient and more likely the way things will go. I'd much rather voice assistants just did the thing I want them to do rather than talk back to me.