127 comments

  • crazygringo 27 minutes ago

    I just want to say this isn't just amazing -- it's my new favorite map of NYC.

    It's genuinely astonishing how much clearer this is than a traditional satellite map -- how it has just the right amount of complexity. I'm looking at areas I've spent a lot of time in, and getting an even better conceptual understanding of the physical layout than I've ever been able to get from satellite (technically airplane) images. This hits the perfect "sweet spot" of detail with clear "cartoon" coloring.

    I see a lot of criticism here that this isn't "pixel art", so maybe there's some better term to use. I don't know what to call this precise style -- it's almost pixel art without the pixels? -- but I love it. Serious congratulations.

    • cannoneyed 3 minutes ago

      Author here, and to reiterate another reply - all of the critique of "pixel art" is completely fair. Aesthetically and philosophically, what AI does for "pixel art" is very off. And once you see the AI you can't really unsee it.

      But I didn't want to call it a "SimCity" map, though that's really the vibe/inspiration I wanted to capture, because that implies a lot of other things, so I used the term "pixel art" even though I figured it might get a lot of (valid) pushback...

      In general, labels and genres are really hard - "techno" to a deep head in Berlin is very different than "techno" to my cousins. This issue has always been fraught, because context and meaning and technique are all tangled up in these labels which are so important to some but so easily ignored by others. And those questions are even harder in the age of AI where the machine just gobbles up everything.

      But regardless, it was a fun project! And to me at least it's better to just make cool ambitious things in good faith and understand that art by definition is meaningful and therefore makes people feel things from love to anger to disgust to fascination.

  • ProjectBarks 3 hours ago

    I was extremely excited until I looked closer and realized how many of these look like... well, AI. Still, the article is such a good read, and I'd recommend people check it out.

    Feels like something is missing... maybe just a pixelation effect over the actual result? Seems like a lot of the images also lack continuity (something they go over in the article)

    Overall, such a cool usage of AI that blends Art and AI well.

    • nonethewiser 2 hours ago

      Basically, it's not pixel art at all.

      It's very cool and I don't mind the use of AI at all but I think calling it pixel art is just very misleading. It's closer to a filter but not quite that either.

      • jasondigitized 7 minutes ago

        It's pixel art, just not the types of pixels most people want in pixel art.

      • QuantumNomad_ 2 hours ago

        Yup, not pixel art. I wonder if people are not zooming in on it properly? If you zoom in max you see how much strangeness there is.

        It kind of looks like a Google Sketchup render that someone then went and used the Photoshop Clone and Patch tools on in arbitrary ways.

        Doesn’t really look anything like pixel art at all. Because it isn’t.

    • cannoneyed 3 hours ago

      Yeah it leaves a lot to be desired. Once you see the AI it's hard to unsee. I actually had a few other generation styles, more 8-bit like, that probably would have lent themselves better to actual pixel-art processing, but I opted to use this fine-tune and in for a penny, in for a pound, so to speak...

  • potatowaffle 3 hours ago

    > What’s possible now that was impossible before?

    > I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. ... This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. Of course, there’s a case to be made that the very act of doing this manual work is what refines your instincts - but I think it’s more of a “Just So” story than anything else. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.

    Great insights here, thanks for sharing. That opening question really clicked for me.

    • anonymous908213 an hour ago

      That quote seriously rubs me the wrong way. "Dragging little boxes around" in a DAW is creative; it constitutes the entire process of composing electronic music. You are notating what notes to play, when and for how long they play, what instrument plays them, and any modifications to the default sound of that instrument.

      Is writing sheet music tedious? Sure, it can be, when the speed of notating by hand can't keep up with the speed your brain is thinking through ideas. But being tedious is not mutually exclusive with being creative, despite the attempt to explicitly contrast them as such, and the solution to notation being tedious is not "randomly generate a bunch of notes and instruments that have little relation to the ones you're thinking of".

      This excerpt supposes that generative AI lets you automate the tedious part while keeping "the quality of your decisions", but it doesn't keep your decisions. It generates its own "decisions" from a broad, high-level prompt, and your role is reduced to merely deciding whether or not you like the generated content, which is not creativity.

  • tptacek 5 hours ago

    So, wait: this is just based on taking the 40 best/most consistent Nano Banana outputs for a prompt to do pixel-art versions of isometric map tiles? That's all it takes to finetune Qwen to reliably generate tiles in exactly the same style?

    Also, does someone have an intuition for how the "masking" process worked here to generate seamless tiles? I sort of grok it but not totally.

    • NAR8789 4 hours ago

      I think the core idea in "masking" is to provide adjacent pixel art tiles as part of the input when rendering a new tile from photo reference. So part of the input is literal boundary conditions on the output for the new tile.

      Reference image from the article: https://cannoneyed.com/img/projects/isometric-nyc/training_d...

      You have to zoom in, but here the inputs on the left are mixed pixel art / photo textures. The outputs on the right are seamless pixel art.

      Later on he talks about 2x2 squares of four tiles each as input and having trouble automating input selection to avoid seams. So with his 512x512 tiles, he's actually sending in 1024x1024 inputs. You can avoid seams if every new tile can "see" all its already-generated neighbors.

      You get a seam if you generate a new tile next to an old tile but that old tile is not an input to the infill algorithm. The new tile can't see that boundary, and the style will probably not match.
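      A minimal sketch of that boundary-conditioning idea, in Python with numpy. The 512px tile size and 2x2 layout come from the thread; the function name, the quadrant convention, and the absence of an actual model call are all illustrative assumptions.

```python
import numpy as np

TILE = 512  # tile size from the thread; 2x2 tiles -> 1024x1024 model input

def build_infill_input(photo_tile, quadrants):
    """Assemble the conditioning image for one new tile.

    quadrants maps (row, col) -> an already-generated pixel-art tile,
    or None for the quadrant being generated. The target quadrant is
    filled with its raw photo reference; the model's job is to repaint
    it so it matches the pixel-art neighbors at the shared boundaries.
    """
    canvas = np.zeros((2 * TILE, 2 * TILE, 3), dtype=np.uint8)
    for (r, c), art in quadrants.items():
        tile = art if art is not None else photo_tile
        canvas[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE] = tile
    return canvas
```

      The model (the fine-tuned Qwen in the article) then infills only the target quadrant, with the neighboring pixel-art tiles acting as literal boundary conditions on the output.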

      • cannoneyed 4 hours ago

        That’s exactly right - the fine tuned Qwen model was able to generate seamless pixels most of the time, but you can find lots of places around the map where it failed.

        More interestingly, not even the biggest smartest image models can tell if a seam exists or not (likely due to the way they represent image tokens internally)

        • NAR8789 4 hours ago

          I'm curious why you didn't do something like generate new tiles one at a time, but just expand the input area on the sides with already-generated neighbors. Looks like your infill model doesn't really care about tile sizes, and I doubt it really needs full adjacent tiles to match style. Why 2x2 tile inputs rather than say... generate new tiles one at a time, but add 50px of bordering tile on each side that already has a pixel art neighbor?

          • cannoneyed 2 hours ago

            Yeah I actually did that quite a bit too. I didn't want to get too bogged down in the nitty gritty of the tiling algorithm because it's actually quite difficult to communicate via writing (which probably contributed to it being hard to get AI to implement).

            The issue is that the overall style was not consistent from tile to tile, so you'd see some drift, particularly in the color - and you can see it in quite a few places on the map because of this.

            • NAR8789 2 hours ago

              Oh that makes sense, thanks for explaining! And thanks for sharing your process and result! Interesting to see your process, and looking at the map really tickles my nostalgia

          • polishdude20 3 hours ago

            There would have to be some tiles which don't have all four neighbors generated yet.

    • __mharrison__ 17 minutes ago

      Does anyone have a good reference for finetuning Qwen? This article opened my eyes a bit...

    • larodi 2 hours ago

      You can tell the diffusion from space. Sadly, it would normally take years to do it the conventional way, which is still the only correct way.

  • cannoneyed 4 hours ago

    Sorry about the hug of death - while I spent an embarrassing amount of money on rented H100s, I couldn't be bothered to spend $5 for Cloudflare workers... Hope you all enjoy it, it should be back up now

    • Octoth0rpe 2 hours ago

      > while I spent an embarrassing amount of money on rented H100s

      Would you mind sharing a ballpark estimate?

  • ivangelion 4 hours ago

    Want to thank you for taking the time to write up the process.

    I know you'll get flak for the agentic coding, but I think it's really awesome you were able to realize an idea that otherwise would've remained relegated to "you know what'd be cool.." territory. Also, just because the activation energy to execute a project like this is lower doesn't mean the creative ceiling isn't just as high as before.

  • dormento 5 hours ago

    Not working here, some CORS issue.

    Firefox, Ubuntu latest.

    Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://isometric-nyc-tiles.cannoneyed.com/dzi/tiles_metadat.... (Reason: CORS header ‘Access-Control-Allow-Origin’ missing). Status code: 429.

    Edit: I see now, the error is due to the cloudflare worker being rate limited :/ I read the writeup though, pretty cool, especially the insight about tool -> lib -> application

    • flaviolivolsi 5 hours ago

      Not working here either. Two different errors with two different browsers on Arch.

      - Chromium: Failed to load tiles: Failed to fetch

      - Zen: Failed to load tiles: NetworkError when attempting to fetch resource.

      • cannoneyed 4 hours ago

        Yeah I'm gonna blame Claude (and my free plan) for this one. Fixing!

        • cannoneyed 4 hours ago

          Cloudflare caching should be back. Turns out that there were a lot of tiles being served, who could have seen that coming?

      • jen20 4 hours ago

        Same in Safari on macOS here, FWIW.

  • weinzierl 3 hours ago

    Probably the best pre-AI take on isometric pixel-art NYC is the poster from the art collective eboy. In the early 2000s their art was featured at MoMA (but I don't remember the NYC poster specifically).

    https://www.eboy.com/products/new-york-colouring-poster

  • jcelliott an hour ago

    Hey, engineer at Oxen.ai here. Glad the fine-tuning worked well for this project! If anyone has questions on that part of it we would be happy to answer.

    We have a blog post on a similar workflow here: https://www.oxen.ai/blog/how-we-cut-inference-costs-from-46k...

    On the inference cost and speed: we're actively working on that and have a pretty massive upgrade there coming soon.

    • cannoneyed 44 minutes ago

      Hell yeah oxen.ai is awesome, made my life so much easier

  • xnx 5 hours ago

    > This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own,

    Maybe, though a guy did physically carve/sculpt the majority of NYC: https://mymodernmet.com/miniature-model-new-york-minninycity...

    • cannoneyed 4 hours ago

      This project is awesome, and I love that there are people who are driven enough to make something with so much craft, attention, and duration.

      That being said I have three kids (one a newborn) - there's no possible way I could have done this in the before times!

    • sp9k 3 hours ago

      Also, sites like Pixeljoint used to host (or still do? I haven't really kept up) collaborations. This would be a mammoth one, but the result would be much more impressive. This is a cool concept, but it's definitely not pixel art by any definition.

    • pavel_lishin 5 hours ago

      Huh, the linked instagram account is no longer available :/

    • fwip 5 hours ago

      I got a recommended video on YouTube just the other day, where a bunch of users made NYC in Minecraft at 1:1 scale: https://www.youtube.com/watch?v=ZouSJWXFBPk

      Granted, it was a team effort, but that's a lot more laborious than a pixel-art view.

  • cannoneyed 4 hours ago

    Author here: Just got out of some meetings at work and see that HN is kicking my cloudflare free plan's butt. Let me get Claude to fix it, hold tight!

    • cannoneyed 4 hours ago

      We should be back online! Thanks for everyone's patience, and big thanks to Claude for helping me debug this and to Cloudflare for immediately turning the website back on after I gave them some money

  • 10c8 an hour ago

    There's absolutely no pixel art anywhere in the entirety of the map.

  • rafram an hour ago

    > This project is far from perfect, but without generative models, it couldn’t exist. There’s simply no way to do this much work on your own

    100 people built this in 1964: https://queensmuseum.org/exhibition/panorama-of-the-city-of-...

    One person built this in the 21st century: https://gothamist.com/arts-entertainment/truckers-viral-scal...

    AI certainly let you do it much faster, but it’s wrong to write off doing something like this by hand as impossible when it has actually been done before. And the models built by hand are the product of genuine human creativity and ingenuity; this is a pixelated satellite image. It’s still a very cool site to play around with, but the framing is terrible.

  • mrandish 2 hours ago

    This is really wonderful. Thanks for doing it!

    I especially appreciated the deep dive on the workflow and challenges. It's the best generally accessible explication I've yet seen of the pros and cons of vibe coding an ambitious personal project with current tooling. It gives a high-level sense of "what it's generally like" with enough detail and examples to be grounded in reality while avoiding slipping into the weeds.

  • shredprez 4 hours ago

    This is so cool! Please give me a way to share lat/long links with folks so I can show them places that are special to me :)

    • cannoneyed 2 hours ago

      oh wow that's such a good/obvious idea, i'll see if I can whip it together tonight

  • emzo 38 minutes ago

    I'm on holiday in NY now - this is so cool

  • rzzzt an hour ago

    Does anyone remember a Chinese website that did something similar? IIRC it was one of the selectable rendering modes of a Google Maps equivalent.

    Edit: this submission has a few links that could be what I had in mind but most of them no longer work: https://news.ycombinator.com/item?id=2282466

  • sandpaper26 an hour ago

    "map of NYC" does not include Staten Island. That's how we like it.

  • gregsadetsky 4 hours ago

    amazing work!

    gemini 3.5 pro reverse engineered it - if you use the code at the following gist, you can jump to any specific lat lng :-)

    https://gist.github.com/gregsadetsky/c4c1a87277063430c26922b...

    also, check out https://cannoneyed.com/isometric-nyc/?debug=true ..!

    ---

    code below (copy & paste into your devtools, change the lat lng on the last line):

        const calib={p1:{pixel:{x:52548,y:64928},geo:{lat:40.75145020893891,lng:-73.9596826628078}},p2:{pixel:{x:40262,y:51982},geo:{lat:40.685498640229675,lng:-73.98074283976926}},p3:{pixel:{x:45916,y:67519},geo:{lat:40.757903901085726,lng:-73.98557060196454}}};function getAffineTransform(){let{p1:e,p2:l,p3:g}=calib,o=e.geo.lat*(l.geo.lng-g.geo.lng)-l.geo.lat*(e.geo.lng-g.geo.lng)+g.geo.lat*(e.geo.lng-l.geo.lng);if(0===o)return console.error("Points are collinear, cannot solve."),null;let n=(e.pixel.x*(l.geo.lng-g.geo.lng)-l.pixel.x*(e.geo.lng-g.geo.lng)+g.pixel.x*(e.geo.lng-l.geo.lng))/o,x=(e.geo.lat*(l.pixel.x-g.pixel.x)-l.geo.lat*(e.pixel.x-g.pixel.x)+g.geo.lat*(e.pixel.x-l.pixel.x))/o,i=(e.geo.lat*(l.geo.lng*g.pixel.x-g.geo.lng*l.pixel.x)-l.geo.lat*(e.geo.lng*g.pixel.x-g.geo.lng*e.pixel.x)+g.geo.lat*(e.geo.lng*l.pixel.x-l.geo.lng*e.pixel.x))/o,t=(e.pixel.y*(l.geo.lng-g.geo.lng)-l.pixel.y*(e.geo.lng-g.geo.lng)+g.pixel.y*(e.geo.lng-l.geo.lng))/o,p=(e.geo.lat*(l.pixel.y-g.pixel.y)-l.geo.lat*(e.pixel.y-g.pixel.y)+g.geo.lat*(e.pixel.y-l.pixel.y))/o,a=(e.geo.lat*(l.geo.lng*g.pixel.y-g.geo.lng*l.pixel.y)-l.geo.lat*(e.geo.lng*g.pixel.y-g.geo.lng*e.pixel.y)+g.geo.lat*(e.geo.lng*l.pixel.y-l.geo.lng*e.pixel.y))/o;return{Ax:n,Bx:x,Cx:i,Ay:t,By:p,Cy:a}}function jumpToLatLng(e,l){let g=getAffineTransform();if(!g)return;let o=g.Ax*e+g.Bx*l+g.Cx,n=g.Ay*e+g.By*l+g.Cy,x=Math.round(o),i=Math.round(n);console.log(` Jumping to Geo: ${e}, ${l}`),console.log(` Calculated Pixel: ${x}, ${i}`),localStorage.setItem("isometric-nyc-view-state",JSON.stringify({target:[x,i,0],zoom:13.95})),window.location.reload()};
        jumpToLatLng(40.757903901085726,-73.98557060196454);
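    For anyone who'd rather not read the minified version, here's a rough sketch of the same three-point affine calibration, ported to Python. The calibration points are copied from the snippet above; numpy's linear solver stands in for the hand-expanded Cramer's-rule formulas, so this is an illustrative re-derivation, not the gist itself.

```python
import numpy as np

# Three (lat, lng) -> (pixel x, pixel y) pairs from the gist's calib object
CALIB = [
    ((40.75145020893891, -73.9596826628078), (52548, 64928)),
    ((40.685498640229675, -73.98074283976926), (40262, 51982)),
    ((40.757903901085726, -73.98557060196454), (45916, 67519)),
]

def affine_from_calib(calib):
    """Solve pixel = [lat, lng, 1] @ coeffs: one 3x3 system per axis."""
    G = np.array([[lat, lng, 1.0] for (lat, lng), _ in calib])
    P = np.array([px for _, px in calib], dtype=float)
    coeff_x = np.linalg.solve(G, P[:, 0])  # raises if points are collinear
    coeff_y = np.linalg.solve(G, P[:, 1])
    return coeff_x, coeff_y

def lat_lng_to_pixel(lat, lng, coeff_x, coeff_y):
    v = np.array([lat, lng, 1.0])
    return float(v @ coeff_x), float(v @ coeff_y)
```

    The original snippet's jumpToLatLng then just writes the computed pixel target into localStorage and reloads the page.
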

    • polishdude20 3 hours ago

      That second link shows controls but does not have any water effects?

      • gregsadetsky 3 hours ago

        As far as I can see, OP tried to implement water shaders but then abandoned this idea.

        • cannoneyed 2 hours ago

          that's right - it worked very nicely, but the models to generate the "shore distance mask" for the water shader weren't reliable enough to automate, and I just couldn't justify sinking any more time into the project

          • gregsadetsky 2 hours ago

            0 shade (hehe), the project is extraordinary as it is! cheers

  • dluan 27 minutes ago

    This on top of Subway Builder would be incredible

  • blintz 5 hours ago

    I was most surprised by the fact that it only took 40 examples for a Qwen finetune to match the style and quality of (interactively tuned) Nano Banana. Certainly the end result does not look like the stock output of open-source image generation models.

    I wonder if for almost any bulk inference / generation task, it will generally be dramatically cheaper to (use fancy expensive model to generate examples, perhaps interactively with refinements) -> (fine tune smaller open-source model) -> (run bulk task).

    • cannoneyed 4 hours ago

      In my experience image models are very "thirsty" and can often learn the overall style of an image from far fewer examples. Even Qwen is a HUGE model, relatively speaking.

      Interestingly enough, the model could NOT learn how to reliably generate trees or water no matter how much data and/or strategies I threw at it...

      This to me is the big failure mode of fine-tuning - it's practically impossible to understand what will work well and what won't and why

      • blintz 4 hours ago

        I see, yeah, I can see how if it's matching some parts of the style 100% but then failing completely on other parts, it's a huge pain to deal with. I wonder if a bigger model could loop here - like, have GPT 5.2 compare the fine-tune output and the Nano Banana output, notice that trees + water are bad, select more examples to fine-tune on, and then retry. Perhaps noticing that the trees and water are missing or bad is a more human judgement, though.

        • cannoneyed 2 hours ago

          Interestingly enough even the big guns couldn't reliably act as judges. I think there are a few reasons for that:

          - the way they represent image tokens isn't conducive to this kind of task

          - text-to-image space is actually quite finicky, it's basically impossible to describe to the model what trees ought to look like and have them "get it"

          - there's no reliable way to few-shot prompt these models for image tasks yet (!!)

  • bigwheels 5 hours ago

    Very impressive result! Are you taking requests for the next ones? SF :D Tokyo :D Paris :D Milan :D Rome :D Sydney :D

    Oh man...

    • cannoneyed 4 hours ago

      Really want to do SF next. Maybe the next gen of models will be reliable enough to automate it but this took WAY too much manual labor for a working man. I’ll get the code up soon if people wanna fork it!

    • devilsdata 2 hours ago

      Really would love to see Tokyo, Kyoto, or Sydney.

  • filoleg 3 hours ago

    This is awesome, thanks for sharing this!

    I am especially impressed with the “i didn’t write a single line of code” part, because I was expecting it to be janky or slow on mobile, but it feels blazing fast just zooming around different areas.

    And it is very up to date too - I found a building across the street from me that was only finished last year.

    I found a nitpicky error though: in downtown Brooklyn, where Cadman Plaza Park is, your website makes it look like there is a large rectangular body of water (e.g., a pool or a fountain). In reality, there is no water at all; it is just a concrete slab area.

    • cannoneyed 3 hours ago

      the classic "water/concrete" issue! There's probably a lot of those around the map - turns out, it's pretty hard to tell the difference between water and concrete/terrain in a lot of the satellite imagery that the image model was looking at to generate the pixel images!

    • teaearlgraycold 3 hours ago

      The author had built something like this image viewer before and used an existing library to handle some of the rendering.

  • kkukshtel 3 hours ago

    This is awesome, and thanks so much for the deep dive into process!!

    One thing I would suggest is to also post-process the pixel art with something like this tool to have it be even sharper. The details fall off as you get closer, but running this over larger patch areas may really drive the pixel art feel.

    https://jenissimo.itch.io/unfaker

  • lagniappe 5 hours ago

    Failed to load tiles: NetworkError when attempting to fetch resource.

  • _0xdd 4 hours ago

    Nice work! But not all of NYC. Where's the rest of Staten Island?

    • cannoneyed 2 hours ago

      Haha I had to throw in the towel at some point and Staten Island didn't make the cut. Sorry (not sorry)

  • suriya-ganesh 2 hours ago

    This is such a cool concept. props to you for building it.

  • jesse__ 5 hours ago

    > Slop vs. Art

    > If you can push a button and get content, then that content is a commodity. Its value is next to zero.

    > Counterintuitively, that’s my biggest reason to be optimistic about AI and creativity. When hard parts become easy, the differentiator becomes love.

    Love that. I've been struggling to succinctly put that feeling into words, bravo.

    • pimlottc an hour ago

      Where’s the love here? There are artists who dedicate their lives to creating a single masterwork. This is someone spending a weekend on a “neat idea”.

    • NelsonMinar 4 hours ago

      I agree this is the interesting part of the project. I was disappointed when I realized this art was AI generated - I love isometric handdrawn art and respect the craft. But after reading the creator's description of their thoughtful use of generative AI, I appreciated their result more.

  • natufunu 3 hours ago

    One thing I learned from this is that my prompts are much less detailed than what the author has been using.

    Very cool work and great write up.

  • cheschire 3 hours ago

    > I’m not particularly interested in getting mired down in the muck of the morality and economics of it all. I’m really only interested in one question: What’s possible now that was impossible before?

    Upvote for the cool thing I haven’t seen before but cancelled out by this sentiment. Oof.

    • cannoneyed 3 hours ago

      I mean this pretty literally though - I'm not particularly interested in these questions. They've been discussed a ton by people way more qualified to discuss them, but personally I feel like it's been pretty much the same conversation on loop for the last 5 years...

      That's not to say they're not very important issues! They are, and I think it's reasonable to have strong opinions here because they cut to the core of how people exist in the world. I was a musician for my entire 20s - trust me that I deeply understand the precarity of art in the age of the internet, and I can deeply sympathize with people dealing with precarity in the age of AI.

      But I also think it's worth being excited about the birth of a fundamentally new way of interacting with computers, and for me, at this phase in my life, that's what I want to write and think about.

      • cheschire 3 hours ago

        I appreciate the thoughtful reply. I will try to give you the benefit of the doubt, then, and not extrapolate from your relatively benign feelings about a creative art project any capacity to take up engineering projects that would make the world worse.

        You get your votes back from me.

    • pimlottc an hour ago

      This is basically the inversion of the famous Jurassic Park quote. “Never mind if we should. What if we could?”

  • mpaepper 3 hours ago

    You mentioned needing 40k tiles and renting an H100 at $3/hour generating 200 tiles/hour, so am I right to assume you spent 200*3 = $600 running the inference? That also means letting it run for about 25 nights at 8 hours each?

    Cool project!
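    Spelling out that back-of-the-napkin estimate in code (the figures are just the ones quoted above, nothing confirmed beyond the thread):

```python
tiles = 40_000            # total tiles mentioned for the map
tiles_per_hour = 200      # quoted inference throughput on a rented H100
dollars_per_hour = 3      # quoted H100 rental price

hours = tiles / tiles_per_hour       # total GPU hours
cost = hours * dollars_per_hour      # total inference cost in dollars
nights = hours / 8                   # number of 8-hour overnight runs

print(hours, cost, nights)  # 200.0 600.0 25.0
```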

    • cannoneyed an hour ago

      Yup back of the napkin is probably about there - also spent a fair bit on the oxen.ai fine-tuning service (worth every penny)... paint ain't free, so to speak

  • polishdude20 3 hours ago

    To take it a step further, it would be super cool to figure out the roadway system from the map data, use the buildings as masks over the roads, and have little simulated cars driving around.

    • cannoneyed 2 hours ago

      100% - I originally wanted to do that but when I realized how much manual work I'd have to do just to get the tiles generated I had to cut back on scope pretty hard.

      I actually have a nice little water shader that renders waves on the water tiles via a "depth mask", but my fine-tunes for generating the shader mask weren't reliable enough and I'd spent far too much time on the project to justify going deeper. Maybe I'll try again when the next generation of smarter, cheaper models get released.

  • ChrisbyMe 3 hours ago

    A bit tangential, but I really think the .nyc domain is underappreciated.

    SF/Mountain View etc. don't even have one! You get a little piece of the NYC brand just for you!

  • _august 4 hours ago

    This is very cool, it would be awesome if I could rotate it as well by 90 degree increments to peek at different angles! I loved RCT growing up so this is hitting the nostalgia!

  • cyrusradfar 5 hours ago

    Insane outcome. Really thoughtful post with insights across the board. Thanks for sharing

  • nzeid 3 hours ago

    Amazing. Took forever but I found my building in Brooklyn as well as the nearby dealership, gas station, and public school.

  • dsmmcken 3 hours ago

    Just curious, about how long did this project take you? I don't see that mentioned in the article.

    • cannoneyed 3 hours ago

      We had our third kid in late November, and I worked sporadically on it over the following two months of paternity leave and holiday... If I had to bet, I'd say I put in well over 200 hours of work on it, the majority of that being manual auditing/driving of the generation process. If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

      All told I probably put in less than 20 hours of actual software engineering work, though, which consisted entirely of writing specs and iterating with various coding agents.

      • mrandish 2 hours ago

        > If any AI model were reliable at checking the generated pixels, I could have automated this process, but they simply aren't there yet, so I had to do a lot more manual work than I'd anticipated.

        Since the output is so cool and generally interesting, there might be an opportunity for those forking this to do other cities to deploy a web app to crowd source identifying broken tiles and maybe classifying the error or even providing manual hinting for the next run. It takes a village to make a (sim) city! :-)

        • cannoneyed an hour ago

          Yeah I'll get the code out there soon - it's just very vibe-y right now, the repo is a bit of a mess since I never bothered to organize things. The secret sauce is really in the fine-tuning, can definitely get those datasets/models public on oxen.ai too

  • deltamidway 2 hours ago

    Absolutely love this! Yay robots!

  • sanufar 5 hours ago

    Seems to have been hugged to death as of now

    • cannoneyed 4 hours ago

      Should be back after some help from Claude and some money to Cloudflare

      • sanufar 4 hours ago

        Class, looks amazing. The embed in the writeup looks so cool!

  • rkagerer 2 hours ago

    I love it! Such a SimCity 2000 vibe!

  • tehlike 5 hours ago

    Some people reported 429 - otherwise known as HN hug of death.

    You probably need to adjust how caching is handled with this.

    • cannoneyed 4 hours ago

      Yup the adjustment was giving cloudflare 5 bucks :)

      • tehlike 3 hours ago

        Hah! I thought caching stuff was free. Is it because of the workers? I assumed this was all static assets.

        I too have been giving cloudflare $5 for a while now :D

        • cannoneyed an hour ago

          yeah I had to put it behind a worker to deal with the subdomain and various other subtle caching issues. All in all cloudflare is incredible, and Claude makes it actually quite easy to deal with all the ins and outs

  • relium 4 hours ago

    Very cool. Street names with an on/off toggle would be nice.

  • epa an hour ago

    Amazing

  • xnx 5 hours ago

    I see you used Gemini-CLI some but no mention of Antigravity. Surprising for a Googler. Reasons?

    • cannoneyed 4 hours ago

      I used antigravity a bit, but it still feels a bit wonky compared to Cursor. Since this was on my own time, I'm gonna use the stuff that feels best. Though, by the end of the project I wasn't touching an IDE at all.

  • honeycrispy 4 hours ago

    This is kind of beautiful. Great work! I mean it.

  • howToTestFE 2 hours ago

    this makes me want to play simcity again! really cool

  • deadbabe 3 hours ago

    Would it be simple to modify this to make a highly stylized version of NYC instead? Like post apocalyptic NYC or medieval NYC, night time NYC, etc. because then that would have some very interesting applications

    • cannoneyed an hour ago

      simple is relative, could definitely be done, but until the models get a bit smarter and require less manual hand-holding it'd be a lot of grindy work

  • vortegne 2 hours ago

    While impressive on a technical level, I can't help but notice that it just looks...bad? Just a strange blurry mess that only vaguely smells of pixelart.

    Makes me feel insane that we're passing this off as art now.

  • mal10c an hour ago

    reticulating splines

  • aaronbrethorst 2 hours ago

    this is truly amazing. bravo.

  • MontyCarloHall 3 hours ago

    This doesn't really look like pixel art; it looks like you applied a (very sophisticated) Photoshop filter to Google Earth. Everything is a little blurry, and the characteristic sharp edges of handmade pixel art (e.g. [0]) are completely absent.

    To me, the appeal of pixel art is that each pixel looks deliberately placed, with clever artistic tricks to circumvent the limitations of the medium. For instance, look at the piano keys here [1]. They deliberately lack the actual groupings of real piano keys (since that wouldn't be feasible to render at this scale), but are asymmetrically spaced in their own way to convey the essence of a keyboard. It's the same sort of cleverness that goes into designing LEGO sets.

    None of these clever tricks are apparent in the AI-generated NYC.

    On another note, a big appeal of pixel art for me is the sheer amount of manual labor that went into it. Even if AI were capable of rendering pixel art indistinguishable from [0] or [1], I'm not sure I'd be impressed. It would be like watching a humanoid robot compete in the Olympics. Sure, a Boston Dynamics bot from a couple years in the future will probably outrun Usain Bolt and outgymnast Simone Biles, but we watch Bolt and Biles compete because their performance represents a profound confluence of human effort and talent. Likewise, we are extremely impressed by watching human weightlifters throw 200kg over their heads but don't give a second thought to forklifts lifting 2000kg or 20000kg.

    OP touches on this in his blog post [2]:

       I spent a decade as an electronic musician, spending literally thousands of hours dragging little boxes around on a screen. So much of creative work is defined by this kind of tedious grind. [...] This isn't creative. It's just a slog. Every creative field - animation, video, software - is full of these tedious tasks. In the end, the quality of art is defined by the quality of your decisions - how much work you put into something is just a proxy for how much you care and how much you have to say.
    
    I would argue that in some cases (e.g. pixel art), the slog is what makes the art both aesthetically appealing (the deliberately placed nature of each pixel is what defines the aesthetic) and awe-inspiring (the slog represents an immense amount of sustained focus).

    [0] https://platform.theverge.com/wp-content/uploads/sites/2/cho...

    [1] https://www.reddit.com/media?url=https%3A%2F%2Fi.redd.it%2Fu...

    [2] https://cannoneyed.com/projects/isometric-nyc

    • cannoneyed an hour ago

      Yeah this is all completely fair and I agree with all of it. Aesthetically and philosophically, what AI does for "pixel art" is very off. And once you see the "AI" you can't really unsee it.

      But I didn't want to call it a "SimCity" map, though that's really the vibe/inspiration I wanted to capture, because that implies other things, so I used the term "pixel art" even though I knew it'd get a lot of (valid) pushback...

      As with all things art, labels are really difficult: context, meaning, and technique are at once completely tied to genre and completely irrelevant. Think about the label "techno" - it's deeply meaningful and subtle to some and almost meaningless to others.

  • alterom 2 hours ago

    Holy damn, this map is a dream and the best map of NYC I've ever seen!

    It's as if NYC was built in Transport Tycoon Deluxe.

    I'll be honest, I've been pretty skeptical about AI and agentic coding for real-life problems and projects. But this one might be the thing that finally changes my mind.

    Thanks for making it, I really enjoy the result (and the educational value of the making-of post)!

  • d--b 3 hours ago

    This is huge!

    At first I thought this was someone working thousands of hours putting this together, and I thought: I wonder if this could be done with AI…

  • k1rd 5 hours ago

    Really nice.

  • ChrisArchitect 5 hours ago

    Appreciate that writeup. Very detailed insights into the process. However those conclusions left me on the fence about whether I 'liked' the project. The conclusions about 'unlocking scale' and commodity content having zero value. Where does that leave you and this project? Does it really matter that much that the project couldn't exist without genAI? Maybe it shouldn't exist then at all. As with a lot of the areas AI touches, the problem isn't the tools or use of them exactly, it's the scale. We're not ready for it. We're not ready for the scale of impact the tech touches in a multitude of areas. Including the artistic world. The diminished value and loss of opportunities. We're not ready for the impacts of use by bad actors. The scale of output like this, as cool as it is, is out of balance with the loss of a huge chunk of human activity and expression. Sigh.

    • cannoneyed 4 hours ago

      At the risk of rehashing the same conversation over and over again, I think this is true of every technology ever.

      Personally I'm extremely excited about all of the creative domains that this technology unlocks, and also extremely saddened/worried about all of the crafts it makes obsolete (or financially non-viable)...

      • anonymous908213 3 minutes ago

        Do you seriously believe this[1] makes any craft obsolete or financially non-viable?

        [1] https://files.catbox.moe/1uphaw.png

        This is a fairly cool and novel application of generative AI[2], but it did not generate pixel art and it's still wildly incoherent slop when you examine it closely. This mostly works because it uses scale to obfuscate the flaws; users are expected to be zoomed out and not looking at the details. But the details are what makes art, art. You could not sell a game or an animation like this. This is not replacing anybody.

        [2] It's also wholly unrepresentative of general use-cases. 99.99999999% of generative AI usage does not involve a ton of manual engineering labour fine-tuning a model and doing the things you did to get this set up. Even with all of that effort, what you've produced here is never replacing a commercially viable pixel artist. The rest of the world slapping a prompt into an online generator is even further away from doing that.

    • dreadlordbone 5 hours ago

      Does it really matter that much that a sewage treatment plant couldn't exist without automated sensors? Maybe it shouldn't exist then at all.

    • DANmode an hour ago

      > Where does that leave you and this project? Does it really matter that much that the project couldn't exist without genAI? Maybe it shouldn't exist then at all. As with alot of the areas AI touches, the problem isn't the tools or use of them exactly, it's the scale. We're not ready for it. We're not ready for the scale of impact the tech touches in multitude of areas. Including the artistic world. The diminished value and loss of opportunities. We're not ready for the impacts of use by bad actors. The scale of output like this, as cool as it is, is out of balance with the loss of huge chunk of human activity and expression. Sigh.

      If you don’t see these tools as a way for ALL of us to more-intimately reach more of our intended audiences,

      whether as a musician, marketer, small business, whatever,

      then I don’t know if you were really passionate or excited about what you were doing in the first place.

  • squigz 5 hours ago

    Hugged to death? :(

  • detectivestory 5 hours ago

    beautiful!

  • meindnoch an hour ago

    Looks badly AI generated.