We ran over 600 image generations to compare AI image models

(latenitesoft.com)

39 points | by kalleboo 2 hours ago

19 comments

  • gs17 3 minutes ago

    It's interesting to me that the models often have their "quirks". GPT has the orange tint, but it also is much worse at being consistent with details. Gemini has a problem where it often returns the image unchanged or almost unchanged, to the point where I gave up on using it for editing anything. Not sure if Seedream has a similar defining "feature".

    They noted the Gemini issue too:

    > Especially with photos of people, Gemini seems to refuse to apply any edits at all

  • frotaur 4 minutes ago

It's crazy that the 'piss filter' of OpenAI image generation hasn't been fixed yet. I wonder if it's on purpose for some reason?

  • whoaoweird 23 minutes ago

    It was interesting to see how often the OpenAI model changed the face of the child. Often the other two models wouldn't, but OpenAI would alter the structure of their head (making it rounder), eyes (making them rounder), or altering the position and facing of the children in the background.

It's like OpenAI nudges all of these a little toward some sort of median face, whereas the other two models seemed to reproduce the face faithfully.

    For some things, exactly reproducing the face is a problem -- for example in making them a glass etching, Gemini seemed unwilling to give up the specific details of the child's face, even though that would make sense in that context.

  • alienbaby an hour ago

    Interesting experiment, though I'm not certain quite how the models are usefully compared.

  • kevin009 an hour ago

Every day I generate more than 600 images and also compare them; it takes me 5 hours

  • sema4hacker an hour ago

    Are artists and illustrators going the way of the horse and buggy?

    • Bombthecat 3 minutes ago

Yes and no. IKEA and co didn't replace custom-made tables, they just reduced the number of people needing a custom table.

The same will happen to music, artists, etc. They won't vanish, but only a few per city will be left.

    • LogicFailsMe 34 minutes ago

      No, but this is the beginning of a new generation of tools to accelerate productivity. What surprises me is that the AI companies are not market savvy enough to build those tools yet. Adobe seems to have gotten the memo though.

      • bnj 2 minutes ago

I've been waiting for solutions that integrate into the artistic process instead of replacing it. Right now a lot of the focus is on generating a complete image, but if I were in Photoshop (or another editor) and could use AI tooling to create layers and other modifications that fit into a workflow, that would help with consistency and productivity.

I haven't seen the latest from Adobe over the last three months, but last I saw the Firefly engine was still focused on "magically" creating complete elements.

      • somenameforme 22 minutes ago

In testing some local image-gen software, it takes about 10 seconds to generate a high-quality image on my relatively old computer. I have no idea what the latency is on a current high-end computer, but I expect it's probably near instantaneous.

Right now, though, the software for local generation is horrible. It's a mish-mash of open source stuff with varying compatibility, loaded with casually excessive use of vernacular and acronyms. To say nothing of the awkwardness of it mostly being done in Python scripts.

        But once it gets inevitably cleaned up, I expect people in the future are going to take being able to generate unlimited, near instantaneous images, locally, for free, for granted.

    • jonathanstrange 10 minutes ago

      Artists no, illustrators and graphic designers yes. They'll mostly become redundant within the next 50 years. With these kind of technologies, people tend to overestimate the short-term effects and severely underestimate the long-term effects.

  • jstummbillig an hour ago

    > If you made it all the way down here you probably don’t need a summary

    Love the optimism

    • LogicFailsMe 41 minutes ago

      I skipped to the end to see if they did any local models. spoilers: they didn't.

  • Dwedit an hour ago

    You can always identify the OpenAI result because it's yellow.

    • Bombthecat 2 minutes ago

And Midjourney because it's cel shading :)

  • fsniper 36 minutes ago

Is it just me, or does ChatGPT change subtle or sometimes more prominent things? Like the position of the hand holding a ball, facial features like the forehead, background trees and the like?

  • yapyap 33 minutes ago

Using gen AI for filters is stupid: a filter guarantees the same object but filtered, while a gen AI version guarantees nothing except an expensive AI bill.

It's like using gen AI to do math instead of extracting the numbers from a story and just doing the math with +, -, / and *
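    The filter-vs-generation distinction can be sketched in Python (an illustrative toy, treating an "image" as a nested list of RGB tuples rather than using any real imaging library):

    ```python
    # A conventional filter is a pure, deterministic function of the pixels:
    # the same input always yields the same output, and the geometry of the
    # scene is untouched. A gen-AI "filter" offers no such guarantee.

    def sepia(image):
        """Apply a fixed sepia transform to every (R, G, B) pixel."""
        out = []
        for row in image:
            new_row = []
            for r, g, b in row:
                nr = min(255, int(0.393 * r + 0.769 * g + 0.189 * b))
                ng = min(255, int(0.349 * r + 0.686 * g + 0.168 * b))
                nb = min(255, int(0.272 * r + 0.534 * g + 0.131 * b))
                new_row.append((nr, ng, nb))
            out.append(new_row)
        return out

    img = [[(200, 150, 100), (10, 20, 30)]]
    assert sepia(img) == sepia(img)  # deterministic: identical on every run
    ```

    Every pixel stays in place and every run is bit-identical, which is exactly what a diffusion-based edit can't promise.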

  • th0ma5 an hour ago

This seems to imply that the capabilities being tested correspond to the descriptive words used in the prompts, but as a category, random words would be just as valid for exercising the extents of the underlying math. With that in mind, I wonder why a list of tests like this should be interesting, and to what end. The repeated iteration implies that some control or better quality is being sought, but the mechanism of exploration is just trial and error, and it isn't informative of what would be repeatable success for anyone else in any other circumstance given these discoveries.

  • XYZ12334 44 minutes ago

    Waiting for the SimonW futa conversion benchmark