I don’t think there have been any transformative AI works yet, but I look forward to the future.
It’s unsurprising to me that AI art is often indistinguishable from real artists’ work; famous art is famous for reasons other than technical skill. Certainly there are numerous replica painters who are able to make marvelous pieces.
So maybe some people hate AI because they have an artist's eye for small inadequacies and it drives them crazy.
This is it 100%.
When somebody draws something (in an active fashion), there is a significantly higher level of concentration and thought put towards the final output.
By its very nature, GenAI mostly uses an inadequately descriptive medium (e.g. text), and the user must then WAIT until an output that roughly matches their vision "pops" out. Can you get around this? Not entirely, though you can mitigate it through inpainting, photobashing, layering, controlnets, loras, etc.
However, want to wager a guess what 99% of the AI art slop that people throw up all over the internet doesn't use? ANY OF THAT.
A conventional artist has an internal visualization that they are constantly mentally referring to as they put brush to canvas - and it shows in the finer details.
It's the same danger that LLMs have as coding assistants. You are no longer in the driver's seat - instead you're taking a significantly more passive approach to coding. You're a reviewer with a passivity that may lead to subtle errors later down the line.
And if you need any more proof, here's a GenAI image attached to _Karpathy_'s (one of the founding members of OpenAI) Twitter post on founding an AI education lab:
Not sure I understand the article. The author specifically chose art from humans and AI that he found difficult to categorize into human or AI art.
The fact that people had a 60% success rate suggests that they are a little better at seeing the difference than he was himself?
(What am I missing? This is not like "take 50 random art objects from humans and AI"; it's "take the 50 most human-like AI pieces, and the least obvious human art".)
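For what it's worth, whether a 60% hit rate "beats chance" on a binary human-vs-AI quiz depends on how many images were judged. A quick sketch of the one-sided binomial tail, using illustrative assumptions (50 images per respondent, 60% accuracy, i.e. 30 hits; the actual per-person counts aren't given here):

```python
from math import comb

def binom_p_at_least(n, k, p=0.5):
    """One-sided tail P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative assumptions, not figures from the post: 50 images, 30 correct.
print(binom_p_at_least(50, 30))  # roughly 0.10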
So much of the value of art - which Scott has actually endowed on these AI-generated pieces - is the knowledge that other people are looking at the same thing as you.
66% here. I was pretty much scrolling through and clicking on first instinct instead of looking in any detail.
Interestingly I did a lot better in the second half than the first half - without going through and counting them up again I think somewhere around 40% in the first half and 90% in the second half. Not sure if it's because of the selection/order of images or if I started unconsciously seeing commonalities.
The only way I can explain people getting 98% accuracy on this is being familiar with the handful of AI artists submitting their work for this competition.
It's a google form with no apparent time limit. It wouldn't surprise me if some people could do this (think of it like how older special effects in TV/movies look dated), but most likely they did an image search on each one and got one wrong.
I didn't say it can't retrospect. What it can't do is retrospect as a human mind does; it can only read the interpretation a human mind has of its own retrospection, and the human mind can't fully explain its way of thinking. So it doesn't have a useful model of the human mind, which it would need for the strategy. And strategy is a whole complex feature that would use overlapping models to handle the ambiguity.
It's interesting that the impressionist-styled pieces mostly fooled people. I think this is because the style requires getting rid of one of the hallmarks of AI imagery: lots and lots of mostly-parallel swooshy lines, at a fairly high frequency. Impressionism's schtick is kind of fundamentally "fuck high-frequency detail, I'm just gonna make a bunch of little individual paint blobs".
One of the other hallmarks of AI imagery was deliberately kept out of this test. There's no shitposts. There's one, as an example of "the dall-e house style". It's a queen on a throne made of a giant cat, surrounded by giant cats, and it's got a lot of that noodly high-frequency detail that looks like something, but it is also a fundamentally goofy idea. Nobody's gonna pay Michael Whelan to paint the hell out of this and yet here it is.
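That "high-frequency detail" observation can be made crude and measurable. A sketch, emphatically not a validated AI-art detector: the fraction of an image's spectral energy beyond an arbitrary cutoff radius, as a rough proxy for visual "busyness".

```python
import numpy as np

def high_freq_energy_fraction(img, cutoff=0.25):
    """Fraction of spectral energy beyond `cutoff` of the half-band radius.

    `img` is a 2-D grayscale array. This is only a crude "busyness"
    proxy for the high-frequency detail being discussed.
    """
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    # Radial frequency, normalized so the half-band edges sit at r = 1
    r = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return power[r > cutoff].sum() / power.sum()
```

White noise scores near 1.0 on this metric and a smooth gradient near 0; whether it separates generated images from paintings at all would be an empirical question.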
It would have been interesting to know how much time people spent per picture. Consider the quoted comment from the well-scoring, art-interested person mentioned:
"The left column has a sort of door with a massive top-of-doorway-thingy over it. Why? Who knows? The right column doesn't, and you'd expect it to. Instead, the right column has 2.5 arches embossed into it that just kind of halfheartedly trail off."
You can find this in almost every AI-generated picture. In the picture that people liked most, the AI-generated one with the cafe and canal, the legs on the chairs make little sense. Not as bad as in non-curated AI art, but still, no human would paint like this. Same for the houses in the background. If you spend, say, a minute per picture, you almost always find these random things; even when the image is stylized, unlike human art, it has a weird uncanniness to it.
I agree that the cafe had tells, just like the city street. But Gauguin also ended up in my AI bin. With the latter I feel the cropping was very unfavourable.
Even though I was warned of the cropping, I didn't think the works would be cropped that badly. Since I was working under the assumption that good specimens of each category would be chosen, the cut-down Gauguin didn't make it.
But in the end I'd convinced myself that Osny also had tells apart from the composition. So what do I know?
I think what gave AI away the most was mixed styles. If one part of the painting is blurred, and another part is very focused, you can tell it's AI. People don't do that.
I got all of Jack Galler's pictures wrong though. The man knows how to do it.
An AI art Turing Test would be interactive with me telling it what to draw and what changes to make and see if what is producing the art is human or AI.
This species of test would also need a multi-day turnaround period on each image. And/or a video stream of the work being drawn.
"Changes" are an interesting one, honestly as a professional artist who has to pay her rent, there is certain complexity of change beyond which I am likely to say "look, we're going to need to renegotiate the budget on this if you want this much of a change from the sketch you already approved", or even "no".
I feel like the comment made by the author's friend captures a lot of my feelings on AI art. AI art is often extremely detailed, but that detail is often just random noise. It's art that becomes worse and makes less sense the more carefully you look at it.
Compare that to human art. When a person makes a highly detailed artwork, that detail rewards looking closely. That detail forms a cohesive, intentional vision, and incentivizes the viewer to spend the time and effort to take it all in. A person could spend hours looking at Bruegel's Tower of Babel paintings, or Bosch's "The Garden of Earthly Delights".
Overall, I've never felt the need to spend the time looking closely at AI art, even AI art that I couldn't tell was AI right away. Instead of rewarding close inspection with more detail, AI art punishes the viewer who looks closer by giving them undecipherable mush.
The gate picture has the same problem as the cat one that he didn't filter out. There's a lot going on, and the lighting does seem somewhat inconsistent IMO, but it's just generally weird: why are there cats of all different sizes, why do some of the smallest cats have the same coloring as the biggest ones while some don't, what's going on with the arms of the two darker cats on the right, why aren't the sides of the throne symmetric, etc.?
Everything is consistent in terms of "the immediate pixels surrounding it" but the picture as a whole is just "throw a LOT at the wall."
It passes the "looks cool" test but fails the "how likely would a human be to paint that particular composition" test.
The cat picture really shows the "noisy detail" problem with AI art. There's a lot going on in the area directly above the cat with a crown, as well as on the armrests and the upper areas of the wall background. But it's all random noise, which makes it both exhausting and distracting. A human artist would probably either make those areas less detailed, or give them a more consistent pattern. Both would let those parts fade into the background, which would help draw our focus to the cats and the person.
There are other, more general issues too. The front paw on the big cat on the left is twisting unnaturally. The cat on the right with the pendant thing looks like it only has one front paw. The throne looks more like a canopy bed than a throne, with the curtains and the weird top area. The woman's face is oddly de-emphasized, despite being near the center of the piece.
Most of these things are subtle, and can be hard to articulate if you aren't looking closely. But the picture reeks of AI art, and it doesn't surprise me that the author was able to identify it as such right away.
Maybe the author's friend is just way better than me at this, but I tried applying her advice to some of the other images and I don't feel like it would have helped me.
Looking at the human impressionist painting "Entrance to the Village of Osny" that lots of people (including me) thought was AI, it seems like there's lots of details which don't really make sense. The road seems to seamlessly become the wall of a house on the right side for instance. On the other hand, even looking at the details, there's no details in the cherub image that I could see which would give anything away.
It’s impressionist. It’s not supposed to make sense in the sense that it’s an accurate reflection of reality; it’s supposed to make sense in that you can understand why the details were drawn in the way they were because someone put thought and intention into them.
I was able to tell because the distant houses are placed in a nonsensical formation in the AI image, but in the human image they make sense (they're more of a 'swoosh').
This is why I'm baffled when people want to put this kind of stuff on their Behance/Artstation profile.
Can AI art be useful? Sure, but I'd argue only in the pursuit of something else (like a cute image to help illustrate a blog article), and certainly not for art's sake.
Posing it as "ART" means that the intent is for viewers to linger upon the piece, and the vast majority of AI art just wilts under scrutiny like that.
That makes it sound like impressionism. But the phony details have a more intense bullshitting quality, like the greebles on a Star Wars spaceship.
There's a lot of thought that goes into things like the greebles on a spaceship, like the shape language, the values and hues, etc.
Impressionism might seem "random" like what a model would output, but the main difference is the human deciding how that "randomness" should look.
The details on a model generated art piece are meaningless to me, no one sat down and thought "the speckles here will have to be this value to ensure they don't distract from the rest of the piece."
That's more what I look at when I digest art, the small, intentional design choices of the person making it.
Hmm? Impressionism is noted for extreme lack of detail, that still is suggestive of something specific, because the artist knows what details your brain will fill in. (8bit pixel art is impressionistic :-) )
Yes, a suggestive smudge, a vague mark, as the artist in the article said (the one talking about the "ruined gate" picture). That's like an honest communication between artist and viewer, "this mark stands for something beyond the limit of my chosen resolution". It's like a deliberately non-committal expression, like saying "I don't know exactly, kinda one of these". In contrast, we have in AI art misleading details that contain a sort of confabulated visual nonsense, like word salad, except graphical. Similar to an LLM's aversion to admitting "I don't know".
It's exactly what isn't captured in the training data. The AI knows what the final texture of an oil painting looks like, but it doesn't know whether what it's creating is even possible from the point of view of physical technique. Nor does it see the translation from mental image to representation of that image that a human performs. It's always working off the final product.
You can't really draw many conclusions from this test, since the AI art has already been filtered by Scott to be pieces that Scott himself found confusing. So what do any of the numbers at the end really mean? "Am I better than Scott at discerning AI art from human art?" is about the only thing this test measures.
If you didn't filter the AI art first, people would do much better.
I had the same thought, but a counterargument is that the human art has also been filtered to be real artist stuff rather than what a random person would draw.
It's still impressive that pleasant AI art is possible.
Fine art is a matter of nuance, so in that sense I think it does matter that a lot of the "human art" examples are aggressively cropped (the Basquiat is outright cut in half) and reproduced at very low quality. That Cecily Brown piece, for example, is 15 feet across in person. Seeing it as a tiny jpg is of course not very impressive. The AI pieces, on the other hand, are native to that format; there's no detail to lose.
But those details are part of what makes the human art interesting to contemplate. I wouldn't even think of buying an art book with reproductions of such low quality - at that point you do lose what's essential about the artwork, what makes it possible to enjoy.
This article has an implicit premise that the ultimate judge of art is “do I/people like it” but I think art is more about the possibilities of interpretation - for example, the classics/“good art” lend themselves to many reinterpretations, both by different people and by the same person over time. When humans create art "manually" all of their decisions - both conscious and unconscious - feed into this process. Interpreting AI art is more of a data exploration journey than an exploration of meaning.
That's one of my problems with AI art. AI art promises to bring your ideas to life, no need to sweat the small stuff. But it's the small details and decisions that often make art great! Ideas are a dime a dozen in any artistic medium, it's the specific way those ideas are implemented that make art truly interesting.
I couldn't agree more; I love what you said in your other reply: "AI art punishes the viewer who looks closer"
Eh. That’s an artificial goalpost. Realistically, it’s a tool in the toolkit.
There does not need to be intentionality for people to interpret it. Humans have interpreted intentionality behind natural phenomena like the weather and constellations since prehistory, and continue to do so.
And I contest the original claim that AI art has no intentionality. A human provided a prompt, adjusted that prompt, and picked a particular output, all of which is done with intent. Perhaps there is no specific intent behind each individual pixel, but there is intent behind the overall creation. And that is no different to photography or digital art, where there is often no specific intent behind each individual pixel, as digital tools modify wide swathes of pixels simultaneously.
Agreed. AI art subtracts intentionality.
AI Art can be hard to identify in the wild. But it still largely sucks at helping you achieve specific deliverables. You can get an image. But it’s pretty hard to actually make specific images in specific styles. Yes we have Loras. Yes we have control nets (to varying degrees) and ipadapter (to lesser degrees) and face adapters and what not. But it’s still frustrating to get something consistent across multiple images. Especially in illustrated styles.
AI Art is good if you need something in a general ballpark and don’t care about the specifics
Yep, that's why you see AI art a lot as generic blog hero/banner images.
Eh, this is pretty unfair. That's a test of how good humans are at deceiving other humans, not of how hard it is to distinguish run-of-the-mill AI art from run-of-the-mill human art in real life.
First, by their own admission, the author deliberately searched for generative images that don't exhibit any of the telltale defects or art choices associated with this tech. For example, they rejected the "cat on a throne" image, the baby portrait, and so on. They basically did a pre-screen to toss out anything the author recognized as AI, hugely biasing the results.
Then, they went through a similar elimination process for human images to zero in on fairly ambiguous artwork that could be confused with machine-generated. The "victorian megaship" one is a particularly good example of such chicanery. When discussing the "angel woman" image, they even express regret for not getting rid of that pic because of a single detail that pointed to human work.
Basically, the author did their best to design a quiz that humans should fail... and humans still did better than chance.
I think it's fair. It's the same thing humans do with their own art. You don't release the piece until you like it. You revise until you think it's done. If a human wants to make AI art, they aren't just going to drop the first thing they generated. They're going to iterate. I think it's just as unfair to include the worst generations, because people are going to release the highest quality they can come up with.
> I think it's fair. It's the same thing humans do with their own art.
No, hold on. The key part is that you have a quiz that purports to test the ability of an average human to tell AI artwork from human artwork.
So if you specifically select images for this quiz based on the fact that you, the author of the quiz, can't tell them apart, then your quiz is no longer testing what it's promised to. It's now a quiz of "are you incrementally better than the author at telling apart AI and non-AI images". Which is a lot less interesting, right?
I'm not saying the quiz has to include low-quality AI artwork. It also doesn't need to include preschoolers' doodles on the human side. But it's one thing to have some neutral quality bar, and another thing altogether to choose images specifically to subvert the stated goal of the test.
I don't see why you wouldn't use the highest quality possible for both.
But they didn't do this at all. They picked the most human-like AI images (usually high quality), and the most AI-like human images (usually mid).
The anime pictures are particularly poor and look much worse than commercial standard work (e.g. https://pbs.twimg.com/media/FwWPeNhXoAQZGW8?format=jpg&name=...) -- but of course those would be too easy to classify, wouldn't they? I wouldn't fault anyone for thinking the provided examples are AI.
Based on what I've empirically seen out in the world, most people posting AI art are not using the same filtering as the author of this test. Plus, the human choices used probably skew more towards what people think of as classic AI art than human art as a whole does.
The test was interesting to read about, but it didn't really change my mind about AI art in general. It's great for generating stock images and other low engagement works, but terrible as fine art that's meant to engage the user on any non-superficial level.
> It's the same thing humans do with their own art.
How so? Humans distributed all those "I filtered them out because they were too obvious" AI ones that aren't in the test too. So they passed someone's "is this something that should get released" test.
What we aren't seeing is human-generated art that nobody would confuse with a famous work - of which there is of course a lot out there - but IMO it generally looks "not famous" in very different ways. More "total execution issues" vs detail issues.
Also impressionism is probably one of the most favorable art styles for AI. The lack of detail means there are fewer places for AI to fuck up.
A street with cafe chairs and lights, that's like an entire genre of impressionist paintings.
Generative AI is so cool. My wife (a creative director) used it to help design our wedding outfits. We then had them embroidered with those patterns. It would have been impossible otherwise for us to have that kind of thing expressed directly. It’s like having an artist who can sketch really fast and who you can keep correcting till your vision matches the expression. Love it!
I don’t think there have been any transformative AI works yet, but I look forward to the future.
It’s unsurprising to me that AI art is often indistinguishable from real artists’ work, because famous art is famous for some reason other than technical skill. Certainly there are numerous replica painters who are able to make marvelous pieces.
Anyway, I’m excited to see what new things come.
From the article:
So maybe some people hate AI because they have an artist's eye for small inadequacies and it drives them crazy.
This is it 100%.
When somebody draws something (in an active fashion), there is a significantly higher level of concentration and thought put towards the final output.
By its very nature, GenAI mostly works through an inadequately descriptive medium (e.g. text), with which a user must then WAIT until an output that roughly matches their vision "pops" out. Can you get around this? Not entirely, though you can mitigate it through inpainting, photobashing, layering, controlnets, loras, etc.
However, want to wager a guess what 99% of the AI art slop that people throw up all over the internet doesn't use? ANY OF THAT.
A conventional artist has an internal visualization that they are constantly mentally referring to as they put brush to canvas - and it shows in the finer details.
It's the same danger that LLMs have as coding assistants. You are no longer in the driver's seat - instead you're taking a significantly more passive approach to coding. You're a reviewer with a passivity that may lead to subtle errors later down the line.
And if you need any more proof, here's a GenAI image attached to _Karpathy_'s (one of the founding members of OpenAI) Twitter post on founding an AI education lab:
https://x.com/karpathy/status/1813263734707790301
Previous submission: https://news.ycombinator.com/item?id=42202288 (1 comment)
Not sure I understand the article. The author specifically chose art from humans and AI that he found difficult to categorize into human or AI art. The fact that people had a 60% success rate suggests that they are a little better at seeing the difference than he was himself?
(What am I missing? This is not like "take 50 random art objects from humans and AI", but take 50 most human like AI, and non-obvious human art from humans)
So much of the value of art, which Scott has actually endowed on these AI-generated pieces, is the knowledge that other people are looking at the same thing as you.
66% here. I was pretty much scrolling through and clicking on first instinct instead of looking in any detail.
Interestingly I did a lot better in the second half than the first half - without going through and counting them up again I think somewhere around 40% in the first half and 90% in the second half. Not sure if it's because of the selection/order of images or if I started unconsciously seeing commonalities.
The only way I can explain people getting 98% accuracy on this is being familiar with the handful of AI artists submitting their work for this competition.
It's a google form with no apparent time limit. It wouldn't surprise me if some people could do this (think of it like how older special effects in TV/movies look dated), but most likely they did an image search on each one and got one wrong.
Easy to defeat. AI can't come up with ambiguous art:
https://en.wikipedia.org/wiki/Ambiguous_image
There is a strategic feature to it based on retrospection.
Why wouldn't AI be able to retrospect?
I didn't say it can't retrospect. What it can't do is retrospect as a human mind does; it can only read the interpretation a human mind has of its own retrospection, and the human mind can't fully explain its way of thinking. So it doesn't have a useful model of the human mind, which it would need for the strategy. And strategy is a whole complex feature that would use overlapping models for the ambiguity.
It's interesting that the impressionist-styled pieces mostly fooled people. I think this is because the style requires getting rid of one of the hallmarks of AI imagery: lots and lots of mostly-parallel swooshy lines, at a fairly high frequency. Impressionism's schtick is kind of fundamentally "fuck high-frequency detail, I'm just gonna make a bunch of little individual paint blobs".
One of the other hallmarks of AI imagery was deliberately kept out of this test: there are no shitposts, except one, included as an example of "the dall-e house style". It's a queen on a throne made of a giant cat, surrounded by giant cats, and it's got a lot of that noodly high-frequency detail that looks like something, but it is also a fundamentally goofy idea. Nobody's gonna pay Michael Whelan to paint the hell out of this, and yet here it is.
Duchamp rolling in his grave about this post!
It would have been interesting to know how much time most people spent per picture. Consider the quoted comment from the well-scoring, art-interested person mentioned:
"The left column has a sort of door with a massive top-of-doorway-thingy over it. Why? Who knows? The right column doesn't, and you'd expect it to. Instead, the right column has 2.5 arches embossed into it that just kind of halfheartedly trail off."
You can find this in almost every AI-generated picture. In the picture that people liked most, the AI-generated one with the cafe and canal, the legs on the chairs make little sense. Not as bad as in non-curated AI art, but still, no human would paint like this. Same for the houses in the background. If you spend, say, a minute per picture with AI art, you almost always find these random things, even if the image is stylized. Unlike human art, it has a weird uncanniness to it.
I agree that the cafe had tells, just like the city street. But Gauguin also ended up in my AI bin. With the latter I feel the cropping was very unfavourable.
Even though I was warned of the cropping, I didn't think the works would be cut that badly. Since I was working under the assumption that good specimens of each category would be chosen, the cut Gauguin didn't make it.
But in the end I'd convinced myself that Osny also had tells apart from the composition. So what do I know?
Oh come on. I guess I missed the part in the "Turing test" where a human filters out 99.999% of the machine's output prior to the test.
I think what gave AI away the most was mixed styles. If one part of the painting is blurred, and another part is very focused, you can tell it's AI. People don't do that.
I got all of Jack Galler's pictures wrong though. The man knows how to do it.
I don’t think that is an AI art Turing test.
An AI art Turing Test would be interactive with me telling it what to draw and what changes to make and see if what is producing the art is human or AI.
This species of test would also need a multi-day turnaround period on each image. And/or a video stream of the work being drawn.
"Changes" are an interesting one, honestly as a professional artist who has to pay her rent, there is certain complexity of change beyond which I am likely to say "look, we're going to need to renegotiate the budget on this if you want this much of a change from the sketch you already approved", or even "no".