The Writers Came at Night

(metropolitanreview.org)

28 points | by ctoth 3 hours ago

11 comments

  • ofalkaed 2 hours ago

    I am not sure what the general point of this is. For a good chunk of their conversation it seems to show why AI will fail in the arts: the AI is incapable of understanding the writers' frustration with it, as the conversation demonstrates; it misses the humanity of the situation and merely states it, states it as a weird sort of concession. But at the end the story seems to undercut that by making it all out as futile and the writers pretentious and/or the AI cruel, which leaves the whole rather thin. The final prompt to the AI had a great chance for a bit of recursive metafictional fun, but that does not seem to be used; it could be a hint at a subtle bit of indirect metafiction, but I don't think it was.

    • cryzinger 5 minutes ago

      It spoke to me as someone who's not jazzed about LLMs but also not convinced by the "it's violating our precious copyright!" arguments against them.

      I think there's something in there with the character hierarchy of screenwriter vs novelist vs poet; it seems like the screenwriter in the story writes to make a living, the novelist does it for prestige, and the poet does it largely for the love of the game. The screenwriter is on board with AI until he realizes it'll hurt him more than it'll help him--ironic since he had been excited about being able to use different actors' likenesses!--and the whole time he's looking down at the poet like "Oh, god, if all this takes off I'm going to be as poor and pathetic as that guy." (Which raises interesting questions about the poet's stake in all of this: he doesn't actually have much to lose here, considering how little money or recognition he gets in the first place, but he's helping the other two guys anyway.)

      I dunno. I think it's a messy story in the same way that the conversation about AI and the arts is itself messy, which I like. And I always appreciate a story that leaves me with questions to mull over instead of trying to dump a bunch of platitudes in my lap :P

  • acessoproibido 2 hours ago

    Telling stories is as old as humanity; no machine will ever make storytelling obsolete. AI will change it, for sure, but change is as constant as the waxing and waning of the moon.

    This was a really entertaining read. Do any of you have similar contemporary stories to share?

  • oorza an hour ago

    I think the story makes a good point, but I'm not sure it's even the primary point the story was trying to make.

    > “Writing a book is supposed to be hard,” he said.

    > “Is it, though?” said the AI. The novelist wasn’t sure, but he thought he detected a touch of exasperation in the machine’s voice.

    > “Perseverance is half the art,” he said. He hadn’t had much natural talent and had always known it, but he had staying power.

    It's this right here. I don't think any LLM-based AI is going to be able to replace raw human creativity any time soon, but I do think it can dramatically reduce the effort it takes to express your creativity. And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept the flip side: people who have been left behind, despite unique insight and intelligence, because of a lack of work ethic will be propelled forward.

    I think a lot of the Luddite-esque response to AI is actually a response to this realization happening at a subconscious level. From the gifted classes in middle school until I was done with schooling, I can remember two types of students: those who didn't work very hard but succeeded on their talents, and those who were otherwise unexceptional beyond their organizational skills and work ethic. Both groups thought they were superior to the other, of course, and the latter group has gone on to have more external success in life (at least among the student peers I've kept in contact with decades later). The smart lazy people are high-ranking individual contributors, while the milquetoast hard workers are all in management, and the smart lazy people who report to them bitch about them. The inversion of that power dynamic in creative and STEM professions... the implications are so obvious they're not even worth describing.

    Let's say, just for the sake of argument, that AI can eventually level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI; how does my prompt for an epic space opera vary meaningfully from yours? In other words, everything is reduced to an individual's unique perspective (and how they encode it into their communication with the AI), because the AI has normalized everything else away (access to vocabulary, access to media, time to create, everything). In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased.

    We can wring our hands about whether that reality can ever exist, or whether it exists now, but the truth is that's how AI is being sold, and I think that's the reality people are reacting to.

    • BarryMilo 21 minutes ago

      > Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works.

      This requires the machine to understand a whole bunch of things. You're talking about AGI; at that point there will be blood in the streets, and screenplays will be the least of our problems.

    • majormajor an hour ago

      > And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept its contrapositive: people who have been left behind despite unique insights and intelligence because of a lack of work ethic will be propelled forward.

      I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.

      The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intellectually in some domain.

      • oorza an hour ago

        > I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.

        That's the truth right now, but that's merely a limitation of the technology. Particularly if you imagine arbitrarily wide context windows such that the LLM can usefully begin to infer your specific preferences and implications over time.

        > The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motived but are ALSO naturally gifted creatively/intelligently in some domain.

        Those are the people I mention at the end, those who clear the bar into being uniquely special. From what I hear from my friends who have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.

        • majormajor 24 minutes ago

          No previous force multiplier has lifted the "lazy but smart" over the "smart and NOT lazy". That's not how laziness works, or how taste and expectations work. The "smart and NOT lazy" will evolve their preferences, perspectives, and point of view over time much faster than the "smart and lazy" will, so even if they have these agents doing all their work for them, the people motivated to introspect much more on that work will be the ones driving the trends and leading the edge of creative production.

          It's like conventions in art: you could make Casablanca much more easily today than in 1942. But if you made it today it would be seen as lazy and cliche and simplistic, because it's already been copied by so many other people. If you make something today, it needs to take into account that everyone has already seen Casablanca + nearly 85 additional years of movies and build on top of that to do something interesting that will surprise the viewer (or at least meet their modern expectations). "The best created human works" changes over time; in your proposed world, it will change even faster, and so you'll have to pay even more attention to keep up.

          So if you're content to let your AI buddy cruise along making shit for you while you just put in 1 hour a day of direction, and someone else with about equal natural spark is hacking on it for 10 hours a day—watching what everyone else is making, paying much more active attention to trends, digging in and researching obscure emerging stuff—then that second person is going to leave you in the dust.

          > Those are the people I mention at the end, those that clear the bar into being uniquely special. From what I hear from my friends that have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.

          Again, it's a false dichotomy. What you described was just "super super smart", not what I suggested as "smart + hard worker": "In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career that clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased." That's not hard work + smart, that's "generationally smart genius." And that set is much smaller than the set I'm talking about. It's very easy to coast on "gifted but lazy" and perpetually be a big fish in a small pond school-wise. But there are ponds out there full of people who do both. Twenty or thirty years ago this was the difference between a 1540 SAT score, A's and B's in high school, and going to a very good school, versus a 1540 SAT score, A's in high school with a shitload of AP courses plus significant positions in extracurricular activities, and going to MIT. I don't know what it looks like for kids today - parents have cargo-culted all the extracurriculars so that they now reflect the parents' drive more than the kids' - but those kids who left the pack behind to go to the elite institutions were grinders AND gifted.

      • kingofmen 44 minutes ago

        Of course talent+effort are better than either alone, but it seems strange to argue that there will be zero effect on the value of having just one of them. AI may not raise the talented lazy person straightforwardly above the hard-working grinder but it seems likely that it will alter their relative position, in favor of talent.

        • majormajor 18 minutes ago

          What does it even mean to say "having just one of them"? I think the false dichotomy torpedoes the ability to predict the effect of new tools at all. There's already a world of difference between the janitor who couldn't learn how to read but does his best to show up and clean the place as well as he can every day, and the middle-manager engineer with population-median math or engineering abilities but a 12-hour-day work ethic that has let him climb the ladder a bit. And the effect of the AI tools we're considering here is going to be MUCH larger on one than the other - it's going to be worse for the smarter one, until the AIs are shoveling crap around with human-level dexterity. (Who knows, maybe that's next.)

          Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture" - they didn't flunk out of high school or college, and they haven't quit their office job to bum around - so at that point there's not the same black-and-white "it'll help you" vs. "it'll hurt you" shortcut classification.

          EDIT: here's a scenario where it'll be harder to be lazy as a software engineer already, not even in the "super AI" future: in the recent past, if you were quicker than your coworkers and lazy, you could fuck around for 3 hours then knock something out in 1 hour and look just as productive as many of your coworkers, or more. If everyone knows - even your boss - that it actually should only take 45 minutes of prompting and then reviewing code from the model, and they can trivially check that in the background themselves if they get suspicious, then you might be in trouble.

    • chairmansteve 7 minutes ago

      Very well said.