Buckle Up for Bumpier Skies

(newyorker.com)

43 points | by littlexsparkee 6 hours ago

19 comments

  • kuhaku22 an hour ago

    I liked the article for its brief foray into aviation history, something I wasn't too familiar with myself past the standard Wright Brothers factoids, and for making me appreciate the smooth rides I've luckily had, especially compared to that poor Singapore Airlines flight. The author is also good at conveying the visual sensation of turbulence using only words. Though I do feel more photographs wouldn't have hurt: of the glider, NCAR's buildings, the Boeing hangar, visualizations of Cornman's software, and the turbulence simulator.

    The article is a good reminder why politics matter and why we can't keep on seeing climate change as some far-off issue that future generations will just bear the brunt of.

    > and there was talk of dismantling NCAR altogether. Russell Vought, the director of the White House Office of Management and Budget, had called the research center “one of the largest sources of climate alarmism in the country.”

  • _fw 4 hours ago

    The WebGL animation at the top is really cool. It’s probably smaller than a video too, and much sharper

  • nineteen999 5 hours ago

    > Can today’s planes still keep us safe?

    Probably not if the turbines aren't spinning, no.

    • Onavo 3 hours ago

      Presumably the article is more referring to turbulence at a macro scale. If the air is so turbulent that the compressor blades stall because of it, well, we have bigger problems.

      • nineteen999 32 minutes ago

        I was referring to the really obvious animation (or lack of) at the top of the page.

        But yeah once again it seemed to go over the heads of the oblivious around here...

  • jsrozner 2 hours ago

    Idk, but the analogies in the piece strike me as AI-generated. I don't think the New Yorker is using AI to write pieces, so maybe the author has just been ingesting too much slop

    • troyjfarrell an hour ago

      Have you seen AI repeat itself inside a paragraph? This looks more like something an editor missed.

      Fourth paragraph, sixth sentence: "Still, at best, only two-thirds of the occupants were buckled up after seventy seconds."

      Fourth paragraph, final sentence: "Fully a third of the occupants were still out of their seats after seventy seconds."

      • whywhywhywhy 41 minutes ago

        Yeah, if you had AI rewrite things a few times and copy-and-pasted paragraphs from multiple drafts together, this is definitely something that could happen.

    • A_D_E_P_T 2 hours ago

      If it weren't the New Yorker, I'd swear up and down that Claude wrote this:

      > Turbulence is rarely that simple. It’s too scattered, too mercurial, too easily triggered by weather patterns that trigger other patterns in an endless cascade. “It’s not just one thing that’s going on,” Bob Sharman, an atmospheric scientist at NCAR, told me. “It’s not just atmospheric convection. It’s not just wind flowing over mountains. It’s everything going on all the time and interacting.”

      > “It’s not a piece of farm equipment,” Larson said. “It’s a life-support system. At thirty-five thousand feet, you can’t pull over.”

      The funny thing is that the passages that feel the most "AI-generated" come in quotes, when the author is quoting others. It could be that the author was communicating with those experts via email, and they used AI to generate their responses.

      Otherwise, I think that AI language patterns are diffusing into common use. Being so aware of them is a curse...

      • notarobot123 an hour ago

        It isn't only LLMs that use rhetorical constructs like these; humans use them too.

        • wongarsu 26 minutes ago

          Funnily enough "It isn't only X, Y too" doesn't trigger my AI-sense nearly as much as "It's not just X, it's Y". Similarly in the above quote the "It's not just U. It's not just V. It's not just X, it's Y" doesn't seem AI generated to me.

        • kuhaku22 an hour ago

          Not to mention these patterns didn't come out of thin air. LLMs are statistically regurgitating language from their training sets, which researchers probably tuned more towards journalistic sources like the one we're reading.

      • forgetfreeman an hour ago

        You're reversing causality here. LLMs train on massive bodies of human-generated content. Constructs like the ones mentioned are an entirely unremarkable staple of long-form text content produced for audiences who are accustomed to consuming long-form text content.

      • FreakLegion an hour ago

        People point to the basic structure of "It's not X, it's Y" as the hallmark of AI, but I find it's more the incongruity between X and Y, especially when figures of speech (invariably strained) are involved[1]. That first quote reads like a real interaction that's been tightened up for print, but the second, the 'farm equipment' <> 'life-support system', does smell like AI, even though the article implies it's from an in-person conversation.

        1. These are all from a single 850-word op-ed I saw the other day: "Presidents do not usually lose power because of a single speech. They lose power when a speech reveals something structural." "But the most important part of the speech was not the applause lines. It was the compression." "Markets can rise. But voters do not live inside charts. They live inside grocery stores and mortgage payments." "The issue is not whether a statistic was stretched. The issue is that the presidency becomes reactive instead of agenda-setting." "That friction is not theoretical — it is baked into the constitutional design." "Trump’s address was not a pivot to persuasion — it was a doubling down on confrontation as strategy." "They are not just another campaign cycle. They are leverage."