The FSF considers large language models

(lwn.net)

88 points | by birdculture 7 hours ago

60 comments

  • badsectoracula 6 hours ago

    > The prompt used to create the code should also be provided. The LLM-generated code should be clearly marked.

    I have a feeling the people who write these haven't really used LLMs for programming, because even just playing around with them makes it obvious that this makes no sense - especially if you use something local that lets you rewrite the discussion at will, including any code the LLM generated. E.g. sometimes when trying to get Devstral to make something for me, i let it generate whatever (sometimes buggy/non-working) code it comes up with[0] and then i start editing its response to fix the bug, so that further instructions proceed under the assumption it generated the correct code from the get-go, instead of trying to convince it[0] to fix the code it generated. In such a scenario there is no clear separation between LLM-generated code and manually written code, nor any specific "prompt" (unless you count every snapshot of the entire discussion, each time one hits the "submit" button, as a series of prompts - which technically is what the LLM uses as a prompt, rather than what the user types - but i doubt this is what the author had in mind).
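
    (To illustrate - a rough sketch, assuming a local OpenAI-compatible endpoint; the URL and model name below are made up for illustration - every press of "submit" just re-sends the whole, possibly hand-edited, transcript as the real prompt:)

        import requests

        # hypothetical local OpenAI-compatible server (llama.cpp/Ollama style);
        # the URL and model name are assumptions, not anyone's real setup
        URL = "http://localhost:8080/v1/chat/completions"

        history = [
            {"role": "user", "content": "Write a function that parses the config."},
            # the model's reply, with its buggy code hand-edited into a fixed
            # version, as if it had been generated correctly from the get-go
            {"role": "assistant", "content": "def parse_config(path): ..."},
            {"role": "user", "content": "Now add error handling."},
        ]

        # what the model actually sees as its "prompt" is this entire edited
        # transcript, not any single thing the user typed
        resp = requests.post(URL, json={"model": "devstral", "messages": history})
        print(resp.json()["choices"][0]["message"]["content"])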

    And all that is without taking into account what someone commented under the article: code is often not even produced in a single session, but through plans, restarts from scratch, summarization, etc (and there are tools to automate these too, and those can use a variety of prompts of their own that the end user isn't even aware of).

    TBH i think that if the FSF wants to "consider LLMs" they should begin by gaining some real experience using them first - and by bringing people with such experience on board to explain things to them.

    [0] i do not like anthropomorphizing LLMs, but i cannot think of another description for that :-P

    • falcor84 4 hours ago

      Agreed, it's almost like requiring that code always come with full transcripts of all the meetings where the team discussed the next steps.

    • jmathai 4 hours ago

      > I have a feeling the people who write these haven't really used LLMs for programming because even just playing around with them will make it obvious that this makes no sense

      This is one problem with LLM-generated code: it is all very greenfield. There's no correct, or even established good, way to do it, because it's somewhat unbounded in both possible approaches and quality of output.

      I've tried tracking prompt history in many permutations as a means of documenting changes and making rollbacks more feasible. It hasn't felt like that's the right way to think about it.

    • cxr 6 hours ago

      What you're describing isn't any different from a branch of commits between two people practicing a form of continuous integration where they commit whatever they have (whether it breaks the build or not, or is buggy, etc.), capped off by a merge commit when it's finally in the finished state.

      • badsectoracula 5 hours ago

        Eh, i do not think these are comparable, unless you really stretch the idea of what a "commit" is and who makes it, and you consider all sorts of destructive modifications of branch history and commits to be normal.

        • cxr 2 hours ago

          Huh?

          Hal and Dave work together. Hal is going home at 6:00 PM, but before it's time to leave, Dave tells Hal to go ahead and start working on some new feature. At 5:50 PM, Hal hits Cmd+Q, saving whatever unfinished work there is, no matter what state it's in, and commits it to a new development branch with the commit message "Start on $X" followed by a copy of the explanation that Dave first gave Hal about what they needed to do. Then Hal pushes that commit upstream for Dave and leaves.

          At 6:00 PM Dave, still at the office, runs git-pull, spends a little time fixing up several issues with the code Hal wrote, then commits the result and pushes it to the development branch of the shared repo. Dave's changes mainly focus on getting the project to build again and making sure some or all of the existing tests pass. Dave then writes an email to Hal about this progress. At 8:30 PM Hal reads Dave's email about what Dave fixed and what Hal should do next, then runs git-pull and writes some more code, pushing the result to the development branch before watching a movie and going to bed. Around midnight, Dave runs git-pull, fixes some more problems with the code that Hal wrote, and pushes that to the repo. The next day at the office, they resume their work together following this pattern, where Hal writes the bulk of the code and Dave fixes it up and/or provides instruction for Hal about how to proceed. When they're done, one of them switches to the main branch with `git checkout main` and runs `git merge $OUR_DEVELOPMENT_BRANCH_NAME`.

          Which part of this entails "destructive modifications of branch history"?

  • isodev 6 hours ago

    > There is also, of course, the question of copyright infringements in code produced by LLMs, usually in the form of training data leaking into the model's output

    Well yes, LLMs like Claude Code are merely a "copyright violation as a service". Everyone is so focused on the next new "AI" feature, but we haven't actually resolved the issue of all the model providers using stolen code to train their models, or their lack of transparency about the sources of their training data.

    • 1gn15 5 hours ago

      Copyright violation is not stealing, and training is not copyright violation (it's already been ruled as fair use, multiple times).

      • blibble 3 hours ago

        > it's already been ruled as fair use, multiple times

        most countries don't have a concept of fair use

        but they nearly all have copyright law

        • nradov an hour ago

          Why should we care about conflicting IP laws in other countries? Most of them have no effective means of extraterritorial enforcement.

        • quantummagic 3 hours ago

          That fact in itself is a worse injustice than anything the LLM companies are doing. At the very least, copyrighted work should be open to use in reporting, parody, and critique. Having no concept of such fair use is oppressive and stifling.

          • Hamuko 3 hours ago

            Fair use is not the only way to allow critique and/or parody.

            • quantummagic 3 minutes ago

              What are you talking about? You can call it whatever you want, but it amounts to fair-use if you're allowed to use something for the purposes of critique and/or parody.

      • matheusmoreira 4 hours ago

        Yeah, copyright infringement isn't stealing, copyright shouldn't even exist to begin with.

        I just think it's especially asinine how corporations are perfectly willing to launder copyrighted works via LLMs when it's profitable to do so. We have to perpetually pay them for their works and if we break their little software locks it's felony contempt of business model, but they get to train their AIs on our works and reproduce them infinitely and with total impunity without paying us a cent.

        It's that "rules for thee but not for me" nonsense that makes me reach such extreme logical conclusions that I feel empathy for terrorists.

        • thesz 2 hours ago

            > copyright shouldn't even exist to begin with.
          
          You then get trade secrets and guilds. Hardly an improvement.

          • matheusmoreira 2 hours ago

            Secrets? Just leak them, it only has to happen once. Guilds? Revoke their privileges and protections, and there's nothing they can do about it.

            Absolutely an improvement. Information wants to be free. Stop criminalizing it and people will find a way to free it. And once it's out there it's over, there is no containing it.

            • wakawaka28 2 hours ago

              People want to be paid for their work. If you don't let them, they won't do the work. "Information" does not have a mind of its own.

              Even when the idea of a thing is "out there" there is a lot of grunt work and special stuff that needs to be implemented to get the best outcomes. Nobody owes you that work for free. Regardless of what GPL copers say, it is very hard to make money with software without enforcing some access restrictions and IP. Open source is great when it works, but it does not work for most things nor is it at the leading edge for most things.

              • matheusmoreira 28 minutes ago

                Welp. Then don't do the work if you don't want to. Nobody's advocating for your enslavement.

        • wakawaka28 2 hours ago

          Your views are contradictory. Copyright shouldn't exist, but the businesses infringing on it are the bad ones?

          >We have to perpetually pay them for their works and if we break their little software locks it's felony contempt of business model

          You don't have to pay them, or break their restrictions.

          >but they get to train their AIs on our works and reproduce them infinitely and with total impunity without paying us a cent.

          You don't need to allow this either. Unfortunately open-source code is necessarily public.

          >It's that "rules for thee but not for me" nonsense that makes me reach such extreme logical conclusions that I feel empathy for terrorists.

          The way LLMs use code is fundamentally different from wholesale copying. If someone read your code and paraphrased it and tweaked it, it would be a completely new work not subject to the original copyright. At least it would be really hard to get a court to regard it as an infringement. This is like what LLMs do.

          • matheusmoreira 38 minutes ago

            > Your views are contradictory.

            How is it contradictory? Tell it to the corporations who defend copyright for you and public domain fair use for themselves. If they were honest, they'd abolish copyright straight up instead of creating this idiotic caste system.

            > Copyright shouldn't exist, but the businesses infringing on it are the bad ones?

            Yes. Copyright shouldn't exist to begin with, but since it does, one would expect the corporations to work within the legal framework they themselves created and lobbied so heavily for. One would expect them to reap the consequences of their actions and be bound by the exact same limitations they seek to impose on us.

            It is absolutely asinine to watch them make trillions of dollars breaking their own rules, and simultaneously pretend that nothing is happening, and therefore you mortal citizen must still abide by the same rules they are breaking.

            The sheer dishonesty of it makes me sick to my core.

            > If someone read your code and paraphrased it and tweaked it, it would be a completely new work not subject to the original copyright.

            Derivative work. I was once told that corporate programmers are warned by legal not to even read AGPLv3 source code, lest it subconsciously infect their thought processes and the final result. This is also the reason we have clean room reverse engineering where one team produces documentation and another uses it to reimplement the thing. Isolating minds from the copyrighted inputs is the whole point of it all.

            There is absolutely no reason to believe LLMs are any different. They are literally trained on copyrighted inputs. Either they're violating copyrights or we're being oppressed by these copyright monopolists who say we can't do stuff. Can't have both.

            > At least it would be really hard to get a court to regard it as an infringement.

            It's extremely hard to get a court to do anything. As in tens of thousands if not hundreds of thousands of dollars difficult. Nothing is decided until actual judges start deciding things, and to get to that point you need to actually go through the legal system, and to do that you need to pay expensive lawyers lots of money. It's the reason people instantly fold the second legal action is threatened, doesn't matter if they're right. Corporations have money to burn, we don't.

            And that's assuming that courts are presided by honest human beings who believe in law and reason instead of political activist judges or straight up corrupt judges who can be lobbied by industry.

      • inglor_cz 5 hours ago

        I think the concerning problem is when the LLM reproduces some copyrighted code verbatim, and the user doesn't even stand a chance to know it.

        • 1gn15 5 hours ago

          Yes, but that's not what the grandparent comment was talking about.

          • isodev 5 hours ago

            If I'm the grandparent comment, it was a big part of what I meant. Stolen/unknown content goes in for training, verbatim or very close "inspired by" code comes out, and there is no way to verify the source - "violation as a service".

            • fluidcruft 3 hours ago

              Verbatim dumping is one thing but otherwise this seems closer to the issue of plagiarism than copyright. If someone studies the Linux kernel and then builds a new kernel that follows some of the design decisions and idioms that's not really copyright infringement.

              The bigger issue (spiritually anyway) seems to be the need to develop free-software LLM tools, the same way the FSF needed to develop free compilers; without them, that's what's going to keep users from being able to adapt and control their machines. The issue is also more ecological: programmers equipped with LLMs are likely much more productive at creating and modifying code.

              Some of the rest seems more like saying that anyone who studies GCC internals is forever tainted and must write copyleft code for life, which seems laughable to me. Again, this is more a topic of plagiarism than copyright - the two are fairly similar but actually different, and not as clear-cut.

              • isodev 2 hours ago

                > more a topic of plagiarism than copyright

                You’re right, in the context of a technical legal interpretation they’re different. In the context of right or wrong, they amount to the same.

                > anyone who studies GCC internals

                LLMs are not a someone; they're more like … the indigo print of some text or design that you then use to make a scrapbook to be mass-produced for profit. Very different situation.

                When the AI bubble pops, I hope we will have some equalisation back to something more ethical.

                • tjr an hour ago

                  > LLMs are not a someone

                  This is in line with my disagreement over the fair use rulings. Most people who published works that have been used to train AI systems created those works and published them for other people to consume and benefit from, not for proprietary software systems. The existing licenses and laws did not account for this; nobody was anticipating it.

                • fluidcruft an hour ago

                  I don't know... there's a pretty clear difference between copyright and, say, a utility patent or trade secret. The right and wrong in the FSF's view isn't about labor; it's about control over machines and the ability to modify them. Free software has never tried to control the community using patents and trade secrets, and is in general rather hostile to them. In fact it's fairly contemptuous of copyright and uses it from a purely utilitarian perspective. And frankly the FSF is not opposed to commercial software. They're opposed to users being unable to modify the machines and software that they are using. That's the core of the ethics. See the origins in that damn printer firmware RMS did battle with.

                  But I also disagree in general about LLMs. LLMs are statistical text models, but the general question - what if there were an "AI" that wasn't an LLM and was trained on open-source software? - is the same at the end of the day. I think whether or not LLMs are intelligent or equivalent to humans is a red herring. There's no reason not to consider the implications of machines that are indistinguishable from, or even superior to, human programmers. Particularly if we're discussing ethics, getting lost in implementation details seems like a distraction; all the derived ethics gets thrown out after the next innovation.

        • CamperBob2 4 hours ago

          When that happens, it's because the code was trivial enough to be compressed to a minuscule handful of bits... either because it literally is trivial, or because it's common enough to have become part of our shared lexicon.

          As a society, we don't benefit from copyright maximalism, despite how trendy it is around here all of a sudden. See also Oracle v. Google.

          • thesz 2 hours ago

            Quake's inverse-sqrt approximation is not trivial and is not common.

            [1] https://www.reddit.com/r/programming/comments/oc9qj1/copilot...

            • CamperBob2 2 hours ago

              (Shrug) It's a math trick, documented by Abrash among others and very heavily discussed on forums such as this one. And it didn't originate in the Quake codebase. Like much IEEE754 hackery, it goes back to the father of IEEE754 himself, William Kahan.
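
              (For reference, the trick amounts to a bit-level reinterpretation plus one Newton-Raphson step. A rough Python sketch of the idea - not the Quake source itself:)

                  import struct

                  def fast_inv_sqrt(x: float) -> float:
                      # reinterpret the float's IEEE 754 bits as a 32-bit integer
                      i = struct.unpack("<I", struct.pack("<f", x))[0]
                      # the famous magic-constant shift-and-subtract
                      i = 0x5F3759DF - (i >> 1)
                      # reinterpret the adjusted bits as a float again
                      y = struct.unpack("<f", struct.pack("<I", i))[0]
                      # one Newton-Raphson iteration refines the estimate
                      return y * (1.5 - 0.5 * x * y * y)

                  print(fast_inv_sqrt(4.0))  # ~0.4998, vs. exact 1/sqrt(4) = 0.5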

              Nobody benefits from a law that says that LLMs can't regurgitate the Quake inverse-sqrt approximation. If that's what the law actually says, which it isn't.

      • thesz 2 hours ago

        One can train a model with copyrighted code as it is fair use - fair enough.

        Are there any rulings about the use of code generated by a model trained on copyrighted code?

        I believe the distinction is clear.

      • isodev 5 hours ago

        Not really, only a handful of authorities have weighed in on that, and most of them in a country where model providers literally buy themselves policy and judges.

    • falcor84 4 hours ago

      Wasn't copyleft essentially intended to be "copyright violation as a service"? I.e. making it impossible for an individual working with copyleft code to use copyright to assert control over the code?

      • wvenable 3 hours ago

        Copyleft requires strong copyright protections. Without a license, you have no rights at all to use the code. If you want to use the code, because it's copyrighted, you have to abide by the terms of the license.

  • somewhereoutth 3 hours ago

    1. Understand that code that has been wholly or partly LLM-generated is tainted - it has (in at least some part) been created neither by humans nor by a deterministic, verifiable process. Any representations as to its quality are therefore void.

    2. Ban tainted code.

    Consider code that (in the old days) had been copy-pasted from elsewhere. Is that any better than LLM-generated code? Why yes - to make it work a human had to comb through it, tweaking as necessary, and if they did not, then stylistic cues make the copy pasta quite evident. LLMs effectively originate and disguise copy pasta (including mimicking house styles), making it harder or impossible to validate the code without stepping through every single statement. The process can no longer be validated, so the output has to be. Which does not scale.

    • acoustics 3 hours ago

      It depends on the nature of the code and codebase.

      There have been many occasions, working in a very verbose enterprise-y codebase, where I know exactly what needs to happen and the LLM just types it out. I carefully review all 100 lines of code and verify that they are very nearly exactly what I would have typed myself.

  • bgwalter 6 hours ago

    It looks like the FSF is going to sit this one out like the SaaS revolution, to which they reacted late with the AGPL but did not push it. They are not working on a new license and Siewicz is already low-key pushing in favor of LLMs:

    "Many years ago, he said, photographs were not generally seen as being copyrightable. That changed over time as people figured out what could be done with that technology and the creativity it enabled. Photography may be a good analogy for LLMs, he suggested."

    I have zero trust in the FSF since they backstabbed Stallman.

    EDIT: Criticizing anything from LWN, be it Debian, Linux or FSF related, results in instant downvotes. LWN is not a critical publication and just lionizes whoever has a title and bloviates on a mailing list or at a conference.

    • lukan 5 hours ago

      "I have zero trust in the FSF since they backstabbed Stallman."

      That line might also have been the controversial one.

      • bgwalter 5 hours ago

        Sure, but remember that the Stallman situation started with a highly clumsy Minsky/Epstein mail on an MIT mailing list. The Epstein coverup was bipartisan and now all tech companies are ostensibly on Trump's side and even finance his ballroom.

        Are there any protests or demands for the cancellation of Trump, Clinton, Wexner, Black, Barak?

        I have not seen any. The cancel tech people only go after those who they perceive as weak.

        • duped 3 hours ago

          Millions of people have been trying for a decade to cancel Trump.

          • bgwalter an hour ago

            The tech people have never tried to cancel Bill Clinton or Ehud Barak. They were vocal about Trump when it was politically expedient during Trump's first term and from 2020-2024.

            They have been entirely silent since January 2025.

        • inglor_cz 5 hours ago

          Cancellation of Stallman was the low point of that period, at least within tech, but it also made quite a lot of people aware that this monster of a practice must be resisted, or it will devour everyone unchecked. (Or, at least, anyone.)

          • wizzwizz4 5 hours ago

            You're forgetting the "second cancellation", where people brought legitimate (and often long-standing) criticisms against Richard Stallman. Cancelling a philosopher for having bad takes on age of consent, but otherwise drawing the line between "rape" and "not rape" in a sensible place, is not a good idea; but removing a community leader for a long history of applied misogyny is much more appropriate.

            • bgwalter 4 hours ago

              These measures are not applied equally though.

              Deb Nicholson, PSF "Executive Director", won an FSF award in 2018, handed to her by Stallman himself. Note that at that time at least one of Stallman's embarrassing blog posts was absolutely already known:

              https://www.fsf.org/news/openstreetmap-and-deborah-nicholson...

              In 2021 Deb Nicholson then worked to cancel Stallman:

              https://rms-open-letter.github.io/

              In 2025 Deb Nicholson's PSF takes money from all new Trump allies, including from those that finance the ballroom and the destruction of the historical East Wing like Google and Microsoft. Will Deb Nicholson sign a cancellation petition for the above named figures?

              • wizzwizz4 4 hours ago

                I don't think Deb Nicholson values many of the ideas that those people stand for. What would be the point of trying to reform the organisations they're a part of?

                • bgwalter 3 hours ago

                  The PSF could reject donations from Microsoft and Google. Deb Nicholson was previously at the OSI, which is widely thought to be ... industry friendly, so that is unlikely to happen.

                  They could also have done research in 2018 before accepting the award, which is standard procedure for politicians etc. But of course they wanted the award for their career.

                  • wizzwizz4 3 hours ago

                    And before that, she was at the Software Freedom Conservancy, who are not industry friendly. I don't see why you're focusing on this one person.

                    • bgwalter 2 hours ago

                      For one, I cannot examine all signatories of the Stallman cancellation petition here. And given how cancel- and defamation-happy the PSF is, this is an excellent example of double standards.

            • pessimizer 4 hours ago

              > the "second cancellation", where people brought legitimate (and often long-standing) criticisms against Richard Stallman.

              No, the reason why this "second cancellation" is vague is because it was the typical feeding frenzy that happens after a successful cancellation, where people hop on to paint previously uninteresting slanders in a new light. Stallman, before saying something goofy about Epstein, was constantly slandered by people who hated what he stood for and by people that were jealous of him. After he said the goofy thing, they all piled in to say "you should have listened to me." The "second cancellation" is when "he asked me out once at a conference" becomes redolent of sexual assault.

              None of them seem to like the politics of Free Software, either. They attempt to taint the entire philosophy with the false taint of Stallman saying that sleeping with older teenagers who seemed to be consenting isn't the worst crime in the world. The people who attacked him for that would defend any number of intimately Epstein-related people to the death; the goal imo was to break (or to take over and steer into a perversion of itself) Free Software. Every one of them was the "it's not fair to say that about Apple" type.

              • ants_everywhere 2 hours ago

                I didn't follow the "second cancellation", but re: the first cancellation, I think this thread has so far lost sight of the fact that the Epstein/Minsky comments were not the first comments RMS made about childhood sexuality, and that many of the other comments were cause for concern.

                But there does seem to be a lack of uniformity about cancellation. For example, many people seem totally unconcerned by the stuff Bernie Sanders wrote about toddler sexuality, but the same writings from RMS would almost certainly be considered deeply problematic. My take is that both are problematic.

                In other cases, sometimes I'll hear about problematic behavior that is on the order of "this awkward person asked me out", where the same behavior by a more attractive and charming person would often be welcome. It seems like the standard of acceptable behavior probably shouldn't depend on whether someone is attractive or charming.

                So IMO, the current approach depends a lot on social dynamics and both misses problematic behavior of popular people and also overly restricts normal behavior of unpopular people.

                I think we need a more rules-based approach, and my guess is that's where we'll eventually settle. Arguably, things seem to have trended that way already.

                • lukan 5 minutes ago

                  "It seems like the standard of acceptable behavior probably shouldn't depend on whether someone is attractive or charming."

                  It does not. Before asking someone out, there is also nonverbal communication happening, like eye contact. Ignoring eye contact and all the little signs and then asking out of the blue can be awkward, but it is also not problematic in a bad way, unless it is asked in a sexually charged way.

                  The standard of acceptable behavior is reading all the little signs before asking for more. But asking someone out in a non-sexually-charged way is never problematic, even though it might result in comments.

              • wizzwizz4 4 hours ago

                > it was the typical feeding frenzy that happens after a successful cancellation

                It was actually a few years later, prompted by Richard Stallman's reinstatement by the board. I don't know what you mean by "feeding frenzy", but I habitually ignore the unreasonable voices in such cases: it's safe to assume I'm not talking about those.

                > "he asked me out once at a conference"

                That wasn't the main focus of the criticism I saw. However, there is an important difference between an attendee asking someone out at a conference, and an invited speaker (or organiser) asking someone out at a conference. If you're going to be in a leadership position, you need to be aware of power dynamics.

                That's a running theme throughout all of the criticism of Richard Stallman, if you choose to abstract it that way: for all he's written on the subject, he doesn't understand power dynamics in social interactions. He's fully capable of understanding it, but I think he prefers the simpler idea of (right-)libertarian freedom. (And by assuming he expects others to believe he'll behave according to his respect of the (right-)libertarian freedom of others, you can paint a very sympathetic picture of the man. That doesn't mean he should be in a leadership position for an organisation as important as the FSF, behaving as he does.)

                > None of them seem to like the politics of Free Software, either.

                Several of them are involved in other Free Software projects. To the extent those people have criticisms of the politics of Free Software, it's that it doesn't go far enough to protect user freedoms. (I suspect I shouldn't have got involved in this argument, since I'm clearly missing context you take for granted.)

                • inglor_cz 2 hours ago

                  IMHO invited speakers aren't in any position of power over attendees. At least not in Western countries, IDK how it works in Dubai etc.

                  "Power" should not be confused with "prestige". If an attendee can ensure the speaker's disinvitation from future events by their complaint, they have plenty of power themselves.

                • serf 3 hours ago

                  >there is an important difference between an attendee asking someone out at a conference, and an invited speaker (or organiser) asking someone out at a conference. If you're going to be in a leadership position, you need to be aware of power dynamics.

                  so one side of social messaging is "Don't bother trying to look for a date if you're not a CEO, worth millions, have a home, an education, a plan, a yacht and a summer home" ,

                  and the other side is

                  "If you're powerful you'd better know that any kind of question needs to be re-framed with the concept of a power dynamic involvement, and that if you're sufficiently powerful there is essentially no way to pursue a relationship with a lesser mortal without essentially raping them through the power dynamics of the question itself and the un-deniability of a question asked by such a powerful God."

                  ... and you say birth rates are declining precipitously?

                  Pretty ridiculous. It used to be that we used conventions as the one and only time to flatten the social hierarchy -- it was the one moment where you could talk and have a slice of pizza with a billionaire CEO or actor or whatever.

                  Re-substantiating the classism within conventions just pushes them further into corporate product marketing and employment fairs -- in other words it turns them into shit no one wants to attend without being paid to sit in a booth.

                  But all of that isn't the problem : the problem lies with personal sovereignty.

                  If someone doesn't want to do something, they say no. If they receive retribution because of that no we then investigate the retribution and as a society we turn the ne'er-do-well into a social pariah until they have better behavior.

                  There is a major problem when we as a society have decided "No, the problem is with the underlying pressure of what a no 'may mean' for their future." 'May' being the operative word.

                  We have turned this into a witch-hunt, but for maybe-witches, or those who may turn into witches, without any real evidence of witchcraft to prompt the chase.

                  "Power dynamics" is shorthand for "I was afraid I'd be fired if I denied Stallman." Did anything resembling this ever occur?

                  • wizzwizz4 2 hours ago

                    If you're sufficiently powerful that your power affects how other people feel they can interact with you, then you should consider reducing your power. If it's important for you to be that powerful, and there's really no way to achieve your goals without it, then that's a sacrifice you're willing to make.

                    > If someone doesn't want to do something, they say no. If they receive retribution because of that no we then investigate the retribution and as a society we turn the ne'er-do-well into a social pariah until they have better behavior.

                    This only works if we have accountability. You can't have accountability if there's no evidence that a conversation took place, and if decisions aren't made in open and transparent ways: you can't classify things as "retribution" or "not retribution" without… witch hunts. Oh. So it doesn't solve the witch-hunt problem. (Wearing a body-cam everywhere would, but that kind of mass surveillance has its own problems.)

                    "Turn the ne'er-do-well into a social pariah" doesn't help the victim of retribution.

                    If the (alleged) ne'er-do-well has a strong enough support network, no force on earth will turn them into a social pariah, so this becomes an exercise in eroding political support, and… oh. That's also a procedure decoupled from justice.

                    This is not a simple topic, and it does not have simple solutions. Many of the issues you've identified (such as selective enforcement) are issues, but that doesn't mean your proposed solutions actually work.

                    > "I was afraid i'd be fired if I denied Stallman." ; did anything resembling this ever occur?

                    Edit: while waiting for the rate limit to expire, I found some claims of Paul Fisher, quoted in the "Stallman Report" https://stallman-report.org/:

                    > RMS would often throw tantrums and threaten to fire employees for perceived infractions. FSF staff had to show up to work each day, not knowing if RMS had eliminated their position the night before.

                    This conflicts with my understanding of Richard Stallman's views and behaviour. I'll have to look into this further. I've left my original answer below.

                    ---

                    I vaguely recall a time he tried to remove authority from someone, in favour of a packed committee, because he disagreed with a technical decision they made. (It didn't really work, because the committee either had no opinion, or agreed with the former authority figure about that technical decision.) Can't find a reference though.

                    But in this kind of context, I'm not aware of Richard Stallman ever personally retaliating against someone for saying no to him. I don't imagine he'd approve of such behaviour, and he's principled enough that I doubt he'd ever do it. (There are a few anecdotes set in MIT about pressures from other people, but these are not directly Richard Stallman's fault, so I think it's unfair to blame him for them.)

                    This isn't really the point, though. A community leader should be aware of "people stuff" like this, and act to mitigate it. If he doesn't want the responsibility, he shouldn't have the power. By all accounts, he doesn't want the responsibility.

    • pessimizer 5 hours ago

      I have no idea how to criticize them because I have no idea what to say about LLMs with regard to the GPL, other than that Free Software should try its best to legally protect itself from LLMs being trained on its code.

      I've always been in favor of the GPLs being pushed as proprietary, restrictive licenses, and being as aggressive in enforcement as any other restrictive license. GPL'd software is public property. The association with Open Source, "Creative Commons" and "Public Domain" code is nothing but a handicap; proprietary code can take advantage of all permissively licensed code without pretending that it shares anything in terms of philosophy, and without sharing back unless it finds it strategically advantageous.

      > They are not working on a new license and Siewicz is already low-key pushing in favor of LLMs

      I just have no idea what I would put in a new license, or what it means to be "in favor" of LLMs. Are Free Software supporters just supposed to not use them, ever? Even if they're only trained on permissively licensed code? Do you think that it means that people are pushing to allow LLMs to train on GPL-licensed software?

      I just don't understand what you're trying to say. I also have zero trust in the FSF over Stallman, simply because I don't hear people who speak like Stallman at the FSF; i.e., I think his vision was pushed out along with his voice. But I do not understand what you're getting at.

      • bgwalter 5 hours ago

        More or less what you said in your last paragraph: Stallman also reacted late to the web revolution, but at least he was passionate. That passion seems gone.

        I don't see any sense of urgency in the reported discussion, or any will to fight against large corporations. The quoted parts of the article do not seem very prepared: there are a lot of maybes, no clear stance, and no overarching vision that LLMs must be fought for the sake of software freedom.

    • gjvc 6 hours ago

      Yes. 100% agree.

  • 1gn15 5 hours ago

    > A member of the audience pointed out that the line between LLMs and assistive (accessibility) technology can be blurry, and that any outright ban of the former can end up blocking developers needing assistive technology, which nobody wants to do.

    This is because LLMs are a type of assistive technology, usually for those with mental disabilities. It's a shame that mental disabilities are still seen as less important than physical ones. If one took them seriously, one would realize that banning LLMs is inherently ableist. Just make sure the developer takes accountability for the submitted code.