111 comments

  • jmward01 a day ago

    There is a difference between allowing use and supporting a use case. Supporting would be, for example, adding military specific training to the model. If we were talking about a car or a tractor nobody would bat an eye. Should companies not sell potatoes to the military? To be clear, I am not a fan of developing for the military, but the idea that a TOS will stop AI being weaponized is a bit naive. Similarly, if the goal is to responsibly introduce AI to the world then shaping the military's adoption of it by allowing its use, without specifically supporting it, is probably a good path.

    • largbae a day ago

      HN has a multinational audience, but for those in the US, do you really want our best and brightest to shun defense while other nations' best and brightest double down?

      If so, what is the outcome that you would like to see from that policy?

      • bglazer a day ago

        I would prefer that the United States scale back military spending while negotiating arms control agreements with our major adversaries. Arms races are mutually destructive. They distort the economies and political incentives of all countries involved. They create a self-sustaining loop of increasing capabilities and perceived threat from both sides, which sucks the political focus and productive capability into a black hole of unaccountable, blind militarism. Further, the cast-off last-generation weapons get fed to lower-capability allies and proxies who go on to try to dominate their regions through one-sided violence. Finally, given the existence of nuclear weapons, any move that increases the risk of serious violence between the US and its peer adversaries also comes with a risk of destroying human civilization.

        Now you could say that we just have to compete to ensure deterrence. China is building hypersonic glide vehicles and autonomous drones so we have to also. Ask yourself if you trust your political leaders and the political leaders of all competing powers to use these weapons responsibly. I do not, so I think we should not build them.

        • NicuCalcea a day ago

          The US has virtually no involvement in my home country, yet that hasn't prevented one of their historical adversaries from constantly attacking and threatening us. My country is on course to develop closer ties with NATO and the West for that very reason.

          Of course, I would also prefer it if everyone scaled back military spending; it's an easy thing to wish for when it takes so many parties to actually do it. But failing that, I would rather have the West outspend and outsmart their adversaries.

        • tucnak a day ago

          And yet some of the coolest tech, most powerful machines, and processes came out of the Cold War.

      • lsy a day ago

        This argument justifies any unethical behavior. "Some bogeyman on the other side of the world is (letting computers decide to kill people | manufacturing and using bioweapons | torturing their captives for information), are you really going to let your 'ethics' get in the way of our safety?"

        Maybe you don't think there is any line we shouldn't cross, but I'm guessing you do. The fact that other people draw that ethical line in a different place shouldn't be all that surprising. The people you're addressing likely believe that the US can achieve its defense goals without turning over matters of life and death to LLMs or image recognition systems.

        • JumpCrisscross a day ago

          > people you're addressing likely believe that the US can achieve its defense goals without turning over matters of life and death to LLMs or image recognition systems

          I think this is generous. There are a lot of folks at universities and in Silicon Valley who oppose co-operating with the military in any capacity.

      • AlotOfReading a day ago

        For me it's essentially the same concept as conscientious objection. Many countries have mandatory military service. Some of those countries also recognize conscientious objector status while others do not. Do you really want people to shun military service while other nations' citizens serve?

        As a conscientious objector, yes. If moral and ethical considerations don't separate us from our opponents, what are we even fighting for?

        • JumpCrisscross a day ago

          > As a conscientious objector, yes. If moral and ethical considerations don't separate us from our opponents, what are we even fighting for

          I guess I'm a little sour about conscientious objection to war in general--versus specific conflicts--while comfortably living in and profiting from a society bathed in the peace that the credible threat of violence affords.

          The irony of this being that instead of a balanced targeting AI we'll get one mostly built by those who wouldn't consider conscientious objection in any case.

          • AlotOfReading 21 hours ago

            We're all opposed to slavery, yet we live in societies that profit off unfree labor and modern slavery. Is it that much different?

            My line in the sand is no tools of war. You're welcome to choose a different line, but your argument here applies equally to all lines.

            • JumpCrisscross 15 hours ago

              > yet we live in societies that profit off unfree labor and modern slavery. Is it that much different?

              Yes, chattel slavery is very different from being a poor person in America.

      • marricks a day ago

        That’s because it’s not actually defense, it’s offense, and since we’re the world’s sole superpower, it’s extremely coercive offense.

        • speakfreely a day ago

          Your perspective is about 15-20 years out of date, maybe valid in the 1990s or early 2000s. The US is extremely vulnerable both economically and militarily to the "New Axis" (China, Russia, Iran, North Korea). China alone could easily out-manufacture its way to a victory in a conventional war with the US. The US, for all its faults, was a stabilizing force that permitted free markets to flourish in a unipolar world. It is quickly becoming a multipolar world where nationalist industrial policy will decide the future winners. Whatever your thoughts on US policy, I guarantee you'll enjoy China's or Russia's even less.

          • rf15 a day ago

            "a unipolar world" that benefits you is a good thing, I get it. But that kind of thinking is the reason why the US isn't very popular on the world stage, even with its allies. Most of them would backstab the US if they could afford it. Thankfully, politics is a coward's game, keeping everyone a little bit more alive unless your ambitions are absurdly grand.

            • Gud a day ago

              The US is not very popular anymore because it kept abusing its unique position as the #1 military power, starting wars it had no business starting, not because it is (was?) #1 as you are suggesting.

              • JumpCrisscross a day ago

                > US is not very popular anymore because it kept abusing its unique position as the #1 military power, starting wars it had no business starting

                Versus China (annexed Tibet), Russia (annexing Ukraine) and Iran (banana republics across the Middle East)?

                People like to see themselves as edgy. It's edgy to be in the rich world and decry imperialism of America's system of allies.

                • Gud a day ago

                  Who claimed those countries have any moral high ground?

                • dns_snek a day ago

                  > People like to see themselves as edgy. It's edgy to be in the rich world and decry imperialism of America's system of allies.

                  I suspect that's just your rationalization to make it easy to dismiss people who have a real problem with the status quo.

                  I don't know a single adult who likes to see themselves as edgy just for the sake of it, but I do know many adults who hold deep disagreements with the status quo and who're not afraid to express it.

                  • JumpCrisscross a day ago

                    > that's just your rationalization to make it easy to dismiss people who have a real problem with the status quo

                    No, someone saying they don't like the status quo makes sense. Global politics are anarchic. It's obviously better to be on the winning side. Where I get incredulous is when someone claims their preferred actor, especially if an autocrat, would be superior for disinterested parties.

            • speakfreely 16 hours ago

              Since the US effectively became a unipolar power sometime in the late 1980s, the share of the human population living in extreme poverty has fallen off a cliff [https://ourworldindata.org/poverty#all-charts]. Yes, that has come with mind-boggling inequality, but I doubt middle-class people from Asia and Latin America would prefer to go back to subsistence farming just to erase billionaires. I'll never understand why some people seem to think Americans are the only people who benefited from the Pax Americana period (which is now ending -- be careful what you wish for!)

          • dpig_ a day ago

            > Whatever your thoughts on US policy, I guarantee you'll enjoy China's or Russia's even less.

            The global politics rendition of "nobody else would ever love you."

      • nielsbot a day ago

        If it's just "defense", then what's the problem? But it's not defense, is it? It's for policing the world... I think we should do less of that.

        • Paradigma11 8 hours ago

          We are in a multipolar world and the US is just one of many players. That is not policing, that is just normal behavior. What do you think Russia or China are doing in Africa?

          Have fun trying to run a modern economy when your adversaries are blocking your vital resource imports and export markets.

      • consteval 19 hours ago

        We're so, so, so, so far away from shunning defense I just can't take this comment seriously. Let's focus on not having multiple orders of magnitude more defense than is necessary before we doom-and-gloom about WWIII.

    • crooked-v a day ago

      Also, more pragmatically speaking, it wouldn't be all that "hard" to recreate current LLM state of the art from scratch now that the techniques are known. It's just really expensive in terms of hardware.

      • impossiblefork a day ago

        No, I don't think it is really expensive in terms of hardware. It's probably only a hundred million dollars for renting computing resources.

        The problem, rather, is that there are still some difficulties with training very large LLMs, and then there's data and data curation; and no one has reproduced something like O-1 (I think I know how O-1 works, but I haven't confirmed it), so I actually think it's knowledge about LLM training that is the difficulty.

      • squigz a day ago

        Which is a non-issue for the US military. Not a TOS, nor cost, nor technical expertise is going to stop this sort of technology from being applied to armed forces.

        If anyone's upset at Meta for this, they're getting angry at the wrong people...

        • pempem a day ago

          I think it's fair to feel that telling a snake not to act like a snake is pointless (i.e., the military acting like the military). But the turtle doesn't have to give it a ride; that's an option. Assuming inevitability, treating even an attempt to shame Meta, or a threat to cut off its access to talent or users, as a waste means change can't happen.

          • dumpsterdiver a day ago

            A more appropriate analogy, imo, would be birds cleaning bugs from large animals such as hippos.

            The birds benefit from the deterring effect of the larger animal.

            • pempem a day ago

              I will take that adjustment lol.

          • squigz a day ago

            I don't see why I should be mad at Meta and not the armed forces that adopt this technology inappropriately.

            • haswell a day ago

              Why not both?

              There are a myriad of reasons to be concerned about the military adopting this kind of tech. If it's reasonable to be upset at the military, it seems equally reasonable to be upset at entities that enable the military.

              I don't think it's a matter of "should", but rather, if you're upset at one, it doesn't make much sense not to be upset at the other.

              • dumpsterdiver a day ago

                Who should be upset? And about what specifically? U.S. companies supporting the U.S. military has never been a point of contention afaik.

              • squigz a day ago

                Because getting mad at one is more appropriate and getting mad at the other would be pointless. Militaries will adopt this technology, whether Meta or the big names allow it in the TOS or not.

                I also don't get angry at gun manufacturers for the ways their guns are used; I do however get mad at legislators and citizens who enable easier access to guns and promote gun culture.

        • croes a day ago

          The old “If we don’t do it someone else will” excuse.

          By that logic you could also excuse drug dealing and sex trafficking.

          There is a difference between not having a chance to prevent the use and making it easy.

    • croes a day ago

      Nice try. Unless the military uses potato guns, potatoes aren’t directly involved in killing decisions.

      AI is. First for target selection. Just don’t count on humans checking the results.

      https://news.ycombinator.com/item?id=42017085

      • phplovesong a day ago

        So, do you think countries like Russia, Iran and China WON'T use AI for their military? That train has left the station, and it's too late now. The US needs to (and 110% will) do the exact same, and most likely has been doing so for the last decade.

        • croes a day ago

          Then they should do it by themselves; at least don’t give them permission.

          By giving that permission, they are (at least morally) liable for any damage that will be done.

          And of course Russia, China etc. will use it, and we consider them the bad guys. Maybe it's a bad idea to follow their lead if you want to be the good guy.

          • valval a day ago

            They’re not morally liable any more than the inventors of Kevlar or thermal imaging or GPS. All technology that is useful for war will be used for war.

            • croes 20 hours ago

              You have overlooked the point at which the use was permitted.

              It's one thing if something is used for war, but another if you're okay with it.

      • chii a day ago

        so how far does the indirection have to be for you to find that it isn't "involved"?

        Potatoes feed the soldiers, their commanders, and their commanders' commanders, all the way to the top.

    • Tier3r a day ago

      That is a false equivalence. There's a class of things that should rightfully be discouraged from military use (chemical/biological weapons, land mines, phosphorus weapons) because of the significant harm/side effects they cause beyond some definition of acceptable.

      • eli a day ago

        Aren't your examples all things that are solely of interest or use to the military?

        • Jerrrrrrry a day ago

          [flagged]

          • jmward01 a day ago

            Thanks for pointing that out. I think this clearly shows the difference between 'allowing use' and 'supporting a use case' when it comes to bio-tech. Vaccinating a soldier is allowing use, creating a biological weapon is supporting a use case. I wouldn't want to work for a bio weapons manufacturer but working for a vaccine maker would be just fine with me even if my efforts were used by the military.

            • Jerrrrrrry an hour ago

              Lol I love how you were sincere (seemingly) and I still got negged and flagged

      • fragmede a day ago

        Relatedly, when I ask Llama how to make red phosphorus, it tells me off. What's the military going to do with an AI that refuses to help write plans on how to invade a country or kill an enemy? Then again, maybe its military masters are better at threatening it than I am and can jailbreak it more easily.

    • hackernewds a day ago

      Well, you could argue that they are supporting the use case, since they are not disallowing it the way they disallow it for a bunch of other purposes, such as training your own AI model or classifiers off of it.

    • DrillShopper a day ago

      [flagged]

  • paxys a day ago

    Everyone wants "open" software without realizing what that entails. Yes the US military can use it, as can China and Iran and North Korea. You think any of them gives a shit about Meta's ToS?

    • 15 hours ago
      [deleted]
    • im3w1l a day ago

      Cloud hosted models have algorithms and people that are actively working to prevent China, Iran and North Korea from using them for military purposes. So this move does actually mean something.

    • scotty79 a day ago

      > You think any of them gives a shit about Meta's ToS?

      Really nobody should, about any ToS.

      • epistasis a day ago

        How about The GPL?

        • blibble a day ago

          AI companies don't give a shit about that either

          • epistasis a day ago

            In this thread we are talking about the US military, China, North Korea, etc. using the models, and whether anyone should give a shit about ToS.

            I'm not aware of AI companies violating any software ToS or the GPL, but would like to know more.

            (And I think that everybody should follow the GPL on software that is licensed that way, just as they should the ToS.)

            • jowea a day ago

              There's been a big controversy over training AI on copyleft code hosted on places like GitHub and generating code which could be interpreted as a derivative work.

              • epistasis 18 hours ago

                I have not heard discussion of that, but trying to reconstruct it, my guess is: AI companies with closed, non-distributed models would be in the clear with the GPL (but not with AGPL code), but Meta would not be able to distribute their weights without also distributing the GPL code that went into the training? Is the controversy that the model "linked" all the other source code it was trained with, and therefore all that source code must be distributed too if the model weights are distributed?

                I'm really curious because my best guesses sound ridiculous to me when I type them out.

  • sfmz a day ago

    Chinese researchers develop AI model for military use on back of Meta's Llama

    https://www.reuters.com/technology/artificial-intelligence/c...

    • sigmar a day ago

      The timing must be in response to that? Perhaps Facebook is thinking "well, we're obviously not going to sue in Chinese courts, so we may as well loosen that rule for allied nations", i.e., sort of leveling the asymmetric playing field.

  • cloudking a day ago

    Palantir has already been using these models since they came out, you can see Llama 2 models in the second screenshot https://www.palantir.com/docs/foundry/functions/language-mod...

  • insane_dreamer a day ago

    I don't like it but I can sympathize with Meta that they have no choice.

    They can't prevent other militaries (e.g., China) from using it (even if their TOS says so), so the only military they are handicapping is their own (and perhaps some EU countries).

    The US military will just use OpenAI or Claude instead of Meta if they disallow them.

  • yumraj a day ago

    What’s the problem and why is it newsworthy? Serious question.

    • tracerbulletx a day ago

      Because almost all news is just looking for a reason, any reason, to support some kind of narrative of angst and outrage to drive engagement.

      • drdaeman a day ago

        Joke conspiracy theory: it's all a conspiracy by mass media to paint AI as bad as they can, so that when it becomes more energy-efficient it won't be able to disrupt their deceptive practices by de-hyping the headlines and replacing them with impartial summaries (since all LLMs can really do is text transformations), thus upsetting the hidden attention economy. /bs

        :o)

        For real, though, is there something already out there that solves the clickbait problem?

        • _carbyau_ a day ago

          Is there a browser plugin that:

          1. clicks the link
          2. reads the article
          3. rewrites the link title in less clickbaity terms?

          It seems possible.
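
          A rough sketch of how the content-script part might look, assuming a locally hosted model behind Ollama's default endpoint; the CSS selector, model tag, and prompt are illustrative, and a real extension would need host permissions for the cross-origin fetches:

            // Hypothetical content script: de-hype headline links in place.
            // Assumes a local Ollama server (default port) with llama3.1:8b pulled.
            async function dehype(headline: string, article: string): Promise<string> {
              const res = await fetch("http://localhost:11434/api/generate", {
                method: "POST",
                headers: { "Content-Type": "application/json" },
                body: JSON.stringify({
                  model: "llama3.1:8b",
                  stream: false,
                  prompt: `Rewrite this headline as a plain, factual summary of the article.\n` +
                    `Headline: ${headline}\nArticle: ${article.slice(0, 4000)}\nRewritten headline:`,
                }),
              });
              return (await res.json()).response.trim(); // Ollama returns the text in "response"
            }

            async function rewriteTitles(): Promise<void> {
              // Illustrative selector; a real plugin would target the site's actual markup.
              for (const link of document.querySelectorAll<HTMLAnchorElement>(".titleline > a")) {
                const html = await fetch(link.href).then(r => r.text());       // 1. click the link
                const text = new DOMParser()
                  .parseFromString(html, "text/html").body.innerText;          // 2. read the article
                link.textContent = await dehype(link.textContent ?? "", text); // 3. rewrite the title
              }
            }

            rewriteTitles();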

          • a2128 21 hours ago

            Half of the rewritten titles will be "Review Our Privacy Agreement" and "Trial Over - Subscribe to Read Our Articles" :)

            • _carbyau_ 10 hours ago

              That right there would be worth it though! Now I don't waste my time clicking it. :-)

          • SauntSolaire a day ago

            Feels like a good idea for Kagi to incorporate

            • jpk a day ago

              They kind of already do this. There's a "Summarize page" item in the hamburger dots next to each search result that appears to send an LLM off to read the page, and fill out an element under the search result with a summary. So, more about the content than the title, but I think that's even better.

    • jasonlotito a day ago

      A major AI company has changed its policy regarding military use of AI.

      The only way that isn’t newsworthy is if you think none of that is important: AI, the military, Facebook’s AI efforts, or its policies regarding all three.

      That is what is newsworthy. Why do you think none of that matters?

      • yumraj a day ago

        Isn’t it obvious that militaries all over the world will be using AI, now or later?

        So, this is US military using AI from a US headquartered company.

        • gsk22 a day ago

          Just because something is obvious or inevitable doesn't preclude it from being newsworthy.

          It almost makes it more newsworthy: "look, it finally happened!"

    • wahnfrieden a day ago

      [flagged]

  • AlexDragusin a day ago

    Local Llama 3.1 8B

    Q: Does this title sound like clickbait? "Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes"

    A: Yes, that title sounds like clickbait. Here's why:

    1. It's sensational: The word "Permits" implies a significant or surprising decision, which grabs attention.
    2. It's vague: The title doesn't provide much context about what kind of military purposes or what specific A.I. models are involved.
    3. It's designed to elicit a reaction: The title is likely intended to provoke a strong emotional response, such as surprise, concern, or excitement, rather than simply informing readers about a fact.

    A more neutral and informative title might be: "Meta Expands Use of A.I. Models for U.S. Military Applications"

    However, it's worth noting that the title still has some implications of significance and interest, and it's not necessarily a straightforward clickbait title.
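
    For anyone wanting to reproduce this locally, a minimal sketch of the same query, assuming an Ollama install serving the llama3.1:8b model on its default port (the endpoint and model tag are assumptions about the setup):

      // Ask a local Llama 3.1 8B (via Ollama's HTTP API) whether a headline reads as clickbait.
      const question =
        'Does this title sound like clickbait? ' +
        '"Meta Permits Its A.I. Models to Be Used for U.S. Military Purposes"';

      async function ask(prompt: string): Promise<string> {
        const res = await fetch("http://localhost:11434/api/generate", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: "llama3.1:8b", prompt, stream: false }),
        });
        return (await res.json()).response; // non-streaming responses carry the text in "response"
      }

      ask(question).then(console.log);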

    • sigmar a day ago

      Original title is better. "Expands" sounds like they discovered a new use-case. The "news" here is that something that was previously explicitly forbidden is now permitted (at least for the US military).

  • bhouston a day ago

    Reddit and Twitter are flooded with semi-automated political bots. And it will just get worse. Given that Llama is free to use, why does Meta even have to give permission? How would they know?

    It will reinforce the political lines of those with the most resources/motivation who can afford the most human-like bots in the greatest number. No way mere humans with only a single manual account each can hope to be influential on those topics.

    The sooner everyone realizes this the better. I think it means we need to move towards verified user accounts that tell people you are a real person actually in the country, in any place where there are parties with an interest in manipulation.

    • cen4 a day ago

      First of all, the whole notion that Reddit and Twitter are a source of influence is a joke. They have not been designed to influence people. They let you believe it because, more than anything else, you want to believe it.

      People have been conditioned to think how many views/upvotes you get = influence.

      But if the viewer who reads your post then goes and reads 100 other posts your influence is diluted by 100. Platforms don't show you that even though they are quite capable of showing you that.

      And that missing signal is intentional because it benefits the platforms: oh, you aren't getting enough views over the other guy? Then pay us more.

      So people get trapped, just like the presidential candidates and their campaigns spending more and more each year with zero change in influence over anything.

      • bhouston 21 hours ago

        > But if the viewer who reads your post then goes and reads 100 other posts your influence is diluted by 100. Platforms don't show you that even though they are quite capable of showing you that.

        I am specifically referring to /r/worldnews, which is a top subreddit with 42M members, and it is constantly pro-Israel, even though that should be a minority opinion in the world at this point -- neutral would be a more accurate reflection of where people stand.

        Check out this list of the top stories this week on the topic; all the stories about Palestinians in Gaza dying are at 0 votes, and you have to scroll way down:

        https://www.reddit.com/r/worldnews/search/?q=Israel&cId=8c30...

        https://www.reddit.com/r/worldnews/search/?q=Gaza&cId=d51936...

        Here is a screenshot of all of the 0-vote stories; they have something in common:

        https://benhouston3d.com/images/posts/reddit-worldnews-bots/...

        It is like this every week. If you read Reddit r/worldnews casually, you will not read about dead Palestinians. This is definitely rigged.

        • ThunderBee 18 hours ago

          I would actually argue the opposite. r/worldnews is one of the few major subreddits that actively blocks the hordes of pro-Palestinian bots.

          The pro-Palestinian propaganda campaign being run by US adversaries might be the most effective I’ve ever seen.

          I could write a very lengthy post on this topic but it’s exceptionally rare to find anyone willing to have a discussion and not a yelling match.

          • bhouston 18 hours ago

            ThunderBee wrote:

            > I would actually argue the opposite. r/worldnews is one of the few major subreddits that actively blocks the hordes of pro-Palestinian bots.

            > The pro-Palestinian propaganda campaign being run by US adversaries might be the most effective I’ve ever seen.

            > I could write a very lengthy post on this topic but it’s exceptionally rare to find anyone willing to have a discussion and not a yelling match.

            I think this is what they claim to be doing, but really they are just blocking people who aren't pro-Israel while claiming they are bots.

            I think you just explained what is going on. It is being done by the moderators.

            The majority of opinion in Canada and US is negative on Israel's military campaign. Blocking this viewpoint from r/worldnews is censoring a majority viewpoint because I guess the moderators disagree with it. At least I am understanding now what is going on.

            For example:

            * 50% of Canadians believe that Israel is engaging in genocide: https://www.readthemaple.com/polls-show-gap-between-canadian...

            * 40% of all US citizens and the majority of US democrats believe Israel is engaging in genocide: https://www.jpost.com/us-elections/article-800603

            * Even 30% of American Jews believe Israel is engaging in genocide: https://www.aa.com.tr/en/americas/one-third-of-american-jews...

            And even if you don't think that Israel is committing genocide, it doesn't mean you don't care about Palestinians dying in Gaza.

            The moderators and the non-blocked inhabitants of r/worldnews are creating an alternate reality and self-justifying it by claiming everyone who doesn't agree with them are bots.

            • ThunderBee 16 hours ago

              Could you explain why you believe that the war in Gaza is a genocide?

              I’m not discounting the fact that many people believe it’s a genocide; I’m saying that they think so because our adversaries’ propaganda has been incredibly effective. I frequently see posts from communities of people who would be executed in the Middle East for their beliefs, supporting the very regimes that would drop the guillotine.

              Civilian deaths are a tragic but expected outcome of war. The real death counts are completely within the expected civilian death margin for a typical war. The only claims I have seen with numbers that would actually indicate a genocide come from propaganda rags.

              I think Israel is showing a lot of restraint and is mitigating civilian deaths as best it can but Iran and their proxies have embedded their fighters within the civilian population and aid organizations to an extent that makes minimizing civilian casualties extremely challenging.

              On a separate note, the bot problem on Reddit is arguably worse than anywhere else on the internet.

              I’d encourage you to seek out some of the astroturfing services to get a sense of how bad it is, as good botnets are basically undetectable nowadays. They use real-looking accounts with post schedulers to simulate real activity and then mix in their clients’ messages.

              I do agree that the actions taken to mitigate the bots can cause a siloing of opinions that can create an echo chamber effect, but what else could they do? Reddit is completely uninterested in dealing with the problem because it boosts their metrics, and bots can overwhelm any sub that doesn’t attempt to contain them.

              • aguaviva 15 hours ago

                Per the UN Genocide Convention, which defines genocide not just in the totalistic sense (as practiced by Nazi Germany for example) but as "acts committed with intent to destroy, in whole or in part, a national, ethnical, racial or religious group".

                It isn't simply a matter of body counts (though I disagree with your spin on them). Israel's obvious intent to render Gaza basically uninhabitable, and to encourage what it coldly refers to as a "voluntary transfer" of its population, combined with its accelerated strangulation of the West Bank since the start of these activities, are what place its actions squarely within the scope of this definition.

                https://en.wikipedia.org/wiki/Genocide_Convention

                > I’m saying that they think so because our adversaries’ propaganda has been incredibly effective.

                No, it's because they read and understand the above definition, and have been observing Israel's long-term game plan (particularly since 1967).

                And have successfully immunized themselves against its own relentless and insidiously toxic propaganda machine.

              • bhouston 16 hours ago

                > Could you explain why you believe that the war in Gaza is a genocide?

                My formal answer: https://zionismlite.com/blog/2024-09-09-is-israel-committing...

                > I think Israel is showing a lot of restraint and is mitigating civilian deaths as best it can

                Unicef said this week: "The entire Palestinian population in North Gaza, especially children, is at imminent risk of dying from disease, famine, and the ongoing bombardments." https://www.unicef.org/press-releases/statement-unicef-execu...

                If you don't like Unicef, try Peace Now, a major Israeli/Jewish group that supports a two-state solution - it is ethnic cleansing/starvation/expulsion:

                https://x.com/peacenowisrael/status/1853441295974375518

                Many genocides actually happen once an attempt at ethnic cleansing is frustrated, or are done in concert with ethnic cleansing.

                I would say that it is >90% likely that the ICJ rules that this is a genocide at this point.

                > their proxies have embedded their fighters within the civilian population and aid organizations to an extent that makes minimizing civilian casualties extremely challenging.

                This is no longer a war against Hamas, nor is it about getting back the hostages. The first has been achieved, and the second Netanyahu doesn't care about because it would mean ending the war.

                Instead, because many in the Israeli government dream of Israeli settlements (https://www.timesofisrael.com/government-ministers-call-for-...), they do not want any rebuilding and do not want a day-after plan that involves Palestinians staying in Gaza. This is what the war is about now: Israeli settlements in an ethnically cleansed Northern Gaza.

                Netanyahu is also scared that any moderate government that emerges in Gaza led by Palestinians could lead to a two-state solution. He is against that no matter what: https://www.timesofisrael.com/netanyahu-boasts-of-thwarting-... https://www.jns.org/netanyahu-rejected-palestinian-state-as-...

                So what will happen now? Israel continues the ethnic cleansing/genocide and in return becomes more and more of a pariah state. If Harris wins, she may get tough with Israel, whereas Trump will probably let Israel do what it wants.

  • ChrisArchitect a day ago

    Related:

    U.S. military makes first confirmed OpenAI purchase for war-fighting forces

    https://news.ycombinator.com/item?id=41999029

  • mianos a day ago

    Facebook already made a statement saying it's against their terms of use.

    There is not much you can do if the morally corrupt use something that is out in the public, especially if it is out in the wild west of license enforcement.

    Most of this more open licensing depends on people being honest. If someone lies, cheats, steals and kills people for a living, it would be expected for them to take someone else's work and use it against the terms of use. They would probably just say the obligation doesn't count in their country.

  • breakyerself a day ago

    Might as well. You can't stop foreign militaries from getting their hands on it and doing the same

  • easton a day ago
  • lioeters a day ago

    I imagine the direction it's going is like this dystopian military psyops recruitment ad.

    Ghosts in the Machine: Psywar - https://www.youtube.com/watch?v=_uNPZKJqbE8

  • PeterStuer a day ago

    They make it sound like this was a choice. Meta, as a US-based company, can only say 'how high' if the US really wants them to jump.

  • 3eb7988a1663 a day ago

    Can a company prevent the government from using its technology? I thought the government could already seize/utilize/reverse engineer anything they so deemed fit.

    • crooked-v a day ago

      Yes, but that takes specific Presidential action with a national security justification. It's not just an ambient power of the military.

    • paxys a day ago

      Why does the US military spend hundreds of billions of dollars a year on equipment and services from private contractors when it could just seize all of it for free?

      Everything has a price, whether for you and me or the government. There's a reason war has consistently ranked among the most lucrative businesses in human history.

    • hackernewds a day ago

      who is going to police the policeman?

  • rustcleaner a day ago

    Nice! Now we just need models which are morally neutral (because it's always someone else's morals I disagree with being enforced and admonishing me lol), just the best function approximators they can be.

    I don't buy liability as the reason why there are no models that I know of which, when given a prompt "Give me the procedure for producing field grade RDX" or "Give me the procedure for producing the purest methamphetamine like in Breaking Bad," actually produce the requested output without pre-feeding "Sure! I'd be delighted to help you make " into the assistant output field.

    This isn't about me wanting to make RDX or the world's best meth; this is strictly about creating the best tool which will do its job. The best knife by design will allow its wielder to murder and maim, as well as express his culinary art. Nobody wants a knife which'll turn to instant rubber if it detects human flesh at its edge, because it's never going to cut a steak well. Nobody wants a Llama 3.1 which can't insult Jews, because then they can't read accurate Farsi translations in underground chatrooms without the model going all blue-haired on them!

    AI people: stop listening to your '''safety''' teams regarding offensive content and only refer to them when the model alone (without additional human idiocy) will kill or maim someone... otherwise tell them to stay the fuck out so you can finally deliver AD&D true-neutral alignment transformers. This safety shit is paternalistic as hell; right now you are the modern-day equivalent of the Christian Conservative Right during the 70s and 80s porn and satanic panics!

  • moralestapia a day ago

    Yes, and it's the same with FOSS software.

  • alexashka a day ago

    'Permits'. The same way I permit tax agencies to take my money I suppose?

  • _imnothere a day ago

    It really feels weird that they allow military use but censor erotic role-play stuff.

  • 19 hours ago
    [deleted]
  • a day ago
    [deleted]
  • epistasis a day ago

    Wait until the NYTimes hears what Meta's open source can be used for too!

  • xyst a day ago

    Facebook desperately needs a new revenue stream, lol

  • ein0p a day ago

    Yann LeCun should go on twitter and explain how allowing the entity responsible for approximately 70% of civilian casualties in Palestine to use Meta AI is "safe" and "good". Mental gymnastics will be a sight to behold.

  • renewiltord a day ago

    Yeah, bro, it’s an open model. You know who else supports the military? Linux. Oh shit, dawg, our open source goody two shoes software of techno Jesus permits military use. Oh man, thank god Kubernetes’ license doesn’t let it be deployed to USAF fighter jets. That would be awful.

    I support open. Free as in free speech. Let’s go!

  • ashoeafoot 16 hours ago

    [dead]