128 comments

  • tananaev 3 hours ago

    I have an open source project and started receiving a lot of security vulnerability reports in the last few months. A lot of them are extremely corner cases, but there were some legit ones. They're all fixed now. Closed source software won't receive any reports, but it will be exploited with AI. So I definitely agree with the message of this article.

    • lelanthran an hour ago

      > Closed source software won't receive any reports, but it will be exploited with AI.

      What makes you so sure that closed-source companies won't run those same AI scanners on their own code?

      It's closed to the public, it's not closed to them!

      • 440bx 44 minutes ago

        As someone who has worked on closed source software for a couple of decades: most companies won't even know about that, and of those that do, only a fraction give enough of a shit to do anything until they're caught with their pants down.

      • baileypumfleet 33 minutes ago

        As I mentioned above, we actually do run these AI scanners on our code, but the problem is it's simply not enough. These AI scanners, including STRIX, don't find everything. Each scanning tool actually finds different results from the other, and so it's impossible to determine a benchmark of what's secure and what's not.
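
        The "each tool finds different results" point can be sketched concretely: merge per-tool findings by location and the non-overlap becomes visible. This is purely illustrative (the Finding shape and tool names are hypothetical, not any real scanner's API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str
    location: str  # e.g. "src/auth.py:42"
    severity: str  # "low" | "medium" | "high"

def merge_findings(per_tool: dict[str, list[Finding]]) -> dict[str, set[str]]:
    """Group findings by location, recording which tools flagged each one.

    Locations flagged by only a single tool are exactly the gaps described
    above: each scanner reports a different subset."""
    by_location: dict[str, set[str]] = {}
    for tool, findings in per_tool.items():
        for f in findings:
            by_location.setdefault(f.location, set()).add(tool)
    return by_location
```

        A location that appears under only one tool is a finding every other scanner missed, which is why no single tool yields a benchmark of "secure".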

      • ihaveajob an hour ago

        More eyes, more chances that someone will actually use the tools. Also, the tools and how you use them are not all the same.

        • phendrenad2 43 minutes ago

          With enough copies of GPT printing out the same bulleted list, all bugs are

          1. shallow

          2. hollow

          3. flat

          ...

      • LunicLynx an hour ago

        Came here to say the same. Same tools + private. In security, two different defense mechanisms are always better than one.

        • bluebarbet 19 minutes ago

          Same tools A, B and C, but minus tools D, E and F, and with a smaller chance that any tools at all will even be used.

          Not claiming that it's a slam dunk for open source, but the inverse does not seem correct either.

    • giancarlostoro 7 minutes ago

      > Closed source software won't receive any reports, but it will be exploited with AI.

      This is what worries me about companies sleeping on using AI to, at a bare minimum, run code audits and evaluate their security routinely. I suspect that as models get better, we're going to see companies being hacked at a level never seen before.

      Right now we've seen a few different maintainers of open source packages get hacked; who knows how many companies have someone infiltrating their internal systems with the help of AI, because nobody wants to do the due diligence of having a company run security audits on their systems.

    • Aurornis 3 hours ago

      > Closed source software won't receive any reports

      Not from the automated repo scanners, but bug bounty programs can generate a lot of reports in my experience. AI tools are becoming a problem there, too, because amateurs are drawn to the bounties and will submit anything the AI hallucinates.

      Closed source companies can (and should!) also run their own security audits rather than passively waiting for volunteers to spend their tokens on it.

      • switchbak 2 hours ago

        Those bug bounty programs now have to compete against the market for 0-days. I suppose they always did, but it seems the economics have changed in favour of the bad actors - at least from my uninformed standpoint.

        That still exists in the OSS world too, having your code out there is no panacea. I think we'll see a real swarm of security issues across the board, but I would expect the OSS world to fare better (perhaps after a painful period).

      • baileypumfleet 32 minutes ago

        That's absolutely our plan. We have bug bounty programs, we have internal AI scanners, we have manual penetration testing, and a number of other things that enable us to push really hard to find this stuff internally rather than relying on either the good people in the open source community or hackers to find our vulnerabilities.

      • bmurphy1976 9 minutes ago

        You don't even need a bug bounty program. In my experience there's an army of individuals running low-quality security tools, spamming every endpoint they can think of (webmaster@ support@ contact@ gdpr@ etc.) with silly non-vulnerabilities and asking for $100. They suck now, but they will get more sophisticated over time.

      • tananaev 3 hours ago

        Of course everyone should do their own due diligence, but my point is mostly that open source will have many more eyes and more effort put into it, both by owners and by the community.

        • LunicLynx an hour ago

          But also tools that might not be nice and report security vulnerabilities, but exploit them.

          There is no guarantee that open means that they will be discovered.

      • bearsyankees 3 hours ago

        +1, at this point all companies need to be continuously testing their whole stack. The dumb scanners are now a thing of the past; the second your site goes live, it will get slammed by the latest AI hackers.

    • hardsnow 2 hours ago

      I’ve recently set up nightly automated pentest for my open-source project. I’m considering starting to publish these reports as proof of security posture.

      If the cost of security audit becomes marginal, it would seem reasonable to expect projects to publish results of such audits frequently.

      There’s probably a quite hefty backlog of medium- and low-severity issues in existing projects for maintainers to suffer through first though.
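
      A minimal sketch of what publishing such a posture report could look like, assuming nightly findings arrive as simple severity-tagged records (the record format here is hypothetical):

```python
from collections import Counter

def posture_summary(findings: list[dict]) -> dict[str, int]:
    """Reduce raw nightly scan findings to publishable severity counts."""
    counts = Counter(f["severity"] for f in findings)
    # Always report all three buckets, even when empty.
    return {sev: counts.get(sev, 0) for sev in ("low", "medium", "high")}
```

      Publishing a summary rather than the raw findings also avoids handing attackers a map of the unfixed backlog.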

    • rd 2 hours ago

      I don't follow. It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders. In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits, all whilst at least blocking the easiest method of finding zero-days - that is, being open source.

      This really just seems like Strix marketing. Which is totally fair, but let's be reasonable here, any open-source business stands to lose way more by continuing to be open-source vs. relying on the benevolence of people scanning their code for them.

      • bigbadfeline an hour ago

        > It seems obvious that there's more to gain for attackers using AI agents to exploit open source repositories, than there is for good samaritan defenders.

        Actually the opposite is obvious - the comment you replied to talked about an abundance of good Samaritan reports. It's strange to speculate on some nebulous "gain" when responding to facts about more than enough reports concerning open source code.

        > In this new closed-source world (for Cal.com), there's nothing stopping them from running their own internal security agent audits

        That's one good Samaritan for a closed source app vs many for an open source one. Open source wins again.

        > any open-source business stands to lose way more

        That doesn't make any sense - why would it lose more when it has many more good Samaritans working for it for free?

        You seem to forget that the number of vulnerabilities in a given app is finite; an open source app will reach a secure status much faster than a closed source one, in addition to gaining a shorter time to market.

        In fact, open source will soon be much better and more capable due to new and developing technological and organizational advancements which are next to impossible to happen under a closed source regime.

        • tetha 44 minutes ago

          The main drawback is that you will need to be able to patch quickly in the next 3-5 years. We are already seeing this in a few solutions getting attention from various AI-driven security efforts, and our previous stance of letting fixes "ripen" on the shelf for a while - a minor version or two - is most likely turning problematic. Especially if attackers start exploiting faster and botnets start picking up vulnerabilities faster.

          But at that point, "fighting fire with fire" is still a good option. Assuming tokens are available, we could just dump the entire code base, changesets and all, the configuration that depends on it, company-internal domain knowledge, and previous upgrade failures into a folder and tell the AI to figure out upgrade risks. Bonus points if you have decent integration tests or test setups to run all of that through.

          It won't be perfect, but combine that with a good tiered rollout and increasing rollout velocity is entirely possible.

          It's kinda funny to me -- a lot of the agentic hype seems to hugely reward good practices: cooperation, documentation, unit testing, integration testing, local test setups.
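
          The "dump everything into a folder" step could be sketched roughly like this (the helper name and paths are made up for illustration, not a real workflow):

```python
import shutil
from pathlib import Path

def build_context(dest: Path, sources: list[Path]) -> list[Path]:
    """Gather code, configs, and past upgrade notes into one directory
    that can then be handed to an AI agent for upgrade-risk analysis."""
    dest.mkdir(parents=True, exist_ok=True)
    gathered = []
    for src in sources:
        target = dest / src.name
        if src.is_dir():
            # Copy whole trees (code bases, config directories).
            shutil.copytree(src, target, dirs_exist_ok=True)
        else:
            # Copy single files (upgrade notes, domain docs).
            shutil.copy2(src, target)
        gathered.append(target)
    return gathered
```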

      • NaritaAtrox an hour ago

        Some users might be tech-savvy and have the capacity to check the codebase. If a company wants to use your platform, it can run an audit with its own staff. These are people genuinely concerned about the code, not "good samaritans".

      • sureMan6 an hour ago

        A new user is much more likely to scan the codebase and report vulnerabilities so they can be fixed than to illegally exploit them, since most people aren't criminals.

        • eddythompson80 an hour ago

          Exactly. Who even hacks stuff? Most people will report the issue to earn xp and level up than actually exploit it.

      • dgb23 2 hours ago

        Isn’t that security by obscurity?

    • baileypumfleet 35 minutes ago

      We actually run AI scanners on our code internally, so we get the benefit of security through obscurity while also layering on AI vulnerability scanning, manual human penetration testing, and a huge array of other defence mechanisms.

    • cm2187 2 hours ago

      > Closed source software won't receive any reports, but it will be exploited with AI

      How so? AI won't have access to the source code. In some cases AI may have access to deployed binaries (if your business deploys binaries), but I am not aware that it has the same capabilities against compiled code as against source code.

      But in a SaaS world, all AI has access to is your API. It might still be up to no good, but surely you will be several orders of magnitude less exposed than with access to source code.

      • geoffschmidt 2 hours ago

        Claude is already shockingly good at reverse engineering. Try it – it's really a step change. It has infinite patience which was always the limited resource in decompiling/deobfuscating most software.

    • devstatic 2 hours ago

      I agree with this too,

      but with cal.com I don't think this is about security lol

      Open source will always be an advantage; you just need to decide whether it aligns with your business needs.

    • baq 3 hours ago

      given what the clankers can do unassisted and what more they can do when you give them ghidra, no software is 'closed source' anymore

      • criddell 2 hours ago

        Which models have you had good luck with when working with ghidra?

        I analyze crash dumps for a Windows application. I haven't had much luck using Claude, OpenAI, or Google models when working with WinDbg. None of the models are very good at assembly, and they don't seem to be able to remember the details of different calling conventions or even how some of the registers are typically used. They are all pretty good at helping me navigate WinDbg though.

      • embedding-shape 3 hours ago

        Guess that kind of depends on your definition of "source", I personally wouldn't really agree with you here.

        • baq 2 hours ago

          absolutely agree with you if we're talking about clean room reverse engineering; but in the context of finding vulnerabilities it's a completely different story

    • charcircuit 2 hours ago

      Assembly is still source code, so really it comes down to whether the copy protection obscures the executable code to the point where the LLM is not able to retrieve it on its own. And if it can't, someone motivated could give it the extra help it needs to start tracing how outside inputs get handled by the application.

    • kirubakaran 2 hours ago

      Yes exactly! I'm so glad I took this route with my startup. We can't bury our heads in the sand and think the vulnerabilities don't exist just because we don't know about them.

      • ofjcihen 2 hours ago

        This might be the most painfully obvious advertisement I’ve ever seen on a forum.

        • kirubakaran 2 hours ago

          I didn't mean it as such, but I can see why it would seem so. I've edited the link out now. Thanks for the feedback.

  • CodesInChaos 3 hours ago

    > The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale,

    That sounds like an excuse. The real reason is probably that it's hard to make a viable business out of developing open source.

    • kelnos 3 minutes ago

      Yes, it feels like they've been looking for an excuse to go closed-source, and this one is plausible enough to make it sound like they're only doing it because they "have to".

    • baileypumfleet 31 minutes ago

      We've run an extremely profitable business for five years, raised a seed and a Series A, and grown at 300% a year sustainably while being open source.

      Going closed source actually hurts our business more than it benefits it. But it ultimately protects customer data, and that's what we care about the most.

    • bruckie an hour ago

      AI makes a great scapegoat. Need to lay off people? "AI." Need to switch to closed source? "AI."

    • mdp 3 hours ago

      Exactly. I respect their decision to go closed source if that's what they need to do to make it a viable business, but just be honest about it. Don't make up some excuse around security and open source.

      • bearsyankees 3 hours ago

        I don't know if I fully agree with this -- how many people were actually self-hosting cal infra? I def could be wrong though

      • renewiltord an hour ago

        You should be honest about your own personal financial incentive in making these posts.

    • p_stuart82 2 hours ago

      Separating the codebase and leaving 'cal.diy' for hobbyists is pretty much the classic open-core path. The community phase is over and they need to protect their enterprise revenue.

      Blaming AI scanners is just really convenient PR cover for a normal license change.

    • mikeryan 2 hours ago

      It’s also now ridiculously easy to simply cherry pick from open source without actually “using” it.

      “I need to do foo in my app. Libraries bar and baz do these bits well. Pick the best from each and let’s implement them here”

      I’d not be surprised if npmjs.com and its ilk turn into more a reference site than a package manager backend soon.

      • wilj 2 hours ago

        I literally have a Claude Code skill called "/delib" that takes in any nodejs project/library and converts it to a dependency-less project only using the standard library.

        It started as a what-if joke, but it's turned out to be amazing. So yeah, npmjs.com is just reference site for me now, and node_modules stays tiny.

        And the output is honestly superior. I end up with smaller projects, clean code, and a huge suite of property-based tests from the refactor process. And it's fully automatic.

        • pixel_popping 2 hours ago

          It's that easy, yes, and someday we will literally be able to prompt "Redo the Linux kernel entirely in Zig" and it will practically make a 1:1 copy.

      • yibers 2 hours ago

        Ironically, given the recent supply chain attacks, that may be also more secure.

    • serial_dev 3 hours ago

      I'd think it's also much easier to spin up a (in some area) slightly better clone and eat into their revenue.

      • svnt 2 hours ago

        This is part of it for sure. It is also true that many open source businesses depended on it not being worth the trouble to figure out the hosting setup, the ops, and the code. Typical open source businesses also make a practice of holding a few features back from the public repo.

        Now I can take an open source repo and just add the missing features, fix the bugs, deploy in a few hours. The value of integration and bug-fixing when the code is available is now a single capable dev for a few hours, instead of an internal team. The calculus is completely different.

    • phillipcarter 3 hours ago

      I mean, it's hard to make a viable business regardless of whether the tech is OSS or not, but it's often seen as more challenging this way.

  • pradn 2 hours ago

    Brilliant piece of content marketing:

    1) Pulls you in with a catchy title, that at first glance seems like a dunk on Cal.com (whatever that is).

    2) Takes the "we understand your pain" approach to empathize w/ Cal.com, so you feel like you're on the good vibes side.

    3) Provides a genuine response to the actual problem Cal.com is dealing with. Something you can't dismiss out of hand.

    4) But at the end of the day, the response aligns perfectly with the product they're promoting (one click away on their homepage!)

    This mix of genuine ideas and marketing is quite potent. Not saying this is all bad or anything, just found it a bit funny. The mixed-up-ness is the point!

    • baileypumfleet 30 minutes ago

      That's exactly how this read to me too. Ultimately, the whole article is written by a company that does AI vulnerability scanning, and it's to try and get you to sign up for their service.

      As it mentions in their article, Strix actually scans the Cal.com codebase and reports vulnerabilities to us. But the reality is, they actually miss so many vulnerabilities that other platforms do find. There's no one platform that seems to be able to reliably find all vulnerabilities, and so simply adopting AI scanners just isn't enough.

    • kreco an hour ago

      I'm sad to see this article being so upvoted while being kind of empty.

      The real content could fit in a comment.

    • shevy-java 2 hours ago

      Is it good marketing though? I mean personally I do not use AI, and I don't think this opinion of mine will change. I can't look into the future, but right now I don't use nor do I depend on AI. I guess it may work for some people, but even then I am unsure whether that is really good marketing. Riding on a hype train (which AI right now still is) is indeed easier, so that has to be considered.

      • BloondAndDoom an hour ago

        They are in HN front page, therefore it’s good marketing.

        • serial_dev 5 minutes ago

          I didn’t even open the article, I’m here for the comments.

  • keeda 19 minutes ago

    >Security through obscurity is a losing bet against automation

    Security through obscurity is only problematic if that is the only, or a primary, layer of defense. As an incremental layer of deterrence or delay, it is an absolutely valid tactic, with its primary function being imposing higher costs on the attacker.

    As such if, as people are postulating post-Mythos, security comes down to which side spends more tokens, it is an even more valid strategy to impose asymmetric costs on the attacker.

    "With enough AI-balls (heheh) all bugs are shallow."

    From a security perspective, the basic calculus of open versus closed comes down to which you expect to be the case for your project: either the attention donated by the community outweighs the attackers' attention costs (lowered by openness), or the attention from your internal processes outweighs the attackers' attention costs (increased by obscurity). The only change is that attention from AI is many times more effective than attention from humans; otherwise the calculus is the same.
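
    That calculus can be written down as a toy model; all quantities are abstract "attention units" and the function is purely illustrative. Because the AI multiplier scales every term on both sides, it cancels out of the comparison, which is the point:

```python
def favors_open(community: float, internal: float, attacker: float,
                openness_bonus: float, obscurity_penalty: float,
                ai_multiplier: float = 1.0) -> bool:
    """Compare the defender-minus-attacker attention margin in each scenario.

    Open: community attention vs. attacker attention boosted by openness.
    Closed: internal attention vs. attacker attention taxed by obscurity.
    """
    m = ai_multiplier
    open_margin = m * community - m * (attacker + openness_bonus)
    closed_margin = m * internal - m * (attacker - obscurity_penalty)
    return open_margin > closed_margin
```

    Whatever values you plug in, scaling the multiplier never flips the answer; only the relative attention terms do.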

  • janalsncm 42 minutes ago

    Reading between the lines, it seems like they were working with cal.com and used red team bots to find vulnerabilities in cal.com’s code. And they probably found bugs a lot faster than cal.com could fix them. So the CEO balked at the estimated cost of fixing and took his ball home.

    This article is effectively an announcement that cal.com is riddled with vulnerabilities, which should be easy to find in an archive of their code.

  • erelong an hour ago

    I'll admit that I agree with a lot of the post, but I can't fully wrap my head around the cybersecurity situation today. Is it basically:

    -if code is open source or closed source, AI bots can still look for exploits

    -so regardless, we need to use AI to develop a checklist program to check for currently known and unknown exploits, given the current state of AI tools

    -we have to just keep running AI tools looking for more security issues as AI models become more powerful, which empowers AI bots attacking but also then AI bots to defensively find exploits and mitigate them

    -so it's an ongoing effort to work on

    I understand the logic of closing the source to prevent AI bot scans of the code but also fundamentally people won't trust your closed source code because it could contain harmful code, thus forcing it to be open source

    Edit: Another thing that comes to mind is people are often dunking here on "vibe coding" however can't we just develop "standards / tools" to "harden" vibe coded software and also help guide well for decisions related to architecture of the program, and so on?

  • dom96 2 hours ago

    Isn’t the real danger now not the ability to find security vulnerabilities, but rather, the ability of anyone to ask an LLM agent to rewrite your open source project in another language and thus work around whatever license your project has?

    • bluGill an hour ago

      You can do the same for closed source projects.

      There are real limitations of course.

    • short_sells_poo 2 hours ago

      This is happening quite a lot actually. People just feed an existing project into their agent harness and have it regenerate more or less the same with a few tweaks and then they publish it.

      I'm not sure how this works in the legal sense. A human could ostensibly study an existing project and then rewrite it from scratch. The original work's license shouldn't apply as long as code wasn't copy & pasted, right?

      What happens when an automated tool does the same? It's basically just a complicated copy & paste job.

    • micromacrofoot an hour ago

      A lot of open source projects already have licenses that allow forking and selling the fork, it hasn't been a problem most of the time... there's a lot more to operating open source as a business beyond just shipping the code

  • linuxhansl 2 hours ago

    So Cal.com favors security through obscurity.

    Open source was always open to "many eyes", in theory exposing itself to zero-day vulnerabilities. But the "many eyes" include both the good and the bad actors.

    As far as I am concerned... Way to go Cal.com, and a good reminder to never use your services.

  • cadamsdotcom 2 hours ago

    > Security testing has to become an automated, integral part of the CI/CD pipeline. When a developer opens a pull request, an AI agent should immediately attempt to exploit it. When infrastructure changes, an AI should autonomously validate the new attack surface. You do not beat automated attackers by turning off the lights; you beat them by running better automation on the inside.

    This feels like the core of the article, but it doesn’t prove the need for open source.
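
    Whatever one thinks of that argument, the quoted pipeline step at least reduces to a simple gate, assuming a (hypothetical) attack agent that emits structured results per attempt:

```python
def pipeline_gate(exploit_results: list[dict]) -> bool:
    """Pass the pull request only if no exploit attempt succeeded.

    Each result is assumed to look like {"name": ..., "succeeded": bool};
    an empty list (no attempts) passes by default."""
    return not any(r.get("succeeded", False) for r in exploit_results)
```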

  • agentifysh 2 hours ago

    Pretty overreaching claims about another company's internal decisions and open source in general. There is a lot of incentive to stop doing open source these days.

    One I am experiencing right now is somebody just copying my repo without crediting me - they didn't even try to change the README. It's pretty discouraging.

    The other is security: the premise that volunteers will report vulnerabilities only really matters if you are big enough for a small portion of people to dedicate themselves to it. For the most part, people take an open source tool, use it, and then forget about it; they only want stuff fixed.

    Lastly, open source development kinda sucks so far. I've been working on a few different tools, and the amount of trolling and just bad-faith actors I had to deal with is exhausting. On top of that there is a constant stream of people just demanding stuff be fixed quickly.

  • JoshTriplett 2 hours ago

    I wonder whether cal actually has concerns about security (in which case, they're wrong, this argument was false when people made it decades ago), or whether they just took a convenient excuse to do something they wanted to do anyway because Open Source SaaS businesses are hard.

  • Prunkton 2 hours ago

    I'm hopeful the article is right about its prediction, although I'm under the impression the attacker/defender dynamic is asymmetric, with the defender on the losing end. I hope someone can prove me wrong though...

    Let's make the assumption that the same amount of money needed to exploit a critical vulnerability is also required to find and fix it.

    Let's say we have a project with 100 modules, and it costs us $100,000 to check these modules for vulnerabilities. What is stopping an attacker from spending the same amount of money to scan, say, 10 modules, but this time with 10x the number of tokens per module than the defender had when hardening the software?
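
    Worked through in code, those numbers give the attacker a 10x per-module depth advantage for the same total spend:

```python
DEFENSE_BUDGET = 100_000   # dollars spent hardening everything
ATTACK_BUDGET = 100_000    # same total spend, concentrated
TOTAL_MODULES = 100
TARGETED_MODULES = 10

# The defender must spread the budget across every module...
defender_per_module = DEFENSE_BUDGET / TOTAL_MODULES     # $1,000 each
# ...while the attacker only needs one hit in a chosen subset.
attacker_per_module = ATTACK_BUDGET / TARGETED_MODULES   # $10,000 each
depth_advantage = attacker_per_module / defender_per_module
```

    The asymmetry is the classic defender's dilemma: the defender pays per module, the attacker pays per success.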

  • 6thbit an hour ago

    Great PR piece by Strix, but I find the messaging mixed.

    Cal.com folks are getting a red team for free, wouldn't that further convince them their closed source software is strong enough?

    Isn't Strix's business companies paying for scans regardless of whether the software scanned is open source or closed?

  • Divs2890 2 hours ago

    Closing your source doesn't close your attack surface; it just closes off the community that would have helped you defend it. Security through obscurity is a kind of tradeoff, not a strategy... I mean, that's what I feel.

  • pixel_popping 3 hours ago

    At the same time, I heavily support open source and contribute a lot, but I can't necessarily agree that security-through-obfuscation doesn't play a major role in slowing down attacks. Cloudflare has based its whole security posture on being closed-source (for example its anti-bot mechanism), which is hard to reverse engineer, and it remains a leader today with few serious security breaches.

    Some things just can't be truly secure, either; DDoS protection is mostly a guessing/preventive game, and exposing your firewall config/scripts will make you more vulnerable, not less.

    If your codebase isn't exposed, attackers are constrained by the network and other external restrictions, which greatly reduces the number of possible trials. Even with a swarm of residential proxies, it's not at all the same as inspecting a codebase in depth with thousands of agents and all the models.

  • Talderigi 2 hours ago

    feels like people are arguing the wrong axis tbh

    - it's not open vs closed anymore, it's more like bug finding going from a few devs poking around to basically infinite parallel scanners

    - so now you don't get a couple of thoughtful reports, you get many edge cases and half-real junk. fixing capacity didn't change though

    - closing the repo doesn’t really save you, it just switches from white-box to black-box… and that’s getting pretty damn good anyway

    real problem is: vuln discovery scaled, patching didn’t. now everything is a backlog game

  • shay_ker 3 hours ago

    It's a good question - is blackbox hacking as effective as whitebox hacking, for AI agents? I've gotta assume someone at Anthropic is putting together an eval as we speak.

    • hansvm 2 hours ago

      I don't really know, but I have a story which might prompt some conversation about it.

      At $WORK we had a system which, if you traced its logic, could not possibly experience the bug we were seeing in production. This was a userspace control module for an FPGA driver connected to some machinery you really don't want to fuck around with, and the bug had wasted something like three staff+ engineer-years by the time I got there.

      Recognizing that the bug was impossible in the userspace code if the system worked as intended end-to-end, the engineers started diving into verilog and driver code, trying to find the issue. People were suspecting miscompilations and all kinds of fun things.

      Eventually, for unrelated reasons, I decided to clean up the userspace code (deleting and refactoring things unlocks additional deletion and refactoring opportunities, and all said and done I deleted 80% of the project so that I had a better foundation for some features I had to add).

      For one of those improvements, my observation was just that if I had to write the driver code to support the concurrency we were abusing I'd be swearing up a storm and trying to find any way I could to solve a simpler problem instead.

      Long story short, I still don't know what the driver bug was, but the actual authors must've felt the same way, since when I opted for userspace code with simpler concurrency demands the bug disappeared.

      Tying it back to AI and hacking, the white box approach here literally didn't work, and the black box approach easily illuminated that something was probably fucky. Given that AI can de-minify and otherwise spot patterns from fairly limited data, I wouldn't be shocked if black-box hacking were (at least sometimes) more token-efficient than white-box.

      • pixl97 2 hours ago

        >simpler concurrency demands

        This seems to be extremely common. Been a very long time since I looked at Linux kernel stuff, but there were numerous drivers that disabled hardware acceleration or offloading features simply because they became unreliable if they were given heavy loads or deep queues.

  • RRRA 3 hours ago

    How long before LLMs perform perfect disassembly exploitation...

  • bzmrgonz 2 hours ago

    Strix was so close to being the hero we deserve. I think blue teams like Strix should offer their services for free to open source ships out at sea. There are 3 wins here: global goodwill, testimonials and reviews, and market loyalty.

  • simonreiff 2 hours ago

    Is there any recent research on whether open or closed-source projects are more secure? I am genuinely curious if anyone has studied the question.

    • teunispeters an hour ago

      There's lots of "yes, but" research from 2015 and before on scholarly paper search engines. (I do not have access to most of it, but there are some public papers.)

      As a convention when dealing with cryptography: so far the only organization that has succeeded in doing closed-source cryptography securely has been the USA's NSA, and even most of their algorithms are public.

      I mostly work in the closed source world, but my observation from all the code bases I've seen is that open source is mostly more secure, except when formal security specifications are followed very thoroughly - and then security is as good as the specifications. (YMMV there, of course.)

  • phkahler 2 hours ago

    Can any of the AI systems read binary yet? Perhaps generate source code from an object file? If so, that would make access to source redundant for that type of analysis.

    • pixl97 2 hours ago

      AI-assisted decompiling has been a thing for a while now; from what I know, most people are using assisted tooling for it.

      With that said, it at least seems possible for a model to read binary itself, but most of the magic there is in execution, so you'd have to have an LLM behave kind of like a processor, I think.

    • charcircuit 2 hours ago

      Yes, the current meta for CTFs, which include challenges for exploiting binaries, is to just throw an LLM at it.
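
      Neither direction strictly needs source: disassembly already recovers a structured instruction listing from a compiled artifact, and that listing is what you would feed to a model. As a toy, stdlib-only sketch of the idea (real native-code work would use an actual disassembler rather than Python's `dis`; the `add` function here is just a made-up example):

```python
import dis

def add(a, b):
    # Compiled to CPython bytecode at definition time.
    return a + b

# Recover a structured instruction listing from the compiled code object:
# the same kind of intermediate form a decompiler (or an LLM prompt)
# starts from when no source is available.
names = [ins.opname for ins in dis.get_instructions(add)]
print(names)
```

      Disassembling native binaries is the same shape of problem, just with a much larger instruction set and no friendly symbol names.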

  • wg0 34 minutes ago

    > Today, Cal.com announced they are transitioning their core codebase away from open source. The reasoning provided by their CEO, Bailey Pumfleet, is that AI has automated vulnerability discovery at scale, making code scanning and exploitation "near zero-cost". In this new world, they argue, "transparency becomes exposure."

    Laughable and hilarious. Extremely short-sighted. I can show code generated by Claude Opus 4.6 at the highest compute intensity that lacks even basic input-validation checks that were clearly provided in the spec.

    There's no point in arguing with crypto and AI bros. They are the same tribe. AI crowd however might learn their lessons sooner because the universe isn't forgiving or flexible.

    Note: I use AI code generators all the time, but I treat them as very, very dumb transpilers no matter how expensive their input/output pricing is, and I learned that the hard way.

    PS: Edit to fix typos.

  • ChrisArchitect 2 hours ago

    Related:

    Cal.com is going closed source

    https://news.ycombinator.com/item?id=47780456

  • skal9606 2 hours ago

    Seems like flimsy reasoning from the Cal.com CEO. How should we think about Strix vs. foundational model releases like Mythos?

  • funvill 3 hours ago

    This is just an excuse to close source their project while blaming AI. Spineless bullshit excuse instead of owning your choices.

    Shame

    • Yaa101 2 hours ago

      I agree; it is shortsighted (next-quarter syndrome). First of all, the AI does not need source to find vulnerabilities, and further, it breaks the unwritten contract of exchanging source for eyeballs, which creates better source. I guess the CEO wants less security and an end to the evolution of the company's code.

    • serial_dev 3 hours ago

      It's like the layoffs. Let's blame this thing we wanted to do for a while on AI.

  • Bridged7756 2 hours ago

    It's just an excuse. Classic open source rug pull here.

  • dzonga 3 hours ago

    a lot of the vulnerabilities in web-apps are people trying to be too smart for their own good.

    use battle-tested frameworks such as Rails or Django and you won't make rookie security mistakes.

    • pixel_popping 2 hours ago

      Except that Django has had so many criticals we can't even list them in a thread here. But yeah, using known and ancient frameworks is generally smart.
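
      The "rookie mistakes" those frameworks prevent are mostly injection-style bugs. A minimal sketch of the classic case using Python's stdlib sqlite3 (the table and payload are made up for illustration; an ORM like Django's parameterizes queries for you):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: string formatting splices the payload into the SQL,
# so the WHERE clause matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()

# Safe: a parameterized query treats the payload as a literal string,
# which matches no row at all.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] - the payload leaked a row
print(safe)        # [] - the payload matched nothing
```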

  • reenorap 3 hours ago

    All content is going to go behind paywalls.

    There is zero incentive or reason for content creators to let AI slurp their content for free and distribute it and get all the money from it.

    Everything new will be licensed and if AI companies want access to it, they will need to pay for it, just like we will.

    • _flux 2 hours ago

      Will it help? AI companies will just buy those subscriptions, and in the big picture it won't cost them that much.

    • pixl97 2 hours ago

      Of course, this neglects why mostly-free things posted on the internet generally won. Take Microsoft, for example: all their money-makers are licensed, yet at the same time you can download almost every single one for free and install it.

      The people that go behind paywalls don't realize how much they'll have to spend on marketing to catch up to those that are open.

      And that only frames the current state, where models are very expensive to train. Once model training is close to the point where a group of individuals can afford it, it's pretty much game over for our current paradigm. The software police will be running around trying to play whack-a-mole on open-weight models with people all over the world.

      • reenorap an hour ago

        Why would I create content that I don't get paid for and I don't even get credit for? Everyone who creates free content right now is simply doing the work of AI companies to make them more useful for free.

        Search engines will cease to exist, so no one will search your content and then click on your link. AI will simply regurgitate your content and take the money for tokens or subscription and not acknowledge you at all.

        • pixl97 an hour ago

          >There isn't a rule of economics that says better technology makes more, better jobs for horses. It sounds shockingly dumb to even say that

          --Humans need not apply.

          It's kind of funny that you think you're going to be making money writing software. If you lock up your software, who exactly are you selling it to anyway? It's like you're thinking 25% through the situation, concluding "I can stay where I am and I don't have to change anything", and then crying later when it doesn't work.

          What are you going to do, advertise in BYTE magazine (dead)? On Instagram? With a sandwich board on a Seattle street corner? What does the software market even look like in the AI age?

          And much like how Google and Amazon eat your lunch now whenever they want, successful AI companies will buy up some software ideas and feed them to their models (which will be stolen later by other models). Anyone that sees your software will mock up a useful clone of it pretty quickly. And foreign AI companies will just outright steal it.

          You're right that you won't create content you don't get paid for; you just won't be creating anything while competing with the other unemployed masses for strawberry-picking jobs.

    • handzhiev 3 hours ago

      I don't think this will happen. If most content goes behind a paywall, releasing content for free will again become a valuable source of attention. It used to be so before the web got filled with so much free content that it lost any value.

      • reenorap 2 hours ago

        I disagree. AI will slurp their content so quickly that no one will notice.

  • daytonix an hour ago

    I can't believe we still have people out there buying this baby-brain idea of "if muh code is open then people will find vulns!!" This has been disproven for 20+ years; catch up.

    AI generated bullshit PRs are clearly the bigger issue in the OSS space.

  • dangus 2 hours ago

    First we blamed AI for layoffs, next we are blaming AI for the AI bait and switch.

    It's entirely possible this CEO sincerely believes this, but that means you as a potential customer should stay away: now you know that the CEO of this company has no idea how technology works even at an executive level and/or that he doesn't consult his experts before making decisions.

    • pixel_popping 2 hours ago

      That's literally not it. A CEO can know how technology works and still not apply that knowledge in management; many people do things they "dislike" or don't believe in every day.

      • dangus 2 hours ago

        Well, that's what I mean: this guy is using this issue as a scapegoat to close-source the software and increase revenues as a result.

        The pipeline goes like this:

        Use open source license to gain traction and credibility > establish a customer base > pull the rug on open source to get everyone who depends on your product but isn't yet paying to pay.

  • jongjong 2 hours ago

    I decided to not open source my latest project but it has nothing to do with security concerns. My code is perfectly secure and bug-free.

    My concern is mostly financial. Most people would be in a better position to monetize my software than I am... Using AI to obfuscate the origin while appropriating all the key innovations. I wouldn't get any credit.

    Also, I'm not really interested in humans anymore. I have human fatigue.

    • poorcedural 6 minutes ago

      Humans are fine, the problem is your worth.

    • pixl97 2 hours ago

      >My concern is mostly financial.

      Then AI will eat your lunch anyway if the financial part has anything at all to do with the code.

      AI can decompile code very well.

    • flkiwi 2 hours ago

      > My code is perfectly secure and bug-free.

      I mean, bold statement but statistically speaking it's almost certainly incorrect. I will say that, irrespective of whether source is open or closed, I would be deeply skeptical of a project that made this assertion.

      • robocat 43 minutes ago

        I assumed they were trying to be humorous, although I find that type of humour obnoxious enough that it would put me off the project.

        • flkiwi 19 minutes ago

          I gave it a good minute of reading and re-reading because I thought it SURELY was meant tongue in cheek, but I couldn’t make it work.

  • julianozen 2 hours ago

    There is another product I use that has a freemium model. They hope to monetize a paid tier for users who use the product a lot.

    In order to build trust, they open sourced their product. I forked it and removed the blocks on the freemium features in 15 minutes using Claude Code. I never published the code to anyone else, just used it myself.

    Unfortunately, I think it isn’t going to be tenable for systems to be fully open sourced going forward.

  • poorcedural an hour ago

    The idea of tying source code to sustenance will soon be history. We will all remember the days when adding a few thousand smart lines of code meant you could gain notoriety and, through cheap viral copying, turn those traits into wealth and worth. But software has always just been zeros and ones; the value only happens when interpreted.

    The future is sharing, though you may not believe it because your income is tied to being clever. Long term, we are all more clever because of the sharing, even when your contribution does not add to your personal success. Asking a company or its individuals to forego their success will not make them add more to our future. But they will add to our future nonetheless, because they all feel, as we all do, that adding is what we are meant to do.

  • misiti3780 2 hours ago

    I have a large open source project and noticed the number of LLM-generated PRs is making it unmanageable. Every two weeks I go in and kill all of them, and when someone complains or asks why, I realize it was a real person and then I merge it.

    Is anyone else seeing this / has anyone fixed this problem?

    • cuu508 an hour ago

      Yes, I "fixed" it by disabling pull requests on the repository. I'm still happy to pull from other people's branches (and do say so in CONTRIBUTING.md)

    • pixl97 2 hours ago

      > kill all of them and when someone complains or asks why, I realize it was a real person and then I merge it.

      I mean an AI skill is perfectly capable of doing this exact same thing.

  • the_af 2 hours ago

    I'm pro FOSS, militantly so. FSF-style.

    But... playing devil's advocate, if AI makes it very easy to find exploits without the source code, wouldn't it be doubly effective finding them with the source code as well? And why is the dichotomy posed by this blog post "open source with AI reviews by everyone" vs "closed source but only the bad guys use AI"? What if the scenario was: closed source and the authors/security team use every AI tool at their disposal to find bugs? What do the community's eyeballs add to this equation, assuming (big if) AI review of exploits is such a force multiplier?

    Before any knee-jerk reactions: big fan of open source, I'm not arguing this will kill it, I don't have the faintest idea what Cal.com is and I think a world without FOSS would be a tragedy, I run linux and most of my software on my personal PC (other than games) is FOSS.

  • themafia an hour ago

    > The real solution: fight fire with fire

    Which works if you assume that AI can find 100% of your bugs.

    It can't. So this is a complete waste of your time and will hide actual bugs behind a layer of confidence _and_ obscurity.

    You're going to actually have to sit down and figure out how to provide real security in your product while earning profits. This is called "work." I understand Silicon Valley would like to earn money and not work. I am eager for these people to get their comeuppance.

  • shevy-java 2 hours ago

    "Open Source Isn't Dead."

    Well ...

    Open Source as such will never "die", but we only need to look at what happened in, say, the last 5 or 10 years: private entities with a commercial interest have been flexing their muscles. Microsoft - also known as Microslop these days - with GitHub is probably the most famous example still, but you can see others. One that annoys me personally is Shopify's recent influence - rubygems.org is basically just shopifygems.org now. See: https://blog.rubygems.org/2026/04/15/rubygems-org-has-a-publ...

    "Contributors from both the RubyGems client team and Shopify are already working with us on making native gems a better experience for the Ruby community. "

    There is a lot more I could add to this (see my complaint about how rubygems.org hijacks gems past the 100,000-download barrier; this was why I retired from using rubygems.org, and then the year afterwards Ruby core purged numerous developers. The handwriting is soooooo clear that Shopify flexed its muscles here).

    I think we need to make open source development more accessible to everyone, not just corporations throwing their money around to gain influence and leverage. I don't have a great idea for how to make this model work; economic incentives kind of have to be there too, I get that part, and I am not sure which models could work. But right now we really have a big problem. We can also see this with age sniffing (age verification - see the article pointing at Meta as orchestrating influence and lobbying) and many more changes.

    Something has to change. Hopefully some people cleverer than me can come up with models that are actually sustainable, even if it may not necessarily be a "fund an open source developer for a year". There could be a more widespread "achieve xyz" or some other lower-cost effort - but again, I don't have a good suggestion here. Hopefully something improves, though, because I am getting really tired of private interests constantly sabotaging and ruining the whole ecosystem while claiming they "improve" it. We have the old "War is peace. Freedom is slavery. Ignorance is strength." going again. Opposite day, every day.

    • pixl97 an hour ago

      There are no answers, only compromises.

      Corporations are about money.

      Individuals need to eat.

      Governments love to concentrate power.

  • righthand 2 hours ago

    Open source is dead; the AI pundits are applying the wrong lessons. No one has to accept AI or play the game; all these AI companies don't work if everyone stops publishing. Let the AI-generated-content industry have the publishing space; they're very adamant about taking it over and watering it down with slop.

    I wrote some very nice expressive text for our deployment guide. My project manager took the guide and had Gemini break it down into plain boring bullet points. AI and the pundits can gf themselves in their journey to kill human expression.

    Here is what I wrote in the guide:

    "Post Deploy Responsibility

    If you made it this far, say “Wow I really did it and it was so easy!”

    Did you say it? Good. Now you are entirely responsible for any issues or bugs that may arise from the newly deployed code. Don’t go anywhere until the deploy has finished (usually takes a few minutes). While an issue or bug may not leave you directly at fault, you are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy."

    Here is what the project manager slopped it into:

    "- Post deploy responsibility

      - You are responsible for performing QA upon deployment 
    
      - You are responsible for any issues or bugs that may arise from newly deployed code 
    
      - You are responsible for coordinating any rollbacks or remediations that may be needed until the next deploy"
    
    My paragraph wasn't long, hard to understand, or poorly written. I wouldn't have objected to a rewording or some changes, but the project manager chose to just copy-paste it into Gemini and copy-paste it back. So my take is that they didn't understand what I wrote, which is a few sentences long; frankly, it's sad if a paragraph is too intense for you to read. When my project manager did this during the meeting, I said, "RIP human expression", and their response was a very hasty "no, that's not what's happening". This is what all the pundits want to do to everyone and society. Don't believe them that "it's just a tool"; that is just a tactic to get you to roll over so they can shove more AI in your face.

    • hootz 2 hours ago

      And your paragraph had a much bigger impact on the reader: it reads like an experienced senior developer teaching you not to screw things up, while the AI-generated bullet points sound like generic ToS that everyone ignores.

  • theturtletalks 3 hours ago

    Enshittification has come for VC-backed open source. As someone on Twitter said, AI has made commercial open source obsolete, especially when users can point Claude Code at calcom on GitHub and ask it to build the scheduling features directly into their own product. That's what spooked Cal.

  • Peer_Rich 2 hours ago

    cofounder here

    going closed source does not mean we are not fighting fire with fire

    we are using a handful of internal AI vulnerability scanners for months now

    being open source simply reduces risk by 5x to 10x according to several security researchers we are working with https://cal.com/blog/continuous-ai-pentesting-vulnerability-...

    • henry2023 2 hours ago

      Don’t get me wrong, but if virtually all modern software infrastructure lives on top of open source and it's mostly fine, then I'd imagine you can make a scheduling web app secure independent of whether it's OSS or not.

      It’s OK if there’s another reason for this transition, just be transparent about it and don’t treat your users as children.

      • righthand an hour ago

        They don’t owe you a complete list of reasons why they're closed-sourcing their software. They are not a publicly traded company, and no one (customers) actually cares whether the product is open source or not.

    • OsrsNeedsf2P 2 hours ago

      I've always used and advocated for Cal.com because it's open source. I understand you need to make money and this is no longer the GTM, but don't lie about it.