What CI looks like at a 100-person team (PostHog)

(mendral.com)

54 points | by shad42 5 days ago

24 comments

  • joncrane 12 minutes ago

    We get it! They have 22,477 tests with a 99.98% pass rate, ship 65 commits to main daily, and keep 98 engineers productive on a single monorepo.

    I thought the repetition of these statistics was a little tired, but overall that's an impressive solution. Also totally get that the hardest part is log ingestion and indexing.

  • sd9 an hour ago

    It just seems weird to me to throw all these stats together. Putting 75GB of logs in the same category as managing the compute for this many parallel workflows, and so on, seems odd; these are problems on totally different scales.

    Unfortunately I didn’t really get the point of the article after being bombarded with stats, except that the authors have an AI tool to sell.

  • jofzar an hour ago

    > These are not the numbers of a team with a CI problem. These are the numbers of a team that moves extremely fast and takes testing seriously.

    Please no AI slop, write your own bloody blog posts.

  • SirensOfTitan 2 hours ago

    I don't really think this is at all at the quality bar for posts here. This is obviously AI-slop -- why should I invest more time reading your slop than you took to write it?

    Even so, at what point do we consider the LLM-ification of all of tech a hazard? I've seen Claude go and lazily fix a test by loosening invariants. AI writes your code, AI writes your tests. Where is your human judgment?

    Someone is going to lose money or get hurt by this level of automation. If the humans on your team cannot keep track of the code being committed, then I would prefer not to use your product.

    • Tade0 an hour ago

      > I've seen Claude go and lazily fix a test by loosening invariants.

      He does pull a sneaky on you from time to time, even nowadays, in v4.6, doesn't he?

      To me it's analogous to the current situation at the strait of Hormuz - it's an enormous crisis but since almost everyone has a buffer of oil stockpiles, we can pretend it's not there.

    • fullstackchris 2 hours ago

      this is an extreme strawman - you're basically saying any software ever that has parts written by automation or cron jobs (even before llms) is not a product worth using? foolish.

      • SirensOfTitan an hour ago

        Your response reads much more like a strawman than my original comment.

        I’d challenge you to identify where in my post I said I wouldn’t use software that employs automation?

        It is pretty clear I am not talking about running CI for automated and predictable signals or cron jobs. I am talking about using AI to write code and also fix tests.

        It is exceedingly clear in practice that the volume of code produced by LLMs is too much for the humans using these tools to read and understand. We are collectively throwing decades of best practices out of the window in service of “velocity.” Even the FAANG shops I know of that previously had good engineering cultures seem to be endorsing the cult of AI-generated everything with rubber-stamp approval.

      • rogerrogerr an hour ago

        Cron jobs are not capable of flat-out deceit.

  • Havoc 2 hours ago

    To me that reads more like monorepo is a central point of failure and they’re scrambling to bandaid the consequence of that decision. And the bandaids aren’t gonna scale to 1000 people

    I guess they’re missing whatever Google has to make their monorepo scale

  • simianwords 3 hours ago

    interesting that they have an agent that is triggered on flaky CI failures. but it seems far too specific -- you could open pull requests from many other triggers.

    there doesn't seem to be any upside to having it only for flaky tests, because the workflow is really agnostic to the context.

  • elteto 3 hours ago

    I think this is the first article that truly gave me “slop nausea”. So many “It’s not X. It’s Y.” Do people not realize how awful this reads? It’s not a novel either, just a few thousand words, just fucking write it and edit it yourself.

    • zeristor 3 hours ago

      I'm guessing they have a workflow for blog posts too; with 100k workflows, I figured something would seem a bit weird.

  • IshKebab 3 hours ago

    > Every commit to main triggers an average of 221 parallel jobs

    Jesus, this is why Bazel was invented.

  • zX41ZdbW 3 hours ago

    Two problematic statements in this article:

    1. A test pass rate of 99.98% is not good - the only acceptable rate is 100%.

    2. Tests should not be quarantined or disabled. Every flaky test deserves attention.

    • lab14 3 hours ago

      a test pass rate of 100% is a fairy tale. maybe achievable on toy or dormant projects, but real world applications that make money are a bit more messy than that.

      • alkonaut 2 hours ago

        I definitely have a 100% pass rate on our tests most of the time (in master, of course). By "most of the time" I mean that on any given day, you should be able to run the CI pipeline 1000 times and it would succeed in all of them, never finding a flaky test in one or more runs.

        In the rare case that one is flaky, it's addressed. During the days when there is a flaky test, of course you don't have 100% pass rate, but on those days it's a top priority to fix.

        But importantly: this is library and thick client code. It should be deterministic. There are no DB locks, docker containers, network timeouts or similar involved. I imagine that in tiered application tests you always run the risk of various layers not cooperating. Even worse if you involve any automation/ui in the mix.

        Obviously there are systems it depends on (source control, package servers) which can fail, failing the build. But that's not a _test_ failure.

        If the build fails, it should be because a CI machine or a service the build depends on failed, not because an individual test randomly failed due to a race condition, timeout, test run order issue, or similar.

        • salomonk_mur 2 hours ago

          If one is flaky, then you are below 100%, friend.

          • alkonaut 2 hours ago

            That's not what I mean. I mean that anything but 100% is a "stop the world this is unacceptable" kind of event. So if there is a day when there is a flaky test, it must be rare.

            To explain further:

            There is a difference between 99.99% of tests passing every day (unacceptable), which also works out to 99.99% of tests passing for the year, versus 100% passing on 99% of days and 99% passing on a single bad day. The latter might also give a 99.99% pass rate for the year, but there you were productive on 99 out of 100 days. So "100.0% is the normal" is what I mean - not that it's 100% pass on 100% of days.

            Having 99.98% of tests pass on any random build is absolutely terrible. It means a handful of tests out of your test suite fail on almost _every single CI run_. If you require 100% test pass as validation for PRs before merge, that means you'll never merge. If you require 100% test pass as validation to deploy your main branch, that means you'll never deploy...

            You want 100% pass on 99% of builds. Then it doesn't matter whether 1% or 99% of tests pass on the odd red build, so long as you have some confidence that almost all builds come out entirely green.
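            A quick back-of-envelope check makes the point concrete (assuming, as a simplification, that the article's 99.98% figure is an independent per-test pass rate across the 22,477-test suite - the article may intend something else):

```python
# Back-of-envelope: what a 99.98% per-test pass rate would imply for a
# 22,477-test suite, assuming independent failures (a simplification).
num_tests = 22_477
per_test_pass = 0.9998

expected_failures = num_tests * (1 - per_test_pass)
p_all_green = per_test_pass ** num_tests

print(f"expected failing tests per run: {expected_failures:.1f}")  # ~4.5
print(f"probability of a fully green run: {p_all_green:.1%}")      # ~1.1%
```

            Under that reading, a fully green build would be roughly a 1-in-90 event, which is why the per-build interpretation (almost all builds fully green, rare red days) is the only sane one.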

      • _heimdall 2 hours ago

        When I was at Microsoft my org had a 100% pass rate as a launch gate. It was never expected that you would keep 100% but we did have to hit it once before we shipped.

        I always assumed the purpose was leadership wanting an indicator that implied that someone had at least looked at every failing test.

      • 3 hours ago
        [deleted]
    • YetAnotherNick 2 hours ago

      Even something as simple as docker pull fails 0.02% of the time.
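      The usual mitigation is a mechanical retry around the flaky step. A hedged sketch (the names here are illustrative, not anyone's actual tooling):

```python
# Hypothetical sketch: retry a transiently failing step (e.g. an image
# pull) with linear backoff. At a 0.02% failure rate per attempt, three
# independent attempts push the failure probability to 0.0002**3.
import time

def with_retries(step, attempts=3, base_delay=0.1):
    for attempt in range(1, attempts + 1):
        try:
            return step()
        except RuntimeError:
            if attempt == attempts:
                raise  # give up and surface the real error
            time.sleep(base_delay * attempt)  # linear backoff

# Demo stand-in that fails once, then succeeds.
calls = []
def pull_image():
    calls.append(1)
    if len(calls) < 2:
        raise RuntimeError("transient registry error")
    return "ok"

print(with_retries(pull_image))  # ok (succeeds on the second attempt)
```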

    • rkomorn 3 hours ago

      On top of 2., new tests should be stress-tested to make sure they aren't flaky, so the odds of merging a flaky test go down.
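      One cheap way to do that (a hypothetical harness, not anyone's actual tooling): re-run the new test many times before merge, since a test that fails with probability p survives N runs with probability (1-p)**N.

```python
# Hypothetical pre-merge stress harness: run a test N times and report
# the run on which it first fails, so a flaky test is likely to be
# caught before it lands in main.
def stress(test, runs=200):
    for i in range(1, runs + 1):
        try:
            test()
        except AssertionError:
            return i  # run number of the first failure
    return None  # no flake observed in `runs` runs

def test_sort_is_stable():  # stand-in for the new test; deterministic
    assert sorted([3, 1, 2]) == [1, 2, 3]

print(stress(test_sort_is_stable))  # None
```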

      • lionkor 3 hours ago

        I can run flaky tests on my machine a thousand times without failure, whereas they fail in CI sometimes.

    • hristian 3 hours ago

      [dead]