243 comments

  • csdreamer7 19 hours ago

    This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?

    https://developers.redhat.com/blog/2021/01/05/building-red-h...

    Think of how much faster their servers would be with one of those EPYC consumer CPUs.

    I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I am curious why they haven't spent $2-3k on one of those Zen 4 or Zen 5 mATX consumer EPYC servers, since those come in at around $2k, well within what they have on hand. If they have a fleet of these old servers, I imagine a Zen 5 one could replace at least a few of them and consume far less power and space.

    https://opencollective.com/f-droid#category-BUDGET

    Not sure if this includes their Liberapay donations either:

    https://liberapay.com/F-Droid-Data/donate

    • bayindirh 17 hours ago

      > This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?

      This is not always a given. On our virtualization platform, we recently upgraded a vendor-supplied VM, and while it booted, some of the services on it failed to start despite us exposing an x86-64-v2 + AES CPU to the VM. The minimum requirements cited "Pentium and Celeron", so that should have been more than enough.

      It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU model and things returned to normal.

      So, their servers might be capable but misconfigured, or the binary might require more than what it states, or something else.
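
      A minimal, Linux-only sketch of that kind of sanity check: compare the flags the guest actually sees against a few of the ones the x86-64-v2 level needs (flag names are the /proc/cpuinfo spellings; the list below is deliberately incomplete).

        def cpu_flags(path="/proc/cpuinfo"):
            # The "flags" line lists every feature the (virtual) CPU exposes.
            with open(path) as f:
                for line in f:
                    if line.startswith("flags"):
                        return set(line.split(":", 1)[1].split())
            return set()

        flags = cpu_flags()
        # A few of the features x86-64-v2 requires (not the full list).
        for want in ("cx16", "popcnt", "ssse3", "sse4_1", "sse4_2"):
            print(f"{want:8s} {'ok' if want in flags else 'MISSING'}")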

      • lucb1e 16 hours ago

        A developer on the ticket writes: "Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3"

        • bayindirh 16 hours ago

          Ooh. They are at least ~15 years old, then. Maybe they scored some old 4-socket Dell R815s. 48 cores ain't that bad for a build server.

          • lucb1e 16 hours ago

            It's kinda good they use such old systems, as the vast majority of pollution occurs during manufacturing of devices since we usually use them only a handful of years. Iirc the break-even point was somewhere around 25 years, as in, upgrading for energy efficiency then becomes worth it (source: https://wimvanderbauwhede.codeberg.page/articles/frugal-comp...). 15 goes a long way towards that!

            On the other hand, I didn't dig very deep into the ticket history, but it sounds like this could have been expected: it already broke once four years ago (2021), so planning an upgrade for when it happened again would have been good foresight. Then again, volunteers... It's not like I picked up the work as an F-Droid user either.

            • NewJazz 15 hours ago

              While I appreciate the sentiment, I think you may be misreading the "Emissions from production of computational resources" section of that link.

              It says that for servers, 13-21 years is the break-even point for emissions from production vs. consumption.

              The 25 year number is for consumer devices like phones and laptops.

              I would also argue that average load on the servers comes into play.

            • miladyincontrol 3 hours ago

              Moot point imo, no one says they have to buy new hardware. Used, affordable, but still much more modern hardware could still save them plenty on power usage and replace several systems with one.

    • Timshel 18 hours ago

      $2-3k? That's barely the price of a bare lower-end Threadripper CPU, not a full EPYC server.

      • wongarsu 17 hours ago

        At our supplier $2k would pay for a 1U server with a 16 core 3GHz Epyc 7313P with 32GB RAM, a tiny SSD and non-redundant power.

        $3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).

        All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.

        • Timshel 15 hours ago

          Yes, but those are Zen 3 Milan CPUs, released in 2021 I believe.

          Not that they're bad or wouldn't be way better than what they have; just that I thought the parent was quite optimistic with his Zen 4/Zen 5 pricing.

          • wtallis 14 hours ago

            OP did say "consumer Epyc", so presumably referring to the parts using the AM5 socket. From a quick check on Newegg, it looks like barebones servers for that platform start at under $1000, to which you need to add CPU, RAM, and storage. So a $3000 budget to assemble a low-end Zen4/5 EPYC server is realistic: $570 for the 16-core EPYC 4565P, a few hundred for DDR5 ECC unbuffered modules, a few hundred for an enterprise SSD, and you have a credible current-gen server from readily available parts at retail prices, without any of the enterprise pricing and procurement hassle.

            • csdreamer7 14 hours ago

              That was my intention; mATX AM5 parts.

            • BizarroLand 13 hours ago

              I imagine they would need quite a few servers to replace their current setup.

              Then there's also the overhead of setting up and maintaining the hardware in their location. It's not just a "solve this problem for ~$2,000 and be done with it".

              I don't know the actual specs or requirements. Maybe one build server is sufficient, but from what I know there are nearly 4,000 apps on F-Droid. One server might be swamped trying to handle that much work in a timely manner.

              • wtallis 13 hours ago

                One server with today's tech can easily replace several servers that are 12+ years old. 4000 apps doesn't sound like a lot of work for one machine, unless you assume almost all of them are releasing new builds more than once a week. A 16-core CPU can rebuild a full Gentoo desktop OS multiple times a week.

        • speckx 13 hours ago

          Is that $2k/$3k for the year?

          • wongarsu 12 hours ago

            That's $2k/3k to get a box with fully assembled hardware delivered to your doorstep or to a DC of your choice.

            Space in your basement or the colo rack of a datacenter along with power, data and cooling is an expense on top. But whatever old servers they have are going to take up more space and use more power and cooling. Upgrading servers that are 5+ years old frequently pays for itself because of the reduced operating costs (unless you opt for more processing power at equal operating cost instead)

      • c0balt 15 hours ago

        Low-end EPYCs (16-24 cores), especially older generations, are not that expensive: $800-1.2k IME. Less when bought as part of a second-hand server.

    • doublepg23 17 hours ago

      Perhaps the servers run Coreboot / Libreboot?

    • ignoramous 17 hours ago

      > about to ask people to donate, but they have $80k in their coffers

      I'd still ask folks to donate. £80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.

      From what I recall, they do want to modernize their build infrastructure, but it is as big an investment as they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.

      It isn't like they don't have any other things to fix or address.

      • csdreamer7 14 hours ago

        I would too but do you have a link to them talking about it?

    • pclmulqdq 17 hours ago

      I'm not even sure mainline Linux supports machines this old at this point. The cmpxchg16b instruction isn't that old, and I believe it's required now.

    • FirmwareBurner 18 hours ago

      >they have $80k in their coffers but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 matx consumer Epyc servers

      I would also like to know this.

      • pastage 18 hours ago

        I would much rather they spend that on having the devs network and travel; the servers work.

        • melodyogonna 18 hours ago

          Why are the builds failing then?

          • tcfhgj 14 hours ago

            planned obsolescence by Google

            • shadowgovt 12 hours ago

              Beginning to use a CPU opcode that is 19 years old doesn't feel like planned obsolescence. If anything, it feels like unplanned obsolescence... "Oh hell, what do you mean your CPU doesn't have that opcode? No, we've just been running the compiler with the default flags, and that opcode got added to the defaults two months ago after a 10-year fight about the possible consequences of changing defaults!"

              Although I'm a little surprised to learn that the binary itself doesn't have enough information in its header to be able to declare that it needs SSSE3 to be executed; that feels like something that should be statically-analyzed-and-cached to avoid a lot of debugging headaches.
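
              As a rough illustration of that kind of static check (a heuristic sketch, not anything the toolchain actually does): disassemble the binary with binutils' objdump and count a few SSSE3/SSE4.1-only mnemonics. It can't tell a guarded runtime-dispatch path from unconditionally executed code, but it would at least surface that the instructions are present.

                import collections, subprocess

                # Mnemonics unique to SSSE3 / SSE4.1 (partial lists).
                SSSE3 = {"pshufb", "palignr", "phaddw", "phaddd", "pmaddubsw"}
                SSE41 = {"ptest", "pblendvb", "pinsrd", "pextrd", "pmulld", "roundps", "roundss"}

                def scan(binary):
                    # objdump -d lines look like: address<TAB>raw bytes<TAB>mnemonic operands
                    out = subprocess.run(["objdump", "-d", binary],
                                         capture_output=True, text=True, check=True).stdout
                    hits = collections.Counter()
                    for line in out.splitlines():
                        parts = line.split("\t")
                        if len(parts) >= 3:
                            mnemonic = parts[2].split()[0]
                            if mnemonic in SSSE3 | SSE41:
                                hits[mnemonic] += 1
                    return hits

                print(scan("aapt2"))  # path is illustrative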

              • tcfhgj 11 hours ago

                > "Oh hell what do you mean your CPU doesn't have that opcode [...]"

                hobbyist dev? sure

                Google? nope

                • shadowgovt 11 hours ago

                  Did they make any explicit guarantees that their newly-cut binaries would continue to support 20-year-old architectures?

                  Googlers aren't gods. It's a 100,000-person company; they're as vulnerable to "We didn't really think of that one way or the other" as anyone else.

                  ETA: It's actually not even Google code that changed (directly); Gradle apparently began requiring SSSE3 (https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153) and Google's toolchain just consumed the new constraint from its upstream.

                  Here, I'm not surprised at all; Google is not the kind of firm that keeps a test-lab of older hardware for every application they ship, so (particularly for their dev tooling) "It worked on my machine" is probably ship-worthy. I bet they don't even have an explicit architecture target for the Android build toolchain beyond the company's default (which is generally "The two most recent versions" of whatever we're talking about).

        • Angius 11 hours ago

          They clearly don't

      • Perz1val 16 hours ago

        Yeah, and everybody has been complaining for years about how slow the builds are. I really want to know too

      • lupusreal 18 hours ago

        Probably a case of "don't fix it if it ain't broke" keeping old machines in service too long, so now they broke.

        • FirmwareBurner 13 hours ago

          That's like ignoring your 'Check Engine' light because the engine still runs.

  • benrutter a day ago

    This is pretty concerning, especially as F-Droid is by far the largest non-Google Android store at the moment, something I feel is really needed regardless of your feelings about Google.

    Does anyone know of plans to resolve this? Will F-Droid update their servers? Is Google looking into rolling back the requirement? (This last one sounds unlikely.)

    • dannyw a day ago

      I agree it’s a bit concerning but please keep in mind F-Droid is a volunteer-run community project. Especially with some EU countries moving to open source software, it would be nice to see some public funding for projects like F-Droid.

      • berkes a day ago

        > please keep in mind F-Droid is a volunteer-run community project.

        To me, that's the worrying part.

        Not that it's run by volunteers. But that all that's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.

        Opposition to market dominance and monopolies by multibillion multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defeatism.)

        Aside from that: being "a volunteer-run community" shouldn't be put forward as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature, something that makes it more resilient/better attuned/easier/earlier adopting/etc.

        • Dr4kn 21 hours ago

          The EU governments should gradually start switching to open source solutions. New software projects should be open source by default and only closed if there is a real reason for it.

          The EU is already home to many open-source contributors and companies. I like the Red Hat approach, where you are profitable, but with open source solutions. It's great for governments because you get support, but it's much easier to compete, which reduces prices.

          Smaller companies also give more of their money to open source. Bigger companies can always fork it and develop it internally and can therefore pressure devs to do work for less. Smaller companies have to rely on the projects to keep going and doing it all in house would be way too expensive for most.

          • ethbr1 18 hours ago

            > I like the Red Hat approach where you are profitable, but with open source solutions.

            The Red Hat that was bought by IBM?

            I agree with your goals, but the devil is in the methods. If we want governments to support open source, the appropriate method is probably a legislative requirement for an open source license + a requirement to fund the developer.

          • FMecha 20 hours ago

            idk if you meant this, but I thought of F-Droid and other major open source projects being publicly funded by the EU.

          • lupusreal 18 hours ago

            It seems like every other year I read a story about Munich switching to Linux. It keeps happening so evidently it's not sticking very well. Either there are usability or maintenance problems, or Microsoft's sales and lobbying is too effective.

        • croes 20 hours ago

          >But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.

          Always has been.

        • theLegionWithin 12 hours ago

          Apple has an iPhone app store monopoly, but Google is the bad guy here?

          hogwash

        • camdroidw a day ago

          Google has recently lost two cases brought by the DoJ; keeping fingers crossed that Android will be divested.

          • shadowgovt 12 hours ago

            It's interesting to me how people panicked about the idea that 23AndMe's bankruptcy implies that some unknown, untrusted third-party will have their genetic information, but people are also crowing at the idea that a company that has purchase history on all your smartphone apps (and their permissions, and app data backup) could be compelled by the government to divest that function to some unknown, untrusted third-party.

      • benrutter a day ago

        Hope I didn't come across as criticising F-Droid here; it seems sucky to have build requirements change under your feet.

        It's just I think that FDroid is an important project, and hope this doesn't block their progress.

      • nativeforks a day ago

        > Nice to see some public funding for projects like F-Droid

        Definitely. Building apps in 2025 on CPUs without even the SSE4.1 instruction set? No way!!

    • happosai a day ago

      Maybe, if F-Droid is important to you, donate so they can buy a newer build server?

      • benrutter a day ago

        I'm not quite sure if I'm reading too much into this, but it comes across as a snarky response, as if I'd said "boo, F-Droid sucks and owes me a free app store!".

        Apologies if I came across like that; here's what I'm trying to convey:

        - Fdroid is important

        - This sounds like a problem, not necessarily one that's any fault of fdroid

        - Does anyone know of a plan to fix the issue?

        For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?

        • happosai 21 hours ago

          You are right, my message comes across as too snarky. What I wanted to give was an actionable item for the readers here.

      • nativeforks a day ago

        This has now become a major issue for F-Droid, as well as for FOSS app developers. People are starting to complain about devs because the new versions of their apps haven't shown up on F-Droid as promised.

      • chasil 18 hours ago

        Is Westmere the minimum architecture needed for the required SSE?

        Server hardware with at least v2 functionality can be found for a few hundred dollars.

        A competent administrator with physical access could solve this quickly.

        Take a ReaR image, then restore it on the new platform.

        Where are the physical servers?

        • LtdJorge 17 hours ago

          Zen 2 Epyc would barely double the price of older platforms if you buy an entire server, and would run circles around them.

          • chasil 17 hours ago

            A slow computer that does what you want is infinitely more valuable than a fast computer that does not.

            • grim_io 16 hours ago

              why would a fast computer refuse to do what you want?

      • ratdragon a day ago

        Did, and do regularly.

    • lucb1e 16 hours ago

      > Are google looking into rolling back the requirement? (this last one sounds unlikely)

      That's apparently what they did last time. From the ticket:

      "Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. \n The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. \n This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"

    • Jyaif a day ago

      > FDroid is by far the largest non-google android store at the moment

      Not even sure it's in the top 10

      • benrutter a day ago

        Wait, really? What other ones are there!? Somebody's already pointed out the Samsung Galaxy Store, but I don't think I know of others?

        Edit: searching online found this if anyone else is interested https://www.androidauthority.com/best-app-stores-936652/

        • magnio a day ago

          There are at least six Android app stores in China that have more than 100 million MAUs each: Huawei AppGallery, Tencent MyApp, Xiaomi Mi Store (or GetApps), Oppo, Vivo, and Honor stores.

          • IceWreck a day ago

            Huawei and Honor are separate app stores?

            And Oppo and Vivo too?

            In both instances one company owns the other - why have competing app stores?

        • lagadu a day ago

          Amazon has a big one too. I also know of a popular one called Aptoide.

          • Dr4kn a day ago

            Amazon closes their app store on 2025-08-20, so in 7 days.

            • rs186 19 hours ago

              *for non-Fire devices.

              • yellowapple 12 hours ago

                I could've sworn they'd already closed it for non-Fire devices.

      • msgodel 20 hours ago

        I think we only know about F-Droid because it's the only high quality one.

        Low quality software tends to be popular among the general public because they're very bad at evaluating software quality.

    • charcircuit a day ago

      >FDroid is by far the largest non-google android store at the moment

      Samsung Galaxy Store is much much bigger.

      • ykonstant a day ago

        Funny true story: I got my first smartphone in 2018, a Samsung Galaxy A5. I have it to this day, and it is the only smartphone I've ever used. This is the first time I've heard about the Samsung Galaxy Store! (≧▽≦)

      • ozim a day ago

        Largest not run by the corporations then ;)

      • benrutter a day ago

        Yup! I missed that one because I didn't realise it still existed. Woops!

    • 1oooqooq 15 hours ago

      Why do you read "Google's build tools cannot be built from source and were compiled with optional optimizations treated as required" and assume the right thing to do is to buy newer servers?

  • ivanjermakov a day ago

    Why not recompile aapt2 for the correct target? The source seems to be available.

    https://android.googlesource.com/platform/frameworks/base/+/...

    • munchlax 20 hours ago

      Have you tried building AOSP from available sources?

      Binaries everywhere. Tried to rebuild some of them with the available sources and noped the f out because that breaks the build so bad it's ridiculous.

      • zoobab 19 hours ago

        "Binaries everywhere"

        So much for "Open Source"

        • jeroenhd 19 hours ago

          The binaries are open source, but Google doesn't design their build chain to recompile from scratch every time.

          Also, you don't need to compile all of AOSP just to get the toolchain binaries.

          • orblivion 15 hours ago

            With how strict F-Droid is I would have expected them to build from source all the way down. Though that sounds like a daunting task so I don't blame them.

        • gbin 19 hours ago

          Everything is open source, if you can read assembly ;)

          • bluGill 19 hours ago

            Machine code. Assembly is higher level. Since data and instructions can be mixed, machine code is harder to decode: that byte might be data or an instruction. Mel would have [ab]used this fact to make his programs work. It's worse on x86, where instructions are not fixed-length, but even on ARM you can run into problems at times.

            • snake42 17 hours ago

              You can always lift machine code to assembly. It's a 1-to-1 process.

              • bluGill 16 hours ago

                No you cannot. While it is 1 to 1, you still need to know where to start: if you start at the wrong place, data will be interpreted as an asm instruction and things will decode legally - but invalidly. It is worse on CISC (like x86), where instructions are different lengths, so you can jump to the middle byte of a long instruction and decode a shorter instruction. (RISC sometimes starts to get CISC features as more instructions are added as well.)

                If the code was written reasonably, you can usually find enough clues to figure out where to start decoding and thus get a reasonable assembly output, but even then you often need to restart the decoding several times, because the decoder can get confused at function boundaries depending on what other data gets embedded and where. Be glad self-modifying code went out of style in the 1980s and is mostly a memory today, as it will kill any disassembly attempt. All the other tricks Mel used (https://en.wikipedia.org/wiki/The_Story_of_Mel) also make your attempts at lifting machine code to assembly impossible.
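
                A tiny sketch of that "wrong starting point" effect, using the capstone Python bindings (pip install capstone). The byte string is arbitrary; it's chosen so that skipping the first byte (a REX prefix here) still decodes cleanly, just to a different first instruction.

                  from capstone import Cs, CS_ARCH_X86, CS_MODE_64

                  CODE = bytes.fromhex("4883ec28488b0511224433")  # arbitrary x86-64 bytes

                  md = Cs(CS_ARCH_X86, CS_MODE_64)
                  for start in (0, 1):  # decode from byte 0, then from byte 1
                      print(f"--- decoding from offset {start} ---")
                      for insn in md.disasm(CODE[start:], start):
                          print(f"{insn.address:04x}: {insn.mnemonic} {insn.op_str}")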

              • Akronymus 15 hours ago

                It definitely isn't a 1:1 process, as there are multiple ways to encode the same instruction (possibly even with subtle side effects depending on the encoding)

                https://youtu.be/eunYrrcxXfw

          • ignoramous 17 hours ago

            ... this is why we get DRM. Source modification is what hurts them.

      • rbanffy 19 hours ago

        Yes. Source availability means nothing without a reproducible build process.

      • ivanjermakov 16 hours ago

        So open source is only in the name, noted

      • pwdisswordfishz 16 hours ago

        Debian also seems to have given up.

    • ethan_smith 17 hours ago

      Using Docker with QEMU CPU emulation would be a more maintainable solution than recompiling aapt2, as it would handle future binary updates automatically without requiring custom patches for each release.

  • mjevans a day ago

    https://en.wikipedia.org/wiki/Streaming_SIMD_Extensions#Late...

    Even my last, crazy long in the tooth, desktop supported this and it lived to almost 10 years old before being replaced.

    However at the same time, not even offering a fallback path in non-assembly?

    • wtallis a day ago

      > However at the same time, not even offering a fallback path in non-assembly?

      There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)

      • shadowgovt 10 hours ago

        Or even, a compiler told to target nothing in particular, and a default finally toggled over from "Oh, we're 'targeting x86'? So CPUs from the early 2000s then" to "Oh, we're 'targeting x86'? So CPUs from the mid-2010s then."

    • vocx2tx a day ago

      Looking at the issue, their builders seem to be Opteron G3s (K10?)[0]

      [0] https://en.wikipedia.org/wiki/AMD_10h

      • SSLy a day ago

        at this point they're guzzling so much power that the electricity costs more than a replacement platform

      • ozim a day ago

        I can imagine it has to be like that, as they only get about $1,500 per month in donations.

        You could buy a newer one, but I guess they have other stuff they have to pay for.

        • a012 a day ago

          For $500 you can get a decent refurbished server on ebay that supports those “new” extensions

        • yaro330 20 hours ago

          I am 100% sure that if they put out a call to action and asked for hardware donations, they would be able to get newer stuff. A Ryzen 7 1700 goes for as little as $50, and DDR4 RAM at supported speeds (2133 MHz) is also dirt cheap.

        • delfinom 20 hours ago

          $1500/month is probably swallowed by what power pigs those Opterons are. Like, they are bad, real bad.

        • WesolyKubeczek 21 hours ago

          This is a bit of a vicious circle. How much of that money even goes into keeping those servers running? The electricity bill alone, geez. They could do a dedicated fundraiser to get themselves two boxes that are a decade old and still have spare parts available. Coming from the Broadwell era, those would have enough instruction set support to cover the baseline towards which multiple distros are converging (Haswell and up).

          • Zak 19 hours ago

            Given their target audience, they could probably just request a hardware donation. Some sysadmin out there is probably getting rid of exactly what they need.

          • Palomides 18 hours ago

            if it's colocated (surely the case) they aren't paying per kWh

        • chillingeffect 19 hours ago

          >$1500/month

          Wow, I just got into NewPipe/F-Droid. It's neat to think even a donation the size of mine can be almost individually meaningful :)

      • yonatan8070 17 hours ago

        I have a home server with a 9th-gen i7 that's doing jack sh!t most of the time; is there a way to donate some compute time to build F-Droid packages?

    • CJefferson a day ago

      The problem with offering fallbacks is testing -- there isn't any reasonable hardware which you could use, because as you say it's all very old and slow.

    • pestatije 19 hours ago

      I'm sure they'll appreciate your old desktop donation

  • karteum a day ago

    I don't fully understand: aren't Gradle and aapt2 open source?

    If you want to build Buildroot or OpenWrt, the first thing it will do is compile its own toolchain (rather than reusing the one from your distro) so that the results are predictable. I would apply the same rationale to F-Droid: why not compile the whole toolchain from source rather than use a binary gradle/aapt2 that uses unsupported instructions?

    • a2128 21 hours ago

      SDK binaries provided by Google are still used, see https://forum.f-droid.org/t/call-for-help-making-free-softwa...

    • mid-kid 17 hours ago

      I agree, this should be the case, but Gradle specifically relies on downloading prebuilt java libraries and such to build itself and anything you build with it, and sometimes these have prebuilt native code inside. Unlike buildroot and any linux distribution, there's no metadata to figure out how to build each library, and the process for them is different between each library (no standards like make, autotools and cmake), so building the gradle ecosystem from source is very tedious and difficult.

      • 1oooqooq 15 hours ago

        Having worked with both mvn and gradle, I always have a good chuckle when I hear about npm "supply chain" hacks.

  • jchw a day ago

    Apparently it was fixed upstream by Google?

    https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153

    Not sure how long it will take to get resolved but that thread seems reassuring even if there isn't a direct source that it was fixed.

    • AnssiH 21 hours ago

      It is not fixed.

      In the thread you linked to, people are mistaking a typo correction ("mas fixed" => "was fixed") for a claim that this new issue has been fixed.

      The one that was fixed is this similar old issue from years ago: https://issuetracker.google.com/issues/172048751

      • jchw 12 hours ago

        Oh, that's unfortunate, very confusing thread.

    • nativeforks a day ago

      It still hasn't been. Currently, most of the devs aren't aware of this underlying issue!

  • micw a day ago

    As far as I can see, sse4.1 has been introduced in CPUs in 2011. That's more than 10 years ago. I wonder why such old servers are still in use. I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.

    Does anyone know the numbers of build servers and the specs?

    • eadmund 20 hours ago

      > I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.

      There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity. Even if power consumption dropped by half, that’s only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible — as is the risk).
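
      (A quick sketch of that arithmetic, with the figures assumed above; the machine price is just what a ten-year payoff implies.)

        watts = 500          # worst-case draw, in watts
        hours = 8760         # hours in a non-leap year
        price_kwh = 0.1253   # average US electricity price, $/kWh
        yearly_cost = watts / 1000 * hours * price_kwh   # ~$549
        savings = yearly_cost / 2                        # if a new box halved consumption
        print(f"${yearly_cost:.0f}/yr in electricity, ${savings:.0f}/yr saved")
        # ten years of those savings (~$2,750) is roughly the new-computer cost implied above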

      And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.

      1: https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...

      • wang_li 17 hours ago

        You can buy a mini PC for less than $550. For $200 on Amazon you can get an N97-based box with 12 GB RAM, 4 cores running at 3 GHz, and a 500 GB SATA SSD. That's got to be as fast as their current build systems, and it supports the required instructions.

        • officeplant 12 hours ago

          Those single-memory-channel shitboxes aren't even fast enough to be usable during big Windows updates, let alone for production use.

          • wtallis 11 hours ago

            One channel of DDR5-4800 actually competes pretty well against four channels of DDR3-1333 spread across two chiplets, which was the best Opteron configuration old enough to not have SSE4.1.

        • 1oooqooq 14 hours ago

          If you don't understand bandwidths and how long components can run at the 80th percentile before failure, you're out of your element in this discussion.

    • adrian_b a day ago

      It was introduced in Intel Penryn, in November 2007.

      However, the AMD CPUs did not implement it until Bulldozer, in mid-2011.

      While they lacked the many additional instructions provided by Bulldozer, including AVX and FMA, for many applications the older Opteron CPUs were significantly faster than the Bulldozer-based CPUs, so there was little incentive to upgrade them before the launch of AMD Epyc in mid-2017.

      SSE4.1 is a natural cut-off point for supporting old CPUs in many software packages, because CPUs without it have a very high overhead for divergent computations (e.g. if ... else ...) inside loops that are parallelized with SIMD instructions.

    • cjaackie a day ago

      I haven't seen the real answer that I suspect here: the build servers are that one dual-socket AMD board which runs open firmware and has no ME/PSP.

    • ffaser5gxlsll a day ago

      On the server side, probably not, but I'd like to point out that old hardware is not uncommon, and it's going to be more and more likely as time passes especially in the desktop space.

      I was hit by this scenario in the 2000s with an old desktop PC, also in the 10-year range, that I was using just for boring stuff and random browsing: old, but perfectly adequate for the purpose. Over time, programs got rebuilt with some version of SSE it didn't support. When even Firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop PC, as it became useless for the purpose.

    • LukeShu a day ago

      I was going to say that I assume that the reason for such old CPUs is the ability to use Canoeboot/GNU Boot. But you absolutely can put an SSE4.2 CPU in a KGPE-D16 motherboard. So IDK.

    • whizzter 21 hours ago

      Because setting up servers is an annoying piece of grunt work that people avoid doing more than absolutely necessary. There's a reason the expensive options from AWS, Azure and Google Cloud make money: much of it "just works" when you focus on applications rather than the infra (until you actually need to do something advanced and the obscure commands or clicking bite you in the ass).

    • Pyrodogg 19 hours ago

      A few months ago Adobe finally updated Lightroom Classic to require these processor extensions, to squeeze out all the matrix mults it can for AI features when running in CPU mode.

      It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.

    • heavyset_go a day ago

      Hardware from after the first couple of generations of x86_64 multicore processors makes perfectly capable servers, even for tasks you want to offload to a build farm.

  • yjftsjthsd-h a day ago

    > Google’s new aapt2 binary in AGP 8.12.0

    Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.

  • nativeforks a day ago

    • benrutter a day ago

      The Catima thread makes F-Droid sound like a really difficult community to work with. Although I'm basing this on one person's comment and other people agreeing, not on any knowledge or experience.

      > But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.

      • eptcyka a day ago

        F-Droid is thoroughly understaffed and yet incredibly ambitious and shrewd about its goals: they want to build all the apps in a reproducible manner. There's lots of friction around deviating from builds that fit within their model. The system is also slow; it takes a long while before a build shows up. I think F-Droid could benefit immensely from more funding, saying that as someone who has never seen F-Droid's side but has worked on an app that was published there.

      • typpilol a day ago

        I saw that too and was wondering what kind of drama happened in the past

        • noirscape 21 hours ago

          Very unexciting stuff; it's just your typical long-running FOSS project issue, as I understand it. The lead maintainer of F-Droid is entrenched in his ways ("cuz it works for me"), which leads to stonewalling any attempts to change or improve the F-Droid workflow[0], but since he holds the keys to the kingdom (and the name recognition prevents forks), they keep him around.

          Everyone else then tries to work around him and through a mixture of emotional appealing, downplaying the importance of certain patches and doing everything in very tiny steps then try to improve things. It's an extremely mentally draining process that's prone to burnout on the part of the contributors, which eventually boils over and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere because the people you'd want to hold that conversation with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but this is an argument often given without putting forward a proper replacement/considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer, most can't.) Rinse and repeat that cycle every five years or so.

          F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long running FOSS project that has significant infrastructure behind it will at some point have this issue and most haven't had a great history at handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy that knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the warning that they stop being 0.5 of a bus factor and become 0 if they do that while the maintainer is still around.)

          [0]: Basically the inverse of https://xkcd.com/1172/

          • Tade0 20 hours ago

            This is the sort of stuff that makes me want to pursue FIRE. There's so much good that could be done, but isn't because people need to be making money for someone else.

            Then again who is to say that I would be a better custodian than this guy?

            • chillingeffect 19 hours ago

              I like your energy, and I like your awareness that more control/a different center of power may not help. This is where community-oriented leadership techniques could go a long way: to build trust, maintain people's roles and dignity, but also to increase that awareness and enable floodlight focus (big picture) in addition to flashlight focus.

  • wtallis a day ago

    It seems quite implausible that F-Droid is actually running on hardware that predates those instruction set extensions. They're seeing wider adoption by default these days precisely because hardware which doesn't support them is getting very rare, especially in servers still in production use. Are you sure this isn't simply a matter of F-Droid using VMs that are configured to not expose those instructions as supported?

  • roywashere a day ago

    This is sort of like a bug I hit last year when the mysql docker container suddenly started requiring x86-64-v2 after a patch level upgrade and failed to start: https://github.com/docker-library/mysql/issues/1055

  • yyyk 21 hours ago

    Their servers are so old, even an entirely different architecture emulating x86_64 would still see a performance increase... So there's no OSS argument here - they could even buy a Talos, have no closed firmware, and still see a performance increase with emulation. If they don't care about the firmware, there are plenty of very cheap x86 options which are still more modern.

    • DrewADesign 20 hours ago

      > Their servers are so old

      When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”

      • queenkjuul 6 hours ago

        My home server is so old, it gets its driver's license next year

  • tetris11 19 hours ago

    I'm a bit lost in this thread, but I've written up what I know for other dummies like me.

    aapt2 is a standalone x86_64 binary used to build Android APKs for various CPU targets.

    Previous versions of it used a simpler instruction set, but the new version requires extra SIMD instructions (SSE4.1). A lot of CPUs after 2008 support this, but not F-Droid's current server farm?

    • its-summertime 19 hours ago

      > Our machines run older server grade CPUs

      So it's a bit of both: older hardware, and hardware whose feature set doesn't match consumer parts. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most, too.

  • pabs3 4 hours ago

    Google should be compiling for the CPU baseline of the ABI their binaries target, and then checking whether newer instructions are available before using them, just like glibc and other projects do. The Debian documentation for this mentions tools to do it, like SIMDe and GCC/clang FMV.

    https://wiki.debian.org/InstructionSelection

    • wtallis an hour ago

      Am I missing something, or does SIMDe only help for cases where a program is using instruction intrinsics, and it doesn't do anything to address cases where the compiler decides to use SIMD as a result of auto-vectorization?

  • userbinator a day ago

    Fortunately the source code is available:

    https://android.googlesource.com/platform/frameworks/base/+/...

    If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.

    • maxloh a day ago

      There is no point for Google to push planned obsolescence on the PC or server space. They don't have a market there.

      • userbinator a day ago

        It does benefit them to make it harder for competitors.

        • maxloh a day ago

          When you mention "competitors," what industries or markets are you referring to?

          No one would write Android apps on a Chromebook, and making it harder to do so would only reduce the incentive for companies to develop Android apps.

          How could Google benefit from pushing a newer instruction set standard on Windows and macOS?

          • heavyset_go a day ago

            The one moderately popular competitor is the project in the OP that is suffering directly from this upstream change.

            • jeroenhd 19 hours ago

              I doubt Google even cares about F-Droid. The Play Store competes with the iOS App Store, Huawei's AppGallery, and probably the Samsung store long before F-Droid becomes relevant.

              If they required a Google-specific Linux distro to build this thing or if they went the Apple route and added closed-source components to the build system, this could be seen as a move to mess with the competition, but this is simply a developer assuming that most people compiling apps have a CPU that was produced less than 15 years ago (and that the rest can just recompile the toolchain themselves if they like running old hardware).

              With Red Hat and Oracle moving to SSE4.1 by default, the F-Droid people will run into more and more issues if they don't upgrade their old hardware.

            • maxloh a day ago

              While your perspective makes some sense, it's highly improbable. It's unlikely that Google was aware of F-Droid's infrastructure specs, or its inability to fix the issue in advance.

              It seems you're suggesting a very specific, targeted attack.

              • fsflover 21 hours ago

                > It seems you're suggesting a very specific, targeted attack.

                Yes, just like it happened with Firefox: https://news.ycombinator.com/item?id=38926156

                • pkasting 4 hours ago

                  Former Chrome team member here. Nightingale's suspicions were plausible but incorrect. The primary cause of every one of these we looked into over the years (and there were indeed many) was teams not bothering to test against Firefox because its market share was low compared to the cost of testing for it. In many cases teams tried to reduce support burden by simply marking "unsupported" any browser they didn't explicitly test, which was sometimes just Chrome and Safari. We were distressed at this and wrote internal guidance around not doing things like the above, and tried to distribute it and point back to it frequently. Unfortunately Firefox' share continued to go down, engineering teams continued to be resource-constrained, and the problem continued to occur.

                  Several years ago I glumly opined internally that Firefox had two grim choices: abandon Gecko for Chromium, or give up any hope of being a meaningful player in the market. I am well aware that many folks (especially here) would consider the first of those choices worse than the second. It's moot now, because they chose the second, and Firefox has indeed ceased to be meaningful in the market. They may cease to exist entirely in the next five years.

                  I am genuinely unhappy about this. I was hired at Google specifically to work on Firefox. I was always, and still remain, a fan of Firefox. But all things pass. Chrome too will cease to exist some day.

    • tonyhart7 a day ago

      "If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd"

      The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow

    • jve 18 hours ago

      As if supporting some system were a one-off thing. You must maintain it and account for it in every feature you bring in going forward.

    • msgodel 20 hours ago

      The Win95 API is pretty incomplete. That was actually a terrible OS. The oldest I'd go playing this game with anything serious is probably XP.

      • jeroenhd 19 hours ago

        It can read files, write files, and allocate memory. Is there anything else you need to compile software?

        • msgodel 19 hours ago

          Can it? Files on Windows 95 and files on most Unix-like OSes are very different things.

          • userbinator 11 hours ago

            They're the same from the perspective of a stream of persistent bytes.

            If you want "very different" then look at the record-based filesystems used in mainframes.

            • CodesInChaos 11 hours ago

              Do you have any recommended reading about record-based filesystems?

    • WesolyKubeczek 21 hours ago

      But you don't, so you won't, scoring one for the planned obsolescence crowd.

      And so won't anyone else who has time to complain about planned obsolescence, and that includes myself.

  • CommenterPerson 18 hours ago

    Non-hacker here. The title says "modern". I don't need modern; I have a 10-year-old phone. Can I still get the occasional simple app from F-Droid?

    I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.

    Also, for developers... please include old-fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.

  • wpm a day ago

    I've got an old Ivy Bridge-EP Dell workstation they can borrow. Goddamn, SSE4.1 is nearly old enough to drink.

    • jeroenhd 19 hours ago

      SSE4.1 can legally buy lightly alcoholic beverages in various European countries already. Next year, it can buy strong spirits.

      Using AMD hardware that's "only" 13 years old can also cause this problem, though.

    • rpcope1 a day ago

      Yeah I was kind of shocked too. Core 2 could do both of those instruction sets. A used Dell Precision can be had for very little and probably would be grossly more efficient than whatever they're using.

  • edgan a day ago

    That F-Droid even requires doing the build itself is one of the reasons I created Discoverium.

    https://github.com/cygnusx-1-org/Discoverium/

    • MYEUHD a day ago

      That F-Droid requires doing the build itself ensures all apps provided by F-Droid are free software (as in freedom) and proven to be buildable by someone other than the app developer.

      • edgan a day ago

        The issue is more complicated than that.

        • twodave 19 hours ago

          Do you mean the overall issue or that F-Droid’s guarantees are arguable? The guarantees may not be the whole discussion, but for many they are the most relevant piece.

          Edit: or perhaps you mean that isn’t the only way to provide such guarantees, which is the implication I got reading your other replies.

        • yjftsjthsd-h 19 hours ago

          How so?

      • mschuster91 18 hours ago

        > and proven to be buildable by someone other than the app developer

        Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.

    • devrandoom a day ago

      So I should take a binary from a random stranger because trust me bro?

      • edgan a day ago

        It is a modified version of Obtainium. You get it from the author via GitHub.

  • skyzouwdev 14 hours ago

    That’s a tough one. It’s ironic that the very platform meant to keep apps open and accessible is now bottlenecked by outdated hardware.

    Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.

    Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.

    • 1970-01-01 13 hours ago

      I've said this before, but I'll say it again. Running on donations is not a viable strategy for any long-term goal. FOSS needs to passively invest the donations. That is a viable long-term strategy. Now when things like this happen, it becomes a major line item moment, and not a limp-along situation, with yet another WE NEED YOUR HELP banner blocking off 1/2 their website.

  • exabrial a day ago

    Man, Android could have been way cooler if it actually used real virtual machines, or at least the JVM.

    • pjmlp a day ago

      I stood by Oracle because, in the long term, as has been proven, Android is Google's J++, and Kotlin became Google's C#.

      Hardly any different from what was in the genesis of .NET.

      Nowadays they support up to Java 17 LTS, a subset only as usual, mostly because Android was being left behind in accessing the Java ecosystem on Maven Central.

      And even though ART is now updatable via the Play Store, all the way down to Android 12, they see no need to move beyond the Java 17 subset, until, most likely, they again start missing out on key libraries that adopt newer features.

      Also, don't count on stuff like Panama, Loom, Vector, or Valhalla (if ever) being supported on ART.

      At least they managed to push into the mainstream the closest thing to OSes like Oberon, Inferno, JavaOS and co: regardless of what one thinks about the superiority of UNIX clones, here they have to content themselves with a managed userspace, something Microsoft failed at with Longhorn, Singularity and Midori due to internal politics.

      • aembleton a day ago

        > Kotlin became Google's C#

          Are Google buying JetBrains?

        • pjmlp a day ago

            They almost could; after all, they have outsourced most of the Android tooling effort to JetBrains, given that Android Studio is mostly IntelliJ + CLion, and Kotlin is the main Android language nowadays.

            Also, the Kotlin Foundation is mostly JetBrains and Google employees.

    • jeroenhd 19 hours ago

      ARM phones didn't have virtualisation back in the day so that would've been impossible.

      Modern Android has virtual machines on devices with supported hardware+bootloader+kernels: https://source.android.com/docs/core/virtualization

    • tonyhart7 a day ago

      JVM??? hell no, native FTW

  • SylvieLorxu 9 hours ago

    Might be worth noting that several devs have suggested users switch to IzzyOnDroid instead. Because IzzyOnDroid distributes official upstream builds (after scanning them), it isn't dependent on any build server.

    It does have build servers for the purpose of confirming that upstream APKs match the source code using reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).

    IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.

  • trenchpilgrim a day ago

    I thought SSE 4.1 dates back to 2008 or so?

    • starkparker a day ago

      The build servers appear to be AMD Opteron G3s, which only support part of SSE4 (SSE4a). Full SSE4 support didn't land until Bulldozer (late 2011).

      • karlgkk a day ago

        I appreciate that this is a volunteer project, but my back-of-the-envelope math suggests that if they upgraded to a $300 laptop with a 10nm Intel chip, it would pay for itself in power usage within a few years. Actually, probably less, considering an i3-N305 has more cores and substantially faster single-thread performance.

        And yes, you could get that cost down easily.

        • wtallis a day ago

          Yes, a used laptop would be an upgrade from server hardware of that vintage, in performance and probably in reliability. If they're really using hardware that old, that is itself a big red flag that F-Droid's infrastructure is fragile and unmaintained.

          (A server that old might not have any SSDs, which would be insane for a software build server unless it was doing everything in RAM.)

          • johnklos 20 hours ago

            How is it that if hardware is old, that means it's unmaintained, or that if it's old, it can't have SSDs? Neither of those things are typically inferred from age.

            I still maintain old servers, and even my Amiga server has an SSD.

            • wtallis 16 hours ago

              If they're running hardware that old, and it's causing them software compatibility problems, then we can infer that their infrastructure is unmaintained, because the cost of moving to newer hardware is so low that the cost of newer hardware could not plausibly be the reason they haven't moved to new hardware. There's dirt cheap used server hardware that would be substantially faster, cheaper to operate, and not have software compatibility issues like this. Money can't be preventing them from using newer hardware.

              We don't know for sure the servers don't have SSDs, but we do know that back in the days of server hardware that didn't support SSE4.1, SSDs had not yet displaced hard drives for mainstream storage, so it's likely that servers that old didn't originally ship with SSDs. It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.

              A server at that age is also going to be harder to repair when something dies, and it's due for something to die. If they lose a PSU it might be cheaper to replace the whole system with something a bit less old. Other components they'd have to rely on replacing with something used, from a different manufacturer than the original, or use a newer generation component and hope it's backwards compatible. Hence why I said using hardware that old would imply their infrastructure is fragile.

              But all of this is still just speculation because nobody involved with F-Droid has actually explained what specific hardware they're using, or why. So I'm still not convinced that the possibility of a misconfigured hypervisor has been ruled out.

              • johnklos 9 hours ago

                > If they're running hardware that old [...] then we can infer that their infrastructure is unmaintained

                You lost me there. One thing has nothing to do with the other.

                People have reasons for running the hardware they run. Do you know their reasons? If you do, please share. If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.

                Is my AlphaServer DS25 unmaintained? It's very old server hardware.

                Is my 1981 Chevette unmaintained? It's REALLY old. Can you infer from the fact that I have a car from 1981 that it's unmaintained? I'd say that reasonable people can infer that it's definitely maintained, since it would most likely not still be running if it weren't.

                > It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.

                I don't know where you learned about servers, but no, it's not a weird choice to use newer storage in older servers. Not at all. Not even a little bit. Maybe you've worked somewhere that bought Dell servers with storage and trashed the servers when the storage needed upgrading, but that's definitely not normal.

                • wtallis 8 hours ago

                  > If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.

                  See, this is just you being unreasonable.

                  Yes, we can all imagine why people might keep old hardware around. But your AlphaServer is at best your hobby, not production infrastructure that lots of people and other projects rely on. Nobody's noticing whether or not it crashes. Likewise for your Chevette: nobody cares until it stalls out in traffic, then everyone around you will make the reasonable assumption that it's behind on maintenance.

                  If F-Droid is indeed using ancient hardware, and repeatedly experiencing software failures as a result, then the most likely explanation is that their infrastructure is inadequately maintained. Sure, it's not a guarantee, it's not the only possibility, but it's a reasonable assumption to work with until such time as someone from F-Droid explains what the hell is going on over there. And if there's nobody available to explain what their infrastructure is and why it is showing symptoms of being old and unmaintained, that's more evidence for this hypothesis.

          • eimrine a day ago

            There are other possible virtues besides performance and (probable) reliability.

          • trenchpilgrim 19 hours ago

            I have computers from the early 2000s that now have SSDs in them. You can get cheap adapters to use SATA and CompactFlash storage on old machines.

        • theandrewbailey 20 hours ago

          I work in the refurb division of an ewaste recycling company[0]. $300 will get you a very nice used Thinkpad or Dell Latitude. They might even get by with some ~$50 mini desktops.

          [0] https://www.ebay.com/str/evolutionecycling

        • eimrine a day ago

          It will have Intel ME which makes the whole open-source ideology... compromised?

          • johnklos 20 hours ago

            If they're relying on binaries from Google, then it's already compromised.

          • karlgkk a day ago

            there are a handful of vendors that will sell you an intel chip with the me disabled, as well as arm vendors that ship boards without an me-equivalent at all

            the point of my post still stands

            • eimrine 13 hours ago

              Do I need to be the US Military for that?

              Intel ME is not a feature for the user; it is intended to control any modern CPU, except the ones going to the US Army/Navy. It is needed to mount Stuxnet-class attacks. The latest chips where the ME can provably be disabled are the 3rd gen.

        • tmtvl a day ago

          Someone send these people a Slimbook.

      • mrheosuper a day ago

        It's insane; I would give them my old Haswell Xeon machine for free, but the shipping cost is likely more than the machine itself is worth.

    • nativeforks a day ago

      Yes, SSE4.1 and SSSE3 were introduced around 2006. The F-Droid build server still runs hardware without them while building modern and some of the most popular FOSS apps.

  • 1vuio0pswjnm7 10 hours ago

    Perhaps there should be more than one F-Droid

    For example, if they published their exact setup for building Android apps so others could replicate it

    How many Android users compile the apps they use themselves

    Perhaps increasing that number would be a goal worth pursuing

  • Arech 21 hours ago

    It's super annoying how software vendors forcibly deprecate good-enough hardware.

    I genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of this.

    • crote 20 hours ago

      The problem is that your "good enough" is someone else's "woefully inadequate", and sticking to the old feature sets is going to make the software horribly inefficient - or just plain unusable.

      I'm sure there's someone out there who believes their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.

      At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?

      • Arech 20 hours ago

        Ah, c'mon, spare me these strawman arguments. Good enough is good enough. If F-Droid isn't worried about that, you definitely have no reason to worry on their behalf.

        "A tiny group is holding back everyone" is another silly strawman argument - all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling just another binary and putting it into a package. Nobody is being hold back by anyone, you just can't make a more silly argument than that...

        • bluGill 18 hours ago

          But it isn't good enough. SIMD provides measurable improvements to some people's code. To those people what we had before isn't good enough. Sure for the majority SIMD provides no noticeable benefit and so what we had before is good enough, but that isn't everybody.

          • johnklos 9 hours ago

            Are you SURE that nobody has figured out how to have code that uses SIMD if you have it, and not use it if you don't?

            Your suggestion falls flat on its face when you look at software where performance REALLY matters: ffmpeg. Guess what? It'll use SIMD, but can compile and run just fine without.

            I don't understand people who make things up when it comes to telling others why something shouldn't be done. What's it to you?
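
            For what it's worth, the pattern is simple enough to sketch in a few lines (GCC/Clang builtins on x86-64; the function names here are made up for illustration, not taken from ffmpeg):

                #include <stdio.h>

                /* plain C fallback: runs on any x86-64 CPU */
                static void scale_scalar(float *v, int n, float s) {
                    for (int i = 0; i < n; i++) v[i] *= s;
                }

                /* SSE4.1 path: the compiler is allowed to emit SSE4.1 instructions here */
                __attribute__((target("sse4.1")))
                static void scale_sse41(float *v, int n, float s) {
                    for (int i = 0; i < n; i++) v[i] *= s;
                }

                static void scale(float *v, int n, float s) {
                    if (__builtin_cpu_supports("sse4.1"))   /* CPUID check at run time */
                        scale_sse41(v, n, s);
                    else
                        scale_scalar(v, n, s);
                }

                int main(void) {
                    float v[4] = {1, 2, 3, 4};
                    scale(v, 4, 2.0f);
                    printf("%f\n", v[0]);   /* 2.000000 on old and new CPUs alike */
                    return 0;
                }

            ffmpeg's dispatch is hand-rolled rather than this exact mechanism, but the idea is the same: detect once, then branch to code the CPU can actually run.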

            • pabs3 4 hours ago

              It definitely is; you can even do it automatically with SIMDe and runtime function selection.

              https://wiki.debian.org/InstructionSelection

            • wtallis 4 hours ago

              ffmpeg is a bad example, because it's the kind of project that has lots of infrastructure around incorporating hand-optimized routines with inline assembly or SIMD intrinsics, and runtime detection to dispatch to different optimized code paths. That's not something you can get for free on any C/C++ code base; function multiversioning needs to be explicitly configured per function. By contrast, simply compiling with a newer instruction set permits the compiler's autovectorization to use newer instructions whenever and wherever it finds an opportunity.
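
              To illustrate what that per-function opt-in looks like (a minimal sketch using GCC/Clang's target_clones attribute, not anything aapt2 actually does):

                  #include <stdio.h>

                  /* One source function, several machine-code versions: the compiler
                     emits a clone per listed target plus a resolver that picks the
                     best one for the running CPU when the program loads. */
                  __attribute__((target_clones("default", "sse4.1", "avx2")))
                  void saxpy(float *y, const float *x, float a, int n) {
                      for (int i = 0; i < n; i++)   /* each clone is autovectorized for its ISA */
                          y[i] += a * x[i];
                  }

                  int main(void) {
                      float y[4] = {0}, x[4] = {1, 2, 3, 4};
                      saxpy(y, x, 2.0f, 4);
                      printf("%f\n", y[3]);         /* 8.000000 whichever clone was chosen */
                      return 0;
                  }

              Doing that for every hot function in a large codebase is real work, which is why most projects just pick a baseline and compile the rest for it.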

    • sparkie 20 hours ago

      OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you have to have flags for every possible feature supported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds for software that is unlikely to be used.

      There are some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest set of features includes SSE4.1, which covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we only need to distribute one set of binaries. This of course would be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.

      [1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...

      [2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

      • Arech 20 hours ago

        In most cases (and this was the case with Mozilla, which I referred to) it's only a matter of compiling code that already has all the necessary support. They are using some upstream component that works perfectly fine on my architecture. They just decided to drop it because they could.

        • sparkie 19 hours ago

          It's not only your own software, but also its dependencies. The link above is for glibc, and is specifically addressing incompatibility issues between different software. Unless you are going to compile your own glibc (for example, doing Linux From Scratch), you're going to depend on features shipped by someone else. In this case that means either baseline, with no SIMD support at all, or level A, which includes SSE4.1. It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.

          • johnklos 8 hours ago

            > It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.

            This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.

            FYI, there are plenty of methods of selecting code at run time, too.

            If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?

          • yjftsjthsd-h 19 hours ago

            > Unless you are going to compile your own glibc (for example, doing Linux From Scratch),

            It's not that hard to use gentoo.

    • RealStickman_ 10 hours ago

      The F-Droid builds have been slow for years, and given how old their servers apparently are, that isn't even surprising in retrospect.

  • andix 18 hours ago

    Do I get it correctly that they run their build infrastructure on at least 15-year-old hardware?

  • o11c a day ago

    Note: the underlying blame here fundamentally belongs to whoever built AGP / Gradle with non-universal flags, then distributed it.

    It's fine to ship binaries with hard-coded cpu flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.

    • IshKebab a day ago

      Exactly. Everything should be compiled to target i386.

      /s (should be obvious but probably not for this audience)

      • pabs3 4 hours ago

        They should be compiled for the CPU baseline of the ABI they are using, and check if newer instructions are available before using them. This is what Debian does, so they can have maximum hardware support.

        https://wiki.debian.org/InstructionSelection

    • userbinator a day ago

      control the universe

      Guess what the company behind Android wants to do...

  • nativeforks 18 hours ago

    There are even some "Unknown problem" errors on the IzzyOnDroid repo for app publishing, even when ensuring a reproducible build; izzy says >>Not necessarily "your fault" – baseline often has such issues: https://github.com/CompassMB/MBCompass/issues/90

    It seems like he is saying the developer is also responsible for that!

    • SylvieLorxu 10 hours ago

      IzzyOnDroid can publish updates even if the build isn't reproducible; this is not an "app publishing" issue at all. IzzyOnDroid can deal with AGP 8.12 fine.

      Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"

  • bluGill 18 hours ago

    QEMU user-mode static on Linux supports automatic emulation of missing instructions. Depending on details I haven't figured out, it can be a lot slower running this way, or close enough to native. I have got that working, but it was a pain and I don't remember what was needed (most of the work was done by someone else, but I helped).

  • fancythat 21 hours ago

    I don't know how many servers they are using, or their specs beyond ancient Opterons, but how is this even an issue in 2025?

    On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD and a 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.

    What are we missing here, besides the build farm being left to decay?

    • WesolyKubeczek 21 hours ago

      Either they want to run on ideologically pure hardware too, without pesky management bits in it (or even indeed UEFI), or they are just "it used to work perfectly" guys.

      In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.

      • bill_mcgonigle 17 hours ago

        Well if you wanted to compromise F-Droid you could target their build server's ME or a cloud vm's hypervisor.

        To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.

        The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.

        There are newer Coreboot boards than Opteron, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned that this is permanent and irreversible.

        F-Droid likely has upgrade options even in the all-open scenario.

      • fancythat 17 hours ago

        I agree with you. Unfortunately, the simplest explanation is often the truth, so they probably just ignored this issue until it surfaced.

        • WesolyKubeczek 17 hours ago

          In other words,

          > they are just "it used to work perfectly" guys.

  • shrubble 12 hours ago

    Is it the CPUs or the compilers? Or possibly a CI/CD runner that has to run something that can’t run on these CPUs?

  • flykespice 16 hours ago

    > The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.

    I don't know why they enabled modern CPU flags for a simple intermediate tool that compiles the APK resource files; it was so unnecessary.

    Welp, there go my plans of salvaging an old laptop to build my Android apps.

  • mandown2308 17 hours ago

    On the other hand, we have "personal" data centers for AI and mining farms for crypto.

  • OldfieldFund 12 hours ago

    I think this might give Google some ideas...

  • nicman23 a day ago

    WTF, they cannot still be running Opterons. It has to be that they are using QEMU with g3 as a CPU profile.. right?

  • jdbdnxjdhe 15 hours ago

    I don't get the issue; the binary target is completely independent of the host target on all but the most basic setups.

  • rasz 5 hours ago

    > (SSE4.1, SSSE3)

    This means their build infrastructure burns excessive amounts of power, being run by volunteers in basements/homelabs on vintage, museum-grade hardware (15-year-old Opterons/Phenoms).

    Gamers were there 14 years ago, with 'No Man's Sky' being the first big game to require SSE 4.1 for no particular reason.

  • kijin a day ago

    Requiring (supposedly) universally available CPU instructions is one thing. Starting to require them in a minor version update (8.11.1 -> 8.12.0) is a whole different thing. What the heck happened to semantic versioning? We can't even trust patch updates anymore these days. The version numbers might as well be git commit IDs.

  • 1970-01-01 13 hours ago

    Put another way, Google is requiring you to have 65nm Intel chips. 2009-ish.

  • shmerl a day ago

    Can't cross-compilation help with that? The compiling CPU doesn't need to match the target.

    • a99c43f2d565504 a day ago

      It's not the target that is now requiring new instructions, but one of the components in the build tools.

  • hulitu a day ago

    > The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support.

    Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s

    Are there any x86 tablets with Android?

    • vardump a day ago

      There are very few 17+ year old build servers at this point. Or laptops and desktops, for that matter.

  • 42lux a day ago

    >> This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.

    What an entitled conclusion.