I loved this the moment I saw it. After looking at an example commit[1], I love it even more. The cursed knowledge entry is committed alongside the fix needed to address it. My first instinct is that every project should have a similar facility. The log is not just cathartic, but turns each frustrating speedbump into a positive learning experience. By making it public, it becomes both a tool for both commiseration and prevention.
1 - https://github.com/savely-krasovsky/immich/commit/aeb5368602...
I agree, I usually put this sort of information in the commit message itself. That way it's right there if anybody ever comes across the line and wonders "why did he write this terrible code, can't you just ___".
As a side note, it's becoming increasingly important to write down this info in places where LLMs can access it with the right context. Unfortunately commit history is not one of those spots.
You are sadly completely missing the point of ever-self-improving automation. Just also use the commit history. Better yet: don't be a bot slave that is controlled and limited by their tools.
>The bcrypt implementation only uses the first 72 bytes of a string. Any characters after that are ignored.
Is there any good reason for this one in particular?
One of their line items complains about being unable to bind 65k PostgreSQL placeholders (the linked post calls them "parameters") in a single query. This is a cursed idea to begin with, so I can't fully blame PostgreSQL.
From the linked GitHub issue comments, it looks like they adopted the sensible approach of refactoring their ORM so that it splits the big query into several smaller queries. Anecdotally, I've found 3,000 to 5,000 rows per write query to be a good ratio.
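For illustration, a minimal sketch of that chunking approach, assuming node-postgres and a hypothetical assets table (not Immich's actual ORM code):

    import { Pool } from "pg";

    const pool = new Pool();
    const PG_MAX_PARAMS = 65535; // hard protocol limit on bind placeholders

    // Insert rows in chunks so each statement stays well under the cap.
    async function insertChunked(cols: string[], rows: unknown[][]) {
      const perChunk = Math.min(4000, Math.floor(PG_MAX_PARAMS / cols.length));
      for (let i = 0; i < rows.length; i += perChunk) {
        const chunk = rows.slice(i, i + perChunk);
        const values = chunk
          .map((row, r) =>
            `(${row.map((_, c) => `$${r * cols.length + c + 1}`).join(",")})`)
          .join(",");
        // cols is assumed to hold trusted identifiers, not user input
        await pool.query(
          `INSERT INTO assets (${cols.join(",")}) VALUES ${values}`,
          chunk.flat()
        );
      }
    }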
Someone else suggested first loading the data into a temp table and then joining against that, which would have further improved performance, especially if they wrote it as a COPY … FROM. But the idea was scrapped (also sensibly) for requiring too many app code changes.
Overall, this was quite an illuminating tome of cursed knowledge, all good warnings to have. Nicely done!
> This is a cursed idea to begin with, so I can't fully blame PostgreSQL.
After going through the list, I was left with the impression that the "cursed" list doesn't really refer to gotchas per se but to lessons learned by the developers who committed them. Clearly a couple of lessons are incomplete or still in progress, though. This doesn't take away from their value or significance, but it helps frame the "curses" as personal observations in an engineering log instead of statements of fact.
> One of their line items complains about being unable to bind 65k PostgreSQL placeholders (the linked post calls them "parameters") in a single query.
I've actually encountered this one. It involved an ORM upserting lots of records into tables with SQL array-of-T columns, where each array item being inserted consumes one bind placeholder.
That made it an intermittent/unreliable error: even though two runs might touch the same number of rows and columns, the number of bind variables needed for the array contents fluctuated.
Or people who try to send every filename on a system through xargs as arguments (argv) to a single command invocation, without NUL-terminated strings. Just hope there are no odd or corrupt filenames, and plenty of memory. Oopsie. find -print0 with parallel -0/xargs -0 are usually your friends.
Also, sed and grep without LC_ALL=C can result in the fun "invalid multibyte sequence".
Another strategy is to pass your values as an array param (e.g., text[] or int[] etc) - PG is perfectly happy to handle those. Using ANY() is marginally slower than IN(), but you have a single param with many IDs inside it. Maybe their ORM didn’t support that.
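A hedged sketch of that array-parameter approach with node-postgres (table and column names are made up):

    import { Pool } from "pg";

    const pool = new Pool();

    // One bind parameter no matter how many IDs; Postgres unpacks the array.
    async function fetchByIds(ids: number[]) {
      const { rows } = await pool.query(
        "SELECT * FROM assets WHERE id = ANY($1::int[])",
        [ids]
      );
      return rows;
    }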
That also popped out at me: binding that many parameters is cursed. You really gotta use COPY (in most cases).
I'll give you a real cursed Postgres one: prepared statement names are silently truncated to NAMEDATALEN-1. NAMEDATALEN is 64. This goes back to 2001...or rather, that's when NAMEDATALEN was increased in size from 32. The truncation behavior itself is older still. It's something ORMs need to know about -- few humans are preparing statement names of sixty-plus characters.
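A quick way to see the collision, sketched with node-postgres (hypothetical names; Postgres does emit a NOTICE about the truncation, but drivers and ORMs routinely swallow it):

    import { Client } from "pg";

    async function demo() {
      const client = new Client();
      await client.connect();
      const prefix = "s".repeat(63); // both names agree in the first 63 bytes
      await client.query(`PREPARE ${prefix}a AS SELECT 1`);
      // Fails with "prepared statement ... already exists" (42P05):
      // both names collapse to the same 63-byte identifier.
      await client.query(`PREPARE ${prefix}b AS SELECT 2`);
      await client.end();
    }

    demo().catch(console.error);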
> few humans are preparing statement names of sixty-plus characters.
Java developers: hold my beer
Hey, if I don’t name this class AbstractBeanFactoryVisitorCommandPatternImplementorFactoryFactoryFactorySlapObserver how would you know what it does?
The '50 extra packages' one is wild. The author of those packages has racked up a fuckload of downloads. What a waste of total bandwidth and disk space everywhere. I wonder if it's for clout.
The maintainer who this piece of “cursed knowledge” is referencing is a member of TC39, and has fought and died on many hills in many popular JavaScript projects, consistently providing some of the worst takes on JavaScript and software development imaginable. For this specific polyfill controversy, some people alleged a pecuniary motivation, I think maybe related to GitHub sponsors or Tidelift, but I never verified that claim, and given how little these sources pay I’m more inclined to believe he just really believes in backwards compatibility. I dare not speak his name, lest I incur the wrath of various influential JavaScript figures who are friends with him, and possibly keep him around like that guy who was trained wrong as a joke in Kung Pow: Enter the Fist. In 2025, I’ve moderated my opinion of him; he does do important maintenance work, and it’s nice to have someone who seems to be consistently wrong in the community, I guess.
This is Wimp Lo! We trained him wrong on purpose, as a joke.
Long time since I thought of that movie.
Looking forward to this Jia Tan sequel in a few years' time.
to save everyone else a search, it's probably ljharb. (I am not a member of JS community, so, come and attack me.)
Saga starts here:
https://x.com/BenjaminMcCann/status/1804295731626545547?lang...
https://github.com/A11yance/axobject-query/pull/354
Specifically Ben McCann along with other Svelte devs got tired of him polluting their dependency trees with massive amount of code and packages and called him out on it. He doubled down and it blew up and everyone started migrating away from his packages.
ljharb also does a lot of work on js standards and is the guy you can thank for globalThis. Guy has terrible taste and insists everyone else should abide by it.
This specific saga starts 1 year before that; arguably a more insane thread:
https://github.com/A11yance/aria-query/pull/497
Wow, that's some deep rabbit hole. This guy apparently gets paid based on npm download counts and games the system through this. Awful.
There is apparently a tool where you can upload your package.json and it will show you how many dependencies are controlled by ljharb:
https://voldephobia.rschristian.dev/
It looks like if I wanted to install a particular piece of software on many modern websites and I didn't have enough resources to hack node itself, talking to this guy would be a logical choice.
Forgive my ignorance of js matters but how does adding packages improve backward compatibility at all?
> Forgive my ignorance of js matters but how does adding packages improve backward compatibility at all?
The scheme is based on providing polyfills for deprecated browsers or JavaScript runtimes.
Here is the recipe.
- check what feature is introduced by new releases of a browser/JavaScript runtime,
- put together a polyfill that implements said feature,
- search for projects that use the newly introduced feature,
- post a PR to get the project to consume your polyfill package,
- resort to bad faith arguments to pressure projects to accept your PR arguing nonsense such as "your project must support IE6/nodejs4".
Some projects accept this poisoned pill, and whoever is behind these polyfill packages further uses their popularity in bad faith arguments ("everyone does it and it's a very popular package but you are a bad developer for not using my package")
I had the displeasure of stumbling upon PRs where this character tries to argue that LTS status does not matter at all in determining whether a version of node.js should be maintained, and that the fact that said old version of node.js suffers from a known security issue is irrelevant because he asserts it's not a real security issue.
It's probably a clout thing, or just a weird guy (Hanlon's Razor), but a particularly paranoid interpretation is that this person is setting up for a massive, multi-pronged software supplychain attack.
Those don't have to be mutually exclusive. Often those with clout are targeted for supplychain attacks. Take xz as an example. Doesn't seem unreasonable that a solo dev or small team looks to either sell their projects or transfer them to someone else (often not even with money exchanging hands). Or even how old social media accounts are hacked so that they can appear as legitimate accounts.
I'm big on Hanlon's Razor too, but that doesn't mean the end result can't be the same.
> (...) but a particularly paranoid interpretation is that this person is setting up for a massive, multi-pronged software supplychain attack.
That person might not be doing it knowingly or on purpose, but regardless of motivations that is definitely what is being done.
A package "for-each"[0] that depends on a package "is-callable"[1], just to make forEach work on objects? Nope, not buying the goodwill here.
[0]: https://www.npmjs.com/package/for-each
[1]: https://www.npmjs.com/package/is-callable
To be fair, he himself removed his unnecessary dependency that caused the explosion of dependencies: https://github.com/A11yance/aria-query/commit/ee003d2af54b6b...
EDIT: Oops, he just did the changelog entry. The actual fix was done by someone else: https://github.com/A11yance/aria-query/commit/f5b8f4c9001ba7...
Older browsers don't support forEach, so it's not like a polyfill is unheard of
https://caniuse.com/?search=foreach
Are you serious here? It isn't a polyfill, it's supposed to work on plain objects which isn't part of the spec at all. Besides that, Array.prototype.forEach is only unsupported in Android Browser 4.3 (from July 2013) and IE8 (from May 2008). Seems like a weird reasoning to add it to packages in 2025.
The author is almost certainly ljharb.
I'm convinced he's a rage baiting account. No-one can consistently have such bad takes.
Your faith in humanity exceeds mine.
It does raise the idea of managed backward compatibility.
Especially if you could control at install time just how far back to go, that might be interesting.
Also an immediately ridiculous graph problem for all but trivial cases.
- Windows' NTFS Alternate Data Streams (ADS) allows hiding an unlimited number of files in already existing files
- macOS data forks, xattrs, and Spotlight (md) indexing every single removable volume by default adds tons of hidden files and junk to files on said removable volumes. Solution: mdutil -X /Volumes/path/to/vol
- Everything with opt-out telemetry: go, yarn, meilisearch, homebrew, vcpkg, dotnet, Windows, VS Code, Claude Code, macOS, Docker, Splunk, OpenShift, Firefox, Chrome, flutter, and zillions of other corporate abominations
>opt-out telemetry: go
By default, telemetry data is kept only on the local computer, but users may opt in to uploading an approved subset of telemetry data to https://telemetry.go.dev.
To opt in to uploading telemetry data to the Go team, run:
go telemetry on
To completely disable telemetry, including local collection, run:
https://go.dev/doc/telemetry
Yep, but you're techsplaining to someone who already knows this. Still, it's not opt-in: it's always on by default and litters stuff without asking. All that does is create a file, but that doesn't remove the traces of all the tracking it leaves behind without asking. This fixes it in a oneliner:
# mac, bsd, linux, and wsl only
(d="${XDG_CONFIG_HOME:-$HOME/.config}/go/telemetry";rm -rf "$d";mkdir -p "$d"&&echo off>"$d/mode")
Opt-out telemetry is the only useful kind of telemetry
Not useful to me or most users. See, other people besides you have different values like privacy and consent.
> npm scripts make a http call to the npm registry each time they run, which means they are a terrible way to execute a health check.
Is this true? I couldn’t find another source discussing it. That would be insane behavior for a package manager.
It might be referring to the check of whether npm is up to date, so it can prompt you to update if it isn't?
probably an update check? It definitely sometimes shows an update banner
Looks like they're missing one. I'm pretty sure the discussion goes further back[0,1] but this one has been ongoing for years and seems to be the main one[2]
[0] https://github.com/immich-app/immich/discussions/2581
[1] https://github.com/immich-app/immich/issues/6623
[2] https://github.com/immich-app/immich/discussions/12292
Datetimes in general have a tendency to be cursed. Even when they work, something adjacent is going to blow up sooner or later. Especially if it relies on timezones or DST being in the value.
> Fetch requests in Cloudflare Workers use http by default, even if you explicitly specify https, which can often cause redirect loops.
This is whack as hell but doesn't seem to be the default? This issue was caused by the "Flexible" mode, but the docs say "Automatic" is the default? (Maybe it was the default at the time?)
> Automatic SSL/TLS (default)
https://developers.cloudflare.com/ssl/origin-configuration/s...
It was indeed the default at the time.
> This is whack as hell but doesn't seem to be the default?
I don't think so. If you read about what Flexible SSL means, you are getting exactly what you are asking for.
https://developers.cloudflare.com/ssl/origin-configuration/s...
Here is a direct quote of the recommendation on how this feature was designed to be used:
> Choose this option when you cannot set up an SSL certificate on your origin or your origin does not support SSL/TLS.
Furthermore, Cloudflare's page on encryption modes provides this description of their flexible mode.
> Flexible : Traffic from browsers to Cloudflare can be encrypted via HTTPS, but traffic from Cloudflare to the origin server is not. This mode is common for origins that do not support TLS, though upgrading the origin configuration is recommended whenever possible.
So, people go out of their way to set an encryption mode that was designed to forward requests to origin servers that do not or cannot support HTTPS connections, and then are surprised those outbound connections to their origin servers are not HTTPS.
I get that it's a compatibility workaround (I did look at the docs before posting) but it's a.) super dangerous and b.) apparently was surprising to the authors of this post. I'm gunnuh keep describing "communicate with your backend in plain text and get caught in infinite redirect loops" mode as whack, but reasonable people may disagree.
I would like to know how this setting got enabled, however. And I don't think the document should describe it as a "default" if it isn't one.
> I get that it's a compatibility workaround (...) but it's a.) super dangerous (...)
It's a custom mode where you explicitly configure your own requests to your own origin server to be HTTP instead of HTTPS. Even Cloudflare discourages the use of this mode, and you need to go way out of your way to explicitly enable it.
> (...) apparently was surprising to the authors of this post.
The post is quite old, and perhaps Cloudflare's documentation was stale back then. However, it is practically impossible to set flexible mode without being aware of what it means and what it does.
> I would like to know how this setting got enabled, however.
Cloudflare's docs state this is a custom encryption mode that is not set by default and you need to purposely go to the custom encryption mode config panel to pick this option among half a dozen other options.
Perhaps this was not how things were done back then, but as it stands this is hardly surprising or a gotcha. You need to go way out of your way to configure Cloudflare to do what amounts to TLS termination at the edge, and to do so you need to skip a bunch of options that enforce https.
(I didn't mean "I would like to know" in some sort of conspiratorial way, I just thought there was a story to be told there.)
Reminds me a lot of phenomenal Hadoop and Kerberos: Madness beyond the gates[1], which coincidentally saved me many times from madness. Thanks Steve, I can't fathom what you had to go through to get the cursed knowledge!
1 - https://steveloughran.gitbooks.io/kerberos_and_hadoop/conten...
This is awesome! Does anyone else wanna share some of the cursed knowledge they've picked up?
For me, MacOS file names are cursed:
1. Filenames in MacOS are case-INsensitive, meaning file.txt and FILE.txt are equivalent
2. Filenames in MacOS, when saved in NFC, may be converted to NFD (see the sketch below)
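A minimal sketch of the NFC/NFD mismatch in plain TypeScript (no filesystem needed, since the string comparison is the whole problem):

    // The same visible name in NFC vs NFD is two different strings.
    const nfc = "caf\u00e9";          // 'é' as one code point (U+00E9)
    const nfd = nfc.normalize("NFD"); // 'e' + combining accent (U+0301)
    console.log(nfc === nfd);         // false: naive comparison breaks
    console.log(nfc.normalize("NFC") === nfd.normalize("NFC")); // true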
I created one of the first CDDBs in 1995 when Windows 95 was in beta. It came with a file, IIRC, cdplayer.ini, that contained all the track names you'd typed in from your CDs.
I put out requests across the Net, mostly Usenet at the time, and people sent me their track listings and I would put out a new file every day with the new additions.
Until I hit 64KB which is the max size of an .ini file under Windows, I guess. And that was the end of that project.
Yep. Create a case-sensitive APFS or HFS+ volume for system or data, and it guarantees problems.
1 is only true by default; both HFS and APFS have case-sensitive options. NTFS also behaves like you described, and I believe the distinction is that the filesystems are case-retentive, so this will work fine:
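A hedged sketch of that case-retentive behavior, using Node's fs (assuming a default case-insensitive APFS or NTFS volume):

    import * as fs from "node:fs";

    // Lookups ignore case, but the stored name keeps its original casing.
    fs.writeFileSync("File.txt", "hello");
    console.log(fs.readFileSync("FILE.TXT", "utf8"));      // "hello"
    console.log(fs.readdirSync(".").includes("File.txt")); // true
    console.log(fs.readdirSync(".").includes("FILE.TXT")); // false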
Maybe the cursed version of the filesystem story is that goddamn Steam refuses to install on the case sensitive version of the filesystem, although Steam has a Linux version. Asshats
ok but this one is not cursed tho (https://github.com/immich-app/immich/discussions/11268)
it's valid privacy and security in how mobile OSes handle permissions
This would be a fun github repo. Kind of like Awesome X, but Cursed.
Love to see this concept condensed! This kind of knowledge will only emerge after you dive into your project and are surprised to find things do not work as you thought (inevitable if the project is niche enough). Will keep a list like that for every future project.
One can really sense the pain just reading the headings
Also a crypto library that limits passwords to 72 bytes? That’s wild
It's written with constant memory allocation in mind. Silly of them to use such a small buffer though, make it a configuration option.
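The 72-byte cap is commonly attributed to Blowfish's key schedule, whose 18-word P-array absorbs at most 72 key bytes. A hedged sketch of the consequence, assuming the bcryptjs package:

    import bcrypt from "bcryptjs";

    // Only the first 72 bytes feed the key schedule, so these two
    // "different" passwords verify against the same hash.
    const prefix = "a".repeat(72);
    const hash = bcrypt.hashSync(prefix + "ignored-tail", 10);
    console.log(bcrypt.compareSync(prefix + "other-tail", hash)); // true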
> Some phones will silently strip GPS data from images when apps without location permission try to access them.
That's no curse, it's a protection hex!
I think this is written unclearly. Looking at the linked issues, the root cause seems to be related to an "all file access" permission, not just fine-grained location access.
It seems great that an app without location access cannot check location via EXIF, but I'm surprised that "all file access" also gates access to the metadata, perhaps one selected using the picker.
https://gitlab.com/CalyxOS/platform_packages_providers_Media...
On the other hand, one particular app completely refuses to allow users to remove location information from their photos: https://support.google.com/photos/answer/6153599?hl=en&co=GE...
I have no idea what that means but to me it looks like it works as designed.
A ward even
This is the best thing I’ve read on hacker news all year
Install an SP3 or TR4 socketed CPU in a dusty, dirty room without ESD precautions and crank the torque on the top plate and heat sink like truck lug nuts until creaking and cracking noises of the PCB delaminating are noticeable. Also be sure to sneeze on the socket's chip contacts and clean it violently with an oily and dusty microfiber cloth to bend every pin.
c. 2004 and random crap on eBay: DL380 G3 standard NICs plus Cisco switches with auto speed negotiation on both sides have built-in chaos monkey duplex flapping.
Google's/Nest mesh Wi-Fi gear really, really enjoys being close together so much that it offers slower speeds than simply 1 device. Not even half speed, like dial-up before 56K on random devices randomly.
"Some phones will silently strip GPS data from images when apps without location permission try to access them."
Uh... good?
I'm torn. Maybe a better approach would be a prompt saying "you're giving access to images with embedded location data. Do you want to keep the location data in the images, or strip the location data in this application?"
I might not want an application to know my current, active location. But it might be useful for it to get location data from images I give it access to.
I do think if we have to choose between stripping nothing or always stripping if there's no location access, this is the correct and safe solution.
> saying "you're giving access to images with embedded location data. Do you want to keep the location data in the images, or strip the location data in this application?"
This is a good example of a complex setting that makes sense to the 1% of users who understand the nuances of EXIF embedded location data but confuses the 99% of users who use a product.
It would also become a nightmare to manage settings on a per-image basis.
Not per-image, it would be per-app. The first time it happened it would ask you. There are already quite a few per-app toggles for things like this so it wouldn't be anything new or particularly surprising.
That said, an alternative to bugging the user might be that when the app makes the call to open the file that call should fail unless the app explicitly passes a flag to strip the location data. That way you protect users without causing needless confusion for developers when things that ought to "just work" go inexplicably wrong for them.
Kind of. But that means any file that goes through that mechanism may be silently modified. Which is evil.
You can load Java Classes into Oracle DB and run them natively inside the server.
Those classes can call stored procedures or functions.
Those classes can be called BY stored procedures or functions.
You can call stored procedures and functions from server-side Java code.
So you can have a java app call a stored proc call a java class call a stored proc ...
Yes. Yes, this is why they call it Legacy.
Why is the YAML part cursed? They serialize to the same string, no? Both [1] and [2] serialize to identical strings. This seems like the ancient YAML 1.1 parser curse striking again.
[1] https://play.yaml.io/main/parser?input=ICAgICAgdGVzdDogPi0KI...
[2] https://play.yaml.io/main/parser?input=ICAgICAgdGVzdDogPi0KI...
dd/mm/yyyy date formats are cursed....
Perhaps it is mm/dd/yyyy (really?!?) that is cursed....
dd/mm/yyyy is most common worldwide (particularly Europe, India, Australia) followed by yyyy/mm/dd (particularly China, Japan, South Korea).
https://wikipedia.org/wiki/Date_and_time_representation_by_c...
IMO the best format is yyyy/mm/dd because it’s unambiguous (EDIT: almost) everywhere.
For a really cursed one that breaks your last comment, check out Kazakhstan on the list by country: https://en.wikipedia.org/wiki/List_of_date_formats_by_countr...
> Short format: (yyyy.dd.mm) in Kazakh[95][obsolete source]
Even ISO has used the cursed date format.
ISO-IR-26 was registered on 1976/25/03.
Not only is YYYY/MM/DD unambiguous, but it also sorts correctly by date when you perform a naive alphabetical sort.
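A tiny sketch of why the naive sort works (TypeScript):

    // Fixed-width, most-significant-first date components sort
    // chronologically under a plain lexicographic string sort.
    const dates = ["2025/11/27", "2024/01/05", "2025/08/07"];
    console.log([...dates].sort());
    // ["2024/01/05", "2025/08/07", "2025/11/27"]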
I believe YYYY-MM-DD is even less ambiguous than YYYY/MM/DD.
Correct. Slashes mean it's a yank date and going to be backwards. Dashes hint that it's going to be (close to) ISO standard.
Slashes are used for dd/mm/yyyy as well. Dashes are indeed better if you want a separator. or use the separator-free ISO 8601 syntax.
And it doesn't use a path-separator character for the date.
I like CCYY-MM-DD because it's also a valid file name on most systems, and using "CCYY" (century + year) instead of "YYYY" feels fancy.
Except this could get confusing because the year 1976 (for example) is actually in the 20th century.
mm.dd.yyyy is cursed, too. The not-cursed options are dd.mm.yyyy and mm/dd/yyyy
in what world could mm/dd/yyyy not be cursed!? that makes no sense whatsoever.
It's the US short form, matching the word-month order we always use for regular dates: "August 7, 2025".
Note the slashes are important, we don't use dots or dashes with this order. That's what GP was getting at.
> It's the US short form, matching the word-month order we always use for regular dates: "August 7, 2025".
Counterexample: US Independence Day is called the “Fourth of July”.
I would agree that, for dates with named months, the US mostly writes “August 8, 2025” and says “August eighth, 2025” (or sometimes “August eight, 2025”, I think?), and other countries mostly write “8 August 2025” and say “the eighth of August, 2025”; but neither is absolute.
And it makes absolutely no sense. I've lived with it all my life (I'm an American!) and it has never made any sense to me.
First, I use ISO8601 for everything. This is not me arguing against it.
But, I think the American-style formatting is logical for everyday use. When you're discussing a date, and you're not a historian, the most common reason is that you're making plans with someone else or talking about an upcoming event. That means most dates you discuss on a daily basis will be in the next 12 months. So starting with the month says approximately when in the next year you're talking about, giving the day next says when in that month, and then tacking on the year confirms the common case that you mean the next occurrence of it.
When's Thanksgiving? November (what part of the year?) 27 (toward the end of that November), 2025 (this year).
It's like answering how many minutes are in a day: 1 thousand, 4 hundred, and 40. You could say 40, 400, and 1000, which is still correct, but everyone's going to look at you weirdly. Answer "2025 (yeah, obviously), the 27th (of this month?) of November (why didn't you start with that?)" is also correct, but it sounds odd.
So 11/27/2025 starts with the most useful information and works its way to the least, for the most common ways people discuss dates with others. I get it. It makes sense.
But I'll still use ISO8601.
> So 11/27/2025 starts with the most useful information
Most useful information would be to not confuse it. E.g. you see an event date 9/8/2025 and it's either tomorrow or a month from now. A perfect 50/50 chance to miss it or make a useless trip.
Can you explain why on a traffic light, red means stop and green means go? Why not the other way around?
Red is an aggravating colour psychologically. It's pretty universally used as a warning. Red lights in cars also mean "not ready to drive". Brake lights are also red for similar reason. "Seeing red."
Because it's arbitrary. Unlike a date format where the components have relative meaning to one another, can be sorted based on various criteria, and should smoothly integrate with other things.
As a US native let me clearly state that the US convention for writing dates is utterly cursed. Our usage of it makes even less sense than our continued refusal to adopt the metric system.
The short form doesn’t match the word form though.
If you wanted a short form to match the word form, you go with something like:
“mmmm/dd/yyyy”
Where mmmm is either letters, or a 2-character prefix. The word form “August 7th…” is packing more info than the short form.
This is awesome. Disappointing to hear about the Cloudflare fetch issue.
> Disappointing to hear about the Cloudflare fetch issue.
You mean the one where explicitly configuring Cloudflare to forward requests to origin servers as HTTP will actually send requests as HTTP? That is not what I would describe as disappointing.
The behavior seems likely to mislead a lot of people even if it doesn't confuse you.
The infallibility of Cloudflare is sacrosanct!