57 comments

  • kace91 14 hours ago

    (Let me start by clarifying that this is not at all a criticism of the author)

    I am usually amused by the way really competent people judge others' context.

    This post assumes understanding of:

    - emacs (what it is, and terminology like buffers)

    - strace

    - linux directories and "everything is a file"

    - environment variables

    - grep and similar

    - what git is

    - the fact that 'git whatever' works to run a custom script if git-whatever exists in the path (this one was a TIL for me!)

    - irc

    - CVEs

    - dynamic loaders

    - file privileges

    but then feels it necessary to explain to the audience that:

    >A socket is a facility that enables interprocess communication

    • ericmcer 11 hours ago

      That feels like part of why some juniors are so confident while more senior engineers are plagued with self-doubt.

      Juniors know how much they have learned, whereas a 10+ year senior (like the author) forgets most people don't know all this stuff intuitively.

      I still will say stuff like "yeah it's just a string" forgetting everyone else thinks a "string" is a bit of thread/cord.

      • aorloff 4 hours ago

        Well coached juniors run through brick walls

        • TeMPOraL 2 minutes ago

          Scraping them off the wall is not pleasant, though. But throw enough of them at it, and I guess the wall eventually goes down too.

      • kragen 4 hours ago

        I forget that people think strings are different from sequences of bytes.

        • sfink an hour ago

          I didn't know that at some point, then I knew that and found it obvious, and now I don't know it again.

          Strings are very very not sequences of bytes. Strings are a semantic thing. There may be a sequence of bytes in some representation of a particular string, but even then those bytes are not enough to define a string without other stuff. An encoding, at the very least. But even then, there are many things that could be described as a "string". A sequence of code points, perhaps? Or scalar values? Grapheme clusters?

          Not to mention that you may not even have a linear sequence of bytes at the bottom level. You might have a rope (cons cell), or an intern pointer, or...

    • hakunin 14 hours ago

      As a blogger who makes similar assumptions, I think it comes down to how a lot of us from that time "grew up" similarly. Sockets came to relevance later in my career compared to everything else listed here.

      • kace91 14 hours ago

        That might be part of it, yes.

        As someone younger, ports and sockets appeared very early in my learning. I'd say they appeared in passing before programming even, as we had to deal with router issues to get some online games or p2p programs to work.

        And conversely, some of the other topics are in the 'completely optional' category. Many of my colleagues work on IDEs from the start, and some may not even have used git in its command line form at all, though I think that extreme is more rare.

      • Thorrez 9 hours ago

        git was released in 2005.

        >The term socket dates to the publication of RFC 147 in 1971, when it was used in the ARPANET. Most modern implementations of sockets are based on Berkeley sockets (1983), and other stacks such as Winsock (1991).

        https://en.wikipedia.org/wiki/Network_socket

        • kragen 4 hours ago

          I read RFC 147 the other day, and it turns out that by "socket" it means "port number", more or less (though maybe they were proposing to also include the host number in the 32-bit "socket", which was quietly dropped within the next few months). Also Berkeley sockets are from about 01979, which is a huge difference from 01983.

    • goranmoomin 14 hours ago

      I haven't even realized that while I was reading the article, but it is amusing!

      Though one explanation is that, for the other stuff the writer doesn't explain, one can just guess and be half right, and even if the reader guesses wrong, it isn't critical to the bug — but sockets and capabilities are the concepts that are required to understand the post.

      It still is amusing, and I wouldn't even have realized it if you hadn't pointed it out.

    • Retric 10 hours ago

      I found that specific clarification useful while everything else was easy to follow.

      It’s not that I was unaware that’s how Unix worked here, just that I rarely think of sockets in that context.

    • dwedge 14 hours ago

      I found it interesting that they know how to use strace, but not how to list open files held by a process, which to me seems simpler. Again, not criticism, just an observation, and I enjoyed the article.
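
      For reference, the "list open files" alternative alluded to here, as a minimal sketch (Linux-specific; `lsof` is the more portable tool):

```shell
# On Linux, every file a process holds open appears as a symlink under
# /proc/<pid>/fd; here we inspect this shell's own descriptors.
ls -l /proc/$$/fd

# lsof -p <pid>    # equivalent, if lsof is installed; works on more Unixes

# Sanity check: stdin (fd 0) should be among the open descriptors.
have_fd0=no
if [ -e /proc/$$/fd/0 ]; then
    have_fd0=yes
fi
echo "$have_fd0"
```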

      • parliament32 14 hours ago

        Given the "(hi Julia!)" immediately after the strace shenanigans, I interpreted this as a third-party hint; the author most likely had not used strace before.

        The author is both an example of and an example for how we can get caught in "bubbles" of tools/things we know and use and don't, and blog posts like this are great for discovery (I didn't know about git invoking a binary in the path like his "git re-edit", for example, until today).

        • dwedge 21 minutes ago

          I discovered that by accident. I had a script called git-pr that opened a pull request on GitHub using the last commit message and then pushed it to Slack for approval. I was trying to rewrite it to add a description and wondered why "git pr" pushed an empty message to Slack.
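
          The mechanism both comments describe can be seen with a throwaway script (the `git-hello` name is made up for the demo):

```shell
# git runs any executable named git-<cmd> found on PATH as `git <cmd>`.
bindir=$(mktemp -d)
cat > "$bindir/git-hello" <<'EOF'
#!/bin/sh
echo "hello from a custom git subcommand"
EOF
chmod +x "$bindir/git-hello"

# Invoke it through git itself, with the demo dir prepended to PATH.
out=$(PATH="$bindir:$PATH" git hello)
echo "$out"
```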

    • derefr 11 hours ago

      All of the things you listed are ops topics. But sockets are a programming concept.

      I would expect a person with 10+ years of Unix sysadmin experience — but who has never programmed directly against any OS APIs, “merely” scripting together invocations of userland CLI tools — to have exactly this kind of lopsided knowledge.

      (And that pattern is more common than you might think; if you remember installing early SuSE or Slackware on a random beige box, it probably applies to you!)

    • jsrcout 4 hours ago

      Yep. The Curse of Knowledge is a real thing.

    • mr_toad 11 hours ago

      Most people these days are using http and don’t need to touch sockets. (Except for the people implementing http of course).

    • kragen 12 hours ago

      To be fair, it does link the CVE, so if you don't know what a CVE is, you can click the link.

      I agree that it's amusing.

    • addled 12 hours ago

      I mean, the title is a quote from Buckaroo Banzai. Lack of context is part of the fun!

  • svat 15 hours ago

    (2016)

    Also, “direct” link: https://blog.plover.com/tech/tmpdir.html (This doesn't really matter, as the posted link is to https://blog.plover.com/2016/07/01/#tmpdir i.e. the blog post named “tmpdir” posted on 2016-07-01 and there is only one post posted on that date, so the content of the page is basically the same.)

  • adrianmonk 15 hours ago

    > This computer stuff is amazingly complicated. I don't know how anyone gets anything done.

    I wonder what could be done to make this type of problem less hidden and easier to diagnose.

    The one thing that comes to mind is to have the loader fail fast. For security reasons, the loader needs to ensure TMPDIR isn't set. Right now it accomplishes this by un-setting TMPDIR, which leads to silent failures. Instead, it could check if TMPDIR is set, and if so, give a fatal error.

    This would force you to unset TMPDIR yourself before you run a privileged program, which would be tedious, but at least you'd know it was happening because you'd be the one doing it.

    (To be clear, I'm not proposing actually doing this. It would break compatibility. It's just interesting to think about alternative designs.)
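
    The fail-fast alternative described above could be sketched as a wrapper; the `run_privileged` name is invented for illustration, and this is a thought experiment, not what the loader does:

```shell
# Fail fast if TMPDIR is set, instead of silently unsetting it the way
# the loader's secure-execution mode does.
run_privileged() {
    if [ -n "${TMPDIR:-}" ]; then
        echo "refusing to run $1: TMPDIR is set" >&2
        return 1
    fi
    "$@"
}

# With TMPDIR set, the wrapper refuses to run the command at all.
TMPDIR=/mnt/tmp run_privileged true && result=ran || result=blocked
echo "$result"
```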

    • tetha 12 hours ago

      Mh, I am starting to dislike this kind of hyper-configurability.

      I know when this was necessary and used it myself quite a bit. But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like systemd's private temp dirs? (Which broke a lot of assumptions about tmpdirs and caused a bit of a ruckus, but on the other hand, I see their point by now.)

      I'm honestly starting to wonder about a lot of these really weird, prickly and fragile environment variables which cause security vulnerabilities, if low-overhead virtualization and namespacing/containers are available. This would also raise the security floor.

      • josephcsible 6 hours ago

        > But today, couldn't we just open up a mount namespace and bind-mount something else to /tmp, like SystemDs private tempdirs?

        No, because unless you're already root (in which case you wouldn't have needed the binary with the capability in the first place), you can't make a mount namespace without also making a user namespace, and the counterproductive risk-averse craziness has led to removing unprivileged users' ability to make user namespaces.
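
        Whether a given box still allows unprivileged user namespaces is itself inspectable; the knob names below vary by kernel and distro, so treat this as a sketch:

```shell
# Debian/Ubuntu historically gated unprivileged user namespaces behind a
# dedicated sysctl; mainline kernels expose a max_user_namespaces limit.
found=no
for knob in /proc/sys/kernel/unprivileged_userns_clone \
            /proc/sys/user/max_user_namespaces; do
    if [ -r "$knob" ]; then
        echo "$knob = $(cat "$knob")"
        found=yes
    fi
done
echo "$found"
```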

        • kragen 4 hours ago

          It's probably true that there are setuid programs that can be exploited if you run them in a user namespace. You probably need to remove setuid (and setgid) as Plan9 did in order to do this.

          • josephcsible 4 hours ago

            I meant distros are moving towards no unprivileged user namespaces at all, not just no setuid programs inside them.

            • kragen 4 hours ago

              Is "just no setuid programs inside them" even an option?

    • ericmcer 10 hours ago

      It is complex. There was another posting on HN where commenters were musing over why software projects have a much higher failure rate than any other engineering discipline.

      Are we just shittier engineers, is it more complex, or is the culture such that we output lower quality? Does building a bridge require less cognitive load than a complex software project?

      • rout39574 10 hours ago

        I think it's a cultural acceptance of lower quality, happily traded for deft execution, over and over.

        We're better at encapsulating lower-level complexities in e.g. bridge building than we are at software.

        All the complexities of, say, martensite grain boundaries and what-not are implicit in how we use steel to reinforce concrete. But we've got enough of it in a given project that the statistical summaries are adequate. It's a member with such-and-such a strength in tension, and such-and-such in compression, and we put a 200% safety factor in and soldier on.

        And nobody can take over the ownership of leftpad and suddenly falsify all our assumptions about how steel is supposed to act when we next deploy ibeam.js ...

        The most well understood and dependable components of our electronic infrastructure are the ones we cordially loathe because they're composed in shudder COBOL, or CICS transactions, or whatever.

        • LorenPechtel 7 hours ago

          Exactly. The properties rarely matter outside the item. The column is of such-and-such a strength, that's it. But when things get strange we see failures. Perfect example: Challenger. Was the motor safe sitting on the pad? Yes. Was the motor safe in flight? Yes. Was the motor safe at ignition? On the test stand, yes. Stacked for launch, ignition caused the whole stack to twang--and maybe the seals failed....

      • jcynix 9 hours ago

        > Are we just shittier engineers, is it more complex [...]

        Both IMO: first, anybody could buy a computer during the last three decades, dabble in programming without learning basic concepts of software construction and/or user-interface design and get a job.

        And copying bad libraries was (and is) easy. I still get angry when software tells me "this isn't a valid phone number" when I cut/copy/paste a number with a blank or a hyphen between digits. Or worse, libraries which expect the local part of an email address to only consist of alphanumeric characters and maybe a hyphen.

        Second, writing software definitely is more complex than building physical objects. Because there are "no laws" of physics which limit what can be done. In the sense that physics tell you that you need to follow certain rules to get a stable building or a bridge capable of withstanding rain, wind, etc.

        • jz391 7 hours ago

          Absolutely. As an Electrical Engineer turned software guy, Ohm's/Kirchhoff's laws remain as valid and significant as when I was taught them 35 years ago. For software however, growth of hardware architectures/constraints made it possible to add much more functionality. My first UNIX experience was on PDP-11/44, where every process (and kernel) had access to an impressive maximum of 128K of RAM (if you figured out the flag to split address and data segments). This meant everything was simple and easy to follow: the UNIX permission model (user/group/other+suid/sgid) fit it well. ACLs/capabilities etc were reserved for VMS/Multics, with manuals spanning shelves.

          Given the hardware available to an average modern Linux box, it is hardly surprising that these bells and whistles were added - someone will find them useful in some scenarios, and the additional resource cost is negligible. It does, however, make understanding the whole beast much, much harder...
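
          The classic model described above (three rwx triads plus the setuid/setgid/sticky bits) still fits in a single `ls -l` column; a small demo, with a throwaway file:

```shell
# Mode 4755 = setuid bit (4) + rwxr-xr-x (755); the setuid bit shows up
# as an 's' in the user-execute slot of the mode string.
f=$(mktemp)
chmod 4755 "$f"
mode=$(ls -l "$f" | awk '{print $1}')
echo "$mode"
rm -f "$f"
```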

      • kragen 4 hours ago

        It's just that people are somewhat rational.

        There are no big wins left in bridge building, so there is no justification for taking big risks. Also, in most software project failures, the only cost is people's time; no animals are harmed, no irreplaceable antique guitars are smashed, no ecosystems are damaged, and no buses of schoolchildren plunge screaming into an abyss.

        Your software startup didn't get funded? Well, you can go back and finish college.

    • LorenPechtel 7 hours ago

      Yeah, this case is a good example of why many people don't like linux. Too much interconnected stuff that can go wrong.

  • lanthade 6 hours ago

    "Don't tug on that" applies to hardware too.

    Years ago I worked on contract for a large blue 3 letter company doing outsourced server management for the fancy credit card company. The incident in question happened before my time on the team but I heard about it first hand from the server admin (let's call him Ben) who had been at the center of it.

    The data center in question was (IIRC) 160K sqft of raised floor spread across multiple floors in a major metropolitan downtown area. It isn't there anymore. Windows, Unix, Linux, mainframe, SAN, all the associated fun stuff.

    Ben was working the day after thanksgiving decommissioning a system. Full software and physical decommission. Approved through all the proper change management procedures.

    As part of the decommission, Ben removed the network cables from under the raised floor. Standard: snip the connector off and pull it back. Easy. Little did he know that the network cable was ever so slightly entangled with another cable. Not enough to give him pause when pulling it, though. It wouldn't have been an issue if the other cable had been properly latched in its ports. It wasn't. That little pull ended up pulling the network connection out of a completely unrelated system. A system managed by a completely different group. A system responsible for credit card processing. On USA Black Friday.

    Oops. CC processing went down. It took far too long to resolve. Amazingly, Ben didn't lose his job. After all, he followed all the processes and procedures. Kudos to the management team who kept him protected.

    Change management and change freezes were far more stringent by the time I joined the team. There was also now a raised floor infrastructure group and no one pulled a tile without their involvement.

    Be careful what you tug on!

  • eichin 2 hours ago

    A couple of paragraphs in, I started wondering if it was going to turn out to be systemd-tmpfiles (in Ubuntu 16.04, I think? The symptom was "about 10 days after login, X11 forwarding over ssh stopped working, but local apps could still open windows just fine." I remember it as an Ubuntu-specific misconfiguration, though I think the systemd defaults were changed to be less of a footgun in response...)

    I was pleased that it was more interesting than that, and I want people to write more twitchy-detail-post-mortems like this :-)

  • jcynix 14 hours ago

    BTW, the author "mjd" is the author of the excellent book "Higher-Order Perl" which is available online at https://hop.perl.plover.com/book/

    • pinkmuffinere 12 hours ago

      I love mjd! He once replied to me on an HN thread and it lives forever in my memory :)

  • linsomniac 15 hours ago

    The Internet needs more Buckaroo Banzai references. Because wherever you go, there you are.

    • neilk 13 hours ago

      Yup. I nearly had this movie memorized when I was a child.

      https://www.youtube.com/watch?v=aWXuDNmO7j8

      Peter Weller, playing Buckaroo Banzai, is late for his military-particle-physics-interdimensional-jet-car test because he's helping Jeff Goldblum's character with neurosurgery. Later that day he will go play lead guitar in an ensemble.

      Scriptwriting gurus advise that your protagonist should have flaws and character progression. The writers of this movie disagree.

  • thayne 12 hours ago

    Setting a capability on the perl executable seems like a very bad idea. That effectively grants that capability to everything that is able to invoke perl (without being restricted by NO_NEW_PRIVS).

    • aftbit 7 hours ago

      Yeah why did he want non-root perl to be able to bind to low-numbered ports? Seems like one of those typical footguns of applying non-standard configurations.

      • thayne 6 hours ago

        My reading is that the author didn't do that; rather, his/her employer's configuration system had done so.

        Setting TMPDIR to /mnt/tmp seems also to come from that.

        I would guess both were the result of someone who didn't really know what they were doing trying things until they found something that got what they needed to work, then pushed that out without understanding the broader implications.
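
        For context, the kind of configuration being second-guessed here, sketched with assumed paths and capability names (granting it needs root, so those lines are shown commented out; inspection needs no privilege):

```shell
# What the employer's config presumably did (requires root, hence commented):
#   setcap 'cap_net_bind_service=+ep' /usr/bin/perl
#   getcap /usr/bin/perl    # inspect a file's capabilities; no privilege needed

# Any process can read its own capability sets from /proc (here /proc/self
# resolves to the grep process itself, which is fine for a demo).
capeff=$(grep '^CapEff' /proc/self/status)
echo "$capeff"
```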

  • markstos 16 hours ago

    And this was written 10 years ago, when computers were far less complicated and vibe coding sleeper bugs wasn't a thing.

    • WJW 15 hours ago

      Vibe coded sleeper bugs have always been a thing, they just came from the bosses' nephew who was still learning PHP at the time and left several years ago.

      Also, computers in 2015 were not meaningfully less complex than today. Certainly not when the topic is weird emacs and perl interactions.

      • marcosdumay 15 hours ago

        Even if the topic was web applications (that are where Big Complexity thrives), 2015 was about peak complexity. Things have improved a bit since then.

      • add-sub-mul-div 15 hours ago

        The problem isn't that AI is doing something new, we all know that it isn't. The problem is that the boss' nephew is becoming the rule now rather than the exception.

        • jama211 14 hours ago

          It also makes bugs easier to find and resolve. You win some you lose some. Perhaps by the time it is the rule they’ll be better at writing safer code.

    • detourdog 15 hours ago

      From my perspective vibe coding was always a thing.

  • Yehia_loay 42 minutes ago

    Cool

  • klodolph 5 hours ago

    I have an overly reductive take on this—it’s Unix environment variables.

    You have your terminal window and your .bashrc (or equivalent), and that sets a bunch of environment variables. But your GUI runs with, most likely, different environment variables. It sucks.

    And here’s my controversial take on things—the “correct” resolution is to reify some higher-level concept of environment. Each process should not have its own separate copy of environment variables. Some things should be handled… ugh, I hate to say it… through RPC to some centralized system like systemd.
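
    The "separate copy per process" behavior being criticized is easy to demonstrate:

```shell
# Each process gets its own copy of the environment at exec time; a
# child's changes never propagate back to the parent.
export DEMO_VAR=parent
sh -c 'DEMO_VAR=child; export DEMO_VAR'    # modifies only the child's copy
echo "$DEMO_VAR"                           # the parent's copy is untouched
```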

    • rao-v 5 hours ago

        The Windows registry, just sort of hovering in the background

      • klodolph 2 hours ago

        Something that is still inheritable, between “there is one and it is global” and “there is a separate copy for each process”.

  • LordGrey 13 hours ago

    Buckaroo Banzai: You can check your anatomy all you want, and even though there may be normal variation, when it comes right down to it, this far inside the head it all looks the same. No, no, no, don’t tug on that. You never know what it might be attached to.

    • buckaroobanzi 8 hours ago

      See, this is the point, for me, where it started to look like a problem. You know, I wanted to sacrifice the precentral vein in order to get some exposure, but because of this guy's normal variation, I got excited, and all of a sudden I didn't know whether I was looking at the precentral vein, or one of the internal cerebral veins, or the vein of Galen, or the vascular vein of Rosenthal. So, on my own, to me, at this point, I was ready to say that's it, let's get out.