The order of files in /etc/ssh/sshd_config.d/ matters

(utcc.utoronto.ca)

241 points | by NGRhodes 4 days ago

109 comments

  • egberts1 a day ago

    Purpose of first-define is the rule: In placing configuration files higher than user-defined configuration but Only with SSH client, can want user to have control from their config files: Remove from config files Place a couple under Match/MatchGroup using deny/accept.

    SSHD (the server, not the client) still supports admin-defined settings by applying system-wide settings first. For those who have multi-file SSHD configurations, there is a breakdown of the many config file locations and scopes here, covering default user, system-wide, and specific user:

    https://egbert.net/blog/articles/ssh-openssh-options-ways.ht...

    I also broke out each and every SSHD and SSH option, along with their order of execution (by file name and numbering), as well as their various state machines, dispatch, CLI equivalents, network contexts, and function nesting, all in:

    https://github.com/egberts/easy-admin/tree/main/490-net-ssh

    https://github.com/egberts/easy-admin/blob/main/490-net-ssh/...

    Disclaimer: I do regular code reviews of OpenSSH and my employer authorizes me to release them (per contract and NDA).

    This also shows how to properly mix and match authentication types using OR and AND logic, in

    https://serverfault.com/a/996992

    It is my dumping ground of a mess, so wade in and enjoy.

    • dfc 21 hours ago

      The way you wrote the rule makes no sense to me. Maybe it's too early in the day for me?

      "In placing configuration files higher than user-defined configuration but Only with SSH client, can want..."

    • egberts1 a day ago

      I find this SSHD snippet to be extremely useful in enterprise networks, notably with OpenLDAP.

      It is also the most dangerous, but most flexible, way to authenticate a user.

      https://jpmens.net/2019/03/02/sshd-and-authorizedkeyscommand...

      • efrecon a day ago

        This is really good! Thanks!

    • egberts1 a day ago

      For those who are exploring software-based public certificates and OpenSSH, I've broken down the settings for most PKI handlers.

      https://egbert.net/blog/articles/openssh-file-authorized_key...

      • memco a day ago

        Thanks for sharing this! I think I may now have what I need to set up a system with multi-user shared keys that only work for a given set of users.

        • egberts a day ago

          I do enjoy dual-PK-certificate authentication in my homelab: one by equipment, and one by user/group.

          My only misgiving is that the key management issues have worsened, but only for the key administrator(s). Still, it is a viable and sustainable AA model because it has the most important security component: instant denial of a user and/or equipment.

      • gosub100 17 hours ago

        We must have knocked your site offline

        • egberts1 14 hours ago

          Uptime remains uninterrupted.

          Are you using the verboten Chrome, with its inability to negotiate and defer to the server-side ChaCha20-Poly1305 with SHA-512? It refuses the client-demanded, Chrome-forced ChaCha/SHA-256, AES, and then RSA.

    • ThePowerOfFuet 19 hours ago

      This comment seems to have a lot to say but it was word salad to me, quite confusing and hard to read :(

      • egberts1 19 hours ago

        It has been translated from OpenSSH meta-spaghetti code logic. Break it down by parts of the sentence.

        • Thorrez 17 hours ago

          I've tried reading it over and over, and tried breaking it down by parts of the sentence. It still doesn't make sense to me.

          • egberts1 13 hours ago

            For SSH clients, configuration files are read by OpenSSH in lexical order.

            Reading starts with the /etc/ssh/sshd.d directory, which lets admins grant or take away what a user can specify in their own config files; OpenSSH then reads the user-defined configuration in $HOME/.ssh/sshd.d.

            Inserting a configuration item into the system config directory takes away the user's ability to use or change it.

            Removing it from the system directory reverts it to a user-changeable default. Adding it to the user directory (with none in the system directory) gives the user that choice.

            For finer granularity of option usage, remove said option from both the system directory and the user config files, then insert it into the last config file in lexical order (typically 99-something.conf or 999-something.conf) and place a couple of entries under Match/MatchGroup using deny/accept.
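            As a toy illustration of that precedence (filenames and the option entirely hypothetical), the first-match-wins merge over lexically ordered drop-ins can be simulated with a one-line awk filter; sshd itself performs the equivalent merge internally when it processes an Include:

```shell
# Hypothetical demo: first-match-wins across lexically ordered drop-in files.
# sshd does this merge internally; awk is only simulating the rule here.
dir=$(mktemp -d)
printf 'PasswordAuthentication no\n'  > "$dir/10-hardening.conf"
printf 'PasswordAuthentication yes\n' > "$dir/99-cloud.conf"
# The glob expands in lexical order; awk keeps only the first value
# seen for each keyword, so 10-hardening.conf wins.
cat "$dir"/*.conf | awk '!seen[$1]++'
rm -r "$dir"
```

            This prints PasswordAuthentication no: the later 99-cloud.conf entry is ignored, which is exactly why placing an option early locks it in.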

  • 0xbadcafebee a day ago

    This is perhaps old sysadmin knowledge, but different tools have very different heuristics for how they parse configuration, and you have to check every time and not assume. Among the consequences of not checking are gaping security holes.

    • INTPenis a day ago

      You mean there are consequences to making assumptions? ;) (also old sysadmin)

  • wruza 4 days ago

    That's why I erase sshd_config and put what I really meant there. You may say "but isn't it better to patch it properly?". It is not. Yet another vps hoster –> yet another /etc/ssh directory template that may have all sorts of access issues in it. Better to replace it and make it do exactly what you have planned.

    • timewizard a day ago

      I've never liked the directory.d/* infrastructure. In so many cases, even with a properly configured sshd_config, the resulting configuration file is not so large that it benefits from being split up.

      You have to deal with ordering issues, symlink management in some cases, and unless the "namespace" of sorting number prefixes is strictly defined, it's never something that's convenient or durable to "patch" new files into. The proliferation of 99_* files shows the anti-utility this actually provides.

      I much prefer configuration files with a basic "include" or "include directory" configuration item. Then I can scope and scale the configuration in ways that are useful to me and not some fragile distribution oriented mechanism. Aside from that with xz I don't think I want my configurations "patchable" in this way.

      • hnlmorg a day ago

        Config directories are there to solve change management problems like idempotency.

        If you have one big file, then different tools, or even the same tool at different points of that tool's life cycle, can result in old config not being correctly removed, new config applied multiple times, or even an entirely corrupt file.

        This isn't an issue if you're running a personal system on which you hand-edit those config files. But when you have fleets of servers, it becomes a big problem very quickly.

        With config directories, you then only need to track the lifecycle of files themselves rather than the content of those files. Which solves all of the above problems.

        • wruza a day ago

          I never managed a fleet. I mean I occasionally manage up to 30 instances, does that count?

          Either way, my notion about doing it properly is to have a set of scripts (ansible/terraform?) that rebuild the configuration from templates and rewrite, restart everything. Afaiu, there's no "let's turn that off by rm-ing and later turn it on again by cat<<EOF-ing", cause there's no state database that could track it, unless you rely on [ -e $path ], which feels not too straightforward for e.g. state monitoring.

          (I do the same basically, but without ansible. Instead I write a builder script and then paste its output into the root shell. Poor man's ansible, but I'm fine.)

          So as I understand it, these dirs are only really useful for manual management, not for fleets where you just re-apply your "provisioning", or what's the proper term, onto your instances, without temporary modifications. When you have a fleet, any state that is not in the "sources" becomes a big problem very quickly. And if you have "sources", there's no problem of turning section output on and off. For example when I need a change, I rebuild my scripts and re-paste them into corresponding instances. This ensures that if I lose any server to crash, hw failure, etc, all I have is to rent another one and right-click a script into its terminal.

          So I believe that gp has a point, or at least that I don't get the rationale that replies to gp suggest itt. Feels not that important for automatic management.

          • hnlmorg a day ago

            I’ve found the template system starts to have shortcomings if you need differences between different classes of systems within your fleet (eg VDIs for data scientists vs nodes in a ML pipeline pool)

            Yeah a good templating language will allow you to apply conditionals but honestly, it’s just generally easier to have different files inside a config directory than have to manage all these different use cases in a single monolithic template or multiple different templates in which you need to remember to keep shared components in sync.

            At the end of the day, we aren’t talking about solving problems that were impossible before but rather solving problems that were just annoying. It’s more a quality of life improvement than something that couldn’t be otherwise solved with enough time and effort.

            • wruza a day ago

              I see, thanks for the explanation.

              Although I think that it sounds so ansible, and that's exactly why I avoid using it. It is a new set of syntax, idioms, restrictions, gotchas, while all I need is a nodejs script that joins initialization template files together, controlled by json server specs. With first class ifs and fors, and without "code in config" nonsense. I mean, I know exactly what was meant to be done manually, I just need N-times parametrized automation, right? Also, my servers are non-homogeneous as well, and I don't think I ever had an issue with that.

              • hnlmorg a day ago

                It’s not just Ansible. I’ve run into this problem many times using many different configuration systems. Even in the old days of hand-rolling deployment scripts in Bash and Perl.

                Managing files instead of one big monolithic config is just easier because there’s less chance of you foobarring something in a moment of absentmindedness.

                But as I said, I’m not criticising your approach either. Templating your config is definitely a smart approach to the same problem too.

                • wruza 14 hours ago

                  Not sure if I understand, because my templates are also multi-file and multi-js-module, where needed. It's only the result that is a single root-pasteable script (per server) which generates a single /etc/foo/foo_config file per service. So I think I'm lost again about changes.

                  You don't want instance-local changes. Do you? Afaiu, these changes are an anti-pattern cause they do not persist in case of failure. You want to change the source templates and rebuild-repropagate the results. Having ./foo.d/nn-files is excessive and serves little purpose in automated mode, unless you're dealing with clunky generators like ansible where you only have bash one-liners.

                  What am I missing?

                  • hnlmorg 3 hours ago

                    You’re not missing anything. A lot of the problem is due to clunky generators.

                    But then those clunky generators do solve different problems too. Though I’m not going to debate that topic here right now, beyond saying that no solution is perfect and thus choosing a tech stack is always a question of choosing which tradeoffs you want to make.

                    However on the topic of monolithic config files vs config directories, the latter does provide more options for how to manage your config. So even if you have a perfect system for yourself which lends itself better for monolithic files, that doesn’t mean that config directories don’t make life a lot easier for a considerable number of other systems configurations.

      • rlpb a day ago

        The .d directories are important on Debian and Ubuntu where packaging needs to provide different snippets based on the set of installed packages, the VM environment, other configuration inputs like through cloud-init and so forth, and update them during upgrades, but also (as per policy) preserve user customisations on anything in /etc.

        Since pretty much every file has different syntax, this is virtually impossible to do any other way.

      • loodish a day ago

        The .d directories make management via tools such as ansible much much easier.

        You don't have weird file patching going on with the potential to mess things up in super creative ways if someone has applied a hand edit.

        With .d directories you have a file, you drop in that file, you manage that file, if that file changes then you change it back.

        • ecef9-8c0f-4374 a day ago

          I love that you can use validate: sshd -T -f %s to check whether changes would break things.

      • vbezhenar 17 hours ago

        That's one of my issues with most Linux distros.

        1. They add huge configuration files where 99% are commented out.

        2. Sometimes they invent whole new systems of configuration management. For example, Debian does that with Apache httpd.

        I don't need all of that. I just need simple 5-line configuration file.

        My wish: ship an absolutely minimal (yet secure) configuration. Do not comment out anything. Ask users to read the manuals instead. Ship your configuration management systems as separate packages for those who need them. Keep it simple by default.

      • sneak a day ago

        The conf.d isn’t because the config file is large. It’s because it’s easier to disable or enable something with an “echo blah > conf.d/10-addin.conf” or an “rm conf.d/50-blah.conf” than it is to do sed -i or grep blah || echo blah >>
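        A quick sketch of that workflow (directory and filenames hypothetical): enabling a setting is a file creation and disabling is a file removal, with no in-place editing of a monolithic file:

```shell
# Hypothetical drop-in workflow: enable/disable a setting via file ops.
confd=$(mktemp -d)
echo 'PermitRootLogin no' > "$confd/10-addin.conf"  # enable the policy
ls "$confd"                                         # shows 10-addin.conf
rm "$confd/10-addin.conf"                           # disable it again
ls "$confd"                                         # directory is empty now
rm -r "$confd"
```

        Either operation is idempotent, which is what makes this pattern friendly to configuration management tools.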

        • claudex a day ago

          Also, it allows different packages to handle the configuration and add their specific parameters.

        • shawnz a day ago

          Exactly: if your templating logic accidentally produces a syntax error, now you can't log in to SSH. There's much less chance of that scenario with include directories. This applies for infrastructure as code scenarios, changes made by third party packages, updates of ssh, manual one-off changes, etc.

          • wruza 14 hours ago

            If any logic produces a syntax error anywhere in the sshd_config include chain, ssh is broken now. And you will have templating logic in automatic configuration one way or another, at least for different dns/ips.

            I don't grep this argument at all. It feels like everyone's comparing to that "regular [bad] detergent" in this thread. A templating system will be as good and as error-prone to change and as modular etc as you make it, just like any program.

            It applies only to local patchers (like e.g. certbot nginx) and manual changes, but that's exactly out of scope of templating and configuration automation. So it can't be better, cause these two things are in XOR relationships.

            Edit to clarify: I don't disagree with foo.d approach in general. I just don't get the arguments that in automation setting it plays any positive role, when in fact you may step on a landmine by only writing your foo.d/00-my. Your DC might have put some crap into foo.d/{00,99}-cloud, so you have to erase and re-create the whole foo.d anyway. Or at least lock yourself into a specific cloud.

            • shawnz 12 hours ago

              It's still possible to break the config with a syntax error, but fewer kinds of syntax errors are possible if you aren't writing into the middle of an existing block of syntax. For example, there's no chance that you unintentionally close an existing open block due to incorrect nesting of options, or anything like that.

              Plus, if you are writing into the middle of an existing file, there's a chance you could corrupt other parts of the file besides the part you intended to write. For example, if you have an auto-generated section that you intend to update occasionally, you will need to make sure you only delete and recreate the auto-generated parts and don't touch any hand-written parts, which could involve complicated logic with sentinel comments, etc. Then you need to make sure that users who edit the file in the future don't break your logic.

              In addition, it's harder to test your automation code when you're writing into an existing file, because there are more edge cases to deal with regarding the surrounding context.

              • wruza 9 hours ago

                Templating doesn't write in the middle. Writing in the middle is a medieval rudiment of manual configuration helpers. Automated config generation simply outputs a new file every time you run "build" and then somehow syncs it to the target server. All "user" changes go into templates, not outputs. What you're talking about can exist, but it is a mutable hell that is not reproducible and thus cannot be a part of a reliable infrastructure.

                If this is not how modern devops/fleet management works, I withdraw my questions cause it's even less useful than my scripts.

      • badgersnake a day ago

        Totally, don’t use .d for ssh. The configuration is not that complicated. If it is, you’re doing it wrong.

  • eadmund a day ago

    Yeah, first-wins is definitely surprising. Off the top of my head, it feels like one would have to go out of one’s way to write a parser that does that (by storing an extra bit of state for each configuration item, and then checking it before setting the configuration item and toggling the state, rather than just applying the configuration item each time it is encountered).

    Is there a good reason for this design? I can’t think of one, again off the top of my head, but of course I could be missing something.

    • Joker_vD 21 hours ago

      The SSHD internally has a config structure. Initially, it's all initialized with -1 for flags/numbers/timeouts, and with NULL for strings. When an option is parsed, there is a check whether the config structure's corresponding field is -1/NULL (depending on the type) before the parsed value is saved.

      Another program with first-wins I've seen used dict/map for its config so the check was even simpler: "if optname not in config: config[optname] = parsed_value".

    • o11c a day ago

      It probably makes a bit more sense when you think about the fact that SSH frequently does a "try to match this host against a list of configured host-patterns" operation. In that case, "first match" is the obvious thing to do.

    • ajross a day ago

      It's actually the simplest scheme. Reparse from the top whenever you need to query a setting. When you see one, exit. No need to even bother to store an intermediate representation. No idea if this matches the actual ssh implementation, but that's the way many historical parsers worked. The idea of cooking your text file on disk (into precious RAM!) is fairly modern.
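      That scan-and-exit scheme can be sketched in a few lines of shell (file contents and option names purely illustrative); each lookup rescans the file and stops at the first match, with no stored representation:

```shell
# Illustrative sketch of query-at-read-time with first-match semantics.
# Nothing is cached: every lookup rescans the file and exits on a hit.
cfg=$(mktemp)
printf 'Port 2222\nPort 22\nPermitRootLogin no\n' > "$cfg"
lookup() { awk -v key="$1" '$1 == key { print $2; exit }' "$cfg"; }
lookup Port             # first match wins: prints 2222, not 22
lookup PermitRootLogin  # prints no
rm "$cfg"
```

      Whether the real OpenSSH code ever worked this way is a separate question, but the early exit falls out of this structure naturally.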

      • Dylan16807 a day ago

        Loading up your parsing code and reopening the file every time a setting is queried sounds to me like it would increase the average memory use of most programs.

        • naniwaduni a day ago

          You don't care about average memory use, you care about peak memory use.

          • Dylan16807 18 hours ago

            Same criticism. When the program is in the middle of busy runtime activity, with all the memory that entails, it's the worst time to also load up the parser.

          • NekkoDroid a day ago

            Doesn't really sound much better. You still load up the file(s) and the parser either way, so parsing all once vs on-demand is just a question of computation duration and considering how many config options are used the on-demand just seems really wasteful, especially after startup.

            • hnlmorg a day ago

              The GP is correct in terms of super old systems.

              In said systems, RAM was such an expensive resource that we had to save individual bits wherever we could. Such as only storing the last two digits of the year (aka the millennium bug).

              The computational cost of infrequently rescanning the config files then freeing the memory afterwards was much cheaper than the cost of storing those config files into RAM. And I say “infrequently rescanning” because you weren’t talking about people logging in and out of TSSs at rapid intervals.

              That all said, sshd was first written in the 90s so I find it hard to believe RAM considerations was the reason for the “first match” design of sshd’s config. More likely, it inherited that design choice from rsh or some other 1970s predecessor.

              • ajross a day ago

                > hard to believe RAM considerations was the reason for the “first match” design of sshd’s config

                And I repeat: first match involves less code. It's a simpler design. The RAM point was an interesting digression, I literally put it in parentheses!

                • hnlmorg a day ago

                  I don’t think it does require less code. I don’t think it requires more code either. It’s just not a fundamental code change.

                  The difference is just either: overwriting values or exiting in the presence of a match. Either way it’s the same parser rules you have to write for the config file structure.

                  • ajross 21 hours ago

                    OK, but now that's a performance regression. The assumption upthread was that the whole file needed to be parsed into an in-memory representation. If you don't do that, sure, you can implement precedence either way. But parsing all the way to the end for every read is ridiculous. The choice is between "parse all at once", which allows for arbitrary precedence rules but involves more code, and "parse at attribute read time", which involves less code but naturally wants to be a "first match" precedence.

                    • hnlmorg 3 hours ago

                      As someone who’s written multiple parsers, I can tell you that having a parser that stops upon matched condition requires a lot more careful thought about how that parser is called in a reusable way while still allowing for multiple different types of parameters to be stored in that configuration.

                      For example:

                      - You might have one function that requires a list of all known hosts (so now your “stop” condition isn’t a match but rather a full set)

                      - another function that requires matching a specific private key for a specific host (a traditional match by your description)

                      - a third function that checks if the host IP and/or host name is a previously known host (a match but no longer scanning host names, so you now need your conditional to dynamically support different comparables)

                      - and a fourth function to check which public keys are available for which user accounts (now you’re after a dynamic way to generate complete sets, because neither the input nor the comparison is fixed and you don’t even want the parser to stop on a matched condition)

                      Because these are different types of data being referenced with different input conditions, you then need your parser to either be Turing complete or different types of config files for those different input conditions thus resulting in writing multiple different parsers for each of those types of config (sshd actually does the latter).

                      Clearly the latter isn’t simpler nor less code any more.

                      If you’re just after simplicity from a code standpoint then you’d make your config YAML / JSON / TOML or any other known structured format with supporting off-the-shelf libraries. And you’d just parse the entire thing into memory and then programmatically perform your lookups in the application language rather than some config DSL you’ve just had to invent to support all of your use cases.

                    • Dylan16807 19 hours ago

                      You introduced an n^2 config algorithm but now you're worried about the much smaller performance impact from going to the end of the file instead of sometimes stopping halfway?

                      And for the record I'm not convinced your way is simpler. The code gets sprinkled with config loading calls instead of just checking a variable, and the vast majority of the parser is the same between versions.

                      • ajross 17 hours ago

                        You're not discussing in good faith. The performance comparison was to the "parse to the end" variant that you suggested as equivalent. The natural way you implement that (again, very simple) algorithm wants the early exit, yes, for obvious performance reasons.

                        We're done. You're "not convinced" my way is simpler because you're not willing to give ground at all. This is a dumb thing to argue about. Just look at some historical parsers for similar languages, I guess. Nothing I've said here is controversial at all.

                        • Dylan16807 13 hours ago

                          You have me confused with someone else.

                          Different people are making different points. Nobody is arguing in bad faith.

            • ajross a day ago

              > load up the file

              I/O is done piecewise, a line at a time. The file is never "loaded up". Again you're applying an intuition about how parsers are presented to college students (suck it all into RAM and build a big in-memory representation of the syntax tree) that doesn't match the way actual config file parsers work (read a line and interpret it, then read another).

        • ajross a day ago

          The ssh config format has almost no context, and the code is static and always "loaded up". I can all but guarantee this isn't correct. Modern hackers tend to wildly overestimate the complexity of ancient tasks like parsing.

          • Dylan16807 19 hours ago

            If you're actually concerned about the handfuls of bytes a settings object would take, you would make the page/segment containing parser code able to be unloaded from memory.

      • Joker_vD 21 hours ago

        Nope, the actual ssh implementation parses all the config files once, at the startup, using buffered file I/O and getline(). That means that on systems with modern libc, the whole config file (if it's small enough, less than 4 KiB IIRC?) gets read into the RAM and then getline() results are served from that buffer.

        The scheme you propose is insane and if it was ever used (can you actually back that up? The disk I/O would kill your performance for anything remotely loaded), it was rightfully abandoned for much faster and simpler scheme.

        • ajross 15 hours ago

          > getline() results are served from that buffer.

          So... it doesn't parse them once! It just does its own[1] buffering layer and implements... exactly the algorithm I described? Not seeing where you're getting the "Nope" here, except to be focusing on the one historical note about RAM that I put in parentheses.

          [1] Somewhat needless given the OS has already done this. It only saves the syscall overhead.

          • Joker_vD 3 hours ago

            Sorry, my comment was intended to be a reply to the one of yours that said

                I/O is done piecewise, a line at a time. The file is never "loaded up". Again
                you're applying an intuition about how parsers are presented to college
                students (suck it all into RAM and build a big in-memory representation of
                the syntax tree) that doesn't match the way actual config file parsers work
                (read a line and interpret it, then read another).
            
            So, the whole file is usually loaded up (if it's short enough). At this point you might as well parse all of it, instead of re-reading it from the disk over and over and redoing the same work over and over; parsed configs — if they are parsed into flags/enums, and not literal strings — usually take about the same memory as, or less than, a FILE structure from libc does on the whole.

            The complexity of the algorithm is about the same whether the early exit is there or not (in fact, the version with the early exit, now that I think of it, has larger cyclomatic complexity, but whatever).

  • drpixie a day ago

    There is a nice sshd option (-T) that tells you what it's really doing. Just run

       sudo sshd -T | grep password
    • mmsc a day ago

      Except that doesn't tell you what it's doing, that tells you what it _might_ do, if you (re)start the server.

      sshd -T reads the configuration file and prints information. It doesn't print what the server's currently-running configuration is: https://joshua.hu/sshd-backdoor-and-configuration-parsing

      • eliaspro a day ago

        That's why I only use socket-activated per-connection instances of sshd.

        Every configuration change immediately applies to every new connection - no need to restart the service!

        • wruza a day ago

          > socket-activated per-connection instances

          Yay, they reinvented inetd too!

    • aflukasz a day ago

      Yes. Run this as a validation step during base OS image creation, if the image is intended to boot a system with sshd. That way you can verify that the distro you use did not pull the rug out from under you by changing something in the base sshd config that you implicitly rely on.

  • eapriv a day ago

    Of course the order matters, that’s why the file names have numbers in them.

    • extraduder_ire a day ago

      I think their surprise comes from the earlier config winning conflicts, rather than the other way around. That's not reflected in the title.

    • lubutu a day ago

      I initially read "the order of files in /etc/ssh/sshd_config.d/" to mean the order of files in the underlying directory inode, i.e. as returned by `ls -f` — and thought, "oh god"... But the lexicographical order, that's not too surprising.

    • 1oooqooq a day ago

      yeah, this is confdir 101...

      but i guess learning is better late than never type of thing.

      also, what confuses people more here is that openssh is properly designed, so configs are first-seen-wins, exactly so that file 0_ wins over 99_... but most people see badly designed software where 99_ overrides 0_. the openssh way is exactly so it works best with a confdir or ssh options where it matches by hosts, and you can place the more important stuff first without fear that defaults will override it.

  • samlinnfer a day ago

    I've made a big stink about this last time: https://news.ycombinator.com/item?id=42133181

    They've updated the documentation on /etc/ssh/sshd_config https://bugs.launchpad.net/ubuntu/+source/cloud-init/+bug/20...

  • noufalibrahim 14 hours ago

    This is weird. I've been hitting funny problems while trying to get ssh to authenticate (using passwords) via a Keycloak instance. I've been trying to do it using a PAM script but have been pretty unsuccessful so far. Apparently, they don't play nice together.

  • jmclnx 21 hours ago

    Curious, since when has the directory "/etc/ssh/sshd_config.d/" been a thing?

    I checked on my OpenBSD (7.6) System and Slackware (15.0) and that directory does not exist. I checked the man page for sshd and there is no mention of that dir.

    Is this a new thing/patch that Linux people came up with?

    • nubinetwork 14 hours ago

      Gentoo started doing this last year and I absolutely hate it.

    • dfc 21 hours ago

      It might not be an OpenBSD thing. It may be a Debian/Ubuntu-ism.

      • SoftTalker 17 hours ago

        OpenBSD is the author of OpenSSH, and yes, they support the Include directive in sshd_config, but they do not use it in a default install.
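        For reference, the drop-in mechanism is just an Include line at the top of the main file. A sketch of what distros that enable it typically ship (the path and glob below match the common Debian/Ubuntu layout, not an OpenBSD default):

        ```
        # First effective line of /etc/ssh/sshd_config on distros with drop-ins:
        Include /etc/ssh/sshd_config.d/*.conf
        ```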

        • jmclnx 13 hours ago

          Just read the manual, nice to see they are not patching ssh for this.

  • czernobog a day ago

    This is interesting; usually it's the latter (last one wins), because the config is read line by line.

    Also, if it's not too much trouble, would someone help me understand why such files are required to start with numbers? In this case it's 10-no-password.conf.

    I have noticed similar structure for apt and many more packages

    • glitchcrab a day ago

      A lot of software which reads drop-in files will load them in numerical (or indeed alphabetical) order. Obviously this is important if the order your config files are loaded in matters, but otherwise it's just become a convention so people do it even if the load order doesn't actually matter.

    • 47282847 a day ago

      Typically, config files are merged into one by loading them like ./conf.d/*, the order being determined alphabetically from their file names. You do not need to use numbers but they help to see that order.
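      To illustrate the ordering (a sketch with made-up file names): glob expansion is lexicographic, not numeric, which is why the conventional prefixes are zero-padded.

      ```shell
      d=$(mktemp -d)
      touch "$d"/5-early.conf "$d"/10-middle.conf "$d"/99-late.conf

      # Lexicographic, not numeric: "10-..." sorts before "5-..." because
      # the comparison is character by character and '1' < '5'.
      for f in "$d"/*.conf; do basename "$f"; done
      rm -r "$d"
      ```

      This prints 10-middle.conf, 5-early.conf, 99-late.conf; zero-padding the prefix (05-early.conf) restores the intended order.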

  • NikkiA 11 hours ago

    this is true of all 'config.d' schemes, and why most such schemes suggest/use number-name.ext style filenames to deal with sorting.

  • therein a day ago

    The only time I hear or see anything about cloud-init, it is always a problem. Nobody ever said "we don't need to worry about that, cloud-init takes care of it".

    What good does cloud-init really do?

    • aflukasz a day ago

      In this particular case cloud-init's presence in the story is incidental; the delivery mechanism of said config file could have been different.

      It's useful for initializing state that could not have been initialized before booting in the target environment. The canonical example, I guess, is SSH server and client key management, but the list of modules it implements is long.

    • nightfly a day ago

      Provides a moderately-configured starting point for new cloud VM deployments without requiring custom images

    • naniwaduni a day ago

      > Nobody ever said "we don't need worry about that, cloudinit takes care of it".

      Well, why would it come up? You don't need to worry about things you don't need to worry about.

  • barotalomey a day ago

    Hence the tradition of numeric file naming in *.d directories.

  • sneak a day ago

    Same if you use ~/.ssh/conf.d/.

    I have been bitten by this before. :(
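    The client side uses the same first-obtained-value rule. A hedged sketch of the Include involved (`conf.d` is not a default directory, you create it yourself; relative Include paths in a user config resolve under ~/.ssh):

    ```
    # In ~/.ssh/config:
    Include conf.d/*
    # Anything a conf.d file already set for a host cannot be overridden
    # by later Host blocks in this file, so order drop-ins deliberately.
    ```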

  • N2yhWNXQN3k9 a day ago

    title could use a clean up

  • immibis a day ago

    I don't like these systems where configuration is built from a million separate files. They're unpleasant to work with.

    The best reason to do it this way seems to be that files are the unit of package management. Perhaps we need a smarter package manager.

    My nginx.conf life got better when I deleted sites-available and sites-enabled and defined my sites inline in nginx.conf.

    The only thing worse is when the configuration is actually a program that generates the configuration, like in ALSA.

    And the only thing worse than ALSA style is Xorg style, with a default configuration generated by C code and you can only apply changes to it without seeing it. Xorg also has this weird indirection thing where the options are like Option "Foo" "Bar" instead of Foo "Bar", but that's a nitpick in comparison.

    • johnisgood a day ago

      Most of my configuration files are in one file, but there are cases where it makes sense, such as /etc/modules-load.d, for one.

  • pdimitar a day ago

    Yikes. What a nasty surprise. These tools really should be phased out in their current form.

    • mrunkel a day ago

      RTFM

      No tool can protect you from your own assumptions about how said tool works.

      • shermantanktop a day ago

        In other words, prepare for maximum surprise? As a defensive posture in a hostile or random environment, that makes sense.

        But as a design approach, most designs go for the “principle of least surprise.”

        And that’s how I read the original comment: a well designed system wouldn’t do this. Joke is on them, though, because nobody designed this.

        • ablob a day ago

          Surprise is also based on convention. What could be surprising to you might be just a stroll in the park for others. In Japan people would be surprised to see others wearing shoes in a house while it's perfectly normal for people of other countries. Reading the manual is something to prevent surprises and only takes one sentence to explain. I'd go for that any day of the week!

          > ... a well designed system wouldn’t do this. ...

          A well designed system would be able to explain their decisions and document that somewhere. Perhaps in the manual.

          • shermantanktop a day ago

            A well-documented system would explain all decisions. A well-designed system would enable the user to learn a small set of principles and apply them in adjacent areas successfully without reference to documentation.

            As you say, I am used to checking the docs in Linux. It’s the convention that no convention shall be assumed. Is that good design?

      • pdimitar a day ago

        I have a feeling you were waiting to say that for a while. :D

        Glad to make your day. You are welcome.

        Ultimately we all end up reading the manual. I'd still prefer if I didn't have to remember how a certain C stdlib function works vs. what seems intuitive.

        But that's a lost cause with a lot of people. They'll happily point out how "intuitive" differs among different groups of people and all that, merrily missing the point on purpose.

        Oh well. At least I found out without locking myself out of my servers.

        • johnisgood a day ago

          > Ultimately we all end up reading the manual. I'd still prefer if I didn't have to remember how a certain C stdlib function works vs. what seems intuitive.

          Intuitive is highly subjective; it might be intuitive to you, but not to others, and vice versa, and it is part of the job to read the instruction manual.

          > But that's a lost cause with a lot of people. They'll happily point out how "intuitive" differs among different groups of people and all that, merrily missing the point on purpose

          What is your point? Are you arguing against documentation? You told me you are not averse to reading the documentation, yet you are complaining about it and bringing "intuition" into this. I am confused. Could you clarify your point?

          • pdimitar a day ago

            That documentation is not sacred texts so yes I am arguing against it.

            My point is that intuitive is not as subjective as you make it out to be. Which is partially reinforced by a lot of software where "last one wins" is the policy. This example here sticks out like a sore thumb.

            No point pursuing however because you seem hellbent on defending tradition which is something that tires and bores me.

            Just agree to disagree, move on.

            • johnisgood a day ago

              Yeah, you claimed that people are not averse to reading documentation, and now you, yourself, are trying to argue against documentation.

              Moving on.

              • pdimitar a day ago

                I explained why (and no, it's not "avoiding", it's "prioritizing" actually), but you had to "win", didn't you?

                Taking a note of your username. I'll go out of my way to avoid you.

                Bye.

                • johnisgood a day ago

                  I am not winning and it is not a competition.

                  Please do, I am the guy who writes and reads documentation.

      • johnisgood a day ago

        Every time I say RTFM (like in the case of strtok), I get down-voted. Some tools really cannot be dumbed down, and they should not. I do not know why people have an aversion to reading documentation. It is bad.

        In the case of strtok, I am not going to implement my own if strtok does what I want it to do, and behaves how I know it does. Why would I?! Sometimes what I need is strtok, sometimes strsep, sometimes I may use strtok_r.

        • pdimitar a day ago

          Why would "last one wins" be dumbing down the tool, exactly?

          You're doing a big assumption that people are averse to reading documentation.

          You are likely downvoted because you prefer to make your opponents look irrational so you can easily defeat them.

          Tearing down a straw man is not a welcome discussion tactic around here. Maybe that can help you.

          • jyounker a day ago

            My understanding is that "first one wins" is intended for security. Global config is read first, and then local (per-user) configs are read later. Because the earlier config wins, the per-user configs can't override the global policy.
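            A sketch of how first-obtained-value-wins pins policy in sshd_config (the drop-in name is hypothetical; the point is that the keyword is already set by the time the later line is read):

            ```
            Include /etc/ssh/sshd_config.d/*.conf   # say 00-policy.conf sets PermitRootLogin no
            PermitRootLogin yes                     # ignored: the first value obtained wins
            ```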

            • pdimitar a day ago

              That helps, I get it a little better now, thank you.

          • johnisgood a day ago

            I was only talking about the "RTFM" part.

            > Why would "last one wins" be dumbing down the tool, exactly?

            I did not refer to that as dumbing down the tool. That said, if you are unsure whether it is first or last one wins, read the documentation. There is no objective intuition here: "first one wins" might be intuitive TO YOU, but to me "last one wins" is.

            > You're doing a big assumption that people are averse to reading documentation.

            Some people definitely are, and they openly tell you that on here, too.

            If you look further into my comments where I discuss "strtok", you will see it for yourself.

            > You are likely downvoted because you prefer to make your opponents look irrational so you can easily defeat them.

            I got down-voted because I claimed strtok is straightforward to use once you have read the documentation. I do not see how I am making them look irrational either (nor is it my intention). I am just trying to encourage people to read the documentation.

            • a day ago
              [deleted]
            • pdimitar a day ago

              One thing a lot of traditionalists like yourself miss (sometimes I think it's on purpose) is that we have limited time.

              Modern programming is not like 30 years ago. We have literal hundreds, if not thousands, of bits and pieces to assemble. I couldn't care less what some lone cowboy thought "strtok" should do decades ago. And how genius it seemed to him.

              Apropos, why use "strtok" at all in this case, btw? Fine, the function might make perfect sense. The tool's behavior does not.

              ...But, well, he did make me care ultimately, right? But it's not welcome and now I think less of that person.

              But again -- those were different times. To me if you don't do what seems intuitive (and yes I am replying to your comment here after the other and yes I am aware we'll never agree on what's intuitive), defined also broadly as "what many other programs do" then you are just John Wayne-ing your way into my hatred.

              Nevermind though. I knew some UNIX cowboys of old. I don't miss them one bit. The way this tool behaves smells very strongly of them.

              • johnisgood a day ago

                We have always had limited time, and it is your problem (and not the tool's fault) if you do not read the manual pages or documentation. You do realize you can search, right?

                I do not know about you, but I write documentation for my programs, both a manual page, and comments in the source code. If you do this as well, then why would you do that? What if someone blames your tool just because they did not read the documentation?

                > I couldn't care less what some lone cowboy thought "strtok" should do decades ago. And how genius it seemed to him.

                If you do not know how strtok behaves, that is your problem, it is well-documented. If you do not want to read its manual page, just roll your own for all I care.