A friendly tour of process memory on Linux

(0xkato.xyz)

226 points | by 0xkato a day ago

24 comments

  • ramon156 11 hours ago

    I love these tiny explainers! Even if I already know what it's about, having that confirmation helps throughout the reading.

  • mhavelka77 15 hours ago

    "mmap, without the fog"

    I don't know if this is just me being paranoid, but every time I see a phrase like this in an article I feel like it's co-written by an LLM and it makes me mad...

    • puika 13 hours ago

      The article does feel like Gemini when you ask it to explain something to you in layman's terms, but co-authored by ChatGPT, with nonsense like "without the fog".

  • drbig a day ago

    Instruction pipelining, and this is exactly why I wish we still had the time to go back to "it is exactly as it is": think the 6502, or any architecture that does not pretend/map/table/proxy/ring-away anything.

    That, but a hell of a lot of it, with fast interconnects!

    ... one can always dream.

    • drbig 10 hours ago

      The point is that we should acknowledge that those "cheats" came for good reasons and that they did improve performance, etc. But they also came with a cost (Meltdown, Spectre, anyone?) and fundamentally introduced _complexities_, which, at today's level of manufacturing and the end of Moore's law, may not be the best tradeoffs.

      I'm just expressing a general sentiment of distaste for piling stuff upon stuff and holding it together with duct tape, without ever stepping back and looking at what we have, or at least should have, learnt, and where we are today in the technology stack.

    • ojbyrne 20 hours ago

      The article is essentially describing virtual memory (with enhancements) which predates the 6502 by a decade or so.

      • Delk 7 hours ago

        IMO it's not even quite right in its description. The first picture that describes virtual memory shows all processes as occupying the same "logical" address space with the page table just mapping pages in the "logical" address space to physical addresses one-to-one. In reality (at least in all VM systems I know of) each process has its own independent virtual address space.
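
        A minimal sketch of that point, assuming a Linux/POSIX system (not from the article; the file name and variable are made up for illustration): after fork(), parent and child report the same virtual address for the same global, yet a write in one doesn't show up in the other, because each process resolves that address through its own page table.

          /* same_vaddr.c - hypothetical demo, assumes Linux/POSIX fork() */
          #include <stdio.h>
          #include <sys/wait.h>
          #include <unistd.h>

          int value = 1;  /* one virtual address, mapped separately per process */

          int main(void) {
              pid_t pid = fork();
              if (pid < 0) { perror("fork"); return 1; }

              if (pid == 0) {    /* child: copy-on-write gives it its own physical page */
                  value = 42;
                  printf("child:  &value = %p, value = %d\n", (void *)&value, value);
                  return 0;
              }

              wait(NULL);        /* parent: same virtual address, value still 1 */
              printf("parent: &value = %p, value = %d\n", (void *)&value, value);
              return 0;
          }

        Both lines print the same %p but different contents, which is only possible because the two processes have independent virtual address spaces rather than one shared "logical" space.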

    • taeric 21 hours ago

      I'm curious how this dream is superior to where we are? Yes, things are more complex. But it isn't like this complexity didn't buy us anything. Quite the contrary.

      • harry8 20 hours ago

        > ...buy us anything.

        Totally depends on who "us" is and isn't, what problem is being solved, etc. In the aggregate, the trade-off has clearly been beneficial to the most people. If what you wanted to do got traded away, well, you can still dream.

        • taeric 7 hours ago

          Right, but that was kind of my question? What is better about not having a lot of these things?

          That is, phrasing it as a dream makes it sound like you imagine it would be better somehow. What would be better?

          • layer8 4 hours ago

            Things would be simpler, more predictable and tractable.

            For example, real-time guarantees (hard time constraints on how long a particular type of event will take to process) would be easier to provide.

            • taeric 3 hours ago

              But why do we think that? The complexity would almost certainly still exist; it would just move up a layer, with no guarantee that you could hit the same performance characteristics we are able to hit today.

              Put another way, if that would truly be a better place, what is stopping people from building it today?

              • layer8 3 hours ago

                Performance wouldn’t be the same, and that’s why nobody is manufacturing it. The industry prefers living with higher complexity when it yields better performance. That doesn’t mean that some people, like those in this thread, wouldn’t prefer it if things were simpler, even at the price of significantly lower performance.

                > The complexity would almost certainly still exist.

                That doesn’t follow. A lot of the complexity is purely to achieve the performance we have.

                • taeric 3 hours ago

                  I'm used to people arguing for simpler setups because the belief is that they could make them more performant. This was specifically the push for RISC back in the day, no?

                  To that end, I was assuming the idea would be that we think we could have faster systems if we didn't have this stuff. If that is not the assumption, I'm curious what the appeal is?

                  • layer8 2 hours ago

                    That’s certainly not the assumption here. The appeal is, as I said, that the systems would be more predictable and tractable, instead of being a tarpit of complexity. It would be easier to reason about them, and about their runtime characteristics. Side-channel attacks wouldn’t be a thing, or at least not as much. Nowadays it’s rather difficult to reason about the runtime characteristics of code on modern CPUs, about what exactly will be going on behind the scenes. More often than not, you have to resort to testing how specific scenarios will behave, rather than being able to predict the general case.

                    • taeric an hour ago

                      I guess I don't understand why you would dream of this, though. Just go out and program on some simpler systems? Retro computing makes the rounds a lot and is perfectly doable.

    • loeg 19 hours ago

      But why?

  • sleepytimetea a day ago

    Website blocked as a threat/unsafe domain.