17 comments

  • amelius 2 hours ago

    > A higher level of preemption enables the system to respond more quickly to events; whether an event is the movement of a mouse or an "imminent meltdown" signal from a nuclear reactor, faster response tends to be more gratifying. But a higher level of preemption can hurt the overall throughput of the system; workloads with a lot of long-running, CPU-intensive tasks tend to benefit from being disturbed as little as possible. More frequent preemption can also lead to higher lock contention. That is why the different modes exist; the optimal preemption mode will vary for different workloads.

    Why isn't the level of preemption a property of the specific event, rather than of some global mode? Some events need to be handled with less latency than others.

    • btilly an hour ago

      You need CPU time to evaluate the priority of the event. That can't happen until after you've interrupted whatever process is currently on the CPU. And so the fastest possible response to any event is limited by how long a time slice a program gets before it has to go through a context switch.

      To stand ready to reliably respond to any one kind of event with low latency, every CPU intensive program must suffer a performance penalty all the time. And this is true no matter how rare those events may be.

      • zeusk 19 minutes ago

        That is not true of quite a few multi-core systems. Many of them, especially those that really care about performance, pin all interrupts to core 0 and only interrupt the other cores via IPI when necessary.
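
        On Linux this steering is typically done through the /proc/irq/<n>/smp_affinity interface, which takes a hex CPU bitmask. A minimal sketch, assuming that interface (the IRQ number 24 below is purely illustrative, and writing the file needs root, so only the mask computation is shown):

```python
def cpus_to_mask(cpus):
    """Turn a set of CPU numbers into the hex bitmask /proc/irq expects."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

# Pinning everything to core 0, as described above:
print(cpus_to_mask({0}))      # "1"
print(cpus_to_mask({0, 2}))   # "5" (cores 0 and 2)

# With root, one would write the mask to the (hypothetical) IRQ 24:
#   echo 1 | sudo tee /proc/irq/24/smp_affinity
```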

    • RandomThoughts3 15 minutes ago

      > Why isn't the level of preemption a property of the specific event, rather than of some global mode?

      There are two different notions which are easy to get confused about here: when a process can be preempted and when a process will actually be preempted.

      Potential preemption points are a property of the scheduler, and they are what the global mode discussed here controls. More preemption points obviously mean more chances for processes to be preempted at inconvenient times, but they also mean more chances to prioritise properly.

      What you call the level of preemption, which is to say the priority given by the scheduler, absolutely is a property of the process and can definitely be set. The default Linux scheduler will indeed do its best to allocate more time slices to, and preempt less often, processes which have higher priority.
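
      The classic knob for this per-process priority is niceness. A small sketch, assuming a Linux system (an unprivileged process can only make itself *less* favoured; raising priority needs CAP_SYS_NICE):

```python
import os

# os.nice(n) adds n to this process's niceness (0 = default, 19 = lowest
# priority) and returns the new value. The default scheduler then gives
# this process a smaller share of the CPU and preempts it sooner in
# favour of lower-niceness tasks.
before = os.getpriority(os.PRIO_PROCESS, 0)
after = os.nice(5)           # ask to be deprioritized by 5
print(before, after)         # e.g. "0 5"

# Raising priority (negative niceness) needs CAP_SYS_NICE / root:
#   os.setpriority(os.PRIO_PROCESS, 0, -5)  # PermissionError otherwise
```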

    • acters 2 hours ago

      Mostly because such a system would result in infighting among programs that all want to be prioritized as important. To be fair, it would mostly be larger companies taking advantage of it for a "better" user experience. That is why it is important either to keep the number of running applications to a minimum or simply to control priorities manually for the short bursts most users experience. If anything, CPU-intensive tasks are more likely to be badly written code than some really effective use of resources.

      Gaming is a delicate balance, though: game performance should be prioritized, but not be allowed to lock the system up for other multitasking.

      Either way, considering this is mostly for idle tasks, there is little reason to automate it beyond giving users a simple command they can script to toggle the various behaviors.

      • biorach an hour ago

        You're talking about user-space preemption. The person you're replying to, and the article, are about kernel preemption.

        • withinboredom 34 minutes ago

          Games run in a tight loop, they don’t (typically) yield execution. If you don’t have preemption, a game will use 100% of all the resources all the time, if given the chance.

          • Tomte 17 minutes ago

            Games run in user space. They don't have to yield (that's cooperative multitasking), they are preempted by the kernel. And don't have a say about it.

        • acters 30 minutes ago

          Yeah, you are right, though some of what I said still has merit, since several of the points apply to why you would want dynamic preemption. But, as the other commenter noted, a dynamic system that checks and applies a new preemption config adds overhead of its own. The kernel can't always know how long tasks will take, so for short-lived tasks the cost of dynamically changing the configuration may be worse than just setting the preemption mode up front.

          But yeah, thanks for making that distinction. I forgot to touch on the differences.

    • biorach an hour ago

      Arguably PREEMPT_VOLUNTARY, as described in the article, is an attempt in this direction, and it is being deprecated.

  • weinzierl 3 hours ago

    "Current kernels have four different modes that regulate when one task can be preempted in favor of another"

    Is this about kernel tasks, user tasks or both?

    • GrayShade 3 hours ago

      Kernel code; user-space code is always preemptible.
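
      The four modes are build-time options, so you can see which one a kernel was built with by scanning its config (e.g. /boot/config-$(uname -r) or /proc/config.gz). A sketch, assuming the upstream CONFIG_PREEMPT_* symbol names:

```python
PREEMPT_MODES = [
    "CONFIG_PREEMPT_NONE",       # no kernel preemption (server throughput)
    "CONFIG_PREEMPT_VOLUNTARY",  # preempt only at explicit voluntary points
    "CONFIG_PREEMPT",            # full kernel preemption (desktop latency)
    "CONFIG_PREEMPT_RT",         # realtime: almost everything preemptible
]

def preempt_mode(config_text):
    """Return the CONFIG_PREEMPT_* mode set to 'y' in a kernel config dump."""
    enabled = {
        line.split("=")[0]
        for line in config_text.splitlines()
        if line.endswith("=y")
    }
    for mode in PREEMPT_MODES:
        if mode in enabled:
            return mode
    return None

sample = "CONFIG_PREEMPT_VOLUNTARY=y\nCONFIG_PREEMPT_DYNAMIC=y\n"
print(preempt_mode(sample))   # CONFIG_PREEMPT_VOLUNTARY
```

      (With CONFIG_PREEMPT_DYNAMIC, as in the sample, the mode can also be overridden at boot with the preempt= command-line parameter.)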

  • Hendrikto an hour ago

    > It all adds up to a lot to be done still, but the end result of the lazy-preemption work should be a kernel that is a bit smaller and simpler while delivering predictable latencies without the need to sprinkle scheduler-related calls throughout the code. That seems like a better solution, but getting there is going to take some time.

    Sounds promising. Just like EEVDF, this both simplifies and improves the status quo. Does not get better than that.

  • simfoo 2 hours ago

    Can't find any numbers in the linked thread with the patches. Surely some preliminary benchmarking must have been performed that could tell us something about the real-world potential of the change?

    • spockz 39 minutes ago

      How would you benchmark something like this? Run multiple processes concurrently and then sort by total run time? Or measure individual process wait time?
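
      The usual approach is to measure both sides of the trade-off: wakeup latency (e.g. with cyclictest from rt-tests) and throughput under load (e.g. with hackbench, as in the testing the article mentions). A minimal latency-side illustration of the cyclictest idea, assuming nothing beyond the Python standard library:

```python
import time

def wakeup_latencies(interval_s=0.001, samples=200):
    """Request a fixed sleep repeatedly; record how late each wakeup is.

    Under a more preemptive kernel the worst-case overshoot tends to
    shrink; under throughput-oriented modes it can grow when the system
    is loaded.
    """
    lat = []
    for _ in range(samples):
        t0 = time.monotonic()
        time.sleep(interval_s)
        # Overshoot beyond the requested interval, in microseconds.
        lat.append((time.monotonic() - t0 - interval_s) * 1e6)
    return lat

lat = wakeup_latencies()
print(f"max {max(lat):.0f} us, avg {sum(lat) / len(lat):.0f} us")
```

      Individual wait times like these, measured with and without a competing CPU-bound load, say more than total run times, because the modes mostly trade worst-case latency against throughput.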

    • biorach an hour ago

      From the article, second last paragraph:

      > There is also, of course, the need for extensive performance testing; Mike Galbraith has made an early start on that work, showing that throughput with lazy preemption falls just short of that with PREEMPT_VOLUNTARY.