As pm215 states, it doubles your memory commit. It's somewhat common for large programs/runtimes that may fork at runtime to spawn an intermediary process during startup to use for runtime forks, to avoid the cost of CoW on memory, mappings, etc. where the CoW isn't needed or desirable; but redis has to fork the actual service process because it uses CoW to effectively snapshot memory.
It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten. In that case, even though the cache is fixed size / no new allocations, all of the pages are touched and so the total used memory is double from before the fork. If you want to guarantee allocation failures rather than demand paging failures, and you don't have enough ram/swap to back twice the allocations, you must fail the fork.
On the other hand, if you have a pretty good idea that the child will finish persisting and exit before the cache is fully rewritten, double is too much. There's not really a mechanism for that though. Even if you could set an optimistic multiplier for multiple mapped CoW pages, you're back to demand paging failures --- although maybe it's still worthwhile.
> It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten.
It's wrong 99.99999% of the time, because the alternative is either "make it take double and waste half the RAM" or "write the in-memory data in a way that allows for snapshotting, throwing a bunch of performance into the trash".
Not if your goal is to make it such that OOM can only occur during allocation failure, and not during an arbitrary later write, as the OP purports to want.
If you turn overcommit off then when you fork you double the memory usage. The pages are CoW but for accounting purposes it counts as double because writes could require allocating memory and that's not allowed to fail since it's not a malloc. So the kernel has to count it as reserved.
Sure, if you don't like your stuff to work well. 0 is the default for a reason, and "my specific workload is buggy with 0" is not a problem with it, just the reason there are other options.
Advertising for 2 with "but apps should handle it" is utter ignorance, and the redis example shows that: the database is using the COW fork feature for basically the reason it exists, as do many, many servers, and the warning is pretty much tailored for people who think they are clever and don't understand the memory subsystem.
There's a reason nobody does this: RAM is expensive. Disabling overcommit on your typical server workload will waste a great deal of it. TFA completely ignores this.
This is one of those classic money vs idealism things. In my experience, the money always wins this particular argument: nobody is going to buy more RAM for you so you can do this.
Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.
The difference is that you'll fail allocations, where there's a reasonable interface for errors, rather than failing at demand paging when writing to previously unused pages where there's not a good interface.
Of course, there are many software patterns where excessive allocations are made without any intent of touching most of the pages; that's fine with overcommit, but it will lead to allocation failures when you disable overcommit.
Disabling overcommit does make fork in a large process tricky; I don't think the rant about redis in the article is totally on target: fork-to-persist is a pretty good solution, copy-on-write is a reasonable cost to pay while dumping the data to disk, and then it returns to normal when the dump is done. But without overcommit, it doubles the memory commitment while the dump is running, and that's likely to cause issues if redis is large relative to memory; that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed; seems like that's worth warning about as well.
For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.
> Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.
Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and just use the pooled memory, you will waste memory that could otherwise be used to run more stuff.
And it can be quite significant: it's pretty common for server apps to start a big process and then have a COWed thread per connection in a pool, so your apache2 eating maybe 60MB per thread in the pool is now in the gigabytes range at very small pool sizes.
The blog is essentially a call to "let's make apps like we did in the DOS era", which is ridiculous.
You're correct it doesn't prefault the mappings, but that's irrelevant: it accounts them as allocated, and a later allocation which goes over the limit will immediately fail.
Remember, the limit is artificial and defined by the user with overcommit=2, by overcommit_ratio and user_reserve_kbytes. Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
> Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
The RAM is not unusable, it will be used. Some portion of ram may be unallocatable, but that doesn't mean it's wasted.
There's a tradeoff. With overcommit disabled, you will get allocation failure rather than OOM killer. But you'll likely get allocation failures at memory pressure below that needed to trigger the OOM killer. And if you're running a wide variety of software, you'll run into problems because overcommit is the mainstream default for Linux, so many things are only widely tested with it enabled.
> The RAM is not unusable, it will be used. Some portion of ram may be unallocatable
I think that's a meaningless distinction: if userspace can't allocate it, it is functionally wasted.
I completely agree with your second paragraph, but again, some portion of RAM obtainable with overcommit=0 will be unobtainable with overcommit=2.
Maybe a better way to say it is that a system with overcommit=2 will fail at a lower memory pressure than one with overcommit=0. Additional RAM would have to be added to the former system to successfully run the same workload. That RAM is waste.
Surely the source of the waste here is the userspace program not using the memory it allocated, rather than whether or not the kernel overcommits memory. Attributing this to overcommit behavior is invalid.
That "waste" (that overcommit turns into "not a waste") means you can do far less allocations, with overcommit you can just allocate a bunch of memory and use it gradually, instead of having to do malloc() every time you need a bit of memory and free() every time you get rid of it.
You'd also increase memory fragmentation that way possibly hitting performance.
It's also pretty much required for GCed languages to work sensibly
Reading COW memory doesn't cause a fault. It doesn't mean unused literally.
And even if it's not COW, there's nothing wrong or inefficient about opportunistically allocating pages ahead of time to avoid syscall latency. Or mmapping files and deciding halfway through you don't need the whole thing.
There are plenty of reasons overcommit is the default.
For anyone not familiar with the meaning of '2' in this context:
The Linux kernel supports the following overcommit handling modes
0 - Heuristic overcommit handling. Obvious overcommits of address space are refused. Used for a typical system. It ensures a seriously wild allocation fails while allowing overcommit to reduce swap usage. root is allowed to allocate slightly more memory in this mode. This is the default.
1 - Always overcommit. Appropriate for some scientific applications. Classic example is code using sparse arrays and just relying on the virtual memory consisting almost entirely of zero pages.
2 - Don't overcommit. The total address space commit for the system is not permitted to exceed swap + a configurable amount (default is 50%) of physical RAM. Depending on the amount you use, in most situations this means a process will not be killed while accessing pages but will receive errors on memory allocation as appropriate. Useful for applications that want to guarantee their memory allocations will be available in the future without having to initialize every page.
> exceed swap + a configurable amount (default is 50%) of physical RAM
Naive question: why is this default 50%, and more generally why is this not the entire RAM, what happens to the rest?
There's a lot of options. If you want to go down the rabbithole try typing `sysctl -a | grep -E "^vm"` and that'll give you a lot of things to google ;)
Not sure if I understand your question but nothing "happens to the rest", overcommitting just means processes can allocate memory in excess of RAM + swap. The percentage is arbitrary, could be 50%, 100% or 1000%. Allocating additional memory is not a problem per se, it only becomes a problem when you try to actually write (and subsequently read) more than you have.
it's a (then-)safe default from the age when having 1GB of RAM and 2GB of swap was the norm: https://linux-kernel.vger.kernel.narkive.com/U64kKQbW/should...
Just a guess, but I reckon it doesn't account for things like kernel memory usage, such as caches and buffers. Assigning 100% of physical RAM to applications is probably going to have a Really Bad Outcome.
But the memory being used by the kernel has already been allocated by the kernel. So obviously that RAM isn't available.
I can understand leaving some amount free in case the kernel needs to allocate additional memory in the future, but anything near half seems like a lot!
Do any of the settings actually result in "malloc" or a similar function returning NULL?
malloc() and friends may always return NULL. From the man page:
If successful, calloc(), malloc(), realloc(), reallocf(), valloc(), and aligned_alloc() functions return a pointer to allocated memory. If there is an error, they return a NULL pointer and set errno to ENOMEM.
In practice, I find a lot of code that does not check for NULL, which is rather distressing.
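For anyone who wants the pattern spelled out, a minimal sketch of the check (the error message wording is my own):

```c
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t n = 1 << 20;
    /* malloc may legitimately return NULL; check before use. */
    int *buf = malloc(n * sizeof *buf);
    if (buf == NULL) {
        /* Per the man page, errno is set to ENOMEM on failure. */
        fprintf(stderr, "malloc(%zu ints): %s\n", n, strerror(errno));
        return EXIT_FAILURE;
    }
    buf[0] = 42;  /* safe: the pointer is known to be valid */
    free(buf);
    return EXIT_SUCCESS;
}
```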
It's been a while, but while I agree the man page says that, my limited understanding was that the typical libc on Linux won't really return NULL under any sane scenario, even when the memory can't be backed.
I think you're right, but "typical" is the key word. Embedded systems, systems where overcommit is disabled, bumping into low ulimit -v settings, etc. can all trigger an immediate failure with malloc(). Those are edge cases, to be sure, but some of them could be applied to a typical Linux system and I, as a coder, won't be aware of it.
As an aside: To me, checking malloc() for NULL is easier than checking a pointer returned by malloc on first use. That's what you're supposed to do in the presence of overcommit.
But why would you want to violate the docs on something as fundamental as malloc? Why risk relying on implementation specific quirks in the first place?
Even with overcommit enabled, malloc may fail if there is no contiguous address space available. Not a problem on 64-bit, but it may occasionally happen on 32-bit.
I realize this is mostly tangential to the article, but a word of warning for those who are about to mess with overcommit for the first time: In my experience, the extreme stance of "always do [thing] with overcommit" is just not defensible, because most (yes, also "server") software is just not written under the assumption that being able to deal with allocation failures in a meaningful way is a necessity. At best, there's a "malloc() or die"-like stanza in the source, and that's that.
You can and maybe even should disable overcommit this way when running postgres on the server (and only a minimum of what you would these days call sidecar processes (monitoring and backup agents, etc.) on the same host/kernel), but once you have a typical zoo of stuff using dynamic languages living there, you WILL blow someone's leg off.
I run my development VM with overcommit disabled and the way stuff fails when it runs out of memory is really confusing and mysterious sometimes. It's useful for flushing out issues that would otherwise cause system degradation w/overcommit enabled, so I keep it that way, but yeah... doing it in production with a bunch of different applications running is probably asking for trouble.
The fundamental problem is that your machine is running software from a thousand different projects or libraries just to provide the basic system, and most of them do not handle allocation failure gracefully. If program A allocates too much memory and overcommit is off, that doesn't necessarily mean that A gets an allocation failure. It might also mean that code in library B in background process C gets the failure, and fails in a way that puts the system in a state that's not easily recoverable, and is possibly very different every time it happens.
For cleanly surfacing errors, overcommit=2 is a bad choice. For most servers, it's much better to leave overcommit on, but make the OOM killer always target your primary service/container, using oom-score-adj, and/or memory.oom.group to take out the whole cgroup. This way, you get to cleanly combine your OOM condition handling with the general failure case and can restart everything from a known foundation, instead of trying to soldier on while possibly lacking some piece of support infrastructure that is necessary but usually invisible.
There's also cgroup resource controls to separately govern max memory and swap usage. Thanks to systemd and systemd-run, you can easily apply and adjust them on arbitrary processes. The manpages you want are systemd.resource-control and systemd.exec. I haven't found any other equivalent tools that expose these cgroup features to the extent that systemd does.
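For the oom-score-adj part, the knob is just a file under /proc (systemd's declarative equivalent is OOMScoreAdjust=). A minimal sketch of a service volunteering itself as the first OOM victim:

```c
#include <stdio.h>

/* Mark the calling process as the OOM killer's preferred victim.
 * oom_score_adj ranges from -1000 (never kill) to 1000 (kill first). */
static int prefer_oom_kill(void) {
    FILE *f = fopen("/proc/self/oom_score_adj", "w");
    if (!f)
        return -1;
    fprintf(f, "1000\n");
    return fclose(f);
}

int main(void) {
    if (prefer_oom_kill() != 0)
        return 1;
    /* ... the primary service's actual work would start here ... */
    return 0;
}
```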
This is a better explanation and fix than others I've seen. There will be differences between desktop and server uses, but misbehaving applications and libraries exist on both.
> the way stuff fails when it runs out of memory is really confusing
have you checked what your `vm.overcommit_ratio` is? If it's < 100%, then you will get OOM kills even if plenty of RAM is free, since the default is 50, i.e. 50% of RAM can be COMMITTED and no more.
curious what kind of failures you are alluding to.
The main scenario that caused me a lot of grief is temporary RAM usage spikes, like a single process run during a build that uses ~8GB of RAM or more for a mere few seconds and then exits. In some cases the oom killer was reaping the wrong process, or the build was just failing cryptically, and if I examined stuff like top I wouldn't see any issue: plenty of free RAM. The tooling for examining this historical memory usage is pretty bad; my only option was to look at the oom killer logs and hope that eventually the culprit would show up.
Thanks for the tip about vm.overcommit_ratio though, I think it's set to the default.
you can get statistics off cgroups to get an idea of what it was (assuming it's a service and not something the user ran), but that requires probing it often enough
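Something like this sketch, polling the cgroup v2 counters; the service path here is a made-up example, and memory.peak only exists on newer kernels:

```c
#include <stdio.h>
#include <unistd.h>

/* Poll a cgroup v2 memory counter once a second; the path is an
 * example for a systemd service and will differ on your system. */
int main(void) {
    const char *path =
        "/sys/fs/cgroup/system.slice/myservice.service/memory.current";
    for (;;) {
        FILE *f = fopen(path, "r");
        if (f) {
            unsigned long long bytes;
            if (fscanf(f, "%llu", &bytes) == 1)
                printf("%llu bytes in use\n", bytes);
            fclose(f);
        }
        sleep(1);
    }
}
```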
This is completely wrong. First, disabling overcommit is wasteful because of fork and because of the way thread stacks are allocated. Sorry, you don't get exact memory accounting with C, not even Windows will do exact accounting of thread stacks.
Secondly, memory is a global resource, so you don't get local failures when it's exhausted: whoever allocates first after memory has been exhausted will get an error; they might be the application responsible for the exhaustion, or they might not be. They might crash on the error, or they might "handle it", keep going, and render the system completely unusable.
No, exact accounting is not a solution. Ulimits and configuring the OOM killer are solutions.
Without going into a discussion about whether this is right or wrong, how is fork(2) wasteful?
fork(2) has been copy-on-write for decades and does not copy the entire process address space. Thread stacks are a non sequitur too: stacks use data pages, thread stacks are rather small in most scenarios, and hence the thread stacks are also subject to copy-on-write.
The only overhead that the use of fork(2) incurs is an extra copy of the process' memory descriptor pages, which is a drop in the ocean for modern systems with large amounts of RAM.
Unless you reserve on fork you're still overcommitting, because after the fork, writes to basically any page in either process will trigger memory commitment.
Thread stacks come up because reserving them completely ahead of time would incur large amounts of memory usage. Typically they start small and grow when you touch the guards. This is a form of overcommit. Even Windows dynamically grows stacks like this
Also,
> Thread stacks come up because reserving them completely ahead of time would incur large amounts of memory usage. Typically they start small and grow when you touch the guards. This is a form of overcommit.
Ahead-of-time memory reservation entails a page entry being allocated in the process’s page catalogue («logical» allocation), and the page «sits» dormant until it is accessed and causes a memory access fault – that is the moment when the physical allocation takes place. So copying reserved but not-yet-accessed pages has zero effect on the physical memory consumption of the process.
What actually happens to the thread stacks depends on the actual number of active threads. In modern designs, threads are consumed from thread pools that implement some sort of a run queue where the threads sit idle until they get assigned a unit of work. So if a thread is idle, it does not use its own thread stack and, consequently, there is no side effect on the child's COW address space.
Granted, if the child was copied with a large number of active threads, the impact will be very different.
> Even Windows dynamically grows stacks like this
Windows employs a distinct process/thread design, making the UNIX concept of a process foreign. Threads are the primary construct in Windows and the kernel is highly optimised for thread management rather than processes. Cygwin has outlined significant challenges in supporting fork(2) semantics on Windows and has extensively documented the associated difficulties. However, I am veering off-topic.
> […] because after the fork writes to basically any page in either process will trigger memory commitment.
This is largely not true for most processes. For a child process to start writing into its own data pages en masse, there has to exist a specific code path that causes such behaviour. Processes do not randomly modify their own data space – it is either a bug or a peculiar workload that causes it.
You would have a stronger case if you mentioned, e.g., the JVM, which has a high complexity garbage collector (rather, multiple types of garbage collectors – each with its own behaviour), but the JVM ameliorates the problem by attempting to lock in the entire heap size at startup or bailing if it fails to do so.
In most scenarios, forking a process has a negligible effect on the overall memory consumption in the system.
> because of fork and because of the way thread stacks are allocated
For modern (post-x86_64) memory allocators a common strategy is to allocate hundreds of gigabytes of virtual memory and let the kernel deal with actually swapping in physical memory pages upon use.
This way you can partition the virtual memory space into arenas as you like. This works really well.
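A rough sketch of that strategy, with arbitrary sizes; MAP_NORESERVE makes the intent explicit, and physical pages only show up on first touch:

```c
#include <stdint.h>
#include <sys/mman.h>

#define RESERVE (64ULL << 30)  /* 64 GiB of address space */
#define ARENA   (1ULL << 26)   /* 64 MiB arenas */

int main(void) {
    /* Reserve address space only; no physical memory is committed
     * until pages are first written (under overcommit). */
    uint8_t *base = mmap(NULL, RESERVE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (base == MAP_FAILED)
        return 1;

    /* Partition the reservation into arenas by pointer arithmetic. */
    uint8_t *arena0 = base + 0 * ARENA;
    uint8_t *arena1 = base + 1 * ARENA;
    arena0[0] = 1;  /* first touch: kernel faults in a physical page */
    arena1[0] = 2;

    munmap(base, RESERVE);
    return 0;
}
```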
Which is a major way turning off overcommit can cause problems. The expectation for disabling it is that if you request memory you're going to use it, which is frequently not true. So if you turn it off, your memory requirements go from, say, 64GB to 512GB.
Obviously you don't want to have to octuple your physical memory for pages that will never be used, especially these days, so the typical way around that is to allocate a lot of swap. Then the allocations that aren't actually used can be backed by swap instead of RAM.
Except then you've essentially reimplemented overcommit. Allocations report success because you have plenty of swap but if you try to really use that much the system grinds to a halt.
When your system is out of memory, you do not want to return an error to the next process that allocates memory. That might be an important process, it might have nothing to do with the reason the system is out of memory, and it might not be able to gracefully handle allocation failure (realistically, most programs can't).
Instead, you want to kill the process that's hogging all the memory.
The OOM killer heuristic is not perfect, but it will generally avoid killing critical processes and is fairly good at identifying memory hogs.
And if you agree that using the OOM killer is better than returning failure to a random unlucky process, then there's no reason not to use overcommit.
Besides, overcommit is useful. Virtual-memory-based copy-on-write, allocate-on-write, sparse arrays, etc. are all useful and widely-used.
Fwiw you can use pressure stall information to load shed. This is superior to disabling overcommit and then praying the first allocation to fail is in the process you want to actually respond to the resource starvation.
Fact is that by the time small allocations are failing, you are almost no better off handling the NULL than you would be handling segfaults and the SIGTERMs from the killer.
Often for servers performance will fall off a cliff long before the oom killer is needed, too.
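For the curious, PSI is plain text under /proc/pressure; a rough sketch of shedding load off the 10-second average (the threshold is arbitrary):

```c
#include <stdio.h>

/* Read the "some" avg10 value (share of time tasks stalled on memory
 * over the last 10s) from /proc/pressure/memory. Line format:
 *   some avg10=0.00 avg60=0.00 avg300=0.00 total=0 */
static double memory_pressure_avg10(void) {
    double avg10 = 0.0;
    FILE *f = fopen("/proc/pressure/memory", "r");
    if (f) {
        fscanf(f, "some avg10=%lf", &avg10);
        fclose(f);
    }
    return avg10;
}

int main(void) {
    /* 10.0 is an arbitrary threshold; tune for your workload. */
    if (memory_pressure_avg10() > 10.0)
        puts("under memory pressure: stop accepting new work");
    return 0;
}
```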
For anyone feeling brave enough to disable overcommit after reading this, be mindful that the default `vm.overcommit_ratio` is 50%, which means that if no swap is available, on a system with 2GB of total RAM, more than 1GB of RAM can't be allocated and requests will fail with preemptive OOMs. (e.g. postgresql servers typically disable overcommit)
- https://github.com/torvalds/linux/blob/master/mm/util.c#L753
This doesn't address the fact that forking large processes requires either overcommit or a lot of swap. That may be the source of the Redis problem.
The author is just ignorant of the technicals and laser-focused on some particular cases that he thinks are problems but are not.
The redis engineers KNOW fork-to-save will at most result in a few tens of MBs of extra memory used in the vast majority of cases, with the benefit of seamless saving. There is a theoretical case where it uses double the memory, but it would require all the data in the database to be replaced during the short interval while the snapshot is saved, and that's just unrealistic.
Why? Most COWed pages will remain untouched. They only need to allocate when touched.
Because the point of forbidding overcommit is to ensure that the only time you can discover you're out of memory is when you make a syscall that tries (explicitly or implicitly) to allocate more memory. If you don't account the COW pages to both the parent and the child process, you have a situation where you can discover the out of memory condition when the process tries to dirty the RAM and there's no page available to do that with...
The described scenario (and, consequently, the concern) is mostly a philosophical question, or a real concern only for a very specific workload.
Memory allocation is a highly non-deterministic process that depends on the code path, and it is generally impossible to predict how the child will handle its own memory space – it can be little or it can be more (relative to the parent), and it is usually somewhere in the middle. Most daemons, for example, consume next to zero extra memory after forking.
The Ruby «mark-and-sweep» garbage collector (old versions of Ruby – 1.8 and 1.9) and Python reference counting (the Instagram case) bugs are prime examples of pathological cases where a child would walk over its data pages, making each dirty and causing a system collapse, but the bugs have been fixed or workarounds have been applied. An honourable mention goes to Redis when THP (transparent huge pages) is enabled.
No heuristics exist out there that would turn memory allocation into a deterministic process.
The point of disabling overcommit, as per the article, is that all pages in virtual memory must be backed by physical memory at all times. Therefore all virtual memory must reserve physical memory at the time of the fork call, even if the contents of the pages only get copied when they are touched.
Surely e.g. shared memory segments that are mapped by multiple processes are not double-counted? So it's only COW memory in particular that gets this treatment? Linux could just not do that.
But forking duplicates the process space and has to assume that a write might happen, so it has to defensively reserve enough for the new process if overcommit is off.
If you have overcommit on, that happens. But if you have it off, it has to assume the worst case, otherwise there can be a failure when someone writes to a page.
Disabling overcommit on V8 servers like Deno will be incredibly inefficient. Your process might only need ~100MB of memory or so but V8's cppgc caged heap requires a 64GB allocation in order to get a 32GB aligned area in which to contain its pointers. This is a security measure to prevent any possibility of out of cage access.
Maybe it should use MAP_NORESERVE ?
I expect it does already, but I don’t think it would help here:
> In mode 2 the MAP_NORESERVE flag is ignored.
https://www.kernel.org/doc/Documentation/vm/overcommit-accou...
Setting 2 is still pretty generous. It means "Kernel does not allow allocations that exceed swap + (RAM × overcommit_ratio / 100)." It's not a "never swap or overcommit" setting. You can still get into thrashing by memory overload.
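Both sides of that inequality are visible in /proc/meminfo, so you can watch how close you are; a minimal sketch:

```c
#include <stdio.h>
#include <string.h>

/* Print the commit limit and current commitment from /proc/meminfo.
 * Under mode 2, CommitLimit = swap + RAM * overcommit_ratio / 100. */
int main(void) {
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f)
        return 1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "CommitLimit:", 12) == 0 ||
            strncmp(line, "Committed_AS:", 13) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}
```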
We may be entering an era when everyone in computing has to get serious about resource consumption. NVidia says GPUs are going to get more expensive for the next five years. DRAM prices are way up, and Samsung says it's not getting better for the next few years. Bulk electricity prices are up due to all those AI data centers. We have to assume for planning purposes that computing gets a little more expensive each year through at least 2030.
Somebody may make a breakthrough, but there's nothing in the fab pipeline likely to pay off before 2030, if then.
For me, on the desktop, thrashing overload is the most common way the Linux system effectively crashes... (I've left it overnight a few times, sometimes it recovered, but not always).
I'm not disabling overcommit for now, but maybe I should.
That thrashing is probably executable pages getting evicted, and then having to be reloaded from disk when the process resumes. Even with no swap and overcommit disabled, you'll still get thrashing before the OOM killer gets triggered.
I recommend everyone enable Linux's new multi-generational LRU, which can be configured to trigger the OOM killer when the working set of the last N deciseconds doesn't fit in memory. And <https://github.com/hakavlad/nohang> has some more suggestions.
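If you want to try it, the knobs live under /sys/kernel/mm/lru_gen (per the kernel's multigen_lru admin guide; needs root). A minimal sketch with a one-second TTL as an example value:

```c
#include <stdio.h>

/* Write a string to a sysfs file; returns 0 on success. */
static int write_str(const char *path, const char *val) {
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    fputs(val, f);
    return fclose(f);
}

int main(void) {
    /* Enable the multi-gen LRU. */
    write_str("/sys/kernel/mm/lru_gen/enabled", "y");
    /* min_ttl_ms is in milliseconds: prefer OOM-killing over evicting
     * the working set of the last second. */
    write_str("/sys/kernel/mm/lru_gen/min_ttl_ms", "1000");
    return 0;
}
```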
It's not gonna change anything. But you might be interested in software like earlyoom and similar, which basically tries to preempt the OOM killer and kill something before the system gets to a sluggish state.
disabling overcommit does not fix thrashing. Reducing the size of your swap does.
Yes, but not fully, it may still thrash on mmaped files (especially readonly ones).
This is quite the bold statement to make with RAM prices sky high.
I want to agree with the locality-of-errors argument, and while in simple cases, yes, it holds true, it isn't necessarily true. If we don't overcommit, the allocation that kills us is simply the one that fails. Whether this allocation is the problematic one is a different question: if we have a slow leak that allocs and leaks once every 10k allocations, we're probably (9999/10k, assuming spherical allocations) going to fail on one that isn't the problem. We get about as much info as the oom-killer would have, anyway: this program is allocating too much.
>Would you rather debug a crash at the allocation site
The allocation site is not necessarily what is leaking memory. What you actually want in either case is a memory dump where you can tell what is leaking or using the memory.
As I recall, this appeared in the 90’s and it was a real pain debugging then as well. Having errors deferred added a Heisenbug component to what should have been a quick, clean crash.
Has malloc ever returned zero since then? Or has somebody undone this, erm, feature at times?
This is exactly what the article’s title does
Strongly agree with this article. It highlights really well why overcommit is so harmful.
Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.
But it feels like a lost cause these days...
So much software breaks once you turn off overcommit, even in situations where you're nowhere close to running out of physical memory.
What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory. Large virtual memory buffers that aren't fully committed can be very useful in certain situations. But it should be something a program has to ask for, not the default behavior.
>terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software
Having an assumption that your process will never crash is not safe. There will always be freak things like CPUs taking the wrong branch or bits randomly flipping. Part of designing a robust system is being tolerant to things like this.
Another point also mentioned in this thread is that by the time you run out of memory, the system is already going to be in a bad state, and now you probably don't have enough memory even to get out of it. Memory should have been freed already, by telling programs to lighten up on their memory usage or by killing them and reclaiming the resources.
It's not harmful. It's necessary for modern systems that are not "an ECU in a car"
> Memory overcommit means that once you run out of physical memory, the OOM killer will forcefully terminate your processes with no way to handle the error. This is fundamentally incompatible with the goal of writing robust and stable software which should handle out-of-memory situations gracefully.
The big software is not written that way. In fact, writing software that way means you will have to sacrifice performance, memory usage, or both because you either * need to allocate exactly what you always need and free it when it gets smaller (if you want to keep memory footprint similar)m and that will add latency * over-allocate, and waste RAM
And you'd end up with MORE memory-related issues, not fewer. Writing an app where every allocation can fail is just a nightmarish waste of time for the 99% of apps that are not "the onboard computer of a spaceship/plane".
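For anyone who hasn't seen it, the "malloc() or die" stanza most apps converge on instead is tiny. A minimal sketch in C (the name xmalloc is just the common convention, not anything from this thread):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Crash cleanly at the call site rather than dereferencing NULL
   somewhere far away later. */
static void *xmalloc(size_t n) {
    void *p = malloc(n);
    if (!p) {
        fprintf(stderr, "fatal: out of memory allocating %zu bytes\n", n);
        abort();
    }
    return p;
}

int main(void) {
    char *buf = xmalloc(4096);
    strcpy(buf, "no NULL checks needed at call sites");
    puts(buf);
    free(buf);
    return 0;
}
```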
Even besides the aforementioned fork problems, not having overcommit doesn't mean you can handle OOM correctly just by handling errors from malloc!
That's a normal failure state that happens occasionally. Out-of-memory errors come up all the time when writing robust async job queues; there are a lot of other reasons a failure could happen, and running out of memory is just one of them. Sure, I could force the system to use swap, but that would degrade performance for everything else, so it's better to let it die, log the result, and check your dead-letter queue after.
> What's not helping the situation is the fact that the kernel has no good page allocation API that differentiates between reserving and committing memory.
mmap with PROT_NONE is such a reservation and doesn't count towards the commit limit. A later mmap with MAP_FIXED and PROT_READ | PROT_WRITE can commit parts of the reserved region, and mmap calls with PROT_NONE and MAP_FIXED will decommit them again (sketch below).
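A minimal sketch of that reserve/commit/decommit dance, assuming Linux; the sizes (RESERVE_SIZE, COMMIT_SIZE) are made up for illustration:

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define RESERVE_SIZE (1UL << 30)  /* reserve 1 GiB of address space */
#define COMMIT_SIZE  (1UL << 20)  /* commit only the first 1 MiB */

int main(void) {
    /* Reserve: a PROT_NONE mapping is not charged against the commit limit. */
    void *base = mmap(NULL, RESERVE_SIZE, PROT_NONE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED) { perror("reserve"); return 1; }

    /* Commit: remap part of the reservation read/write in place.
       This is where accounting happens and where ENOMEM can occur. */
    if (mmap(base, COMMIT_SIZE, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        perror("commit");
        return 1;
    }

    memset(base, 0xAB, COMMIT_SIZE);  /* safe: these pages are committed */

    /* Decommit: map the range back to PROT_NONE, releasing the charge. */
    if (mmap(base, COMMIT_SIZE, PROT_NONE,
             MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0) == MAP_FAILED) {
        perror("decommit");
        return 1;
    }

    munmap(base, RESERVE_SIZE);
    return 0;
}
```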
This is such an old debate. The real answer, as with all such things, is "it depends".
Two reasons why overcommit is a good idea:
- It lets you reserve memory and use the dirtying of that memory as the thing that commits it. Some algorithms and data structures rely on this strongly (i.e. you would have to use a significantly different algorithm, demonstrably slower or more memory-intensive, if you couldn't rely on overcommit); see the sketch after this comment.
- Many applications have no story for out-of-memory other than halting. You can scream and yell at them to do better, but that won't help, because the apps that find themselves in that supposedly-bad situation ended up there for complex and well-considered reasons. My favorite: complex OOM error-handling paths are the worst kind of attack surface, since it's hard to get test coverage for them. So it's better to just have the program killed, because that nixes the untested code path. For those programs, there's zero value in having the memory allocator report OOM conditions other than by asserting in prod that mmap/madvise always succeed, which means the value of not overcommitting is much smaller.
Are there server apps where the value of gracefully handling out-of-memory errors outweighs the perf benefits of overcommit and the attack-surface mitigation of halting on OOM? Yeah! But I bet not all server apps fall into that bucket.
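To make the first reason concrete, here's a rough sketch of a bump allocator that leans on overcommit: it maps a huge writable region up front and lets first-touch dirtying commit pages on demand. The arena size and helper names are invented for illustration; with overcommit=2, the initial mmap would be charged in full and would likely fail.

```c
#include <stddef.h>
#include <sys/mman.h>

#define ARENA_SIZE (16UL << 30)   /* 16 GiB of address space, mostly untouched */

static char *arena, *bump;

static int arena_init(void) {
    /* With overcommit, this only reserves address space; physical pages
       arrive later, as the program dirties them. */
    arena = mmap(NULL, ARENA_SIZE, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (arena == MAP_FAILED) return -1;
    bump = arena;
    return 0;
}

static void *arena_alloc(size_t n) {
    /* No syscall here: the kernel allocates pages only when the caller
       first writes to the returned region (demand paging). */
    if ((size_t)(arena + ARENA_SIZE - bump) < n) return NULL;
    void *p = bump;
    bump += (n + 15) & ~15UL;     /* keep 16-byte alignment */
    return p;
}

int main(void) {
    if (arena_init() != 0) return 1;
    int *xs = arena_alloc(1000 * sizeof *xs);
    if (!xs) return 1;
    for (int i = 0; i < 1000; i++)
        xs[i] = i;                /* first writes fault the pages in */
    return 0;
}
```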
It's also performance: since there is no penalty for asking for more RAM than you need right now, you can reduce the number of allocation calls without sacrificing memory usage (as you would have to without overcommit).
That's what I mean by my first reason
redis uses the copy-on-write property of fork() to implement saving, which is elegant and completely legitimate
How does fork() work with vm.overcommit=2?
A forked process would assume memory is already allocated, but I guess it would fail when writing to it as if vm.overcommit is set to 0 or 1.
I believe (per the stuff at the bottom of https://www.kernel.org/doc/Documentation/vm/overcommit-accou... ) that the kernel does the accounting of how much memory the new child process needs and will fail the fork() if there isn't enough. All the COW pages should be in the "shared anonymous" category, so they get counted once per user (i.e. once for the parent process, once for the child), ensuring that the COW copy can't fail if the fork succeeded.
As pm215 states, it doubles your memory commit. It's somewhat common for large programs/runtimes that may fork at runtime to spawn an intermediary process during startup and use it for runtime forks, avoiding the cost of CoW on memory, mappings, etc. where the CoW isn't needed or desirable; but redis has to fork the actual service process, because it uses CoW to effectively snapshot memory.
It seems like a wrong accounting to count CoWed pages twice.
It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten. In that case, even though the cache is fixed size / no new allocations, all of the pages are touched and so the total used memory is double from before the fork. If you want to guarantee allocation failures rather than demand paging failures, and you don't have enough ram/swap to back twice the allocations, you must fail the fork.
On the other hand, if you have a pretty good idea that the child will finish persisting and exit before the cache is fully rewritten, double is too much. There's not really a mechanism for that though. Even if you could set an optimistic multiplier for multiple mapped CoW pages, you're back to demand paging failures --- although maybe it's still worthwhile.
> It's not really wrong. For something like redis, you could potentially fork and the child gets stuck for a long time and in the meantime the whole cache in the parent is rewritten.
It's wrong 99.99999% of the time, because the alternative is either "make it take double and waste half the RAM" or "lay out the in-memory data in a way that allows snapshotting, throwing a bunch of performance in the trash".
Not if your goal is to make it such that OOM can only occur during allocation failure, and not during an arbitrary later write, as the OP purports to want.
Can you elaborate on how this comment is connected to the article?
did you read the article? there's a large section on redis
the author says it's bad design, but has entirely missed WHY it wants overcommit
You haven't made a connection, though. What does fork have to do with overcommit? You didn't connect the dots.
If you turn overcommit off, then when you fork you double the memory usage. The pages are CoW, but for accounting purposes they count double, because a write could require allocating memory, and that's not allowed to fail since it's not a malloc. So the kernel has to count them as reserved (sketch below).
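A rough, Linux-specific way to watch that accounting happen: Committed_AS in /proc/meminfo jumps by roughly the size of the private writable mappings when you fork, regardless of the overcommit mode; under mode 2 with a tight CommitLimit, the fork() itself fails instead. The 512 MiB size is arbitrary.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Read Committed_AS (kB) from /proc/meminfo. */
static long committed_kb(void) {
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    long kb = -1;
    while (f && fgets(line, sizeof line, f))
        if (sscanf(line, "Committed_AS: %ld", &kb) == 1)
            break;
    if (f) fclose(f);
    return kb;
}

int main(void) {
    size_t sz = 512UL << 20;               /* 512 MiB private writable memory */
    char *buf = malloc(sz);
    if (!buf) { perror("malloc"); return 1; }
    memset(buf, 1, sz);                    /* dirty every page */

    printf("before fork: Committed_AS = %ld kB\n", committed_kb());

    pid_t pid = fork();                    /* CoW pages charged again for the child */
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) { sleep(2); _exit(0); }  /* keep the child alive briefly */

    printf("after fork:  Committed_AS = %ld kB\n", committed_kb());
    waitpid(pid, NULL, 0);
    free(buf);
    return 0;
}
```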
Sure, if you don't want your stuff to work well. 0 is the default for a reason, and "my specific workload is buggy with 0" is not a problem with 0, just the reason the other options exist.
Advertising 2 with "but apps should handle it" is utter ignorance, and the redis example shows that: the database uses the CoW fork feature for basically the reason it exists, as do many, many servers, and the warning is pretty much tailored for people who think they're clever but don't understand the memory subsystem.
There's a reason nobody does this: RAM is expensive. Disabling overcommit on your typical server workload will waste a great deal of it. TFA completely ignores this.
This is one of those classic money vs idealism things. In my experience, the money always wins this particular argument: nobody is going to buy more RAM for you so you can do this.
Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.
The difference is that you'll fail allocations, where there's a reasonable interface for errors, rather than failing at demand paging when writing to previously unused pages where there's not a good interface.
Of course, there are many software patterns where excessive allocations are made without any intent of touching most of the pages; that's fine with overcommit, but it will lead to allocation failures when you disable overcommit.
Disabling overcommit does make fork in a large process tricky. I don't think the rant about redis in the article is totally on target: fork-to-persist is a pretty good solution, copy-on-write is a reasonable cost to pay while dumping the data to disk, and everything returns to normal when the dump is done. But without overcommit, it doubles the memory commitment while the dump is running, and that's likely to cause issues if redis is large relative to memory; that's worth checking for and warning about. The linked jemalloc issue seems like it could be problematic too, but I only skimmed; it seems like that's worth warning about as well.
For the fork path, it might be nice if you could request overcommit in certain circumstances... fork but only commit X% rather than the whole memory space.
> Even if you disable overcommit, I don't think you will get pages assigned when you allocate. If your allocations don't trigger an allocation failure, you should get the same behavior with respect to disk cache using otherwise unused pages.
Doesn't really change the point. The RAM might not be completely wasted, but given that nearly every app will over-allocate and just use the pooled memory, you will waste memory that could otherwise be used to run more stuff.
And it can be quite significant. It's pretty common for server apps to start one big process and then keep a pool of CoW-forked workers for connections, so an apache2 charged maybe 60MB per pooled worker is now in the gigabytes range at very small pool sizes.
The blog is essentially a call to "make apps like we did in the DOS era", which is ridiculous.
You're correct it doesn't prefault the mappings, but that's irrelevant: it accounts them as allocated, and a later allocation which goes over the limit will immediately fail.
Remember, the limit is artificial and defined by the user with overcommit=2, by overcommit_ratio and user_reserve_kbytes. Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
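For reference, the kernel's overcommit-accounting documentation computes that limit as

CommitLimit = (total_RAM - total_huge_TLB) * overcommit_ratio / 100 + total_swap

and the live numbers show up in /proc/meminfo as CommitLimit and Committed_AS.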
> Using overcommit=2 necessarily wastes RAM (renders a larger portion of it unusable).
The RAM is not unusable, it will be used. Some portion of ram may be unallocatable, but that doesn't mean it's wasted.
There's a tradeoff. With overcommit disabled, you will get allocation failure rather than OOM killer. But you'll likely get allocation failures at memory pressure below that needed to trigger the OOM killer. And if you're running a wide variety of software, you'll run into problems because overcommit is the mainstream default for Linux, so many things are only widely tested with it enabled.
> The RAM is not unusable, it will be used. Some portion of ram may be unallocatable
I think that's a meaningless distinction: if userspace can't allocate it, it is functionally wasted.
I completely agree with your second paragraph, but again, some portion of RAM obtainable with overcommit=0 will be unobtainable with overcommit=2.
Maybe a better way to say it is that a system with overcommit=2 will fail at a lower memory pressure than one with overcommit=0. Additional RAM would have to be added to the former system to successfully run the same workload. That RAM is waste.
it's absolutely wasted if the apps on the server don't use the disk (the disk cache is pretty much the only thing that can use that reserved memory).
You can have a simple web server that took less than 100MB of RAM take gigabytes, just because it spawned a few CoW-forked workers
If the overcommit ratio is 1, there is no portion rendered unusable? That seems to contradict your claim that it "necessarily" wastes RAM.
Read the comment again, that wasn't the only one I mentioned.
Please point out what you're talking about, because the comment is short and I read it fully multiple times now.
If you have enough swap there's no waste.
Wasted swap is still waste, and the swapping costs cycles.
With overcommit off the swap isn't used; it's only necessary for accounting purposes. I agree that it's a waste of disk space.
How does disabling overcommit waste RAM?
overcommit 0:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 is charged 50MB + (per-worker vars * 32).

overcommit 2:

- Apache2 runs.
- Apache2 takes 50MB.
- Apache2 forks 32 workers.
- Apache2 is charged 50MB + (50MB * 32) + (per-worker vars * 32).

Boom: now your simple Apache server serving some static files can't fit on a 512MB VM and needs in excess of 1.7GB of commit (50MB + 50MB * 32 = 1650MB) just for the accounting.
Because userspace rarely actually faults in all the pages it allocates.
Surely the source of the waste here is the userspace program not using the memory it allocated, rather than whether or not the kernel overcommits memory. Attributing this to overcommit behavior is invalid.
The "waste" comes with an asterisk. That "waste" (which overcommit turns into not-a-waste) means you can make far fewer allocation calls: with overcommit you can just allocate a big chunk of memory and use it gradually, instead of having to call malloc() every time you need a bit of memory and free() every time you're done with it.
You'd also increase memory fragmentation that way, possibly hurting performance.
It's also pretty much required for GCed languages to work sensibly
Obviously. But all programs do that and have done it forever, it's literally the very reason overcommit exists.
Only the poorly-written ones, which are unfortunately the majority of them.
Reading CoW memory doesn't cause a fault, so those pages aren't literally unused.
And even when it's not CoW, there's nothing wrong or inefficient about opportunistically allocating pages ahead of time to avoid syscall latency, or mmapping a file and deciding halfway through that you don't need the whole thing.
There are plenty of reasons overcommit is the default.