author    TWikiGuest <web-hurd@gnu.org>    2007-04-12 20:57:00 +0000
committer TWikiGuest <web-hurd@gnu.org>    2007-04-12 20:57:00 +0000
commit    142dcffaf4b18c3d019a8519edd8b4776651883c (patch)
tree      17061fc2cc0095ba43f573f7469683b08bccac29 /Hurd/ZallocPanics.mdwn
parent    ec9a50a4d552bcb506b73fb47ad99b45e0e2acdc (diff)
none
Diffstat (limited to 'Hurd/ZallocPanics.mdwn')
-rw-r--r--  Hurd/ZallocPanics.mdwn | 65
1 files changed, 27 insertions, 38 deletions
diff --git a/Hurd/ZallocPanics.mdwn b/Hurd/ZallocPanics.mdwn
index b5b96f7e..0b00d7ec 100644
--- a/Hurd/ZallocPanics.mdwn
+++ b/Hurd/ZallocPanics.mdwn
@@ -1,54 +1,43 @@
-The Hurd often crashes under heavy load. In many cases, it's Mach doing a "Panic: zalloc failed: zone map exhausted" or something similar. Here are my observations trying to track down this issue.
+The Hurd sometimes crashes with a kernel panic saying something like: "Panic: zalloc failed: zone map exhausted".
-# <a name="Observations"> Observations </a>
+These panics are generally caused by some kind of kernel resource exhaustion, but there are several different reasons for that.
-* It all started with someone (probably azeem) mentioning that building some package always crashes Hurd at the same stage of the Debian packaging process
+It used to happen very often under heavy disk load (like large compile jobs), or in a reproducible test case by opening a large number of ports to /dev/null and then closing them all very quickly. The reason for this particular problem was identified a while back: the multithreaded Hurd servers create a new worker thread whenever a new request (RPC) comes in while all existing threads are busy. When a server is hammered with lots of requests -- which happens both under heavy disk load and when quickly closing many ports to one server -- it creates an absurd number of threads, causing the resource exhaustion.
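
A minimal sketch of that test case in C -- a reconstruction, not the original test program; the count and all names are assumptions:

    /* Open as many ports (file descriptors) to /dev/null as possible,
       then close them all as quickly as possible.  Each open fd is a
       port to the null translator. */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>

    #define NPORTS 10000    /* assumed; the text just says "a large number" */

    int
    main (void)
    {
      static int fds[NPORTS];
      int n;

      for (n = 0; n < NPORTS; n++)
        {
          fds[n] = open ("/dev/null", O_RDONLY);
          if (fds[n] < 0)
            {
              perror ("open");  /* may hit some limit before the target count */
              break;
            }
        }
      printf ("opened %d ports\n", n);

      /* Closing them all in a tight loop is what floods the server
         with requests and used to trigger the thread explosion. */
      while (n-- > 0)
        close (fds[n]);
      return 0;
    }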
+
+The Debian hurd package contains a patch by k0ro (Sergio Lopez), which fixes this by limiting the number of threads created, in a rather simplistic but very effective manner. However, this patch hasn't been included in upstream CVS so far. A more elegant solution, suitable for upstream inclusion, would be desirable.
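
The patch itself isn't reproduced here; the general idea can be sketched like this (a schematic pthreads illustration -- the names, the cap value, and the queueing model are assumptions, not the actual libports code):

    /* Schematic worker-thread throttling for an RPC server: grow the
       pool only while below a fixed cap; beyond that, requests simply
       wait in the queue for an existing worker. */
    #include <pthread.h>

    #define MAX_THREADS 256     /* illustrative cap */

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int nthreads;        /* workers created so far */
    static int nidle;           /* workers currently waiting for work */

    static void *
    worker_loop (void *arg)
    {
      /* Dequeue and serve RPCs, maintaining nidle (elided). */
      return 0;
    }

    /* Called whenever a new request (RPC) arrives. */
    static void
    request_arrived (void)
    {
      pthread_mutex_lock (&lock);
      if (nidle == 0 && nthreads < MAX_THREADS)
        {
          /* All workers are busy, but the pool may still grow. */
          pthread_t t;
          if (pthread_create (&t, 0, worker_loop, 0) == 0)
            {
              pthread_detach (t);
              nthreads++;
            }
        }
      /* At the cap, the request stays queued until a worker frees
         up -- that waiting is the throttling. */
      pthread_mutex_unlock (&lock);
    }

The effect is that a flood of requests turns into queueing latency instead of unbounded kernel allocations.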
+
+Some panics still seem to happen in very specific situations, like the one described at <https://savannah.gnu.org/bugs/?19426>. These are probably the result of bugs that cause port leaks, accidental fork bombs, or similar problems.
+
+In principle, resource exhaustion can also happen through normal use, though this is rather unlikely in the absence of bugs or malicious programs. Nevertheless, all these problems could be avoided (or at least limited in effect) by introducing limits on the number of processes per user, and on the number of threads and ports per process or user.
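
At the POSIX level such limits would presumably take the familiar rlimit shape. A sketch of a per-user process cap (setrlimit() and RLIMIT_NPROC exist on Unix systems generally; whether and how the Hurd enforces this, and what the port/thread analogues would be, is precisely the missing piece):

    /* Sketch: a process cap expressed as a POSIX rlimit.  Kernel-side
       accounting of Mach ports and threads would be the analogous --
       currently missing -- mechanism. */
    #include <stdio.h>
    #include <sys/resource.h>

    int
    main (void)
    {
      struct rlimit rl = { 100, 100 };  /* soft/hard cap of 100 processes */

      if (setrlimit (RLIMIT_NPROC, &rl) < 0)
        perror ("setrlimit");
      /* Where enforced, fork() then fails with EAGAIN once the limit is
         reached, instead of exhausting kernel memory. */
      return 0;
    }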
+
+Trying to track down causes for the panics, I got some interesting results. (UPDATE: Many of my original observations were clearly related to the server thread explosion problem. To avoid confusion, I have now removed these, as this is no longer an open issue.)
+
+* It all started with someone (probably azeem) mentioning that building some package always crashes Hurd at the same stage of the Debian packaging process (UPDATE: Almost all of these panics when building packages were a result of the thread explosion and don't happen anymore.)
* Someone (maybe he himself) pointed out that this stage is characterized by many processes being quickly created and destroyed
* Someone else (probably hde) started some experimenting, to get a reproducible test case
* He realized that just starting and killing five child processes in quick succession suffices to kill some Hurd systems
* I tried to confirm this, but it turned out my system is more robust
-* I started various other experiments with creating child processes, resulting in a number of interesting observations:
+
+As I could never reproduce the problem with a small number of quickly killed processes, I can't say whether this problem still exists. While I could reproduce such an effect by first opening and then very quickly closing many ports (which is more or less what happens when quickly killing many processes), I needed really large numbers of processes/ports for that. The thread throttling patch fixed my test case; but it seems unlikely that killing only five processes could have caused a thread explosion, so maybe hde's observation was really a different problem...
+
+I started various other experiments with creating child processes (fork bombs), resulting in a number of interesting observations (a reconstruction of the fork-bomb test follows this list):
+
* Just forking a large number of processes crashes the Hurd reliably (not surprising)
+* The number of processes at which the panic occurs is very constant (typically +-2) under stable conditions, as long as forking doesn't happen too fast
* The exact number depends on various conditions:
- * Run directly from the Mach console, it's around 1040 on my machine (given enough RAM); however, it drops to 940 when started through a raw ssh session, and to 990 when run under screen through ssh (TODO: check number of ports open per process depending on how it is started)
+ * Run directly from the Mach console, it's around 1040 on my machine (given enough RAM); however, it drops to 940 when started through a raw ssh session, and to 990 when run under screen through ssh (TODO: check number of ports open per process depending on how it is started) UPDATE: In a later test, I got somewhat larger numbers (don't remember exactly, but well above 1000), but still very constant between successive runs. Not sure what effected this change.
* It doesn't depend on whether normal user or root
* With only 128 MiB of RAM, the numbers drop slightly (like 100 less or so); no further change between 256 and 384 MiB
* Lowering zone\_map\_size in mach/kern/zalloc.c reduces the numbers (quite exactly half from 8 MiB to 4 MiB)
* There seems to be some saturation near 16 MiB however: The difference between 8 MiB and 16 MiB is significantly smaller
* Also, with 8 MiB or 4 MiB, the difference between console/ssh/screen becomes much more apparent (500 vs. 800, 250 vs. 400)
* With more than 16 MiB, Mach doesn't even boot
-* Creating the processes very fast results in a sooner and less predictable crash
+* Creating the processes very fast results in a sooner and less predictable crash (TODO: Check whether this is still the case with thread throttling)
* Creating processes recursively (fork only one child which forks the next one etc.) results in faster crash
-* rpcinfo shows that child processes have more ports open by default. Experimentation shows that indeed processes with many ports open are much more effective in crashing Mach: Quickly killing 30 processes with 1000 connections to /dev/null each will crash my system almost always
-* Just **opening** many ports from a few processes doesn't usually cause a system crash; there are only lots of open() failures and translator faults once some limit is reached... Seems the zalloc-full condition is better caught on open() than on fork() (TODO: investigate this further, with different memory sizes, different zone\_map\_size, different kinds of resources using zalloc etc.)
-* Explicitly closing the ports very quickly instead of killing/quitting the processes (which results in the ports being closed quickly automatically) yields the same result. Slowly closing ports doesn't result in any problems, even in very large amounts
-* Results are quite stochastic: 15 processes with 1000 ports each will sometimes crash the system but not always; 10 processes will seldom do, 20 often, 30 almost always...
-* Repeating will increase the likelihood: 10 processes will almost(?) never kill the system right away, but quite likely when done a few times in succession
-* In the cases the system didn't crash immediately, often port and/or memory leaks can be observed in /hurd/null and/or /hurd/ext2fs.static (TODO: test with different filesystems/translators, and completely different methods of creating Mach ports)
-* Killing the NULL translator after each run seems to improve robustness
-* Memory consumption in /hurd/null, /hurd/ext2fs.static and in Mach goes up when opening many ports, but in a reasonable fashion, and gets reclaimed unless leaks occur in the translators
-* After opening/leaking lots of ports (32768 it seems), the NULL translator somehow becomes dysfunctional, and a new instance is started
-* 256 MiB instead of 128 seems to actually **decrease** stability (384 getting better again). However, due to the random nature of the crashes, this result is not very reliable. It might be influenced by other conditions (e.g. the method of running, see above), or simple coincidence
-
-# <a name="Conclusions"> Conclusions </a>
-
-Many processes:
-
-* Seems to be plain overuse of zone memory
-* Probably no easy fix; only proper accounting/limits would help
-* Not likely responsible for stability problems -- such numbers of processes never occur in normal usage
-* Thus not very important to fix. (Note: Even Linux is not safe against fork bombs in typical setups up to this day)
-* Still might be useful to investigate some strange behaviour (especially saturation near 16 MiB)
-
-Port closing:
-
-* Discussion with marcus yielded some guesses about the causes:
-* Fast port closing probably results in problems with processing no-sender notifications (which inform the other end when a port is closed)
-* One possibility: Mach runs out of memory when buffering too many unprocessed notifications
-* This wouldn't explain the leaks sometimes observed in the involved servers however
-* Also wouldn't explain crashes already with a very small number of ports on some systems
-* More likely: Some notifications get lost in the congestion, resulting in various kinds of failures
-* TODO: Investigate all code involved in closing files/ports; employ some kind of logging to confirm the problems are related to notifications
-
--- antrik - 17 Jul 2005
+* rpcinfo shows that child processes have more ports open by default, which is very likely the reason for the above observation
+* Opening many ports from a few processes doesn't usually cause a system crash; there are only lots of open() failures and translator faults once some limit is reached... Seems the zalloc-full condition is better caught on open() than on fork() (TODO: investigate this further, with different memory sizes, different zone\_map\_size, different kinds of resources using zalloc etc.)
+* After opening/leaking lots of ports to /dev/null (32768 it seems), the NULL translator somehow becomes dysfunctional, and a new instance is started
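
The fork-bomb test referenced above, reconstructed as a sketch (not the original test program); it forks until fork() fails or the machine panics, printing a running count so successive runs can be compared:

    /* Reconstruction of the fork-bomb test: create children until
       fork() fails (or Mach panics), and report how many we managed.
       The children just sleep so they stay around. */
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int
    main (void)
    {
      long n = 0;

      for (;;)
        {
          pid_t pid = fork ();
          if (pid < 0)
            {
              perror ("fork");
              break;
            }
          if (pid == 0)
            {
              pause ();           /* child: just stay alive */
              _exit (0);
            }
          printf ("\r%ld", ++n);  /* on a panic, the last count stays visible
                                     on the console */
          fflush (stdout);
        }
      printf ("\nforked %ld children before failure\n", n);
      return 0;
    }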
+
+While most of these observations clearly show an exhaustion of kernel memory, which is not surprising, some of the oddities seem to indicate problems that might deserve further investigation.
+
+-- antrik (Last update: 12 Apr 2007)