From 51c95fc11727532e3b0d98c8470a6b60907a0680 Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Tue, 8 Jan 2013 21:31:31 +0100
Subject: IRC.

---
 open_issues/libpthread.mdwn | 135 +++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 134 insertions(+), 1 deletion(-)

diff --git a/open_issues/libpthread.mdwn b/open_issues/libpthread.mdwn
index befc1378..05aab85f 100644
--- a/open_issues/libpthread.mdwn
+++ b/open_issues/libpthread.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+[[!meta copyright="Copyright © 2010, 2011, 2012, 2013 Free Software Foundation,
 Inc."]]
 
 [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
@@ -1257,6 +1257,19 @@ There is a [[!FF_project 275]][[!tag bounty]] on this task.
     i'll add traces to know which step causes the error
 
 
+### IRC, freenode, #hurd, 2012-12-11
+
+    braunr: mktoolnix seems like a reproducer for the libports thread
+      priority issue
+    (3 times)
+    youpi: thanks
+    youpi: where is that tool packaged ?
+    he probably means the mkvtoolnix source
+    seems so
+    i don't find anything else
+    that's it, yes
+
+
 ## IRC, freenode, #hurd, 2012-12-05
 
     tschwinge: i'm currently working on a few easy bugs and i have
@@ -1326,3 +1339,123 @@ There is a [[!FF_project 275]][[!tag bounty]] on this task.
     i wondered for a long time why the load average was so high on the
       hurd under even "light" loads
     now i know :)
+
+
+## IRC, freenode, #hurd, 2012-12-27
+
+    btw, good news: the installer works with libpthread
+    (well, at least boots, I haven't tested the installation)
+    i can do that if the image is available publically
+    youpi: the one thing i suspect won't work right is the hurd
+      console :/
+    so we might need to not enable it by default
+    braunr: you mean the mode setting?
+    youpi: i don't know what's wrong with the hurd console, but it
+      seems to deadlock with pthreads
+    ah?
+    I don't have such issue
+    ah ? i need to retest that then
+
+Same issue as [[term_blocking]] perhaps?
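+
+Purely speculative editorial note, with code not taken from the console
+server: one common way for code to "deadlock with pthreads" is a
+condition wait that is not guarded by a predicate loop.  If the signal
+arrives before the wait starts, the wakeup is lost and the thread
+blocks forever; spurious wakeups make the failure timing-dependent.  A
+minimal sketch, all names invented:
+
+    #include <pthread.h>
+
+    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+    static pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
+    static int ready;
+
+    /* Buggy: if signaler () runs first, the wakeup is lost and this
+       thread sleeps forever; a spurious wakeup lets it continue too
+       early.  Either way it looks like a hang or a race.  */
+    void *
+    buggy_waiter (void *arg)
+    {
+      pthread_mutex_lock (&lock);
+      pthread_cond_wait (&cond, &lock);  /* no predicate check */
+      pthread_mutex_unlock (&lock);
+      return 0;
+    }
+
+    /* Correct: recheck the predicate in a loop, under the mutex.  */
+    void *
+    correct_waiter (void *arg)
+    {
+      pthread_mutex_lock (&lock);
+      while (!ready)
+        pthread_cond_wait (&cond, &lock);
+      pthread_mutex_unlock (&lock);
+      return 0;
+    }
+
+    void *
+    signaler (void *arg)
+    {
+      pthread_mutex_lock (&lock);
+      ready = 1;
+      pthread_cond_signal (&cond);
+      pthread_mutex_unlock (&lock);
+      return 0;
+    }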
+
+
+## IRC, freenode, #hurd, 2013-01-06
+
+    it seems fakeroot has become slow as hell
+    fakeroot is the main source of dead name notifications
+    well, a very heavy one
+    with pthreads hurd servers, their priority is raised, precisely to
+      give them time to handle those dead name notifications
+    which slows everything else down, but strongly reduces the rate at
+      which additional threads are created to handle dn notifications
+    so this is expected
+    ok :/
+    which is why i mentioned a rewrite of io_select into a completely
+      synchronous io_poll
+    so that the client themselves remove their requests, instead of
+      the servers doing it asynchronously when notified
+    by "slows everything else down", you mean, if the servers do take
+      cpu time?
+    but considering the amount of messaging it requires, it will be
+      slow on moderate to large fd sets with frequent calls (non blocking or
+      low timeout)
+    yes
+    well here the problem is not really it gets slowed down
+    but that e.g. for gtk+2.0 build, it took 5h cpu time
+    (and counting)
+    ah, the hurd with pthreads is noticeably slower too
+    i'm not sure why, but i suspect the amount of internal function
+      calls could account for some of the overhead
+    I mean the fakeroot process
+    not the server process
+    hum
+    that's not normal :)
+    that's what I meant
+    well, i should try to build gtk+20 some day
+    i've been building glibc today and it's going fine for now
+    it's the install stage which poses problem
+    I've noticed it with the hurd package too
+    the hurd is easier to build
+    that's a good test case
+    there are many times when fakeroot just doesn't use cpu, and it
+      doesn't look like a select timeout issue (it still behaved that way with
+      my fixed branch)
+    in general, pfinet is taking really a lot of cpu time
+    that's surprising
+    why ?
+    fakeroot uses it a lot
+    I know
+    but still
+    40% cpu time is not normal
+    I don't see why it would need so much cpu time
+    17:57 < braunr> but considering the amount of messaging it
+      requires, it will be slow on moderate to large fd sets with frequent
+      calls (non blocking or low timeout)
+    by "it", what did you mean?
+    I thought you meant the synchronous select implementation
+    something irrelevant here
+    yes
+    what matters here is the second part of my sentence, which is what
+      i think happens now
+    you mean it's the IPC overhead which is taking so much time?
+    i mean, it doesn't matter if io_select synchronously removes
+      requests, or does it by destroying ports and relying on notifications,
+      there are lots of messages in this case anyway
+    yes
+    why "a lot" ?
+    more than one per select call?
+    yes
+    why ?
+    one per fd
+    then one to wait
+    there are two in faked
+    hum :)
+    i remember the timeout is low
+    but i don't remember its value
+    the timeout is NULL in faked
+    the client then
+    the client doesn't use select
+    i must be confused
+    i thought it did through the fakeroot library
+    but yes, i see the same behaviour, 30 times more cpu for pfinet
+      than faked-tcp
+    or let's say between 10 to 30
+    and during my tests, these were the moments the kernel would
+      create lots of threads in servers and fail because of lack of memory,
+      either kernel memory, or virtual in the client space (filled with thread
+      stacks)
+    it could be due to threads spinning too much
+    (inside pfinet)
+    attaching a gdb shows it mostly inside __pthread_block
+    uh, how awful pfinet's select is
+    a big global lock
+    whenever something happens all threads get woken up
+    BKL!
+    * pinotree runs
+    we have many big hurd locks :p
+    it's rather a big translator lock
+    more than a global lock it seems, a global condvar too, isn't it ?
+    sure
+    we have a similar problem with the hurd-specific cancellation
+      code, it's in my todo list with io_select
+    ah, no, the condvar is not global
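+
+Editorial sketch of the wake-all versus wake-one distinction discussed
+above; hypothetical code with invented names, not pfinet's actual
+implementation.  Broadcasting one shared condition variable wakes
+every selecting thread on every event, which would match the
+__pthread_block storm seen in gdb; a per-waiter condition variable,
+queued on the watched object, wakes only the threads that asked for
+that event:
+
+    #include <pthread.h>
+
+    /* Keep the single translator lock, but give each waiter its own
+       condition variable instead of broadcasting a shared one.  */
+    static pthread_mutex_t translator_lock = PTHREAD_MUTEX_INITIALIZER;
+
+    struct waiter
+    {
+      struct waiter *next;
+      pthread_cond_t cond;
+      int ready;
+    };
+
+    struct watched
+    {
+      struct waiter *waiters;  /* protected by translator_lock */
+    };
+
+    /* Queue ourselves on the one object we care about and sleep until
+       an event on that object, and only that object, wakes us.  */
+    void
+    wait_on (struct watched *w)
+    {
+      struct waiter self = { .cond = PTHREAD_COND_INITIALIZER };
+
+      pthread_mutex_lock (&translator_lock);
+      self.next = w->waiters;
+      w->waiters = &self;
+      while (!self.ready)
+        pthread_cond_wait (&self.cond, &translator_lock);
+      pthread_mutex_unlock (&translator_lock);
+    }
+
+    /* Wake only the threads registered on this object; unrelated
+       selecting threads keep sleeping.  */
+    void
+    post_event (struct watched *w)
+    {
+      struct waiter *p;
+
+      pthread_mutex_lock (&translator_lock);
+      for (p = w->waiters; p != 0; p = p->next)
+        {
+          p->ready = 1;
+          pthread_cond_signal (&p->cond);
+        }
+      w->waiters = 0;
+      pthread_mutex_unlock (&translator_lock);
+    }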