[[!meta copyright="Copyright © 2010, 2011, 2012, 2013, 2014 Free Software Foundation, Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]]

[[!tag open_issue_glibc open_issue_libpthread]]

[[!toc]]

# cthreads -> pthreads

Get rid of cthreads; switch to pthreads. Most of the issues raised on this page have been resolved; a few remain.

## IRC, freenode, #hurd, 2012-04-26

youpi: just to be sure: even if libpthread is compiled inside glibc (with proper symbol forwarding etc), it doesn't change that you cannot use both cthreads and pthreads in the same app, right? [[Packaging_libpthread]]. it's the same libpthread symbol forwarding does not magically resolve that libpthread lacks some libthreads features :) i know, i was referring to the clash between actively using both there'll still be the issue that only one will be initialized and one that provides libc thread safety functions, etc. that's what i wanted to know, thanks :)

## IRC, freenode, #hurd, 2012-07-23

So I am not sure what to do with the hurd_condition_wait stuff i would also like to know what's the real issue with cancellation here because my understanding is that libpthread already implements it does it look ok to you to make hurd_condition_timedwait return an errno code (like ETIMEDOUT and ECANCELED) ? braunr: that's what pthread_* functions usually do, yes i thought they used their own code no thanks well, first, do you understand what hurd_condition_wait is ? it's similar to condition_wait or pthread_cond_wait with a subtle difference it differs from the original cthreads version by handling cancellation but it also differs from the second by how it handles cancellation instead of calling registered cleanup routines and leaving, it returns an error code (well simply !0 in this case) so there are two ways first, change the call to pthread_cond_wait Are you saying we could fix stuff to use pthread_cond_wait() properly? it's possible but not easy because you'd have to rewrite the cancellation code probably writing cleanup routines this can be hard and error prone and is useless if the code already exists so it seems reasonable to keep this hurd extension but now, as it *is* a hurd extension no one else uses braunr: BTW, when trying to figure out a tricky problem with the auth server, cfhammer dug into the RPC cancellation code quite a bit, and it's really a horribly complex monstrosity... plus the whole concept is actually broken in some regards I think -- though I don't remember the details antrik: i had the same kind of thoughts antrik: the hurd or pthreads ones ? not sure what you mean. I mean the RPC cancellation code -- which involves thread management too ok I don't know how it is related to hurd_condition_wait though well i found two main entry points there hurd_thread_cancel and hurd_condition_wait and it didn't look that bad whereas in the pthreads code, there are many corner cases and even the standard itself looks insane well, perhaps the threading part is not that bad... it's not where we saw the problems at any rate :-) rpc interruption maybe ? oh, right...
interruption is probably the right term yes that thing looks scary :)) the migration thread paper mentions some things about the problems concerning threads controllability I believe it's a very strong example for why building around standard Mach features is a bad idea, instead of adapting the primitives to our actual needs... i wouldn't be surprised if the "monstrosities" are work arounds right ## IRC, freenode, #hurd, 2012-07-26 Uhm, where does /usr/include/hurd/signal.h come from? head -n4 /usr/include/hurd/signal. h Ohh glibc? That makes things a little more difficult :( why ? Hurd includes it which brings in cthreads ? the hurd already brings in cthreads i don't see what you mean Not anymore :) the system cthreads header ? well it's not that difficult to trick the compiler not to include them signal.h includes cthreads.h I need to stop that just define the _CTHREADS_ macro before including anything remember that header files are normally enclosed in such macros to avoid multiple inclusions this isn't specific to cthreads converting hurd from cthreads to pthreads will make hurd and glibc break source and binary compatibility Of course reminds me of the similar issues of the late 90s Ugh, why is he using _pthread_self()? maybe because it accesses to the internals "he" ? Thomas in his modified cancel-cond.c well, you need the internals to implement it hurd_condition_wait is similar to pthread_condition_wait, except that instead of stopping the thread and calling cleanup routines, it returns 1 if cancelled not that i looked at it, but there's really no way to implement it using public api? Even if I am using glibc pthreads? unlikely God I had all of this worked out before I dropped off for a couple years.. :( this will come back :p that makes you the perfect guy to work on it ;) I can't find a pt-internal.h anywhere.. :( clone the hurd/libpthread.git repo from savannah Of course when I was doing this libpthread was still in hurd sources... So if I am using glibc pthread, why can't I use pthread_self() instead? that won't give you access to the internals OK, dumb question time. What internals? the libpthread ones that's where you will find if your thread has been cancelled or not pinotree: But isn't that assuming that I am using hurd's libpthread? if you aren't inside libpthread, no pthread_self is normally not portable you can only use it with pthread_equal so unless you *know* the internals, you can't use it and you won't be able to do much so, as it was done with cthreads, hurd_condition_wait should be close to the libpthread implementation inside, normally now, if it's too long for you (i assume you don't want to build glibc) you can just implement it outside, grabbing the internal headers for now another "not that i looked at it" question: isn't there no way to rewrite the code using that custom condwait stuff to use the standard libpthread one? 
and once it works, it'll get integrated pinotree: it looks very hard braunr: But the internal headers are assuming hurd libpthread which isn't in the source anymore from what i could see while working on select, servers very often call hurd_condition_wait and they return EINTR if canceleld so if you use the standard pthread_cond_wait function, your thread won't be able to return anything, unless you push the reply in a completely separate callback i'm not sure how well mig can cope with that i'd say it can't :) no really it looks ugly it's far better to have this hurd specific function and keep the existing user code as it is bddebian: you don't need the implementation, only the headers the thread, cond, mutex structures mostly I should turn to "pt-internal.h" and just put it in libshouldbelibc, no? no, that header is not installed Obviously not the "best" way pinotree: ?? pinotree: what does it change ? braunr: it == ? bddebian: you could even copy it entirely in your new cancel-cond.C and mention where it was copied from pinotree: it == pt-internal.H not being installed that he cannot include it in libshouldbelibc sources? ah, he wants to copy it? yes i want him to copy it actually :p it may be hard if there are a lot of macro options the __pthread struct changes size and content depending on other internal sysdeps headers well he needs to copy those too :p Well even if this works we are going to have to do something more "correct" about hurd_condition_wait. Maybe even putting it in glibc? sure but again, don't waste time on this for now make it *work*, then it'll get integrated Like it has already? This "patch" is only about 5 years old now... ;-P but is it complete ? Probably not :) Hmm, I wonder how many undefined references I am going to get though.. :( Shit, 5 One of which is ___pthread_self.. :( Does that mean I am actually going to have to build hurds libpthreads in libshouldbeinlibc? Seriously, do I really need ___pthread_self, __pthread_self, _pthread_self and pthread_self??? I'm still unclear what to do with cancel-cond.c. It seems to me that if I leave it the way it is currently I am going to have to either re-add libpthreads or still all of the libpthreads code under libshouldbeinlibc. then add it in libc glib glibc maybe under the name __hurd_condition_wait Shouldn't I be able to interrupt cancel-cond stuff to use glibc pthreads? interrupt ? Meaning interject like they are doing. I may be missing the point but they are just obfuscating libpthreads thread with some other "namespace"? (I know my terminology is wrong, sorry). they ? Well Thomas in this case but even in the old cthreads code, whoever wrote cancel-cond.c but they use internal thread structures .. Understood but at some level they are still just getting to a libpthread thread, no? absolutely not .. there is *no* pthread stuff in the hurd that's the problem :p Bah damnit... cthreads are directly implement on top of mach threads implemeneted* implemented* Sure but hurd_condition_wait wasn't of course it is it's almost the same as condition_wait but returns 1 if a cancelation request was made Grr, maybe I am just confusing myself because I am looking at the modified (pthreads) version instead of the original cthreads version of cancel-cond.c well if the modified version is fine, why not directly use that ? 
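For reference, this is roughly what the "cleanup routine" approach mentioned above would look like with the plain POSIX API: a minimal, hypothetical sketch (names invented for illustration, not code from the Hurd tree) of a handler using pthread_cond_wait together with a cleanup handler, which is the conversion work described as hard and error prone.

    /* Hypothetical sketch only; not code from the Hurd tree.  */
    #include <pthread.h>

    static void
    demo_unlock_cleanup (void *arg)
    {
      /* Runs if the thread is cancelled while waiting; it must undo
         whatever the handler set up before blocking.  */
      pthread_mutex_unlock ((pthread_mutex_t *) arg);
    }

    void
    demo_wait (pthread_mutex_t *lock, pthread_cond_t *cond, const int *ready)
    {
      pthread_mutex_lock (lock);
      pthread_cleanup_push (demo_unlock_cleanup, lock);
      while (!*ready)
        /* pthread_cond_wait is a cancellation point: on cancellation the
           thread unwinds through the cleanup handler above instead of
           returning an error code that the caller could turn into EINTR,
           which is what hurd_condition_wait provides.  */
        pthread_cond_wait (cond, lock);
      pthread_cleanup_pop (0);
      pthread_mutex_unlock (lock);
    }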
normally, hurd_condition_wait should sit next to other pthread internal stuff it could be renamed __hurd_condition_wait, i'm not sure that's irrelevant for your work anyway I am using it but it relies on libpthread and I am trying to use glibc pthreads hum what's the difference between libpthread and "glibc pthreads" ? aren't glibc pthreads the merged libpthread ? quite possibly but then I am missing something obvious. I'm getting ___pthread_self in libshouldbeinlibc but it is *UND* bddebian: with unmodified binaries ? braunr: No I added cancel-cond.c to libshouldbeinlibc And some of the pt-xxx.h headers well it's normal then i suppose braunr: So how do I get those defined without including pthreads.c from libpthreads? :) pinotree: hm... I think we should try to make sure glibc works both whith cthreads hurd and pthreads hurd. I hope that shoudn't be so hard. breaking binary compatibility for the Hurd libs is not too terrible I'd say -- as much as I'd like that, we do not exactly have a lot of external stuff depending on them :-) bddebian: *sigh* bddebian: just add cancel-cond to glibc, near the pthread code :p braunr: Wouldn't I still have the same issue? bddebian: what issue ? is hurd_condition_wait() the name of the original cthreads-based function? antrik: the original is condition_wait I'm confused is condition_wait() a standard cthreads function, or a Hurd-specific extension? antrik: as standard as you can get for something like cthreads braunr: Where hurd_condition_wait is looking for "internals" as you call them. I.E. there is no __pthread_self() in glibc pthreads :) hurd_condition_wait is the hurd-specific addition for cancelation bddebian: who cares ? bddebian: there is a pthread structure, and conditions, and mutexes you need those definitions so you either import them in the hurd braunr: so hurd_condition_wait() *is* also used in the original cthread-based implementation? or you write your code directly where they're available antrik: what do you call "original" ? not transitioned to pthreads ok, let's simply call that cthreads yes, it's used by every hurd servers virtually if not really everyone of them braunr: That is where you are losing me. If I can just use glibc pthreads structures, why can't I just use them in the new pthreads version of cancel-cond.c which is what I was originally asking.. :) you *have* to do that but then, you have to build the whole glibc * bddebian shoots himself and i was under the impression you wanted to avoid that do any standard pthread functions use identical names to any standard cthread functions? what you *can't* do is use the standard pthreads interface no, not identical but very close bddebian: there is a difference between using pthreads, which means using the standard posix interface, and using the glibc pthreads structure, which means toying with the internale implementation you *cannot* implement hurd_condition_wait with the standard posix interface, you need to use the internal structures hurd_condition_wait is actually a shurd specific addition to the threading library hurd* well, in that case, the new pthread-based variant of hurd_condition_wait() should also use a different name from the cthread-based one so it's normal to put it in that threading library, like it was done for cthreads 21:35 < braunr> it could be renamed __hurd_condition_wait, i'm not sure Except that I am trying to avoid using that threading library what ? If I am understanding you correctly it is an extention to the hurd specific libpthreads? 
to the threading library, whichever it is antrik: although, why not keeping the same name ? braunr: I don't think having hurd_condition_wait() for the cthread variant and __hurd_condition_wait() would exactly help clarity... I was talking about a really new name. something like pthread_hurd_condition_wait() or so braunr: to avoid confusion. to avoid accidentally pulling in the wrong one at build and/or runtime. to avoid possible namespace conflicts ok well yes, makes sense braunr: Let me state this as plainly as I hope I can. If I want to use glibc's pthreads, I have no choice but to add it to glibc? and pthread_hurd_condition_wait is a fine name bddebian: no bddebian: you either add it there bddebian: or you copy the headers defining the internal structures somewhere else and implement it there but adding it to glibc is better it's just longer in the beginning, and now i'm working on it, i'm really not sure add it to glibc directly :p That's what I am trying to do but the headers use pthread specific stuff would should be coming from glibc's pthreads yes well it's not the headers you need you need the internal structure definitions sometimes they're in c files for opacity So ___pthread_self() should eventually be an obfuscation of glibcs pthread_self(), no? i don't know what it is read the cthreads variant of hurd_condition_wait, understand it, do the same for pthreads it's easy :p For you bastards that have a clue!! ;-P I definitely vote for adding it to the hurd pthreads implementation in glibc right away. trying to do it externally only adds unnecessary complications and we seem to agree that this new pthread function should be named pthread_hurd_condition_wait(), not just hurd_condition_wait() :-) ## IRC, freenode, #hurd, 2012-07-27 OK this hurd_condition_wait stuff is getting ridiculous the way I am trying to tackle it. :( I think I need a new tactic. bddebian: what do you mean ? braunr: I know I am thick headed but I still don't get why I cannot implement it in libshouldbeinlibc for now but still use glibc pthreads internals I thought I was getting close last night by bringing in all of the hurd pthread headers and .c files but it just keeps getting uglier and uglier youpi: Just to verify. The /usr/lib/i386-gnu/libpthread.so that ships with Debian now is from glibc, NOT libpthreads from Hurd right? Everything I need should be available in glibc's libpthreads? (Except for hurd_condition_wait obviously). 22:35 < antrik> I definitely vote for adding it to the hurd pthreads implementation in glibc right away. trying to do it externally only adds unnecessary complications bddebian: yes same as antrik fuck libpthread *already* provides some odd symbols (cthread compatibility), it can provide others bddebian: don't curse :p it will be easier in the long run * bddebian breaks out glibc :( but you should tell thomas that too braunr: I know it just adds a level of complexity that I may not be able to deal with we wouldn't want him to waste too much time on the external libpthread which one ? glibc for one. hurd_condition_wait() for another which I don't have a great grasp on. Remember my knowledge/skillsets are limited currently. bddebian: tschwinge has good instructions to build glibc keep your tree around and it shouldn't be long to hack on it for hurd_condition_wait, i can help Oh I was thinking about using Debian glibc for now. You think I should do it from git? 
no debian rules are even more reliable (just don't build all the variants) `debian/rules build_libc` builds the plain i386 variant only So put pthread_hurd_cond_wait in it's own .c file or just put it in pt-cond-wait.c ? i'd put it in pt-cond-wait.C youpi or braunr: OK, another dumb question. What (if anything) should I do about hurd/hurd/signal.h. Should I stop it from including cthreads? it's not a dumb question. it should probably stop, yes, but there might be uncovered issues, which we'll have to take care of Well I know antrik suggested trying to keep compatibility but I don't see how you would do that compability between what ? and source and/or binary ? hurd/signal.h implicitly including cthreads.h ah well yes, it has to change obviously Which will break all the cthreads stuff of course So are we agreeing on pthread_hurd_cond_wait()? that's fine Ugh, shit there is stuff in glibc using cthreads?? like what ? hurdsig, hurdsock, setauth, dtable, ... it's just using the compatibility stuff, that pthread does provide but it includes cthreads.h implicitly s/it/they in many cases not a problem, we provide the functions Hmm, then what do I do about signal.h? It includes chtreads.h because it uses extern struct mutex ... ah, then keep the include the pthread mutexes are compatible with that we'll clean that afterwards arf, OK that's what I meant by "uncover issues" ## IRC, freenode, #hurd, 2012-07-28 Well crap, glibc built but I have no symbol for pthread_hurd_cond_wait in libpthread.so :( Hmm, I wonder if I have to add pthread_hurd_cond_wait to forward.c and Versions? (Versions obviously eventually) bddebian: most probably not about forward.c, but definitely you have to export public stuff using Versions ## IRC, freenode, #hurd, 2012-07-29 braunr: http://paste.debian.net/181078/ ugh, inline functions :/ "Tell hurd_thread_cancel how to unblock us" i think you need that one too :p ?? well, they work in pair one cancels, the other notices it hurd_thread_cancel is in the hurd though, iirc or uh wait no it's in glibc, hurd/thread-cancel.c otherwise it looks like a correct reuse of the original code, but i need to understand the pthreads internals better to really say anything ## IRC, freenode, #hurd, 2012-08-03 pinotree: what do you think of condition_implies/condition_unimplies ? the work on pthread will have to replace those ## IRC, freenode, #hurd, 2012-08-06 bddebian: so, where is the work being done ? braunr: Right now I would just like to testing getting my glibc with pthread_hurd_cond_wait installed on the clubber subhurd. It is in /home/bdefreese/glibc-debian2 we need a git branch braunr: Then I want to rebuild hurd with Thomas's pthread patches against that new libc Aye i don't remember, did thomas set a git repository somewhere for that ? He has one but I didn't have much luck with it since he is using an external libpthreads i can manage the branches I was actually patching debian/hurd then adding his patches on top of that. It is in /home/bdefreese/debian-hurd but he has updateds some stuff since then Well we need to agree on a strategy. libpthreads only exists in debian/glibc it would be better to have something upstream than to work on a debian specific branch :/ tschwinge: do you think it can be done ? ## IRC, freenode, #hurd, 2012-08-07 braunr: You mean to create on Savannah branches for the libpthread conversion? Sure -- that's what I have been suggesting to Barry and Thomas D. all the time. 
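To make the proposed interface concrete, here is a minimal, hypothetical sketch (struct and function names invented; not code from the Hurd tree) of how a server-side RPC handler would use the cancellation-aware wait, using the pthread_hurd_cond_wait_np name that shows up later in these logs: the wait returns nonzero when hurd_thread_cancel has cancelled the calling thread, and the handler then replies EINTR, as existing Hurd servers do with the cthreads-based hurd_condition_wait.

    /* Hypothetical server code, assuming a pthread-based
       pthread_hurd_cond_wait_np as discussed above.  */
    #include <errno.h>
    #include <pthread.h>
    #include <hurd.h>

    struct demo_object
    {
      pthread_mutex_t lock;
      pthread_cond_t wakeup;
      int ready;
    };

    error_t
    S_demo_wait_for_ready (struct demo_object *obj)
    {
      error_t err = 0;

      pthread_mutex_lock (&obj->lock);
      while (!obj->ready && !err)
        /* Returns nonzero if the RPC was cancelled (e.g. the client died
           or interrupted the call); reply EINTR like other servers do.  */
        if (pthread_hurd_cond_wait_np (&obj->wakeup, &obj->lock))
          err = EINTR;
      pthread_mutex_unlock (&obj->lock);

      return err;
    }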
braunr: OK, so I installed my glibc with pthread_hurd_condition_wait in the subhurd and now I have built Debian Hurd with Thomas D's pthread patches. bddebian: i'm not sure we're ready for tests yet :p braunr: Why not? :) bddebian: a few important bits are missing braunr: Like? like condition_implies i'm not sure they have been handled everywhere it's still interesting to try, but i bet your system won't finish booting Well I haven't "installed" the built hurd yet I was trying to think of a way to test a little bit first, like maybe ext2fs.static or something Ohh, it actually mounted the partition How would I actually "test" it? git clone :p building a debian package inside removing the whole content after that sort of things Hmm, I think I killed clubber :( Yep.. Crap! :( ? how did you do that ? Mounted a new partition with the pthreads ext2fs.static then did an apt-get source hurd to it.. what partition, and what mount point ? I added a new 2Gb partition on /dev/hd0s6 and set the translator on /home/bdefreese/part6 shouldn't kill your hurd Well it might still be up but killed my ssh session at the very least :) ouch braunr: Do you have debugging enabled in that custom kernel you installed? Apparently it is sitting at the debug prompt. ## IRC, freenode, #hurd, 2012-08-12 hmm, it seems the hurd notion of cancellation is actually not the pthread one at all pthread_cancel merely marks a thread as being cancelled, while hurd_thread_cancel interrupts it ok, i have a pthread_hurd_cond_wait_np function in glibc ## IRC, freenode, #hurd, 2012-08-13 nice, i got ext2fs work with pthreads there are issues with the stack size strongly limiting the number of concurrent threads, but that's easy to fix one problem with the hurd side is the condition implications i think it should be deal separately, and before doing anything with pthreads but that's minor, the most complex part is, again, the term server other than that, it was pretty easy to do but, i shouldn't speak too soon, who knows what tricky bootstrap issue i'm gonna face ;p tschwinge: i'd like to know how i should proceed if i want a symbol in a library overriden by that of a main executable e.g. have libpthread define a default stack size, and let executables define their own if they want to change it tschwinge: i suppose i should create a weak alias in the library and a normal variable in the executable, right ? hm i'm making this too complicated don't mind that stupid question braunr: A simple variable definition would do, too, I think? braunr: Anyway, I'd first like to know why we can'T reduce the size of libpthread threads from 2 MiB to 64 KiB as libthreads had. Is that a requirement of the pthread specification? tschwinge: it's a requirement yes the main reason i see is that hurd threadvars (which are still present) rely on common stack sizes and alignment to work Mhm, I see. so for now, i'm using this approach as a hack only I'm working on phasing out threadvars, but we're not there yet. [[glibc/t/tls-threadvar]]. Yes, that's fine for the moment. tschwinge: a simple definition wouldn't work tschwinge: i resorted to a weak symbol, and see how it goes tschwinge: i supposed i need to export my symbol as a global one, otherwise making it weak makes no sense, right ? 
suppose* tschwinge: also, i'm not actually sure what you meant is a requirement about the stack size, i shouldn't have answered right away no there is actually no requirement i misunderstood your question hm when adding this weak variable, starting a program segfaults :( apparently on ___pthread_self, a tls variable fighting black magic begins arg, i can't manage to use that weak symbol to reduce stack sizes :( ah yes, finally git clone /path/to/glibc.git on a pthread-powered ext2fs server :> tschwinge: seems i have problems using __thread in hurd code tschwinge: they produce undefined symbols tschwinge: forget that, another mistake on my part so, current state: i just need to create another patch, for the code that is included in the debian hurd package but not in the upstream hurd repository (e.g. procfs, netdde), and i should be able to create hurd packages taht completely use pthreads ## IRC, freenode, #hurd, 2012-08-14 tschwinge: i have weird bootstrap issues, as expected tschwinge: can you point me to important files involved during bootstrap ? my ext2fs.static server refuses to start as a rootfs, whereas it seems to work fine otherwise hm, it looks like it's related to global signal dispositions ## IRC, freenode, #hurd, 2012-08-15 ahah, a subhurd running pthreads-powered hurd servers only braunr: \o/ i can even long on ssh log pinotree: for reference, i uploaded my debian-specific changes there : http://git.sceen.net/rbraun/debian_hurd.git/ darnassus is now running a pthreads-enabled hurd system :) ## IRC, freenode, #hurd, 2012-08-16 my pthreads-enabled hurd systems can quickly die under load youpi: with hurd servers using pthreads, i occasionally see thread storms apparently due to a deadlock youpi: it makes me think of the problem you sometimes have (and had often with the page cache patch) in cthreads, mutex and condition operations are macros, and they check the mutex/condition queue without holding the internal mutex/condition lock i'm not sure where this can lead to, but it doesn't seem right isn't that a bit dangerous? i believe it is i mean it looks dangerous but it may be perfectly safe could it be? aiui, it's an optimization, e.g. "dont take the internal lock if there are no thread to wake" but if there is a thread enqueuing itself at the same time, it might not be waken yeah pthreads don't have this issue and what i see looks like a deadlock anything can happen between the unlocked checking and the following instruction so i'm not sure how a situation working around a faulty implementation would result in a deadlock with a correct one on the other hand, the error youpi reported (http://lists.gnu.org/archive/html/bug-hurd/2012-07/msg00051.html) seems to indicate something is deeply wrong with libports it could also be the current code does not really "works around" that, but simply implicitly relies on the so-generated behaviour luckily not often maybe i think we have to find and fix these issues before moving to pthreads entirely (ofc, using pthreads to trigger those bugs is a good procedure) indeed i wonder if tweaking the error checking mode of pthreads to abort on EDEADLK is a good approach to detecting this problem let's try ! youpi: eh, i think i've spotted the libports ref mistake ooo! .oOo.!! 
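Going back to the stack size discussion in the 2012-08-13 excerpt above, the weak symbol approach can be illustrated with a minimal sketch; the symbol name and the 2 MiB/64 KiB values are only illustrative, not the actual libpthread definitions.

    /* In the library (e.g. libpthread): a weak default that a strong
       definition of the same symbol in the main executable overrides.  */
    #include <stddef.h>

    size_t __pthread_default_stacksize __attribute__ ((weak))
      = 2 * 1024 * 1024;

    /* In an executable that wants smaller thread stacks, a plain
       (strong) definition takes precedence:

       size_t __pthread_default_stacksize = 64 * 1024;  */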
Same problem but different patches look at libports/bucket-iterate.c in the HURD_IHASH_ITERATE loop, pi->refcnt is incremented without a lock Mmm, the incrementation itself would probably be compiled into an INC, which is safe in UP it's an add currently actually 0x00004343 <+163>: addl $0x1,0x4(%edi) 40c4: 83 47 04 01 addl $0x1,0x4(%edi) that makes it SMP unsafe, but not UP unsafe right too bad that still deserves fixing :) the good side is my mind is already wired for smp well, it's actually not UP either in general when the processor is not able to do the add in one instruction sure youpi: looks like i'm wrong, refcnt is protected by the global libports lock braunr: but aren't there pieces of code which manipulate the refcnt while taking another lock than the global libports lock it'd not be scalable to use the global libports lock to protect refcnt youpi: imo, the scalability issues are present because global locks are taken all the time, indeed urgl yes .. when enabling mutex checks in libpthread, pfinet dies :/ grmbl, when trying to start "ls" using my deadlock-detection libpthread, the terminal gets unresponsive, and i can't even use ps .. :( braunr: one could say your deadlock detection works too good... :P pinotree: no, i made a mistake :p it works now :) well, works is a bit fast i can't attach gdb now :( *sigh* i guess i'd better revert to a cthreads hurd and debug from there eh, with my deadlock-detection changes, recursive mutexes are now failing on _pthread_self(), which for some obscure reason generates this => 0x0107223b <+283>: jmp 0x107223b <__pthread_mutex_timedlock_internal+283> *sigh* ## IRC, freenode, #hurd, 2012-08-17 aw, the thread storm i see isn't a deadlock seems to be mere contention .... youpi: what do you think of the way ports_manage_port_operations_multithread determines it needs to spawn a new thread ? it grabs a lock protecting the number of threads to determine if it needs a new thread then releases it, to retake it right after if a new thread must be created aiui, it could lead to a situation where many threads could determine they need to create threads braunr: there's no reason to release the spinlock before re-taking it that can indeed lead to too much thread creations youpi: a harder question youpi: what if thread creation fails ? 
:/ if i'm right, hurd servers simply never expect thread creation to fail indeed and as some patterns have threads blocking until another produce an event i'm not sure there is any point handling the failure at all :/ well, at least produce some output i added a perror so we know that happened async messaging is quite evil actually the bug i sometimes have with pfinet is usually triggered by fakeroot it seems to use select a lot and select often destroys ports when it has something to return to the caller which creates dead name notifications and if done often enough, a lot of them uh and as pfinet is creating threads to service new messages, already existing threads are starved and can't continue which leads to pfinet exhausting its address space with thread stacks (at about 30k threads) i initially thought it was a deadlock, but my modified libpthread didn't detect one, and indeed, after i killed fakeroot (the whole dpkg-buildpackage process hierarchy), pfinet just "cooled down" with almost all 30k threads simply waiting for requests to service, and the few expected select calls blocking (a few ssh sessions, exim probably, possibly others) i wonder why this doesn't happen with cthreads there's a 4k guard between stacks, otherwise I don't see anything obvious i'll test my pthreads package with the fixed ports_manage_port_operations_multithread but even if this "fix" should reduce thread creation, it doesn't prevent the starvation i observed evil concurrency :p youpi: hm i've just spotted an important difference actually youpi: glibc sched_yield is __swtch(), cthreads is thread_switch(MACH_PORT_NULL, SWITCH_OPTION_DEPRESS, 10) i'll change the glibc implementation, see how it affects the whole system youpi: do you think bootsting the priority or cancellation requests is an acceptable workaround ? boosting of* workaround for what? youpi: the starvation i described earlier well, I guess I'm not into the thing enough to understand you meant the dead port notifications, right? yes they are the cancellation triggers cancelling whaT? a blocking select for example ports_do_mach_notify_dead_name -> ports_dead_name -> ports_interrupt_notified_rpcs -> hurd_thread_cancel so it's important they are processed quickly, to allow blocking threads to unblock, reply, and be recycled you mean the threads in pfinet? the issue applies to all servers, but yes k well, it can not not be useful :) whatever the choice, it seems to be there will be a security issue (a denial of service of some kind) well, it's not only in that case you can always queue a lot of requests to a server sure, i'm just focusing on this particular problem hm max POLICY_TIMESHARE or min POLICY_FIXEDPRI ? i'd say POLICY_TIMESHARE just in case (and i'm not sure mach handles fixed priority threads first actually :/) hm my current hack which consists of calling swtch_pri(0) from a freshly created thread seems to do the job eh (it may be what cthreads unintentionally does by acquiring a spin lock from the entry function) not a single issue any more with this hack Nice bddebian: well it's a hack :p and the problem is that, in order to boost a thread's priority, one would need to implement that in libpthread there isn't thread priority in libpthread? 
it's not implemented Interesting if you want to do it, be my guest :p mach should provide the basic stuff for a partial implementation but for now, i'll fall back on the hack, because that's what cthreads "does", and it's "reliable enough" braunr: I don't think the locking approach in ports_manage_port_operations_multithread() could cause issues. the worst that can happen is that some other thread becomes idle between the check and creating a new thread -- and I can't think of a situation where this could have any impact... antrik: hm ? the worst case is that many threads will evalute spawn to 1 and create threads, whereas only one of them should have braunr: I'm not sure perror() is a good way to handle the situation where thread creation failed. this would usually happen because of resource shortage, right? in that case, it should work in non-debug builds too perror isn't specific to debug builds i'm building glibc packages with a pthreads-enabled hurd :> (which at one point run the test allocating and filling 2 GiB of memory, which passed) (with a kernel using a 3/1 split of course, swap usage reached something like 1.6 GiB) braunr: BTW, I think the observation that thread storms tend to happen on destroying stuff more than on creating stuff has been made before... ok braunr: you are right about perror() of course. brain fart -- was thinking about assert_perror() (which is misused in some places in existing Hurd code...) braunr: I still don't see the issue with the "spawn" locking... the only situation where this code can be executed concurrently is when multiple threads are idle and handling incoming request -- but in that case spawning does *not* happen anyways... unless you are talking about something else than what I'm thinking of... well imagine you have idle threads, yes let's say a lot like a thousand and the server gets a thousand requests a one more :p normally only one thread should be created to handle it but here, the worst case is that all threads run internal_demuxer roughly at the same time and they all determine they need to spawn a thread leading to another thousand (that's extreme and very unlikely in practice of course) oh, I see... you mean all the idle threads decide that no spawning is necessary; but before they proceed, finally one comes in and decides that it needs to spawn; and when the other ones are scheduled again they all spawn unnecessarily? no, spawn is a local variable it's rather, all idle threads become busy, and right before servicing their request, they all decide they must spawn a thread I don't think that's how it works. changing the status to busy (by decrementing the idle counter) and checking that there are no idle threads is atomic, isn't it? 
no oh I guess I should actually look at that code (again) before commenting ;-) let me check no sorry you're right so right, you can't lead to that situation i don't even understand how i can't see that :/ let's say it's the heat :p 22:08 < braunr> so right, you can't lead to that situation it can't lead to that situation

## IRC, freenode, #hurd, 2012-08-18

one more attempt at fixing netdde, hope i get it right this time some parts assume a ddekit thread is a cthread, because they share the same address it's not as easy when using pthread_self :/ good, i got netdde to work with pthreads youpi: for reference, there are now glibc, hurd and netdde packages on my repository youpi: the debian specific patches can be found at my git repository (http://git.sceen.net/rbraun/debian_hurd.git/ and http://git.sceen.net/rbraun/debian_netdde.git/) except a freeze during boot (between exec and init) which happens rarely, and the starvation which still exists to some extent (fakeroot can cause many threads to be created in pfinet and pflocal), the glibc/hurd packages have been working fine for a few days now the threading issue in pfinet/pflocal is directly related to select, which the io_select_timeout patches should fix once merged well, considerably reduce at least and maybe fix completely, i'm not sure

## IRC, freenode, #hurd, 2012-08-27

braunr: wrt a78a95d in your pthread branch of hurd.git, shouldn't that job theoretically be done using the pthread api (of course after implementing it)? pinotree: sure, it could be done through pthreads pinotree: i simply restricted myself to moving the hurd to pthreads, not augmenting libpthread (you need to remember that i work on hurd with pthreads because it became a dependency of my work on fixing select :p) and even if it wasn't the reason, it is best to do these tasks (replace cthreads and implement the pthread scheduling api) separately braunr: hm ok implementing the pthread priority bits could be done independently though youpi: there are more than 9000 threads for /hurd/streamio kmsg on ironforge oO kmsg ?! it's only /dev/klog right? not sure but it seems so which syslog daemon is running? inetutils I've restarted the klog translator, to see whether and when it grows again 6 hours and 21 minutes to build glibc on darnassus pfinet still runs only 24 threads the ext2 instance used for the build runs 2k threads, but that's because of the pageouts so indeed, the priority patch helps a lot (pfinet used to have several hundreds, sometimes more than a thousand threads after a glibc build, and potentially increasing with each use of fakeroot) exec weighs 164M eww, we definitely have to fix that leak the leaks are probably due to wrong mmap/munmap usage [[exec_memory_leaks]].
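The "priority patch" mentioned above raises the priority of the worker threads spawned by libports, which is the job that could eventually be done through a pthread scheduling API instead. The following is a rough, hypothetical sketch (not the actual libports code) of what such an adjustment involves on Mach; it also shows where the extra port references come from, which matters for the reference counting problems discussed further down this page, and why the adjustment can only warn when the server runs unprivileged.

    /* Hypothetical sketch of boosting the calling thread's priority.  */
    #include <mach.h>
    #include <mach/mach_host.h>
    #include <hurd.h>
    #include <error.h>

    static void
    boost_self_priority (int priority)
    {
      error_t err;
      mach_port_t host_priv, self;
      mach_port_t pset = MACH_PORT_NULL, pset_priv = MACH_PORT_NULL;

      /* Fails for unprivileged servers, hence the "unable to adjust ...
         thread priority" warnings seen later on this page.  */
      err = get_privileged_ports (&host_priv, NULL);
      if (err)
        {
          error (0, err, "unable to adjust thread priority");
          return;
        }

      self = mach_thread_self ();
      err = thread_get_assignment (self, &pset);
      if (!err)
        err = host_processor_set_priv (host_priv, pset, &pset_priv);
      if (!err)
        /* Raise the maximum allowed priority, then the current one.  */
        err = thread_max_priority (self, pset_priv, priority);
      if (!err)
        err = thread_priority (self, priority, FALSE);
      if (err)
        error (0, err, "unable to adjust thread priority");

      /* Release the references acquired above; forgetting one of them
         (host_priv in particular) is how user-reference overflows like
         the ones discussed later on this page creep in.  */
      if (pset_priv != MACH_PORT_NULL)
        mach_port_deallocate (mach_task_self (), pset_priv);
      if (pset != MACH_PORT_NULL)
        mach_port_deallocate (mach_task_self (), pset);
      mach_port_deallocate (mach_task_self (), self);
      mach_port_deallocate (mach_task_self (), host_priv);
    }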
### IRC, freenode, #hurd, 2012-08-29 youpi: btw, after my glibc build, there were as little as between 20 and 30 threads for pflocal and pfinet with the priority patch ext2fs still had around 2k because of pageouts, but that's expected ok overall the results seem very good and allow the switch to pthreads yep, so it seems youpi: i think my first integration branch will include only a few changes, such as this priority tuning, and the replacement of condition_implies sure so we can push the move to pthreads after all its small dependencies yep, that's the most readable way ## IRC, freenode, #hurd, 2012-09-03 braunr: Compiling yodl-3.00.0-7: pthreads: real 13m42.460s, user 0m0.000s, sys 0m0.030s cthreads: real 9m 6.950s, user 0m0.000s, sys 0m0.020s thanks i'm not exactly certain about what causes the problem though it could be due to libpthread using doubly-linked lists, but i don't think the overhead would be so heavier because of that alone there is so much contention sometimes that it could the hurd would have been better off with single threaded servers :/ we should probably replace spin locks with mutexes everywhere on the other hand, i don't have any more starvation problem with the current code ### IRC, freenode, #hurd, 2012-09-06 braunr: Yes you are right, the new pthread-based Hurd is _much_ slower. One annoying example is when compiling, the standard output is written in bursts with _long_ periods of no output in between:-( that's more probably because of the priority boost, not the overhead that's one of the big issues with our mach-based model we either give high priorities to our servers, or we can suffer from message floods that's in fact more a hurd problem than a mach one braunr: any immediate ideas how to speed up responsiveness the pthread-hurd. It is annoyingly slow (slow-witted) gnu_srs: i already answered that it doesn't look that slower on my machines though you said you had some ideas, not which. except for mcsims work. i have ideas about what makes it slower it doesn't mean i have solutions for that if i had, don't you think i'd have applied them ? :) ok, how to make it more responsive on the console? and printing stdout more regularly, now several pages are stored and then flushed. give more details please it behaves like a loaded linux desktop, with little memory left... details about what you're doing apt-get source any big package and: fakeroot debian/rules binary 2>&1 | tee ../binary.logg isee well no, we can't improve responsiveness without reintroducing the starvation problem they are linked and what you're doing involes a few buffers, so the laggy feel is expected if we can fix that simply, we'll do so after it is merged upstream ### IRC, freenode, #hurd, 2012-09-07 gnu_srs: i really don't feel the sluggishness you described with hurd+pthreads on my machines gnu_srs: what's your hardware ? and your VM configuration ? Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz kvm -m 1024 -net nic,model=rtl8139 -net user,hostfwd=tcp::5562-:22 -drive cache=writeback,index=0,media=disk,file=hurd-experimental.img -vnc :6 -cdrom isos/netinst_2012-07-15.iso -no-kvm-irqchip what is the file system type where your disk image is stored ? ext3 and how much physical memory on the host ? (paste meminfo somewhere please) 4G, and it's on the limit, 2 kvm instances+gnome,etc 80% in use by programs, 14% in cache. ok, that's probably the reason then the writeback option doesn't help a lot if you don't have much cache well the other instance is cthreads based, and not so sluggish. 
we know hurd+pthreads is slower i just wondered why i didn't feel it that much try to fire up more kvm instances, and do a heavy compile... i don't do that :) that's why i never had the problem most of the time i have like 2-3 GiB of cache and of course more on shattrath (the host of the sceen.net hurdboxes, which has 16 GiB of ram)

### IRC, freenode, #hurd, 2012-09-11

Monitoring the cthreads and the pthreads load under Linux shows: cthread version: load can jump very high, less cpu usage than pthread version pthread version: less memory usage, background cpu usage higher than for cthread version that's the expected behaviour gnu_srs: are you using the lifothreads gnumach kernel ? for experimental, yes. i.e. pthreads i mean, you're measuring on it right now, right ? yes, one instance running cthreads, and one pthreads (with lifo gnumach) ok no swap used in either instance, will try a heavy compile later on. what for ? E.g. for memory when linking. I have swap available, but no swap is used currently. yes but, what do you intend to measure ? don't know, just to see if swap is used at all. it seems to be used not very much. depends be warned that using the swap means there is pageout, which is one of the triggers for global system freeze :p anonymous memory pageout for linux swap is used constructively, why not on hurd? because of hard to squash bugs aha, so it is bugs hindering swap usage:-/ yup :/ Let's find them then O:-), piece of cake remember my page cache branch in gnumach ? :) [[gnumach_page_cache_policy]]. not much i started it before fixing non blocking select anyway, as a side effect, it should solve this stability issue too, but it'll probably take time is that branch integrated? I only remember slab and the lifo stuff. and mcsims work no it's not it's unfinished k! it correctly extends the page cache to all available physical memory, but since the hurd doesn't scale well, it slows the system down

## IRC, freenode, #hurd, 2012-09-14

arg darnassus seems to eat 100% cpu and make top freeze after some time seems like there is an important leak in the pthreads version could be the lifothreads patch :/ there's a memory leak? in pthreads? i don't think so, and it's not a memory leak it's a port leak probably in the kernel

### IRC, freenode, #hurd, 2012-09-17

nice, the port leak is actually caused by the exim4 loop bug

### IRC, freenode, #hurd, 2012-09-23

the port leak i observed a few days ago is because of exim4 (the infamous loop eating the cpu we've been seeing regularly) [[fork_deadlock]]? oh next time it happens, and if i have the occasion, i'll examine the problem tip: when you can't use top or ps -e, you can use ps -e -o pid=,args= or -M ? haven't tested

### IRC, freenode, #hurd, 2013-01-26

ah great, one of the recent fixes (probably select-eintr or setitimer) fixed exim4 :)

## IRC, freenode, #hurd, 2012-09-23

tschwinge: i committed the last hurd pthread change, http://git.savannah.gnu.org/cgit/hurd/hurd.git/log/?h=master-pthreads tschwinge: please tell me if you consider it ok for merging

### IRC, freenode, #hurd, 2012-11-27

braunr: btw, I forgot to forward here, with the glibc patch it does boot fine, I'll push all that and build some almost-official packages for people to try out what will come when eglibc gets the change in unstable youpi: great :) thanks for managing the final bits of this (and thanks for everybody involved) sorry again for the non obvious parts if you need the debian specific parts refined (e.g.
nice commits for procfs & others), i can do that I'll do that, no pb ok after that (well, during also), we should focus more on bug hunting

## IRC, freenode, #hurd, 2012-10-26

hello. What does the following error message mean? "unable to adjust libports thread priority: Operation not permitted" It appears when I set translators. Seems to be related to libpthread. Also the following appeared when I tried to remove a translator: "pthread_create: Resource temporarily unavailable" Oh, the first message appears very often, when I use the translator I set. mcsim1: it's related to a recent patch i sent mcsim1: hurd servers attempt to increase their priority on startup (when a thread is created actually) to reduce message floods and thread storms (such sweet names :)) but if you start them as an unprivileged user, it fails, which is ok, it's just a warning the second one is weird it normally happens when you're out of available virtual space, not when shutting a translator down braunr: you mean this patch: libports: reduce thread starvation on message floods? yes remember you're running on darnassus with a heavily modified hurd/glibc you can go back to the cthreads version if you wish it's better to check translators' privileges, before attempting to increase their priority, I think. no it's just a bit annoying privileges can be changed during execution well remove it But warning should not appear. what could be done is to limit the warning to one occurrence mcsim1: i prefer that it appears ok it's always better to be explicit and verbose well not always, but very often one of the reasons the hurd is so difficult to debug is the lack of a "message server" à la dmesg [[translator_stdout_stderr]].

### IRC, freenode, #hurd, 2012-12-10

braunr: unable to adjust libports thread priority: (ipc/send) invalid destination port I'll see what package brought that (that was on a buildd) wow mkvtoolnix_5.9.0-1: shouldn't that code be done in pthreads and then using such pthread api? :p pinotree: you've already asked that question :p i know :p the semantics of pthreads are larger than what we need, so that will be done "later" but this error shouldn't happen it looks more like a random mach bug youpi: anything else on the console ? nope i'll add traces to know which step causes the error

#### IRC, freenode, #hurd, 2012-12-11

braunr: mktoolnix seems like a reproducer for the libports thread priority issue (3 times) youpi: thanks youpi: where is that tool packaged ? he probably means the mkvtoolnix source seems so i don't find anything else that's it, yes

#### IRC, freenode, #hurd, 2013-03-01

braunr: btw, "unable to adjust libports thread priority: (ipc/send) invalid destination port" is actually not a sign of fatality bach recovered from it youpi: well, it never was a sign of fatality but it means that, for some reason, a process loses a right for a very obscure reason :/ weird sentence, agreed :p

#### IRC, freenode, #hurd, 2013-06-14

Hi, when running check for gccgo the following occurs (multiple times) locking up the console unable to adjust libports thread priority: (ipc/send) invalid destination port (not locking up the console, it was just completely filled with messages) gnu_srs: are you running your translator as root ? or, do you have a translator running as an unprivileged user ?
hm, invalid dest port that's a bug :p but i don't know why i'll have to take some time to track it down it might be a user ref overflow or something similarly tricky gnu_srs: does it happen everytime you run gccgo checks or only after the system has been alive for some time ? (some time being at least a few hours, more probably days) #### IRC, freenode, #hurd, 2013-07-05 ok, found the bug about invalid ports when adjusting priorities thhe hurd must be plagued with wrong deallocations :( i have so many problems when trying to cleanly destroy threads [[libpthread/t/fix_have_kernel_resources]]. #### IRC, freenode, #hurd, 2013-11-25 youpi: btw, my last commit on the hurd repo fixes the urefs overflow we've sometimes seen in the past in the priority adjusting code of libports #### IRC, freenode, #hurd, 2013-11-29 See also [[open_issues/libpthread/t/fix_have_kernel_resources]]. there still are some leak ports making servers spawn threads with non-elevated priorities :/ leaks* issues with your thread destruction work ? err, wait why does a port leak cause that ? because it causes urefs overflows and the priority adjustment code does check errors :p ^^ ah yes, urefs... apparently it only affects the root file system hm i'll spend an hour looking for it, and whatever i find, i'll install the upstream debian packages so you can build glibc without too much trouble we need a clean build chroot on darnassus for this situation ah yes i should have time to set things up this week end 1: send (refs: 65534) i wonder what the first right is in the root file system hm search doesn't help so i'm pretty sure it's a kernel object perhaps the host priv port could be the thread port or something ? no, not the thread port why would it have so many refs ? the task port maybe but it's fine if it overflows also, some urefs are clamped at max, so maybe this is fine ? it may be fine yes err = get_privileged_ports (&host_priv, NULL); iirc, this function should pass copies of the name, not increment the urefs counter it may behave differently if built statically o_O y would it ? no idea something doesn't behave as it should :) i'm not asking why, i'm asking where :) the proc server is also affected so it does look like it has something to do with bootstrap I'm not surprised :/ #### IRC, freenode, #hurd, 2013-11-30 so yes, the host_priv port gets a reference when calling get_privileged_ports but only in the rootfs and proc servers, probably because others use the code path to fetch it from proc ah well, it shouldn't behave differently ? get_privileged_ports get_privileged_ports is explictely described to cache references i don't get it you said it behaved differently for proc and the rootfs that's undesireable, isn't it ? yes ok so it should behave differently than it does yes right teythoon: during your work this summer, have you come across the bootstrap port of a task ? i wonder what the bootstrap port of the root file system is maybe i got the description wrong since references on host or master are deallocated where get_privileged_ports is used .. no, I do not believe i did anything bootstrap port related ok i don't need that any more fortunately i just wonder how someone could write a description so error-prone .. and apparently, this problem should affect all servers, but for some reason i didn't see it there, problem fixed ? last leak eliminated cool :) how ? i simply deallocate host_priv in addition to the others when adjusting thread priority as simple as that .. uh sure ? 
so many system calls just for reference counting yes i did that, and broke the rootfs well i'm using one right now ok maybe i should let it run a bit :) no, for me it failed on the first write teythoon: looks weird so i figured it was wrong to deallocate that port i'll reboot it and see if there may be a race thought i didn't get a reference after all or something I believe there is a race in ext2fs teythoon: that's not good news for me when doing fsysopts --update / (which remounts /) sometimes, the system hangs :/ might be a deadlock, or the rootfs dies and noone notices with my protected payload stuff, the system would reboot instead of just hanging oh which might point to a segfault in ext2fs maybe the exception message carries a bad payload makes sense exception handling in ext2fs is messy .. braunr: and, doing sleep 0.1 before remounting / makes the problem less likely to appear ugh and system load on my host system seems to affect this but it is hard to tell sometimes, this doesn't show up at all sometimes several times in a row the system load might simply indicate very short lived processes (or threads) system load on my host ah this makes me believe that it is a race somewhere all of this well, i can't get anything wrong with my patched rootfs braunr: ok, maybe I messed up or maybe you were very unlucky and there is a rare race but i'll commit anyway no, i never got it to work, always hung at the first write it won't be the first or last rare problem we'll have to live with hm then you probably did something wrong, yes that's reassuring ### IRC, freenode, #hurd, 2013-03-11 youpi: oh btw, i noticed a problem with the priority adjustement code a thread created by a privileged server (e.g. an ext2fs translator) can then spawn a server from a node owned by an unprivileged user which inherits the priority easy to fix but worth saying to keep in mind uh indeed ### IRC, freenode, #hurd, 2013-07-01 braunr: it seems as if pfinet is not prioritized enough I'm getting network connectivity issues when the system is quite loaded loaded with what ? it could be ext2fs having a lot more threads than other servers building packages I'm talking about the buildds ok ironforge or others ? they're having troubles uploading packages while building stuff ironforge and others that happened already in the past sometimes but at the moment it's really pronounced i don't think it's a priority issue i think it's swapping ah, that's not impossible indeed but why would it swap? there's a lot of available memory a big file is enough it pushes anonymous memory out to fill 900MiB memory ? 
i see 535M of swap on if yes ironforge is just building libc and for some reason, swapping is orders of magnitude slower than anything else not linking it yet i also see 1G of free memory on it that's what I meant with 900MiB so at some point, it needed a lot of memory, caused swapping and from time to time it's probably swapping back well, pfinet had all the time to swap back already I don't see why it should be still suffering from it swapping is a kernel activity ok, but once you're back, you're back unless something else pushes you out if the kernel is busy waiting for the default pager, nothing makes progress (except the default pager hopefully) sure but pfinet should be back already, since it does work so I don't see why it should wait for something the kernel is waiting and the kernel isn't preemptible although i'm not sure preemption is the problem here well what I don't understand is what we have changed that could have so much impact the only culprit I can see is the priorities we have changed recently do you mean it happens a lot more frequently than before ? yes way ok ironforge is almost unusable while building glibc I've never seen that that's weird, i don't have these problems on darnassus but i think i reboot it more often could be a scalability issue then combined with the increased priorities if is indeed running full time on the host, whereas swapping issues show the cpu being almost idle loadavg is high too so i guess there are many threads 0 971 3 -20 -20 1553 305358625 866485906 523M 63M * S 0 972 3 -20 -20 1434 125237556 719443981 483M 5.85M * S around 1k5 each that's quite usual could be the priorities then but i'm afraid that if we lower them, the number of threads will grow out of control (good thing is i'm currently working on a way to make libpthread actually remove kernel resources) but the priorities should be the same in ext2fs and pfinet, shouldn't they? yes but ext2 has a lot more threads than pfinet the scheduler only sees threads, we don't have a grouping feature right we also should remove priority depressing in glibc (in sched_yield) it affects spin locks youpi: is it normal to see priorities of 26 ? braunr: we have changed the nice factor ah, factor Mm, I'm however realizing the gnumach kernel running these systems hasn't been upgraded in a while it may not even have the needed priority levels are you using top right now on if ? hm no i don't see it any more well yes, could be the priorities .. I've rebooted with an upgraded kernel no issue so far package uploads will tell me on the long run i bet it's also a scalability issue but why would it appear now only? until the cache and other data containers start to get filled, processing is fast enough that we don't see it happening sure, but I haven't seen that in the past oh it's combined with the increased priorities even after a week building packages what i mean is, increased priorities don't affect much if threads process things fast things get longer with more data, and then increased priorities give more time to these threads and that's when the problem appears but increased priorities give more time to the pfinet threads too, don't they? yes so what is different ? but again, there are a lot more threads elsewhere with a lot more data to process sure, but that has always been so hm really, 1k5 threads does not surprise me at all :) 10k would they aren't all active either yes but right, i don't know why pfinet would be given less time than other threads ..
compared to before particularly on xen-based buildds libpthread is slower than cthreads where it doesn't even have to wait for netdde threads need more quanta to achieve the same thing perhaps processing could usually be completed in one go before, and not any more we had a discussion about this with antrik youpi: concerning the buildd issue, i don't think pfinet is affected actually but the applications using the network may be why using the network would be a difference ? normal applications have a lower priority what i mean is, pfinet simply has nothing to do, because normal applications don't have enough cpu time (what you said earlier seemed to imply pfinet had issues, i don't think it has) it should be easy to test by pinging the machine while under load we should also check the priority of the special thread used to handle packets, both in pfinet and netdde this one isn't spawned by libports and is likely to have a lower priority as well youpi: you're right, something very recent slowed things down a lot perhaps the new priority factor well not the factor but i suppose the priority range has been increased [[open_issues/nice_vs_mach_thread_priorities]]. braunr: haven't had any upload issue so far over 20 uploads while it was usually 1 every 2 before... so it was very probably the kernel missing the priority levels ok i think i've had the same problem on another virtual machine with a custom kernel i built a few weeks ago same kind of issue i guess it's fine now, and always was on darnassus ## IRC, freenode, #hurd, 2012-12-05 tschwinge: i'm currently working on a few easy bugs and i have planned improvements for libpthreads soon wotwot, which ones? pinotree: first, fixing pthread_cond_timedwait (and everything timedsomething actually) pinotree: then, fixing cancellation pinotree: and last but not least, optimizing thread wakeup i also want to try replacing spin locks and see if it does what i expect which fixes do you plan applying to cond_timedwait? see sysdeps/generic/pt-cond-timedwait.c the FIXME comment ah that well that's important :) did you have something else in mind ? hm, __pthread_timedblock... do you plan fixing directly there? i remember having seen something related to that (but not on conditions), but wasn't able to see further it has the same issue i don't remember the details, but i wrote a cthreads version that does it right in the io_select_timeout branch see http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/libthreads/cancel-cond.c?h=rbraun/select_timeout for example * pinotree looks what matters is the msg_delivered member used to synchronize sleeper and waker the waker code is in http://git.savannah.gnu.org/cgit/hurd/hurd.git/tree/libthreads/cprocs.c?h=rbraun/select_timeout never seen cthreads' code before :) soon you shouldn't have any more reason to :p ah, so basically the cthread version of the pthread cleanup stack + cancellation (ie the cancel hook) broadcasts the condition yes so a similar fix would be needed in all the places using __pthread_timedblock, that is conditions and mutexes and that's what's missing in glibc that prevents deploying a pthreads based hurd currently no that's unrelated ok the problem is how __pthread_block/__pthread_timedblock is synchronized with __pthread_wakeup libpthreads does exactly the same thing as cthreads for that, i.e.
use messages but the message alone isn't enough, since, as explained in the FIXME comment, it can arrive too late it's not a problem for __pthread_block because this function can only resume after receiving a message but it's a problem for __pthread_timedblock which can resume because of a timeout my solution is to add a flag that says whether a message was actually sent, and lock around sending the message, so that the thread resume can accurately tell in which state it is and drain the message queue if needed i see, race between the "i stop blocking because of timeout" and "i stop because i got a message" with the actual check for the real cause locking around mach_msg may seem overkill but it's not in practice, since there can only be one message at most in the message queue and i checked that in practice by limiting the message queue size and checking for such errors but again, it would be far better with mutexes only, and no spin locks i wondered for a long time why the load average was so high on the hurd under even "light" loads now i know :) ## IRC, freenode, #hurd, 2012-12-27 btw, good news: the installer works with libpthread (well, at least boots, I haven't tested the installation) i can do that if the image is available publicly youpi: the one thing i suspect won't work right is the hurd console :/ so we might need to not enable it by default braunr: you mean the mode setting? youpi: i don't know what's wrong with the hurd console, but it seems to deadlock with pthreads ah? I don't have such issue ah ? i need to retest that then Same issue as [[open_issues/term_blocking]] perhaps? ## IRC, freenode, #hurd, 2013-01-06 it seems fakeroot has become slow as hell [[pfinet_timers]]. fakeroot is the main source of dead name notifications well, a very heavy one with pthreads hurd servers, their priority is raised, precisely to give them time to handle those dead name notifications which slows everything else down, but strongly reduces the rate at which additional threads are created to handle dn notifications so this is expected ok :/ which is why i mentioned a rewrite of io_select into a completely synchronous io_poll so that the clients themselves remove their requests, instead of the servers doing it asynchronously when notified by "slows everything else down", you mean, if the servers do take cpu time? but considering the amount of messaging it requires, it will be slow on moderate to large fd sets with frequent calls (non blocking or low timeout) yes well here the problem is not really it gets slowed down but that e.g. for gtk+2.0 build, it took 5h cpu time (and counting) ah, the hurd with pthreads is noticeably slower too i'm not sure why, but i suspect the amount of internal function calls could account for some of the overhead I mean the fakeroot process not the server process hum that's not normal :) that's what I meant well, i should try to build gtk+2.0 some day i've been building glibc today and it's going fine for now it's the install stage which poses problems I've noticed it with the hurd package too the hurd is easier to build that's a good test case there are many times when fakeroot just doesn't use cpu, and it doesn't look like a select timeout issue (it still behaved that way with my fixed branch) in general, pfinet is taking really a lot of cpu time that's surprising why ?
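A minimal sketch of the flag-plus-lock approach braunr describes in the 2012-12-05 log above for synchronizing `__pthread_timedblock` with `__pthread_wakeup`. All identifiers here are invented for illustration and this is not the actual libpthread code; the point is only that the timed sleeper decides under a lock whether a wakeup message was sent, and drains the size-1 queue if it lost the race, so no stale message can wake the next block spuriously.

    #include <errno.h>
    #include <mach.h>
    #include <pthread.h>

    struct block_state
    {
      pthread_spinlock_t lock;   /* serializes the waker and the timed-out sleeper */
      int woken;                 /* set once a wakeup message has been queued */
      mach_port_t port;          /* receive right the sleeper blocks on */
    };

    /* One-time setup: allocate the wakeup port and, as in the discussion,
       cap its queue at one message.  */
    static void
    block_init (struct block_state *b)
    {
      pthread_spin_init (&b->lock, PTHREAD_PROCESS_PRIVATE);
      b->woken = 0;
      mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE, &b->port);
      mach_port_set_qlimit (mach_task_self (), b->port, 1);
    }

    /* Waker: queue a single wakeup message, unless the sleeper already gave
       up (timed out) or another waker got there first.  */
    static void
    block_wakeup (struct block_state *b)
    {
      mach_msg_header_t msg;

      pthread_spin_lock (&b->lock);
      if (!b->woken)
        {
          b->woken = 1;
          msg.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_MAKE_SEND, 0);
          msg.msgh_remote_port = b->port;
          msg.msgh_local_port = MACH_PORT_NULL;
          msg.msgh_size = sizeof msg;
          msg.msgh_id = 0;
          mach_msg (&msg, MACH_SEND_MSG, sizeof msg, 0, MACH_PORT_NULL,
                    MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
        }
      pthread_spin_unlock (&b->lock);
    }

    /* Timed sleeper: returns 0 on wakeup, ETIMEDOUT otherwise.  */
    static int
    block_timedwait (struct block_state *b, mach_msg_timeout_t timeout_ms)
    {
      union { mach_msg_header_t hdr; char space[128]; } buf;
      mach_msg_return_t mr;

      b->woken = 0;   /* in real code, reset before queuing as a waiter */
      mr = mach_msg (&buf.hdr, MACH_RCV_MSG | MACH_RCV_TIMEOUT, 0, sizeof buf,
                     b->port, timeout_ms, MACH_PORT_NULL);
      if (mr == MACH_MSG_SUCCESS)
        return 0;

      /* Timed out: a wakeup may still have been sent around the same time;
         decide under the lock.  */
      pthread_spin_lock (&b->lock);
      if (b->woken)
        {
          /* The waker won: its message was sent while it held the lock, so it
             is already queued; drain it to keep the queue empty.  */
          pthread_spin_unlock (&b->lock);
          mach_msg (&buf.hdr, MACH_RCV_MSG, 0, sizeof buf, b->port,
                    MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
          return 0;
        }
      b->woken = 1;   /* claim the slot so a late waker will not send at all */
      pthread_spin_unlock (&b->lock);
      return ETIMEDOUT;
    }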
fakeroot uses it a lot I know but still 40% cpu time is not normal I don't see why it would need so much cpu time 17:57 < braunr> but considering the amount of messaging it requires, it will be slow on moderate to large fd sets with frequent calls (non blocking or low timeout) by "it", what did you mean? I thought you meant the synchronous select implementation something irrelevant here yes what matters here is the second part of my sentence, which is what i think happens now you mean it's the IPC overhead which is taking so much time? i mean, it doesn't matter if io_select synchronously removes requests, or does it by destroying ports and relying on notifications, there are lots of messages in this case anyway yes why "a lot" ? more than one per select call? yes why ? one per fd then one to wait there are two in faked hum :) i remember the timeout is low but i don't remember its value the timeout is NULL in faked the client then the client doesn't use select i must be confused i thought it did through the fakeroot library but yes, i see the same behaviour, 30 times more cpu for pfinet than faked-tcp or let's say between 10 to 30 and during my tests, these were the moments the kernel would create lots of threads in servers and fail because of lack of memory, either kernel memory, or virtual in the client space (filled with thread stacks) it could be due to threads spinning too much (inside pfinet) attaching a gdb shows it mostly inside __pthread_block uh, how awful pfinet's select is a big global lock whenever something happens all threads get woken up BKL! * pinotree runs we have many big hurd locks :p it's rather a big translator lock more than a global lock it seems, a global condvar too, isn't it ? sure we have a similar problem with the hurd-specific cancellation code, it's in my todo list with io_select ah, no, the condvar is not global ## IRC, freenode, #hurd, 2013-01-14 *sigh* thread cancellable is totally broken :( cancellation* it looks like playing with thread cancellability can make some functions completely restart (e.g. one call to printf to write twice its output) [[open_issues/git_duplicated_content]], [[open_issues/git-core-2]]. * braunr is cooking a patch to fix pthread cancellation in pthread_cond_{,timed}wait, smells good youpi: ever heard of something that would make libc functions "restart" ? you mean as a feature, or as a bug ? when changing the pthread cancellation state of a thread, i sometimes see printf print its output twice or perhaps after a signal dispatch? i'll post my test code that could be a duplicate write due to restarting after signal http://www.sceen.net/~rbraun/pthreads_test_cancel.c #include <pthread.h> #include <sched.h> #include <stdarg.h> #include <stdio.h> #include <stdlib.h> static pthread_cond_t cond = PTHREAD_COND_INITIALIZER; static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER; static int predicate; static int ready; static int cancelled; static void uncancellable_printf(const char *format, ...)
{ int oldstate; va_list ap; va_start(ap, format); pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &oldstate); vprintf(format, ap); pthread_setcancelstate(oldstate, &oldstate); va_end(ap); } static void * run(void *arg) { uncancellable_printf("thread: setting ready\n"); ready = 1; uncancellable_printf("thread: spin until cancellation is sent\n"); while (!cancelled) sched_yield(); uncancellable_printf("thread: locking mutex\n"); pthread_mutex_lock(&mutex); uncancellable_printf("thread: waiting for predicate\n"); while (!predicate) pthread_cond_wait(&cond, &mutex); uncancellable_printf("thread: unlocking mutex\n"); pthread_mutex_unlock(&mutex); uncancellable_printf("thread: exit\n"); return NULL; } int main(int argc, char *argv[]) { pthread_t thread; uncancellable_printf("main: create thread\n"); pthread_create(&thread, NULL, run, NULL); uncancellable_printf("main: spin until thread is ready\n"); while (!ready) sched_yield(); uncancellable_printf("main: sending cancellation\n"); pthread_cancel(thread); uncancellable_printf("main: setting cancelled\n"); cancelled = 1; uncancellable_printf("main: joining thread\n"); pthread_join(thread, NULL); uncancellable_printf("main: exit\n"); return EXIT_SUCCESS; } youpi: i'd see two calls to write, the second because of a signal, as normal, as long as the second call resumes, but not restarts after finishing :/ or restarts because nothing was done (or everything was entirely rolled back) well, with an RPC you may not be sure whether it's finished or not ah we don't really have rollback i don't really see the difference with a syscall there the kernel controls the interruption in the case of the syscall except that write is normally atomic if i'm right it can't happen on the way back to userland but that could be exactly the same with RPCs while perhaps it can happen on the mach_msg back to userland back to userland ok, back to the application, no anyway, that's a side issue i'm fixing a few bugs in libpthread and noticed that (i should soon have patches to fix - at least partially - thread cancellation and timed blocking) i was just wondering how cancellation how handled in glibc wrt libpthread I don't know (because the non standard hurd cancellation has nothing to do with pthread cancellation)à ok s/how h/is h/ ### IRC, freenode, #hurd, 2013-01-15 braunr: Re »one call to printf to write twice its output«: sounds familiar: http://www.gnu.org/software/hurd/open_issues/git_duplicated_content.html and http://www.gnu.org/software/hurd/open_issues/git-core-2.html tschwinge: what i find strange with the duplicated operations i've seen is that i merely use pthreads and printf, nothing else no setitimer, no alarm, no select so i wonder how cancellation/syscall restart is actually handled in our glibc but i agree with you on the analysis ### IRC, freenode, #hurd, 2013-01-16 neal: do you (by any chance) remember if there could possibly be spurious wakeups in your libpthread implementation ? braunr: There probably are. but I don't recall i think the duplicated content issue is due to the libmach/glibc mach_msg wrapper which restarts a message send if interrupted Hrm, depending on which point it has been interrupted you mean? 
yes not sure yet and i could be wrong but i suspect that if interrupted after send and during receive, the restart might be wrongfully done i'm currently reworking the timed* pthreads functions, doing the same kind of changes i did last summer when working on select (since implementing the timeout at the server side requires pthread_cond_timedwait) and i limit the message queue size of the port used to wake up threads to 1 and it seems i have the same kind of problems, i.e. blocking because of a second, unexpected send i'll try using __mach_msg_trap directly and see how it goes Hrm, mach/msg.c:__mach_msg does look correct to me, but yeah, won't hurt to confirm this by looking at what direct usage of __mach_msg_trap is doing. tschwinge: can i ask if you still have a cthreads based hurd around ? tschwinge: and if so, to send me libthreads.so.0.3 ... :) braunr: darnassus:~tschwinge/libthreads.so.0.3 call 19c0 so, cthreads were also using the glibc wrapper and i never had a single MACH_SEND_INTERRUPTED or a busy queue :/ (IOW, no duplicated messages, and the wrapper indeed looks correct, so it's something else) (Assuming Mach is doing the correct thing re interruptions, of course...) mach doesn't implement it it's explicitly meant to be done in userspace mach merely reports the error i checked the osfmach code of libmach, it's almost exactly the same as ours Yeah, I meant Mach returns the interruption code but anyway completed the RPC. ok i don't expect mach wouldn't do it right the only difference in osf libmach is that, when retrying, MACH_SEND_INTERRUPT|MACH_RCV_INTERRUPT are both masked (for both the send/send+receive and receive cases) Hrm. but they say it's for performance, i.e. mach won't take the slow path because of unexpected bits in the options we probably should do the same anyway ### IRC, freenode, #hurd, 2013-01-17 tschwinge: i think our duplicated RPCs come from hurd/intr-msg.c:148 (err == MACH_SEND_INTERRUPTED but !(option & MACH_SEND_MSG)) a thread is interrupted by a signal meant for a different thread hum no, still not that .. or maybe .. :) Hrm. Why would it matter for the current thread for which reason (different thread) mach_msg_trap returns *_INTERRUPTED? mach_msg wouldn't return it, as explained in the comment the signal thread would, to indicate the send was completed but the receive must be retried however, when retrying, the original user_options are used again, which contain MACH_SEND_MSG i'll test with a modified version that masks it tschwinge: hm no, doesn't fix anything :( ### IRC, freenode, #hurd, 2013-01-18 the duplicated rpc calls is one i find very very frustrating :/ you mean the dup writes we've seen lately?
yes k ### IRC, freenode, #hurd, 2013-01-19 all right, i think the duplicated message sends are due to thread creation the duplicated message seems to be sent by the newly created thread arg no, misread ### IRC, freenode, #hurd, 2013-01-20 tschwinge: youpi: about the diplucated messages issue, it seems to be caused by two threads (with pthreads) doing an rpc concurrently duplicated* ### IRC, freenode, #hurd, 2013-01-21 ah, found something interesting tschwinge: there seems to be a race on our file descriptors the content written by one thread seems to be retained somewhere and another thread writing data to the file descriptor will resend what the first already did it could be a FILE race instead of fd one though yes, it's not at the fd level, it's above so good news, seems like the low level message/signalling code isn't faulty here all right, simple explanation: our IO_lockfile functions are no-ops braunr: i found that out days ago, and samuel said they were okay [[glibc]], `flockfile`/`ftrylockfile`/`funlockfile`. ## IRC, freenode, #hurd, 2013-01-15 hmm, looks like subhurds have been broken by the pthreads patch :/ arg, we really do have broken subhurds :(( time for an immersion in the early hurd bootstrapping stuff Hrm. Narrowed down to cthreads -> pthread you say. i think so but i think the problem is only exposed it was already present before even for the main hurd, i sometimes have systems blocking on exec there must be a race there that showed far less frequently with cthreads youpi: we broke subhurds :/ ? i can't start one exec seems to die and prevent the root file system from progressing there must be a race, exposed by the switch to pthreads arg, looks like exec doesn't even reach main :( now, i'm wondering if it could be the tls support that stops exec although i wonder why exec would start correctly on a main hurd, and not on a subhurd :( i even wonder how much progress ld.so.1 is able to make, and don't have much idea on how to debug that ### IRC, freenode, #hurd, 2013-01-22 hm, subhurds seem to be broken because of select damn select ! hm i see, we can't boot a subhurd that still uses libthreads from a main hurd that doesn't the linker can't find it and doesn't start exec pinotree: do you understand what the fmh function does in sysdeps/mach/hurd/dl-sysdep.c ? i think we broke subhurds by fixing vm_map with size 0 braunr: no idea, but i remember thomas talking about this code [[open_issues/vm_map_kernel_bug]] it checks for KERN_INVALID_ADDRESS and KERN_NO_SPACE and calls assert_perror(err); to make sure it's one of them but now, KERN_INVALID_ARGUMENT can be returned ok i understand what it does and youpi has changed the code, so he does too (now i'm wondering why he didn't think of it when we fixed vm_map size with 0 but his head must already be filled with other things so ..)
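For reference, a hedged sketch of the guard being discussed here for the `fmh` function in glibc's `sysdeps/mach/hurd/dl-sysdep.c` (the approach confirmed just below): skip the probing `vm_map` call entirely when the computed size is zero, since a fixed gnumach now rejects zero-size maps with `KERN_INVALID_ARGUMENT`. The names `fmha`, `fmhs` and `err` follow the discussion; the surrounding code is only approximated, not copied from glibc.

    /* Fragment: inside fmh()'s loop, probe/reserve the region only when
       there is actually something to reserve.  fmha is the candidate
       address and fmhs the size; both are locals of fmh() in the real
       code.  */
    if (fmhs != 0)
      {
        err = __vm_map (__mach_task_self (), &fmha, fmhs, 0, 0,
                        MACH_PORT_NULL, 0, 1,
                        VM_PROT_NONE, VM_PROT_NONE, VM_INHERIT_NONE);
        /* With the zero-size case skipped, the old assumption holds again:
           only "no space" or "invalid address" (or success) can come back.  */
        assert (err == KERN_SUCCESS
                || err == KERN_NO_SPACE
                || err == KERN_INVALID_ADDRESS);
      }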
anyway, once this is dealt with, we get subhurds back :) yes, with a slight change, my subhurd starts again \o/ youpi: i found the bug that prevents subhurds from booting it's caused by our fixing of vm_map with size 0 when ld.so.1 starts exec, the code in sysdeps/mach/hurd/dl-sysdep.c fails because it doesn't expect the new error code we introduced (the fmh functions) ah :) good :) adding KERN_INVALID_ARGUMENT to the list should do the job, but if i understand the code correctly, checking if fmhs isn't 0 before calling vm_map should do the work too s/do the work/work/ i'm not sure which is the preferred way otherwise I believe fmh could be just fixed to avoid calling vm_map in the !fmhs case yes that's what i currently do at the start of the loop, just after computing it seems to work so far ## IRC, freenode, #hurd, 2013-01-22 i have almost completed fixing both cancellation and timeout handling, but there are still a few bugs remaining fyi, the related discussion was https://lists.gnu.org/archive/html/bug-hurd/2012-08/msg00057.html ## IRC, freenode, #hurd, 2014-01-01 braunr: I have an issue with tls_thread_leak int main(void) { pthread_create(&t, NULL, foo, NULL); pthread_exit(0); } this fails at least with the libpthread without your libpthread thread termination patch because for the main thread, tcb->self doesn't contain thread_self where is tcb->self supposed to be initialized for the main thread? there's also the case of fork()ing from main(), then calling pthread_exit() (calling pthread_exit() from the child) the child would inherit the tcb->self value from the parent, and thus pthread_exit() would try to kill the father can't we still do tcb->self = self, even if we don't keep a reference over the name? (the pthread_exit() issue above should be fixed by your thread termination patch actually) Mmm, it seems the thread_t port that the child inherits actually properly references the thread of the child, and not the thread of the father? “For the name we use for our own thread port, we will insert the thread port for the child main user thread after we create it.” Oh, good :) and, “Skip the name we use for any of our own thread ports.”, good too :) youpi: reading youpi: if we do tcb->self = self, we have to keep the reference this is strange though, i had tests that did exactly what you're talking about, and they didn't fail why? if you don't keep the reference, it means you deallocate self with the thread termination patch, tcb->self is not used for destruction hum no it isn't but it must be deallocated at some point if it's not temporary normally, libpthread should set it for the main thread too, i don't understand I don't see which code is supposed to do it sure it needs to be deallocated at some point but does tcb->self have to wear the reference? init_routine should do it it calls __pthread_create_internal which allocates the tcb i think at some point, __pthread_setup should be called for it too but what makes pthread->kernel_thread contain the port for the thread? but i have to check that __pthread_thread_alloc does that so normally it should work is your libpthread up to date as well ? no, as I said it doesn't contain the thread destruction patch ah that may explain but the tcb->self uninitialized issue happens on darnassus too it just doesn't happen to crash because it's not used that's weird :/ see ~youpi/test.c there for instance humpf i don't see why :/ i'll debug that later youpi: did you find the problem ?
no I'm working on fixing the libpthread hell in the glibc debian package :) i.e. replace a dozen patches with a git snapshot ah you reverted commit +a i imagine it's hairy :) not too much actually wow :) with the latest commits, things have converged it's now about small build details I just take time to make sure I'm getting the same source code in the end :) :) i hope i can determine what's going wrong tonight youpi: with mach_print, I do see self being set by libpthread .. but to something other than 0 ? yes, oddly the other thread doesn't have the same value are you sure it's self that you're displaying with the assembly ? oops, english see test2 so I'm positive well, there obviously is a bug but are you certain your assembly code displays the thread port name ? I'm certain it displays tcb->self oh wait, hexadecimal, ok and the value happens to be what mach_thread_self returns ah right ah, right, names are usually decimals :) hm what's the problem with test2 ? none ok I was just checking what happens on fork from another thread ok i do have 0x68 now so the self field gets erased somehow 15:34 < youpi> this fails at least with the libpthread without your libpthread thread termination patch how does it fail ? ../libpthread/sysdeps/mach/pt-thread-halt.c:44: __pthread_thread_halt: Unexpected error: (ipc/send) invalid destination port. hm i don't have that problem on darnassus with the new libc? the pthread destruction patch actually doesn't use the tcb->self name if i'm right yes what is tcb->self used for ? it used to be used by pt-thread-halt but is darnassus using your thread destruction patch? as I said, since your thread destruction patch doesn't use tcb->self, it doesn't have the issue the patched libpthread merely uses the sysdeps kernel_thread member ok it's the old libpthread against the new libc which has issues yes it is so for me, the only thing to do is make sure tcb->self remains valid we could simply add a third user ref but i don't like the idea well, as you said the issue is rather that tcb->self gets overwritten there is no reason why it should the value is still valid when init_routine exits, so it must be in libc or perhaps for some reason tls gets initialized twice maybe and thus what libpthread's init writes to is not what's used later i've add a print in pthread_create, to see if self actually got overwritten and it doesn't there is a discrepancy between the tcb member in libpthread and what libc uses for tls added* (the print is at the very start of pthread_create, and displays the thread name of the caller only) well, yes, for the main thread libpthread shouldn't be allocating a new tcb and just use the existing one ? the main thread's tcb is initialized before the threading library iirc hmm it would make sense if we actually had non-threaded programs :) at any rate, the address of the tcb allocated by libpthread is not put into registers how does it get there for the other threads ?
__pthread_setup does it so looks like dl_main is called after init_routine and it then calls init_tls init_tls returns the tcb for the main thread, and that's what overrides the libpthread one yes, _hurd_tls_init is called very early, before init_routine __pthread_create_internal could fetch the tcb pointer from gs:0 when it's the main thread so there is something i didn't get right i thought _hurd_tls_init was called as part of dl_main well, it's not a bug of yours, it has always been a bug :) which is called *after* init_routine and that explains why the libpthread tcb isn't the one installed in the thread register i can actually check that quite easily where do you see dl_main called after init_routine? well no i got that wrong somehow or i'm unable to find it again let's see init_routine is called by init which is called by _dl_init_first which i can only find in the macro RTLD_START_SPECIAL_INIT with print traces, i see dl_main called before init_routine so yes, libpthread should reuse it the tcb isn't overridden, it's just never installed i'm not sure how to achieve that cleanly well, it is installed, by _hurd_tls_init it's the linker which creates the main thread's tcb and calls _hurd_tls_init to install it before the thread library enters into action agreed ### IRC, freenode, #hurd, 2014-01-14 btw, are you planning to do something with regard to the main thread tcb initialization issue ? well, I thought you were working on it ok i wasn't sure ### IRC, freenode, #hurd, 2014-01-19 i have some fixup code for the main thread tcb but it sometimes crashes on tcb deallocation is there anything particular that you would know about the tcb of the main thread ? (that could help explaining this) Mmmm, I don't think there is anything particular doesn't look like the first tcb can be reused safely i think we should instead update the thread register to point to the pthread tcb what do you mean by "the first tcb" exactly? ## IRC, freenode, #hurd, 2014-01-03 braunr: hurd from your repo can't boot. restored debian one gg0: it does boot gg0: but you need everything (gnumach and glibc) in order to make it work i think youpi did take care of compatibility with older kernels braunr: so do we need a rebuilt libc for the latest hurd from git ? teythoon: no, the hurd isn't the problem ok good the problem is the libports_stability patch what about it ? the hurd can't work correctly without it since the switch to pthreads because of subtle bugs concerning resource recycling ok these have been fixed recently by youpi and me (youpi fixed them exactly as i did, which made my life very easy when merging :)) there is also the problem of the stack sizes, which means the hurd servers could use 2M stacks with an older glibc or perhaps it chokes on an error when attempting to set the stack size because it was unsupported i don't know that may be what gg0 suffered from yes, both gnumach and eglibc were from debian. seems i didn't manually upgrade eglibc from yours i'll reinstall them now. let's screw it up once again :) bbl ok it boots # apt-get install {hurd,hurd-dev,hurd-libs0.3}=1:0.5.git20131101-1+rbraun.7 {libc0.3,libc0.3-dev,libc0.3-dbg,libc-dev-bin}=2.17-97+hurd.0+rbraun.1+threadterm.1 there must be a simpler way besides apt-pinning making it a real "experimental" release might help with -t option for instance btw locales still segfaults rpctrace from teythoon gets stuck at http://paste.debian.net/plain/74072/ ("rpctrace locale-gen", last 300 lines)
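Coming back to the main-thread TCB issue from the 2014-01-01 log above: the direction suggested there is to have `__pthread_create_internal` adopt the TCB that `_hurd_tls_init` already installed for the main thread instead of allocating a second, never-installed one. The snippet below only sketches how that installed TCB can be read back on i386, where the first word of the Hurd `tcbhead_t` points to the structure itself, so it is reachable at `%gs:0` as mentioned in the discussion; the helper name is invented and the actual libpthread integration would look different.

    #include <stdio.h>

    /* Illustrative only: fetch the TCB the dynamic linker installed in the
       i386 thread register.  The Hurd tcbhead_t starts with a pointer to
       itself, so %gs:0 yields the TCB address.  */
    static inline void *
    current_tcb (void)
    {
      void *tcb;
      __asm__ ("movl %%gs:0, %0" : "=r" (tcb));
      return tcb;
    }

    int
    main (void)
    {
      /* In the main thread, this is the TCB set up by _hurd_tls_init, i.e.
         the one a fixed __pthread_create_internal would reuse rather than
         replace with a fresh allocation.  */
      printf ("main thread tcb: %p\n", current_tcb ());
      return 0;
    }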