[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]

[[!tag open_issue_hurd]]

Hurd servers / VFS libraries are multithreaded.


# Implementation

  * Well-known threading libraries:

      * [[hurd/libthreads]]

      * [[hurd/libpthread]]


# Design

See [[hurd/libports]]: roughly one thread per incoming request.

This is not the best approach: it doesn't really make sense to scale the
number of worker threads with the number of incoming requests; instead, the
worker threads should be scaled according to the backends' characteristics.
The [[hurd/Critique]] should have some more on this.

[*Event-based Concurrency
Control*](http://soft.vub.ac.be/~tvcutsem/talks/presentations/T37_nobackground.pdf),
Tom Van Cutsem, 2009.


## IRC, freenode, #hurd, 2012-07-08

    braunr: about limiting number of threads, IIRC the problem is that for
      some threads, completing their work means triggering some action in
      the server itself, and waiting for it (with, unfortunately, some lock
      held), which never terminates when we can't create new threads any
      more
    youpi: the number of threads should be limited, but not globally by
      libports
    pagers should throttle their writeback requests
    right


## IRC, freenode, #hurd, 2012-07-16

    hm interesting
    when many threads are created to handle requests, they automatically
      create a pool of worker threads by staying around for some time
    this time is given in the libports call
    but the threads always remain
    they must be used in turn each time a new request comes in
    ah no :(, they're maintained by the periodic sync :(
    hm, still not that, so weird
    braunr: yes, that's a known problem: unused threads should go away after
      some time, but that doesn't actually happen
    don't remember though whether it's broken for some reason, or simply not
      implemented at all...
    (this was already a known issue when thread throttling was discussed
      around 2005...)
    antrik: ok
    hm threads actually do finish ..
    libthreads retains them in a pool for faster allocations
    hm, it's worse than i thought
    i think the hurd does its job well
    the cthreads code never reaps threads
    when threads are finished, they just wait until assigned a new invocation
    i don't understand ports_manage_port_operations_multithread :/
    i think i get it
    why do people write things in such a complicated way ..
    such code is error prone and confuses anyone
    i wonder how well nested functions interact with threads when sharing
      variables :/
    the simple idea of nested functions hurts my head
    do you see my point ? :)
    variables on the stack automatically shared between threads, without the
      need to explicitly pass them by address
    braunr: I don't understand. why would variables on the stack be shared
      between threads?...
    antrik: one function declares two variables, two nested functions, and
      uses these in separate threads
    are the local variables still "local" ?
    braunr: I would think so? why wouldn't they? threads have separate
      stacks, right?...
    I must admit though that I have no idea how accessing local variables
      from the parent function works at all...
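The following is an illustrative sketch only (plain POSIX threads, made-up
names; not the actual libports code) of the pattern discussed above: with the
GNU C nested-function extension, a thread can be started directly on a nested
function, which then reaches the enclosing function's local variables without
them being passed by address.  The compiler makes this work by building a
trampoline, historically placed on the stack.

    /* Sketch: a thread started on a GCC nested function (GNU C extension).
       Hypothetical example, not taken from the Hurd sources.  */
    #include <pthread.h>
    #include <stdio.h>

    int
    main (void)
    {
      int counter = 0;          /* local variable of the enclosing function */

      /* Nested function: refers to `counter' directly, without it being
         passed by address.  Taking its address forces the compiler to
         build a trampoline so the pointer carries the enclosing frame.  */
      void *worker (void *unused)
      {
        (void) unused;
        counter++;              /* implicitly shares the parent's local */
        return NULL;
      }

      pthread_t t;
      pthread_create (&t, NULL, worker, NULL);
      pthread_join (t, NULL);

      printf ("counter = %d\n", counter);
      return 0;
    }

This only stays valid while the enclosing frame is live, and the trampoline
typically requires an executable stack, which is what the security remark
further down in this log refers to.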
    me neither
    why don't demuxers get a generic void * like every callback does :(( ?
    antrik: they get pointers to the input and output messages only
    why is this a problem?
    ports_manage_port_operations_multithread can be called multiple times in
      the same process
    each call must have its own context
    currently this is done by using nested functions
    also, why demuxers return booleans while mach_msg_server_timeout happily
      ignores them :(
    callbacks shouldn't return anything anyway
    but then you have a totally meaningless "return 1" in the middle of the
      code
    i'd advise not using a single nested function
    I don't understand the remark about nested functions
    they're just horrible extensions
    the compiler completely hides what happens behind the scenes, and nasty
      bugs could come out of that
    i'll try to rewrite ports_manage_port_operations_multithread without them
      and see if it changes anything
    but it's not easy
    also, it makes debugging harder :p
    i suspect gdb hangs are due to that, since threads directly start on a
      nested function
    and if i'm right, they are created on the stack
    (which is also horrible for security concerns, but that's another story)
    (at least the trampolines)
    I seriously doubt it will change anything... but feel free to prove me
      wrong :-)
    well, i can see really weird things, but it may have nothing to do with
      the fact functions are nested
    (i still strongly believe those shouldn't be used at all)

A sketch of the explicit-context alternative mentioned in this log is
included at the bottom of this page.


# Alternative approaches:

  * Continuation-passing style

      * [[microkernel/Mach]] internally [[uses
        continuations|microkernel/mach/continuation]], too.

  * [[Erlang-style_parallelism]]

  * [[!wikipedia Actor_model]]; also see overlap with
    {{$capability#wikipedia_object-capability_model}}.

  * [libtcr - Threaded Coroutine Library](http://oss.linbit.com/libtcr/)


---

See also: [[multiprocessing]].
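The following is a minimal sketch only (made-up names, plain POSIX threads;
not the actual libports interface) of the explicit-context alternative
mentioned in the 2012-07-16 log: each call gets its own context structure,
passed by address through a generic `void *` parameter, instead of being
captured implicitly by nested functions.

    /* Sketch: per-call context passed explicitly through a void * pointer.
       Hypothetical example, not taken from the Hurd sources.  */
    #include <pthread.h>
    #include <stdio.h>

    /* One instance per "manage operations" call, so that several such calls
       can coexist in the same process without sharing state implicitly.  */
    struct server_context
    {
      const char *name;
      int thread_timeout;       /* how long an idle worker thread lingers */
    };

    static void *
    worker_thread (void *arg)
    {
      struct server_context *ctx = arg;

      /* The worker reaches its context through `ctx', not through
         variables captured from an enclosing function.  */
      printf ("%s: idle timeout %d ms\n", ctx->name, ctx->thread_timeout);
      return NULL;
    }

    int
    main (void)
    {
      struct server_context ctx = { "example bucket", 2 * 60 * 1000 };
      pthread_t t;

      pthread_create (&t, NULL, worker_thread, &ctx);
      pthread_join (t, NULL);
      return 0;
    }

The cost is a little more typing at each call site; the benefit is that
nothing depends on trampolines or an executable stack, and each call's state
is plainly visible in the debugger.  Whether such a rewrite of
ports_manage_port_operations_multithread changes the observed gdb behaviour
is exactly what the log above leaves open.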