author     Thomas Schwinge <thomas@codesourcery.com>  2012-12-11 11:04:11 +0100
committer  Thomas Schwinge <thomas@codesourcery.com>  2012-12-11 11:04:11 +0100
commit     1c36eb6c025084af76c5b930ca4adc5953560fd7 (patch)
tree       8ac3bcf1f785997cce064c65dcd729be4c5dcb0b /open_issues/multithreading.mdwn
parent     a0290d994030cd14bdccbb97d2a2c022d1d2428c (diff)
parent     bcfc058a332da0a2bd2e09e13619be3e2eb803a7 (diff)
Merge remote-tracking branch 'fp/master'
Diffstat (limited to 'open_issues/multithreading.mdwn')
-rw-r--r--  open_issues/multithreading.mdwn  154
1 file changed, 154 insertions, 0 deletions
diff --git a/open_issues/multithreading.mdwn b/open_issues/multithreading.mdwn
index 5924d3f9..f42601b4 100644
--- a/open_issues/multithreading.mdwn
+++ b/open_issues/multithreading.mdwn
@@ -49,6 +49,160 @@ Tom Van Cutsem, 2009.
<youpi> right
+## IRC, freenode, #hurd, 2012-07-16
+
+ <braunr> hm interesting
+ <braunr> when many threads are created to handle requests, they
+ automatically create a pool of worker threads by staying around for some
+ time
+ <braunr> this time is given in the libport call
+ <braunr> but the threads always remain
+ <braunr> they must be used in turn each time a new request comes in
+ <braunr> ah no :(, they're maintained by the periodic sync :(
+ <braunr> hm, still not that, so weird
+ <antrik> braunr: yes, that's a known problem: unused threads should go away
+ after some time, but that doesn't actually happen
+ <antrik> don't remember though whether it's broken for some reason, or
+ simply not implemented at all...
+ <antrik> (this was already a known issue when thread throttling was
+ discussed around 2005...)
+ <braunr> antrik: ok
+ <braunr> hm threads actually do finish ..
+ <braunr> libthreads retains them in a pool for faster allocations
+ <braunr> hm, it's worse than i thought
+ <braunr> i think the hurd does its job well
+ <braunr> the cthreads code never reaps threads
+ <braunr> when threads are finished, they just wait until assigned a new
+ invocation
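+
+The idle timeout discussed above would, in effect, look like the following
+standalone sketch (plain pthreads, with a hypothetical `struct pool`; the real
+libports/cthreads data structures are different): a worker really terminates
+after idling for the given time, which is exactly the step the cthreads-based
+code never performs.
+
+    #include <errno.h>
+    #include <pthread.h>
+    #include <time.h>
+
+    /* Hypothetical pool state -- not the actual libports structures.  */
+    struct pool {
+      pthread_mutex_t lock;
+      pthread_cond_t  wakeup;
+      int             pending;    /* queued requests */
+      int             idle_secs;  /* the "timeout" handed to libports */
+    };
+
+    /* Serve requests; exit for real after idling for idle_secs.  */
+    static void *
+    worker (void *arg)
+    {
+      struct pool *p = arg;
+
+      pthread_mutex_lock (&p->lock);
+      for (;;)
+        {
+          struct timespec deadline;
+          clock_gettime (CLOCK_REALTIME, &deadline);
+          deadline.tv_sec += p->idle_secs;
+
+          while (p->pending == 0)
+            if (pthread_cond_timedwait (&p->wakeup, &p->lock, &deadline)
+                == ETIMEDOUT)
+              goto out;            /* idle too long: let the thread die */
+
+          p->pending--;
+          pthread_mutex_unlock (&p->lock);
+          /* ... handle one request ...  */
+          pthread_mutex_lock (&p->lock);
+        }
+
+     out:
+      pthread_mutex_unlock (&p->lock);
+      return NULL;
+    }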
+
+ <braunr> i don't understand ports_manage_port_operations_multithread :/
+ <braunr> i think i get it
+ <braunr> why do people write things in such a complicated way ..
+ <braunr> such code is error prone and confuses anyone
+
+ <braunr> i wonder how well nested functions interact with threads when
+ sharing variables :/
+ <braunr> the simple idea of nested functions hurts my head
+ <braunr> do you see my point ? :) variables on the stack automatically
+ shared between threads, without the need to explicitly pass them by
+ address
+ <antrik> braunr: I don't understand. why would variables on the stack be
+ shared between threads?...
+ <braunr> antrik: one function declares two variables, two nested functions,
+ and uses these in separate threads
+ <braunr> are the local variables still "local"
+ <braunr> ?
+ <antrik> braunr: I would think so? why wouldn't they? threads have separate
+ stacks, right?...
+ <antrik> I must admit though that I have no idea how accessing local
+ variables from the parent function works at all...
+ <braunr> me neither
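+
+For illustration, a minimal standalone example of the situation being
+discussed (GCC nested functions, compile with `-pthread`): both threads run a
+nested function that updates a local variable of the enclosing function, and
+taking the nested function's address makes GCC build a trampoline on the
+parent's stack -- hence the executable-stack and debugging concerns mentioned
+below.
+
+    #include <pthread.h>
+    #include <stdio.h>
+
+    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
+
+    int
+    main (void)
+    {
+      int counter = 0;                  /* lives on main's stack */
+
+      /* Nested function (GCC extension): it can name main's locals.  */
+      void *worker (void *arg)
+      {
+        (void) arg;
+        for (int i = 0; i < 1000; i++)
+          {
+            pthread_mutex_lock (&lock);
+            counter++;                  /* touches main's local directly */
+            pthread_mutex_unlock (&lock);
+          }
+        return NULL;
+      }
+
+      pthread_t a, b;
+      pthread_create (&a, NULL, worker, NULL);
+      pthread_create (&b, NULL, worker, NULL);
+      pthread_join (a, NULL);           /* main's frame must outlive them */
+      pthread_join (b, NULL);
+
+      printf ("counter = %d\n", counter);   /* 2000: one shared variable */
+      return 0;
+    }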
+
+ <braunr> why don't demuxers get a generic void * like every callback does
+ :((
+ <antrik> ?
+ <braunr> antrik: they get pointers to the input and output messages only
+ <antrik> why is this a problem?
+ <braunr> ports_manage_port_operations_multithread can be called multiple
+ times in the same process
+ <braunr> each call must have its own context
+ <braunr> currently this is done by using nested functions
+ <braunr> also, why do demuxers return booleans while mach_msg_server_timeout
+ happily ignores them :(
+ <braunr> callbacks shouldn't return anything anyway
+ <braunr> but then you have a totally meaningless "return 1" in the middle
+ of the code
+ <braunr> i'd advise not using a single nested function
+ <antrik> I don't understand the remark about nested function
+ <braunr> they're just horrible extensions
+ <braunr> the compiler completely hides what happens behind the scenes, and
+ nasty bugs could come out of that
+ <braunr> i'll try to rewrite ports_manage_port_operations_multithread
+ without them and see if it changes anything
+ <braunr> but it's not easy
+ <braunr> also, it makes debugging harder :p
+ <braunr> i suspect gdb hangs are due to that, since threads directly start
+ on a nested function
+ <braunr> and if i'm right, they are created on the stack
+ <braunr> (which is also horrible for security concerns, but that's another
+ story)
+ <braunr> (at least the trampolines)
+ <antrik> I seriously doubt it will change anything... but feel free to
+ prove me wrong :-)
+ <braunr> well, i can see really weird things, but it may have nothing to do
+ with the fact functions are nested
+ <braunr> (i still strongly believe those shouldn't be used at all)
+
+
+## IRC, freenode, #hurd, 2012-08-31
+
+ <braunr> and the hurd is anything but scalable
+ <gnu_srs> I thought scalability was built-in already, at least for hurd??
+ <braunr> built in ?
+ <gnu_srs> designed in
+ <braunr> i guess you think that because you read "aggressively
+ multithreaded" ?
+ <braunr> well, a system that is unable to control the number of threads it
+ creates for no valid reason and uses global locks just about everywhere
+ isn't really scalable
+ <braunr> it's neither smp nor memory scalable
+ <gnu_srs> most modern OSes have multi-cpu support.
+ <braunr> that doesn't mean they scale
+ <braunr> bsd sucks in this area
+ <braunr> it got better in recent years but they're way behind linux
+ <braunr> linux has this magic thing called rcu
+ <braunr> and i want that in my system, from the beginning
+ <braunr> and no, the hurd was never designed to scale
+ <braunr> that's obvious
+ <braunr> a very common mistake of the early 90s
+
+
+## IRC, freenode, #hurd, 2012-09-06
+
+ <braunr> mel-: the problem with such a true client/server architecture is
+ that the scheduling context of clients is not transferred to servers
+ <braunr> mel-: and the hurd creates threads on demand, so if it's too slow
+ to process requests, more threads are spawned
+ <braunr> to prevent hurd servers from creating too many threads, they are
+ given a higher priority
+ <braunr> and it causes increased latency for normal user applications
+ <braunr> a better way, which is what modern synchronous microkernel based
+ systems do
+ <braunr> is to transfer the scheduling context of the client to the server
+ <braunr> the server thread behaves like the client thread from the
+ scheduler perspective
+ <gnu_srs> how can creating more threads ease the slowness, is that a design
+ decision??
+ <mel-> what would be needed to implement this?
+ <braunr> mel-: thread migration
+ <braunr> gnu_srs: is that what i wrote ?
+ <mel-> does mach support it?
+ <braunr> mel-: some versions do yes
+ <braunr> mel-: not ours
+ <gnu_srs> (21:49:03) braunr: mel-: and the hurd creates threads on demand,
+ so if it's too slow to process requests, more threads are spawned
+ <braunr> of course it's a design decision
+ <braunr> it doesn't "ease the slowness"
+ <braunr> it makes servers able to use multiple processors to handle
+ requests
+ <braunr> but it's a wrong design decision as the number of threads is
+ completely unchecked
+ <gnu_srs> what's the idea of creating more threads then, multiple cpus is
+ not supported?
+ <braunr> it's a very old decision taken at a time when systems and machines
+ were very different
+ <braunr> mach used to support multiple processors
+ <braunr> it was expected gnumach would do so too
+ <braunr> mel-: but getting thread migration would also require us to adjust
+ our threading library and our servers
+ <braunr> it's not an easy task at all
+ <braunr> and it doesn't fix everything
+ <braunr> thread migration on mach is an optimization
+ <mel-> interesting
+ <braunr> async ipc remains available, which means notifications, which are
+ async by nature, will create message floods anyway
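+
+The spawn-on-demand policy described here amounts to something like the
+following sketch (hypothetical `struct bucket`, plain pthreads; the real
+libports code differs): a new worker is created whenever a request arrives
+and no thread is idle, and nothing ever bounds the total.
+
+    #include <pthread.h>
+
+    /* Hypothetical per-bucket state.  */
+    struct bucket {
+      pthread_mutex_t lock;
+      int idle_threads;    /* workers currently waiting for a request */
+      int total_threads;   /* workers in existence -- never bounded */
+    };
+
+    /* Serves requests, then idles; body as in the earlier worker sketch.  */
+    static void *worker (void *arg);
+
+    static void
+    request_arrived (struct bucket *b)
+    {
+      pthread_mutex_lock (&b->lock);
+      if (b->idle_threads == 0)
+        {
+          /* No idle worker: spawn another one, with no upper limit.  */
+          pthread_t tid;
+
+          if (pthread_create (&tid, NULL, worker, b) == 0)
+            {
+              pthread_detach (tid);
+              b->total_threads++;
+            }
+        }
+      else
+        b->idle_threads--;   /* an idle worker will pick the request up */
+      pthread_mutex_unlock (&b->lock);
+    }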
+
+
# Alternative approaches:
* <http://www.concurrencykit.org/>