path: root/open_issues/multiprocessing.mdwn
Diffstat (limited to 'open_issues/multiprocessing.mdwn')
-rw-r--r--  open_issues/multiprocessing.mdwn  37
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/open_issues/multiprocessing.mdwn b/open_issues/multiprocessing.mdwn
index 224c0826..562ccd83 100644
--- a/open_issues/multiprocessing.mdwn
+++ b/open_issues/multiprocessing.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -8,7 +8,7 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
-[[!tag open_issue_hurd]]
+[[!tag open_issue_documentation open_issue_hurd]]
We would expect that fine-grained, compartmentalized systems, that is,
microkernel-based multi-server systems in particular, would be ideal candidates
@@ -16,7 +16,7 @@ for applying multiprocessing. That is, however, only true from a first and
inexperienced point of view: there are many difficulties.
-IRC, #hurd, August / September 2010
+IRC, freenode, #hurd, August / September 2010
<marcusb> silver_hook: because multi-server systems depend on inter-process
communication, and inter-process communication is many times more
@@ -31,6 +31,37 @@ IRC, #hurd, August / September 2010
serious research challenges
+IRC, freenode, #hurd, 2011-07-26
+
+ < braunr> 12:03 < CTKArcher> and does the hurd take more advantages in a
+ multicore architecture than linux ?
+ < braunr> CTKArcher: short answer: no
+ < CTKArcher> it's easier to imagine one server per core than the linux
+ kernel divided to be executed on multiple cores
+ < braunr> CTKArcher: this approach is less efficient
+ < braunr> CTKArcher: threads carry state, both explicit and implicit (like
+ cache data)
+ < braunr> CTKArcher: switching to another core means resetting and
+ refetching this state
+ < braunr> it's expensive and there is no gain obtained by doing this
+ < braunr> thread migration (having a thread from a client also run in
+ servers when making synchronous RPC, even handling its own page faults)
+ was implemented in mach4 and is imo a very good thing we should have
+ < braunr> CTKArcher: and concerning linux, it's actually very scalable
+ < braunr> it's already like if all client threads run in servers (the
+ kernel is the servers there)
+ < braunr> rcu is used a lot
+ < braunr> thread migration already takes into account smt, cores, and numa
+ < braunr> it's hard to do something better
+ < braunr> (here, thread migration means being dispatched on another cpu)
+ < braunr> some systems like dragonflybsd go as far as to pin threads on one
+ processor for their entire lifetime
+ < braunr> in order to have rcu-like locking almost everywhere
+ < braunr> (you could argue it's less efficient since in the worst case
+ everything runs on the same cpu, but it's very unlikely, and in practice
+ most patterns are well balanced)
+
+
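
The exchange above mentions pinning threads to a single processor for
their entire lifetime (the DragonflyBSD approach braunr describes). As a
purely editorial illustration of what such pinning looks like from user
space: on GNU/Linux, glibc provides the non-portable
pthread_setaffinity_np() call; whether the Hurd's libpthread offers an
equivalent interface is not assumed here.

    /* Editorial sketch: pin the calling thread to one CPU using the
       GNU-specific pthread_setaffinity_np() from glibc on Linux.
       This illustrates the "pin threads to one processor" idea, not
       code from Mach, the Hurd, or DragonflyBSD.  */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <string.h>

    static int pin_self_to_cpu(int cpu)
    {
        cpu_set_t set;

        CPU_ZERO(&set);        /* start from an empty CPU set       */
        CPU_SET(cpu, &set);    /* allow exactly one CPU in the set  */

        /* Restrict the calling thread to that CPU; it will not be
           dispatched on any other processor until the affinity mask
           is changed again.  Returns 0 or an error number.  */
        return pthread_setaffinity_np(pthread_self(),
                                      sizeof(cpu_set_t), &set);
    }

    int main(void)
    {
        int err = pin_self_to_cpu(0);
        if (err != 0)
            fprintf(stderr, "pthread_setaffinity_np: %s\n", strerror(err));
        else
            printf("main thread pinned to CPU 0\n");
        return 0;
    }

Pinning every thread this way trades dynamic load balancing for cache
locality and for simpler, per-CPU data structures, which is the point
braunr makes about getting RCU-like locking almost everywhere.
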
debian-hurd list
On Thu, Jan 02, 2003 at 05:40:00PM -0800, Thomas Bushnell, BSG wrote: