[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]]

[[!tag open_issue_documentation open_issue_gnumach]]

# IRC, freenode, #hurd, 2012-06-29

    I do not understand what are the deficiencies of Mach, the content I find on this is vague...
    the major problems are that the IPC architecture offers poor performance; and that resource usage can not be properly accounted to the right parties
    antrik: the more i study it, the more i think ipc isn't the problem when it comes to performance, not directly
    i mean, the implementation is a bit heavy, yes, but it's fine
    the problems are resource accounting/scheduling and still too much stuff inside kernel space
    and with a very good implementation, the performance problem would come from crossing address spaces
    (and even more on SMP, i've been thinking about it lately, since it would require syncing mmu state on each processor currently using an address space being modified)
    braunr: the problem with Mach IPC is that it requires too many indirections to ever be performant AIUI
    antrik: can you mention them ?
    the semantics are generally quite complex, compared to Coyotos for example, or even Viengoos
    antrik: the semantics are related to the message format, which can be simplified
    i think everybody agrees on that
    i'm more interested in the indirections
    but then it's not Mach IPC anymore :-)
    right
    22:03 < braunr> i mean, the implementation is a bit heavy, yes, but it's fine
    that's not an implementation issue
    that's what i meant by heavy :)
    well, yes and no
    Mach IPC has changed over time
    it would be newer Mach IPC ... :)
    the fact that data types are (supposed to be) transparent to the kernel is a major part of the concept, not just an implementation detail
    but it's not just the message format
    transparent ? but they're not :/
    the option to buffer in the kernel also adds a lot of complexity
    buffer in the kernel ?
    ah you mean message queues
    yes
    braunr: eh? the kernel parses all the type headers during transfer
    yes, so it's not transparent at all
    maybe you have a different understanding of "transparent" ;-)
    i guess
    I think most of the other complex semantics are kinda related to the in-kernel buffering...
    i fail to see why :/
    well, it allows port rights to be destroyed while a message is in transfer. a lot of semantics revolve around what happens in that case
    yes but it doesn't affect performance a lot
    sure it does. it requires a lot of extra code and indirections
    not a lot of it
    "a lot" is quite a relative term :-)
    compared to L4 for example, it *is* a lot
    and those indirections (i think you refer to more branching here) are taken only when appropriate, and can be isolated, improved through locality, etc..
    the features they add are also huge
    L4 is clearly insufficient
    all current L4 forks have added capabilities ..
    (that, with the formal verification, make se4L one of the "hottest" recent system projects)
    seL4*
    yes, but with very few extra indirections I think...
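The typed message format argued about above can be made concrete with a short sketch. The struct below is a simplified, illustrative model of a Mach-style type descriptor (loosely patterned after `mach_msg_type_t`; the field names, constants, and helper are invented for this example, not the real GNU Mach definitions). It shows the per-item parsing and branching the kernel has to do during transfer, which is exactly the "indirections" being discussed:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified sketch of a Mach-style typed message: each data item in the
 * message body is preceded by a type descriptor the kernel must parse
 * during transfer. Names and values here are illustrative only. */
struct type_desc {
    uint8_t  name;              /* data type (e.g. integer, port right) */
    uint8_t  size;              /* size of one element, in bits */
    uint16_t number;            /* number of elements */
    unsigned inline_data : 1;   /* data follows the descriptor directly */
    unsigned longform    : 1;   /* an extended descriptor follows */
};

enum { TYPE_INT32 = 2, TYPE_PORT_RIGHT = 17 };  /* illustrative values */

/* Walk the descriptors of a message body and count the items the kernel
 * would have to special-case: port rights need translation between task
 * name spaces, and out-of-line data needs VM operations. Each such item
 * is an extra branch taken on the transfer path. */
static int count_special_items(const struct type_desc *d, size_t n)
{
    int special = 0;
    for (size_t i = 0; i < n; i++) {
        if (d[i].name == TYPE_PORT_RIGHT || !d[i].inline_data)
            special++;
    }
    return special;
}
```

An untyped format (as in L4 or seL4) removes this walk entirely: the kernel copies an opaque buffer and only handles capability transfer through a separate, fixed-layout mechanism.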
    similar to EROS (which claims to have IPC almost as efficient as the original L4)
    possibly
    I still fail to see much real benefit in formal verification :-)
    but compared to other problems, this added code is negligible
    antrik: for a microkernel, me too :/
    the kernel is already so small you can simply audit it :)
    no, it's not negligible, if you go from say two cache lines touched per IPC (original L4) to dozens (Mach)
    every additional variable that needs to be touched to resolve some indirection, check some condition adds significant overhead
    if you compare the dozens to the huge amount of inter-processor interrupts you get each time you change the kernel map, it's next to nothing ..
    change the kernel map? not sure what you mean
    syncing address spaces on hundreds of processors each time you send a message is a real scalability issue here (as an example), where Mach to L4 IPC seems like microoptimization
    braunr: modify, you mean?
    yes
    (not switch)
    but that's only one example
    yes, modify, not switch
    also, we could easily get rid of the ihash library
    making the message provide the address of the object associated to a receive right
    so the only real indirection is the capability, like in other systems, and yes, buffering adds a bit of complexity
    there are other optimizations that could be made in mach, like merging structures to improve locality
    "locality"?
    having rights close to their target port when there are only a few
    pinotree: locality of reference
    for cache efficiency
    hundreds of processors? let's stay realistic here :-)
    i am ..
    a microkernel based system is also a very good environment for RCU
    (i yet have to understand how liburcu actually works on linux)
    I'm not interested in systems for supercomputers. and I doubt desktop machines will get that many independent cores any time soon. we still lack software that could even remotely exploit that
    hum, the glibc build system ?
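The ihash removal mentioned above can be sketched in a few lines. Today a Hurd server maps the port name from an incoming message to its server-side object through a libihash hash table; the proposal is to store the object address in the receive right itself, so the kernel can hand it back with the message and dispatch becomes a single pointer dereference. All names below are invented for illustration, none of this is the actual Mach or libihash API:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the optimization discussed above: instead of a hash lookup
 * keyed on the task-local port name, the receive right itself carries
 * the address of the associated server object, set when the right is
 * created. Dispatch then touches one pointer instead of walking a
 * hash table (fewer cache lines, no collision handling). */

struct object {
    int type;   /* whatever per-object state the server keeps */
};

struct receive_right {
    unsigned long name;   /* task-local port name, kept for debugging */
    void *payload;        /* server object bound to this right */
};

/* The "message" would deliver the payload pointer along with the data;
 * the server-side dispatch is then a plain dereference. */
static struct object *dispatch(const struct receive_right *r)
{
    return (struct object *)r->payload;
}
```

The remaining indirection is the capability lookup itself, which every capability system pays; the hash table on top of it is what this change would eliminate.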
    :>
    lol
    we have done a survey over the nix linux distribution
    quite few packages actually benefit from a lot of cores
    and we already know them :)
    what i'm trying to say is that, whenever i think or even measure system performance, both of the hurd and others, i never actually see the IPC as being the real performance problem
    there are many other sources of overhead to overcome before getting to IPC
    I completely agree
    and with the advent of SMP, it's even more important to focus on contention
    (also, 8 cores aren't exactly a lot...)
    antrik: s/8/7/ , or even 6 ;)
    braunr: it depends a lot on the use case. most of the problems we see in the Hurd are probably not directly related to IPC performance; but I'm pretty sure some are (such as X being hardly usable with UNIX domain sockets)
    antrik: these have more to do with the way mach blocks than IPC itself
    similar to the ext2 "sleep storm"
    a lot of overhead comes from managing ports (for example), which also mostly comes down to IPC performance
    antrik: yes, that's the main indirection
    antrik: but you need such management, and the related semantics in the kernel interface
    (although i wonder if those should be moved away from the message passing call)
    you mean a different interface for kernel calls than for IPC to other processes? that would break transparency in a major way. not sure we really want that...
    antrik: no
    antrik: i mean calls specific to right management
    admittedly, transparency for port management is only useful in special cases such as rpctrace, and that probably could be served better with dedicated debugging interfaces...
    antrik: i.e. not passing rights inside messages
    passing rights inside messages is quite essential for a capability system. the problem with Mach IPC in regard to that is that the message format allows way more flexibility than necessary in that regard...
    antrik: right
    antrik: i don't understand why passing rights inside messages is important though
    antrik: essential even
    braunr: I guess he means you need at least one way to pass rights
    braunr: well, for one, you need to pass a reply port with each RPC request...
    youpi: well, as he put it, the message passing call is overpowered, and this leads to many branches in the code
    antrik: the reply port is obvious, and can be optimized
    antrik: but the case i worry about is passing references to objects between tasks
    antrik: rights and identities with the auth server for example
    antrik: well ok forget it, i just recall how it actually works :)
    antrik: don't forget we lack thread migration
    antrik: you may not think it's important, but to me, it's a major improvement for RPC performance
    braunr: how can seL4 be the most interesting microkernel then?... ;-)
    antrik: hm i don't know the details, but if it lacks thread migration, something is wrong :p
    antrik: they should work on viengoos :)
    (BTW, AIUI thread migration is quite related to passive objects -- something Hurd folks never dared seriously consider...)
    i still don't know what passive objects are, or i have forgotten it :/
    no own control threads
    hm, i'm still missing something
    what do you refer to by control thread ?
    with*
    i.e. no main loop etc.; only activated by incoming calls
    ok
    well, if i'm right, thomas bushnell himself wrote (recently) that the ext2 "sleep" performance issue was expected to be solved with thread migration
    so i guess they definitely considered having it
    braunr: don't know what the "sleep performance issue" is...
    http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00032.html
    antrik: also, the last message in the thread, http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00050.html
    antrik: do you consider having a reply port being an avoidable overhead ?
    braunr: not sure. I don't remember hearing of any capability system doing this kind of optimisation though; so I guess there are reasons for that...
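The reply-port pattern brought up above is the basic shape of every Mach RPC: the request message carries a (typically send-once) right to a port the server must answer on. The toy model below uses small integers as port names and a global mailbox array in place of kernel message queues; none of it is real Mach API, it only illustrates the round trip:

```c
#include <assert.h>

/* Toy model of the reply-port pattern: each RPC request names a reply
 * port, and the server posts its result there. Integers stand in for
 * port names; the mailbox array stands in for kernel message queues. */

enum { MAX_PORTS = 8 };
static int mailbox[MAX_PORTS];   /* one pending reply per port */

struct request {
    int op;          /* operation code */
    int arg;         /* operation argument */
    int reply_port;  /* where the server must send the result */
};

/* Server side: perform the operation, post the result on the reply
 * port named in the request (a toy doubling operation here). */
static void server_handle(const struct request *req)
{
    int result = (req->op == 1) ? req->arg * 2 : -1;
    mailbox[req->reply_port] = result;
}

/* Client side: pick a reply port, send the request, read the reply. */
static int rpc_call(int op, int arg)
{
    struct request req = { op, arg, /* reply_port = */ 3 };
    server_handle(&req);             /* "send" the request */
    return mailbox[req.reply_port];  /* "receive" on the reply port */
}
```

The optimization hinted at in the discussion would avoid materializing a full port for each reply, since the reply path is implicit in the RPC; thread migration goes further by running the server code directly in the client's schedulable context, removing the queue round trip altogether.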
    antrik: yes me too, even more since neal talked about it on viengoos
    I wonder whether thread management is also such a large overhead with fully sync IPC, on L4 or EROS for example...
    antrik: it's still a very handy optimization for thread scheduling
    antrik: it makes solving priority inversions a lot easier
    actually, is thread scheduling a problem at all with a thread activation approach like in Viengoos?
    antrik: thread activation is part of thread migration
    antrik: actually, i'd say they both refer to the same thing
    err... scheduler activation was the term I wanted to use
    same
    well
    scheduler activation is too vague to assert that
    antrik: do you refer to scheduler activations as described in http://en.wikipedia.org/wiki/Scheduler_activations ?
    my understanding was that Viengoos still has traditional threads; they just can get scheduled directly on incoming IPC
    braunr: that Wikipedia article is strange. it seems to use "scheduler activations" as a synonym for N:M multithreading, which is not at all how I understood it
    antrik: I used to try to keep a look at those pages, to fix such wrong things, but left it
    antrik: that's why i ask
    IIRC Viengoos has a thread associated with each receive buffer. after copying the message, the kernel would activate the process's activation handler, which in turn could decide to directly schedule the thread associated with the buffer
    or something along these lines
    antrik: that's similar to mach handoff
    antrik: generally enough, all the thread-related pages on wikipedia are quite bogus
    nah, handoff just schedules the process; which is not useful, if the right thread isn't activated in turn...
    antrik: but i think it's more than that, even in viengoos
    for instance, the french "thread" page was basically saying that they were invented for GUIs to overlap computation with user interaction .. :)
    youpi: good to know...
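The Viengoos-style flow recalled above ("a thread associated with each receive buffer; after copying the message, the kernel activates the process's activation handler, which may directly schedule that thread") can be shown as a toy model. This is a sketch of the control flow only, under the assumptions stated in that recollection; the names are invented and nothing here is Viengoos or Mach API:

```c
#include <assert.h>

/* Toy model of an activation-style delivery path: the kernel copies a
 * message into a receive buffer, then upcalls the receiving process's
 * activation handler, which picks the next thread to run. Here the
 * handler simply chooses the thread bound to the buffer that just
 * received a message, i.e. direct scheduling on incoming IPC. */

struct thread {
    int id;
};

struct recv_buffer {
    struct thread *bound_thread;  /* thread associated with this buffer */
    int message;                  /* last delivered message (0 = none) */
};

/* Process-level activation handler: a policy decision point. */
static struct thread *activation_handler(struct recv_buffer *buf)
{
    return buf->bound_thread;
}

/* "Kernel" side: copy the message, then upcall the handler and run
 * whatever thread it returns. */
static struct thread *deliver(struct recv_buffer *buf, int msg)
{
    buf->message = msg;
    return activation_handler(buf);
}
```

This differs from plain Mach handoff scheduling, as noted in the log: handoff donates the processor to the receiving task, but does not guarantee the specific thread waiting on that buffer is the one that runs.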
    antrik: the "misunderstanding" comes from the fact that scheduler activations is the way N:M threading was implemented on netbsd
    youpi: that's a refreshing take on the matter... ;-)
    antrik: i'll read the critique and viengoos doc/source again to be sure about what we're talking about :)
    antrik: as threading is a major issue in mach, and one of the things i completely changed (and intend to change) in x15, whenever i get to work on that again ..... :)
    antrik: interestingly, the paper about scheduler activations was written (among others) by brian bershad, in 92, when he was actively working on research around mach
    braunr: BTW, I have little doubt that making RPC first-class would solve a number of problems... I just wonder how many others it would open