From 6c7d45e4631784d0e077e806521a736da6b0266e Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Sun, 7 Apr 2013 18:18:44 +0200
Subject: IRC.

---
 microkernel/mach/deficiencies.mdwn | 136 +++++++++++++++++++++++++++++++++++++
 1 file changed, 136 insertions(+)

diff --git a/microkernel/mach/deficiencies.mdwn b/microkernel/mach/deficiencies.mdwn
index b3e1758c..dcabb56e 100644
--- a/microkernel/mach/deficiencies.mdwn
+++ b/microkernel/mach/deficiencies.mdwn
@@ -615,3 +615,139 @@ In context of [[open_issues/multithreading]] and later [[open_issues/select]].
     way it is in mach and netbsd
     but the arch-specific interfaces aren't well defined yet because
     there is only one (and incomplete) arch
+
+
+### IRC, freenode, #hurd, 2013-03-08
+
+    BTW, what is your current direction? did you follow through with
+    abandonning Mach resemblance?...
+    no
+    it's very similar to mach in many ways
+    unless mach is defined by its ipc in which case it's not mach at
+    all
+    the ipc interface will be similar to the qnx one
+    well, Mach is pretty much defined by it's IPC and VM interface...
+    the vm interface remains
+    its
+    although vm maps will be first class objects
+    so that it will be possible to move parts of the vm server outside
+    the kernel some day if it feels like a good thing to do
+    i.e. vm maps won't be inferred from tasks
+    not implicitely
+    the kernel will be restricted to scheduling, memory management,
+    and ipc, much as mach is (notwithstanding drivers)
+    hm... going with QNX IPC still seems risky to me... it's designed
+    for simple embedded environments, not for general-purpose operating
+    systems in my understanding
+    no, the qnx ipc interface is very generic
+    they can already call remote services
+    the system can scale well on multiprocessor machines
+    that's not risky at all, on the contrary
+    yeah, I'm sure it's generic... but I don't think anybody tried to
+    build a Hurd-like system on top of it; so it's not at all clear whether
+    it will work out at all...
+    clueless question: does x15 have any inspiration from
+    helenos?
+    absolutely none
+    i'd say x15 is almost an opposite to helenos
+    it's meant as a foundation for unix systems, like mach
+    some unix interfaces considered insane by helenos people (such as
+    fork and signals) will be implemented (although not completely in the
+    kernel)
+    ipc will be mostly synchronous
+    they're very different
+    well, helenos is very different
+    cool
+    x15 and actually propel (the current name i have for the final
+    system), are meant to create a hurd clone
+    another clueless question: any similarities of x15 to minix?
+    and since we're few, implementing posix efficiently is a priority
+    goal for me
+    again, absolutely none
+    for the same reasons
+    minix targets resilience in embedded environments
+    propel is a hurd clone
+    propel aims at being a very scalable and performant hurd clone
+    that's all
+    neato
+    unfortunately, i couldn't find a name retaining all the cool
+    properties of the hurd
+    feel free to suggest ideas :)
+    propel? as in to launch forward?
+    push forward, yes
+    that's very likely a better name than anything i could
+    conjure up
+    x15 is named after mach (the first aircraft to break mach 4,
+    reaching a bit less than mach 7)
+    servers will be engines, and together to push the system forward
+    ..... :)
+    nice
+    thrust might be a bit too generic i guess
+    oh i'm looking for something like "hurd"
+    doubly recursive acronym, related to gnu
+    and short, so it can be used as a c namespace
+    antrik: i've thought about it a lot, and i'm convinced this kind
+    of interface is fine for a hurd like system
+    the various discussions i found about the hurd requirements
+    (remember roland talking about notifications) all went in this direction
+    note however the interface isn't completely synchronous
+    and that's very important
+    well, I'm certainly curious. but if you are serious about this,
+    you'd better start building a prototype as soon as possible, rather than
+    perfecting SMP ;-)
+    i'm not perfecting smp
+    but i consider it very important to have migrations and preemption
+    actually working before starting the prototype
+    so that tricky mistakes about concurrency can be catched early
+    my current hunch is that you are trying to do too much at the same
+    time... improving both the implementation details and redoing the system
+    design
+    so, for example, there is (or will be soon, actually) thread
+    migratio, but the scheduler doesn't take processor topology into account
+    that's why i'm starting from scratch
+    i don't delve too deep into the details
+    just the ones i consider very important
+    what do you mean by thread migration here? didn't you say you
+    don't even have IPC?...
+    i mean migration between cpus
+    OK
+    the other is too confusing
+    and far too unused and unknown to be used
+    and i won't actually implement it the way it was done in mach
+    again, it will be similar to qnx
+    oh? now that's news for me :-)
+    you seemed pretty hooked on thread migration when we talked about
+    these things last time...
+    i still am
+    i'm just saying it won't be implemented the same way
+    instead of upcalls from the kernel into userspace, i'll "simply"
+    make server threads inherit from the caller's scheduling context
+    the ideas i had about stack management are impossible to apply in
+    practice
+    which make the benefit i imagined unrealistic
+    and the whole idea was very confusing when compared and integrated
+    into a unix like view
+    so stack usage will be increased
+    that's ok
+    but thread migration is more or less equivalent with first-class
+    RPCs AIUI. does that work with the QNX IPC model?...
+    the very important property that threads don't block and wake a
+    server when sending, and the server again blocks and wake the client on
+    reply, is preserved
+    (in fact I find the term "first-class RPC" much clearer...)
+    i dont
+    there are two benefits in practice: since the scheduling context
+    is inherited, the client is charged for the cpu time consumed
+    and since there are no wakeups and blockings, but a direct hand
+    off in the scheduler, the cost of crossing task space is closer to the
+    system call
+    which can be problematic too... but still it's the solution chosen
+    by EROS for example AIUI
+    (inheriting scheduling contexts I mean)
+    by practically all modern microkernel based systems actually, as
+    noted by shapiro
+    braunr: well, both benefits can be achieved by other means as
+    well... scheduler activations like in Viengoos should handle the hand-off
+    part AIUI, and scheduling contexts can be inherited explicitly too, like
+    in EROS (and in a way in Viengoos)
+    i don't understand viengoos well enough to do it that way