From 47e4d194dc36adfcfd2577fa4630c9fcded005d3 Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Sun, 27 Oct 2013 19:15:06 +0100
Subject: IRC.

---
 open_issues/performance/io_system/read-ahead.mdwn |  10 ++
 .../performance/microkernel_multi-server.mdwn     | 183 ++++++++++++++++++++-
 2 files changed, 191 insertions(+), 2 deletions(-)

diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
index cd39328f..05a58f2e 100644
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -3031,3 +3031,13 @@ License|/fdl]]."]]"""]]
     so, add?
     if that's what you want to do, ok
     i'll think about your initial question tomorrow
+
+
+## IRC, freenode, #hurd, 2013-09-30
+
+    talking about which... did the clustered I/O work ever get
+      concluded?
+    antrik: yes, mcsim was able to finish clustered pageins, and it's
+      still on my TODO list
+    it will get merged eventually, now that the large store patch has
+      also been applied
diff --git a/open_issues/performance/microkernel_multi-server.mdwn b/open_issues/performance/microkernel_multi-server.mdwn
index 111d2b88..0382c835 100644
--- a/open_issues/performance/microkernel_multi-server.mdwn
+++ b/open_issues/performance/microkernel_multi-server.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2011, 2013 Free Software Foundation, Inc."]]
 
 [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
 id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -12,7 +12,8 @@ License|/fdl]]."]]"""]]
 
 Performance issues due to the microkernel/multi-server system architecture?
 
-IRC, freenode, #hurd, 2011-07-26
+
+# IRC, freenode, #hurd, 2011-07-26
 
 < CTKArcher> I read that, because of its microkernel+servers design, the hurd
   was slower than a monolithic kernel, is that confirmed ?
@@ -45,3 +46,181 @@ IRC, freenode, #hurd, 2011-07-26
 < braunr> but in 95, processors weren't that fast compared to other components
   as they are now
 < youpi> while disk/mem haven't evovled so fast
+
+
+# IRC, freenode, #hurd, 2013-09-30
+
+    ok.. i noticed when installing debian packages in X, the mouse
+      lagged a little bit
+    that takes me back to classic linux days
+    it could be a side effect of running under virtualisation who
+      knows
+    no
+    it's because of the difference of priorities between server and
+      client tasks
+    is it simple enough to increase the priority of the X server?
+    it does remind me of the early linux days.. people were more
+      interested in making things work, and making things not crash.. than
+      improving the desktop interactivity or responsiveness
+    very low priority :P
+    snadge: actually it's not the difference in priority, it's the
+      fact that some asynchronous processing is done at server side
+    the priority difference just gives more time overall to servers
+      for that processing
+    snadge: when i talk about servers, i mean system (hurd) servers,
+      no x
+    yeah.. linux is the same.. in the sense that, that was its
+      priority and focus
+    snadge: ?
+    servers
+    what are you talking about ?
+    going back 10 years or so.. linux had very poor desktop
+      performance
+    i'm not talking about priorities for developers
+    it has obviously improved significantly
+    i'm talking about things like nice values
+    right..
+    and some of the modifications that have been done to
+      improve interactivity of an X desktop, are not relevant to servers
+    not relevant at all since it's a hurd problem, not an x problem
+    yeah.. that was more of a linux problem too, some time ago was the
+      only real point i was making.. a redundant one :p
+    where i was going with that.. was desktop interactivity is not a
+      focus for hurd at this time
+    it's not "desktop interactivity"
+    it's just correct scheduling
+    is it "correct" though.. the scheduler in linux is configurable,
+      and selectable
+    depending on the type of workload you expect to be doing
+    not really
+    it can be interactive, for desktop loads.. or more batched, for
+      server type loads.. is my basic understanding
+    no
+    that's the scheduling policy
+    the scheduler is cfs currently
+    and that's the main difference
+    cfs means completely fair
+    whereas back in 2.4 and before, it was a multilevel feedback
+      scheduler
+    i.e. a scheduler with a lot of heuristics
+    the gnumach scheduler is similar, since it was the standard
+      practice from unix v6 at the time
+    (gnumach code base comes from bsd)
+    so 1/ we would need a completely fair scheduler too
+    and 2/ we need to remove asynchronous processing by using mostly
+      synchronous rpc
+    im just trying to appreciate the difference between async and sync
+      event processing
+    on unix, the only thing asynchronous is signals
+    on the hurd, simply cancelling select() can cause many
+      asynchronous notifications at the server to remove now unneeded resources
+    when i say cancelling select, i mean one or more fds now have
+      pending events, and the others must be cleaned
+    yep.. thats a pretty fundamental change though isnt it? .. if im
+      following you, you're talking about every X event.. so mouse move,
+      keyboard press etc etc etc
+    instead of being handled async.. you're polling for them at some
+      sort of timing interval?
+    never mind.. i just read about async and sync with regards to rpc,
+      and feel like a bit of a noob
+    async provides a callback, sync waits for the result.. got it :p
+    async is resource intensive on hurd for the above mentioned
+      reasons.. makes sense now
+    how about optimising the situation where a select is cancelled,
+      and deferring the signal to the server to clean up resources until a
+      later time?
+    so like java.. dont clean up, just make a mess
+    then spend lots of time later trying to clean it up.. sounds like
+      my life ;)
+    reuse stale objects instead of destroying and recreating them, and
+      all the problems associated with that
+    but if you're going to all these lengths to avoid sending messages
+      between processes
+    then you may as well just use linux? :P
+    im still trying to wrap my head around how converting X to use
+      synchronous rpc calls will improve responsiveness
+    what has X to do with it?
+    nothing wrong with X.. braunr just mentioned that hurd doesnt
+      really handle the async calls so well
+    there is more overhead.. that it would be more efficient on hurd,
+      if it uses sync rpc instead
+    and perhaps a different task scheduler would help also
+    ala cfs
+    but i dont think anyone is terribly motivated in turning hurd into
+      a desktop operating system just yet.. but i could be wrong ;)
+    i didn't say that
+    i misinterpreted what you said then .. im not surprised, im a
+      linux sysadmin by trade.. and have basic university OS understanding (ie
+      crap all) at a hobbyist level
+    i said there is asynchronous processing (i.e.
+      server still have work to do even when there is no client)
+    that processing mostly comes from select requests cancelling what
+      they installed
+    i.e. you select fd 1 2 3, event on 2, you cancel on 1 and 3
+    those cancellations aren't synchronous
+    the client deletes ports, and the server asynchronously receives
+      dead name notifications
+    since servers have a greater priority, these notifications are
+      processed before the client can continue
+    which is what makes you feel lag
+    X is actually a client here
+    when i say server, i mean hurd servers
+    the stuff implementing sockets and files
+    also, you don't need to turn the hurd into a desktop os
+    any correct way to do fair scheduling will do
+    can the X client be made to have a higher priority than the hurd
+      servers?
+    or perhaps something can be added to hurd to interface with X
+    well, the future is wayland
+    ufs .. unfair scheduling.. give priority to X over everything else
+    hurd almost seams ideal for that idea.. since the majority of the
+      system is seperated from the kernel
+    im likely very wrong though :p
+    snadge: the reason we elevated the priority of servers is to avoid
+      delaying the processing of notifications
+    because each notification can spawn a server thread
+    and this lead to cases where processing notifications was so slow
+      that spawning threads would occur more frequently, leading to the server
+      exhausting its address space because of thread stacks
+    cant it wait for X though? .. or does it lead to that situation
+      you just described
+    we should never need such special cases
+    we should remove async notifications
+    my logic is this.. if you're not running X then it doesnt
+      matter.. if you are, then it might.. its sort of up to you whether you
+      want priority over your desktop interface or whether it can wait for more
+      important things, which creates perceptible lag
+    snadge: no it doesn't
+    X is clearly not the only process involved
+    the whole chain should act synchronously
+    from the client through the server through the drivers, including
+      the file system and sockets, and everything that is required
+    it's a general problem, not specific to X
+    right.. from googling around, it looks like people get very
+      excited about asyncronous
+    there was a move to that for some reason.. it sounds great in
+      theory
+    continue processing something else whilst you wait for a
+      potentially time consuming process.. and continue processing that when
+      you get the result
+    its also the only way to improve performance with parallelism?
+    which is of no concern to hurd at this time
+    snadge: please don't make such statements when you don't know what
+      you're talking about
+    it is a concern
+    and yes, async processing is a way to improve performance
+    but don't mistake async rpc and async processing
+    async rpc simply means you can send and receive at any time
+    sync means you need to recv right after send, blocking until a
+      reply arrives
+    the key word here is *blocking*
+    okay sure.. that makes sense
+    what is the disadvantage to doing it that way?
+    you potentially have more processes that are blocking?
+    a system implementing posix such as the hurd needs signals
+    and some event handling facility like select
+    implementing them synchronously means a thread ready to service
+      these events
+    the hurd currently has such a message thread
+    but it's complicated and also a scalability concern
+    e.g. you have at least two thread per process
+    bbl
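
The select() cancellation described in the log (several descriptors are
watched, an event arrives on one of them, and the requests installed for the
others have to be torn down) is triggered by entirely ordinary client code.
Here is a minimal sketch in C with hypothetical descriptors fd1, fd2 and fd3;
the asynchronous dead-name notifications it provokes are generated inside the
Hurd servers backing the descriptors that did not fire, not in this code:

    /* Hedged sketch: a client blocking on three hypothetical descriptors.
       On the Hurd, select() installs a request at the server behind each
       descriptor; when the call returns because, say, fd2 is readable, the
       requests installed for fd1 and fd3 are cancelled, and it is that
       cancellation which shows up server-side as asynchronous dead-name
       notifications.  */
    #include <stdio.h>
    #include <sys/select.h>

    int
    wait_for_input (int fd1, int fd2, int fd3)
    {
      fd_set rfds;
      int maxfd = fd1;

      if (fd2 > maxfd) maxfd = fd2;
      if (fd3 > maxfd) maxfd = fd3;

      FD_ZERO (&rfds);
      FD_SET (fd1, &rfds);
      FD_SET (fd2, &rfds);
      FD_SET (fd3, &rfds);

      /* Block until at least one descriptor is ready.  Returning from this
         call is the point at which the requests for the descriptors that
         did not fire must be cleaned up.  */
      if (select (maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
        {
          perror ("select");
          return -1;
        }

      if (FD_ISSET (fd2, &rfds))
        printf ("event on fd %d; the other requests are now cancelled\n", fd2);

      return 0;
    }

Because the servers run at a higher priority, the notification processing
described in the log happens before this client is scheduled again, which is
the lag being discussed.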
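
The distinction drawn near the end of the log between asynchronous RPC and
asynchronous processing comes down to how mach_msg() is driven. The following
is a rough sketch under the standard Mach message interface, not the Hurd's
actual RPC code; the message id and the empty payload are made up, and error
handling is left to the caller:

    /* Minimal sketch contrasting the two mach_msg() usage patterns.  */
    #include <mach.h>

    /* Synchronous RPC: queue the request and block in the same call until
       the server's reply arrives on reply_port.  Nothing else happens in
       this thread in the meantime.  */
    static mach_msg_return_t
    rpc_sync (mach_port_t server_port, mach_port_t reply_port)
    {
      struct { mach_msg_header_t head; } msg;

      msg.head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND,
                                           MACH_MSG_TYPE_MAKE_SEND_ONCE);
      msg.head.msgh_size = sizeof msg;
      msg.head.msgh_remote_port = server_port;
      msg.head.msgh_local_port = reply_port;
      msg.head.msgh_id = 4242;        /* hypothetical message id */

      return mach_msg (&msg.head, MACH_SEND_MSG | MACH_RCV_MSG,
                       sizeof msg, sizeof msg, reply_port,
                       MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    }

    /* Asynchronous RPC: only queue the request and return; the reply (or a
       notification) is received separately, whenever the client gets
       around to it.  */
    static mach_msg_return_t
    rpc_async_send (mach_port_t server_port, mach_port_t reply_port)
    {
      struct { mach_msg_header_t head; } msg;

      msg.head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND,
                                           MACH_MSG_TYPE_MAKE_SEND_ONCE);
      msg.head.msgh_size = sizeof msg;
      msg.head.msgh_remote_port = server_port;
      msg.head.msgh_local_port = reply_port;
      msg.head.msgh_id = 4242;        /* hypothetical message id */

      return mach_msg (&msg.head, MACH_SEND_MSG, sizeof msg, 0,
                       MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
    }

The synchronous variant blocks in a single combined send/receive until the
reply is queued, which is the *blocking* the log emphasises; the asynchronous
variant merely queues the request and leaves the reply, or a notification, to
be collected at some later point.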
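
The closing remarks about signals, select and the per-process message thread
can be pictured as a thread that does nothing but block receiving on a port
and dispatch whatever arrives. A rough sketch assuming Mach and pthreads; this
is not the actual glibc/Hurd message-port code, the port is hypothetical, and
the dispatch is only hinted at in a comment:

    #include <mach.h>
    #include <pthread.h>

    /* Hedged sketch of a dedicated "message thread": it blocks in
       mach_msg() on the process's message port so that signals and
       notifications can be serviced while the main thread runs the
       program; this is the extra per-process thread mentioned in the
       log.  */
    static void *
    message_thread (void *arg)
    {
      mach_port_t msgport = *(mach_port_t *) arg;

      for (;;)
        {
          struct { mach_msg_header_t head; char body[1024]; } msg;

          /* Wait for the next request or notification.  */
          if (mach_msg (&msg.head, MACH_RCV_MSG, 0, sizeof msg, msgport,
                        MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL)
              != MACH_MSG_SUCCESS)
            continue;

          /* ... dispatch on msg.head.msgh_id here ...  */
        }

      return NULL;
    }

    /* Spawn the message thread for the given (hypothetical) port.  */
    static int
    spawn_message_thread (mach_port_t *msgport)
    {
      pthread_t tid;
      return pthread_create (&tid, NULL, message_thread, msgport);
    }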