From 219988e74ba30498a1c5d71cf557913a70ccca91 Mon Sep 17 00:00:00 2001 From: Thomas Schwinge Date: Mon, 3 Oct 2011 20:49:54 +0200 Subject: IRC. --- faq/which_microkernel/discussion.mdwn | 61 ++++ hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn | 115 ++++++- open_issues/code_analysis.mdwn | 32 +- open_issues/default_pager.mdwn | 3 + open_issues/gnumach_memory_management.mdwn | 365 +++++++++++++++++++++ open_issues/libmachuser_libhurduser_rpc_stubs.mdwn | 50 ++- open_issues/mach-defpager_vs_defpager.mdwn | 24 +- open_issues/mach_vm_pageout.mdwn | 19 ++ open_issues/osf_mach.mdwn | 237 +++++++++++++ open_issues/performance/degradation.mdwn | 16 +- .../io_system/clustered_page_faults.mdwn | 23 ++ open_issues/performance/ipc_virtual_copy.mdwn | 37 +++ open_issues/resource_management_problems.mdwn | 4 + .../resource_management_problems/pagers.mdwn | 322 ++++++++++++++++++ open_issues/rework_gnumach_ipc_spaces.mdwn | 2 +- .../translators_set_up_by_untrusted_users.mdwn | 21 ++ 16 files changed, 1298 insertions(+), 33 deletions(-) create mode 100644 open_issues/mach_vm_pageout.mdwn create mode 100644 open_issues/osf_mach.mdwn create mode 100644 open_issues/resource_management_problems/pagers.mdwn diff --git a/faq/which_microkernel/discussion.mdwn b/faq/which_microkernel/discussion.mdwn index 9ef3b915..7ea131e9 100644 --- a/faq/which_microkernel/discussion.mdwn +++ b/faq/which_microkernel/discussion.mdwn @@ -1,3 +1,20 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_documentation]] + +[[!toc]] + + +# Olaf, 2011-04-10 + This version mixes up three distinct phases: rewrite from scratch; redesign; own microkernel. @@ -31,3 +48,47 @@ to the Coyotos port -- which after all is what the title promises... All in all, I still think my text was better. If you have any conerns with it, please discuss them... + + +# IRC, freenode, #hurd, 2011-09-27 + + Does anyone remember/know if/why not seL4 was considered for + hurd-l4? Is anyone aware of any differences between seL4 and coyotos? + + +## 2011-09-28 + + cjuner: the seL4 project was only at the beginning when the + decision was made. so was Coyotos, but Shapiro promised back then that + building on EROS, it would be done very fast (a promise he couldn't keep + BTW); plus he convinced the people in question that it's safer to build + on his ideas... + it doesn't really matter though, as by the time the ngHurd people + were through with Coyotos, they had already concluded that it doesn't + make sense to build upon *any* third-party microkernel + antrik, what was the problem with coyotos? what would be the + problem with sel4 today? + antrik, yes I did read the FAQ. It doesn't mention seL4 at all + (there isn't even much on the hurd-l4 mailing lists, I think that being + due to seL4 not having been released at that point?) and it does not + specify what problems they had with coyotos. + cjuner: it doesn't? I thought it mentioned "newer L4 variants" or + something like that... 
but the text was rewritten a couple of times, so I + guess it got lost somewhere + cjuner: unlike original L4, it's probably possible to implement a + system like the Hurd on top on seL4, just like on top of + Coyotos. however, foreign microkernels are always created with foreign + design ideas in mind; and building our own design around them is always + problematic. it's problematic with Mach, and it will be problematic with + any other third-party microkernel + Coyotos specifically has different ideas about memory protection, + different ideas about task startup, different ideas about memory + handling, and different ideas about resource allocation + antrik, do any specific problems of the foreign designs, + specifically of seL4 or coyotos come to mind? + cjuner: I mentioned several for Coyotos. I don't have enough + understanding of the matters to go into much more detail + (and I suspect you don't have enough understanding of these + matters to take away anything useful from more detail ;-) ) + I could try to explain the issues I mentioned for Coyotos (as far + as I understand them), but would that really help you? diff --git a/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn b/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn index f0eb473c..ecebe662 100644 --- a/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn +++ b/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn @@ -8,9 +8,10 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]] -[[!tag open_issue_hurd]] +[[!tag open_issue_gnumach open_issue_hurd]] -\#hurd, freenode, 2010 + +# IRC, freenode, #hurd, 2010 humm... why does tmpfs try to use the default pager? that's a bad idea, and probably will never work correctly... @@ -120,3 +121,113 @@ License|/fdl]]."]]"""]] memory, gives them a reference to the default pager by calling vm_object_pager_create this is not really important, but worth noting ;-) + + +# IRC, freenode, #hurd, 2011-09-28 + + mcsim: "Fix tmpfs" task should be called "Fix default pager" :-) + mcsim: I've been thinking about modifying tmpfs to actually have + it's own storeio based backend, even if a tmpfs with storage sounds a bit + stupid. + mcsim: but I don't like the idea of having translators messing up + with the default pager... + slpz: messing up?... + antrik: in the sense of creating a number of arbitrarily sized + objects + slpz: well, it doesn't really matter much whether a process + indirectly eats up arbitrary amounts of swap through tmpfs, or directly + through vm_allocate()... + though admittedly it's harder to implement resource limits with + tmpfs + antrik: but I've talked about having its own storeio device as + backend. This way Mach can pageout memory to tmpfs if it's needed. + Do I understand correctly that the goal of tmpfs task is to create + tmpfs in RAM? + mcsim: It is. But it also needs some kind of backend, just in case + it's ordered to page out data to free some system's memory. + mcsim: Nowadays, this backend is another translator that acts as + default pager for the whole system + slpz: pageout memory to tmpfs? not sure what you mean + antrik: I mean tmpfs acting as its own pager + slpz: you mean tmpfs not using the swap partition, but some other + backing store? + antrik: Yes. + +See also: [[open_issues/resource_management_problems/pagers]]. + + slpz: I don't think an extra backing store for tmpfs is a good + idea. the whole point of tmpfs is not having a backing store... 
TBH, I'd + even like to see a single backing store for anonymous memory and named + files + antrik: But you need a backing store, even if it's the default pager + :-) + antrik: The question is, Should users share the same backing store + (swap space) or provide their own? + slpz: not sure what you mean by "users" in this context :-) + antrik: Real users with the ability of setting tmpfs translators + essentially, I'd like to have a single partition that contains + both swap space and the main filesystem (at least /tmp, but probably also + all of /run, and possibly even /home...) + but that's a bit off-topic :-) + well, ideally all storage should be accounted to a user, + regardless whether it's swapped out anonymous storage, temporary named + files, or permanent files + antrik: you could use a file as backend for tmpfs + slpz: what's the point of using tmpfs then? :-) + (and then store the file in another tmpfs) + antrik: mach-defpager could be modified to use storeio instead of + Mach's device_* operations, but by the way things work right now, that + could be dangerous, IMHO + pinotree: hehe + .. recursive tmpfs'es ;) + slpz: hm, sounds interesting + antrik: tmpfs would try to keep data in memory always it's possible + (not calling m_o_lock_request would do the trick), but if memory is + scarce an Mach starts paging out, it would write it to that + file/device/whatever + ideally, all storage used by system tasks for swapped out + anonymous memory as well as temporary named files would end up on the + /run partition; while all storage used by users would end up in /home/* + if users share a partition, some explicit storage accounting would + be useful too... + slpz: is that any different from what "normal" filesystems do?... + (and *should* it be different?...) + antrik: Yes, as most FS try to synchronize to disk at a reasonable + rate, to prevent data losses. + antrik: tmpfs would be a FS that wouldn't synchronize until it's + forced to do that (which, by the way, it's what's currently happening + with everyone that uses the default pager). + slpz: hm, good point... + antrik: Also, metadata in never written to disk, only kept in memory + (which saves a lot of I/O, too). + antrik: In fact, we would be doing the same as every other kernel + does, but doing it explicitly :-) + I see the use in separating precious data (in permanent named + files) from temporary state (anonymous memory and temporary named files) + -- but I'm not sure whether having a completely separate FS for the + temporary data is the right approach for that... + antrik: And giving the user the option to specify its own storage, + so we don't limit him to the size established for swap by the super-user. + either way, that would be a rather radical change... still would + be good to fix tmpfs as it is first if possible + as for limited swap, that's precisely why I'd prefer not to have + an extra swap partition at all... + antrik: It's not much o fa change, it's how it works right now, with + the exception of replacing the default pager with its own. + antrik: I think it's just a matter of 10-20 hours, as + much. Including testing. + antrik: It could be forked with another name, though :-) + slpz: I don't mean radical change in the implementation... but a + radical change in the way it would be used + antrik: I suggest "almosttmpfs" as the name for the forked one :-P + hehe + how about lazyfs? 
+ antrik: That sound good to me, but probably we should use a more + descriptive name :-) + + +## 2011-09-29 + + slpz, antrik: There is a defpager in the Hurd code. It is not + currently being used, and likely incomplete. It is backed by libstore. + I have never looked at it. diff --git a/open_issues/code_analysis.mdwn b/open_issues/code_analysis.mdwn index 552cd2c9..7495221b 100644 --- a/open_issues/code_analysis.mdwn +++ b/open_issues/code_analysis.mdwn @@ -19,7 +19,12 @@ analysis|performance]], [[formal_verification]], as well as general [[!toc]] -# Suggestions +# Bounty + +There is a [[!FF_project 276]][[!tag bounty]] on some of these tasks. + + +# Static * [[GCC]]'s warnings. Yes, really. @@ -52,8 +57,6 @@ analysis|performance]], [[formal_verification]], as well as general * - * [[community/gsoc/project_ideas/Valgrind]] - * [Smatch](http://smatch.sourceforge.net/) * [Parfait](http://labs.oracle.com/projects/parfait/) @@ -66,7 +69,12 @@ analysis|performance]], [[formal_verification]], as well as general * [sixgill](http://sixgill.org/) - * [Coverity](http://www.coverity.com/) -- commercial? + * [Coverity](http://www.coverity.com/) (nonfree?) + + +# Dynamic + + * [[community/gsoc/project_ideas/Valgrind]] * @@ -76,7 +84,15 @@ analysis|performance]], [[formal_verification]], as well as general * - -# Bounty - -There is a [[!FF_project 276]][[!tag bounty]] on some of these tasks. + * IRC, freenode, #glibc, 2011-09-28 + + two things you can do -- there is an environment variable + (DEBUG_MALLOC_ iirc?) that can be set to 2 to make ptmalloc (glibc's + allocator) more forceful and verbose wrt error checking + another is to grab a copy of Tor's source tree and copy out + OpenBSD's allocator (its a clearly-identifyable file in the tree); + LD_PRELOAD it or link it into your app, it is even more aggressive + about detecting memory misuse. + third, Red hat has a gdb python plugin that can instrument + glibc's heap structure. its kinda handy, might help? + MALLOC_CHECK_ was the envvar you want, sorry. diff --git a/open_issues/default_pager.mdwn b/open_issues/default_pager.mdwn index 189179c6..18670c75 100644 --- a/open_issues/default_pager.mdwn +++ b/open_issues/default_pager.mdwn @@ -18,6 +18,9 @@ IRC, freenode, #hurd, 2011-08-31: have rewritten their swap pager (and also I/O performance steadily dropping before that point is reached?) + +[[performance/degradation]] (?). + hm there could too many things perhaps we could "borrow" from one of them? :-) diff --git a/open_issues/gnumach_memory_management.mdwn b/open_issues/gnumach_memory_management.mdwn index 1fe2f9be..fb3d6895 100644 --- a/open_issues/gnumach_memory_management.mdwn +++ b/open_issues/gnumach_memory_management.mdwn @@ -1412,3 +1412,368 @@ There is a [[!FF_project 266]][[!tag bounty]] on this task. better cache->nr_slabs * cache->bufs_per_slab * cache->buf_size or cache->nr_slabs * cache->slab_size? 
the latter + + +# IRC, freenode, #hurd, 2011-09-07 + + braunr: I've disabled calling of mem_cpu_pool_fill and allocator + became faster + mcsim: sounds nice + mcsim: i suspect the free path might not be as fast though + results for first calling: http://paste.debian.net/128639/ second: + http://paste.debian.net/128640/ and with many alloc/free: + http://paste.debian.net/128641/ + mcsim: thanks + best result are for second call: average time decreased from 159.56 + to 118.756 + First call slightly worse, but this is because I've added some + profiling code + i still see some ~8k lines in 128639 + even some around ~12k + I think this is because of mem_cache_grow I'm investigating it now + i guess so too + I've measured time for first call in cache and from about 22000 + mem_cache_grow takes 20000 + how did you change the code so that it doesn't call + mem_cpu_pool_fill ? + is the cpu layer still used ? + http://paste.debian.net/128644/ + don't forget the free path + mcsim: anyway, even with the previous slightly slower behaviour we + could observe, the performance hit is negligible + Is free path a compilation? (I'm sorry for my english) + mcsim: mem_cache_free + mcsim: the last two measurements i'd advise are with big (>4k) + object sizes and, really, kernel allocator consumption + http://paste.debian.net/128648/ http://paste.debian.net/128646/ + http://paste.debian.net/128649/ (first, second, small) + mcsim: these numbers are closer to the zalloc ones, aren't they ? + deallocating slighty faster too + it may not be the case with larger objects, because of the use of + a tree + yes, they are closer + but then, i expect some space gains + the whole thing is about compromise + ok. I'll try to measure them today. Anyway I'll post result and you + could read them in the morning + at least, it shows that the zone allocator was actually quite good + i don't like how the code looks, there are various hacks here and + there, it lacks self inspection features, but it's quite good + and there was little room for true improvement in this area, like + i told you :) + (my allocator, like the current x15 dev branch, focuses on mp + machines) + mcsim: thanks again for these numbers + i wouldn't have had the courage to make the tests myself before + some time eh + braunr: hello. Look at the small_4096 results + http://paste.debian.net/128692/ (balloc) http://paste.debian.net/128693/ + (zalloc) + mcsim: wow, what's that ? :) + mcsim: you should really really include your test parameters in + the report + like object size, purpose, and other similar details + for balloc I specified only object_size = 4096 + for zalloc object_size = 4096, alloc_size = 4096, memtype = 0; + the results are weird + apart from the very strange numbers (e.g. 0 or 4429543648), none + is around 3k, which is the value matching a kmem_alloc call + happy to see balloc behaves quite good for this size too + s/good/well/ + Oh + here is significant only first 101 lines + I'm sorry + ok + what does the test do again ? 10 loops of 10 allocs/frees ? + yes + ok, so the only slowdown is at the beginning, when the slabs are + created + the two big numbers (31844 and 19548) are strange + on the other hand time of compilation is + balloc zalloc + 38m28.290s 38m58.400s + 38m38.240s 38m42.140s + 38m30.410s 38m52.920s + what are you compiling ? + gnumach kernel + in 40 mins ? + yes + you lack hvm i guess + is it long? 
+ I use real PC + very + ok + so it's normal + in vm it was about 2 hours) + the difference really is negligible + ok i can explain the big numbers + the slab size depends on the object size, and for 4k, it is 32k + you can store 8 4k buffers in a slab (lines 2 to 9) + so we need use kmem_alloc_* 8 times? + on line 10, the ninth object is allocated, which adds another slab + to the cache, hence the big number + no, once for a size of 32k + and then the free list is initialized, which means accessing those + pages, which means tlb misses + i guess the zone allocator already has free pages available + I see + i think you can stop performance measurements, they show the + allocator is slightly slower, but so slightly we don't care about that + we need numbers on memory usage now (at the page level) + and this isn't easy + For balloc I can get numbers if I summarize nr_slabs*slab_size for + each cache, isn't it? + yes + you can have a look at the original implementation, function + mem_info + And for zalloc I have to summarize of cur_size and then add + zalloc_wasted_space? + i don't know :/ + i think the best moment to obtain accurate values is after zone_gc + removes the collected pages + for both allocators, you could fill a stats structure at that + moment, and have an rpc copy that structure when a client tool requests + it + concerning your tests, there is another point to have in mind + the very first loop in your code shows a result of 31844 + although you disabled the call to cpu_pool_fill + but the reason why it's so long is that the cpu layer still exists + and if you look carefully, the cpu pools are created as needed on + the free path + I removed cpu_pool_drain + but not cpu_pool_push/pop i guess + http://paste.debian.net/128698/ + see, you still allocate the cpu pool array on the free path + but I don't fill it + that's not the point + it uses mem_cache_alloc + so in a call to free, you can also have an allocation, that can + potentially create a new slab + I see, so I have to create cpu_pool at the initialization stage? + no, you can't + there is a reason why they're allocated on the free path + but since you don't have the fill/drain functions, i wonder if you + should just comment out the whole cpu layer code + but hmm + no really, it's not worth the effort + even with drains/fills, the results are really good enough + it makes the allocator smp ready + we should just keep it that way + mcsim: fyi, the reason why cpu pool arrays are allocated on the + free path is to avoid recursion + because cpu pool arrays are allocated from caches just as almost + everything else + ok + summ of cur_size and then adding zalloc_wasted_space gives 0x4e1954 + but this value isn't even page aligned + For balloc I've got 0x4c6000 0x4aa000 0x48d000 + hm can you report them in decimal, >> 10 so that values are in KiB + ? + 4888 4776 4660 for balloc + 4998 for zalloc + when ? + after boot ? + boot, compile, zone_gc + and then measure + ? + I call garbage collector before measuring + and I measure after kernel compilation + i thought it took you 40 minutes + for balloc I got results at night + oh so you already got them + i can't beleive the kernel only consumes 5 MiB + before gc it takes about 9052 Kib + can i see the measurement code ? + oh, and how much ram does your machine have ? 
+ 758 mb + 768 + that's really weird + i'd expect the kernel to consume much more space + http://paste.debian.net/128703/ + it's only dynamically allocated data + yes + ipc ports, rights, vm map entries, vm objects, and lots of other + hanging buffers + about how much is zalloc_wasted_space ? + if it's small or constant, i guess you could ignore it + about 492 + KiB + well it's another good point, mach internal structures don't imply + much overhead + or, the zone allocator is underused + + mcsim, braunr: The memory allocator project is coming along + good, as I get from your IRC messages? + tschwinge: yes, but as expected, improvements are minor + But at the very least it's now well-known, maintainable code. + yes, it's readable, easier to understand, provides self inspection + and is smp ready + there also are less hacks, but a few less features (there are no + way to avoid sleeping so it's unusable - and unused - in interrupt + handlers) + is* no way + tschwinge: mcsim did a good job porting and measuring it + + +# IRC, freenode, #hurd, 2011-09-08 + + braunr: note that the zalloc map used to be limited to 8 MiB or + something like that a couple of years ago... so it doesn't seems + surprising that the kernel uses "only" 5 MiB :-) + (yes, we had a *lot* of zalloc panics back then...) + + +# IRC, freenode, #hurd, 2011-09-14 + + braunr: hello. I've written a constructor for kernel map entries + and it can return resources to their source. Can you have a look at it? + http://paste.debian.net/130037/ If all be OK I'll push it tomorrow. + mcsim: send the patch through mail please, i'll apply it on my + copy + are you sure the cache is reapable ? + All slabs, except first I allocate with kmem_alloc_wired. + how can you be sure ? + First slab I allocate during bootstrap and use pmap_steal_memory + and further I use only kmem_alloc_wired + no, you use kmem_free + in kentry_dealloc_cache() + which probably creates a recursion + using the constructor this way isn't a good idea + constructors are good for preconstructed state (set counters to 0, + init lists and locks, that kind of things, not allocating memory) + i don't think you should try to make this special cache reapable + mcsim: keep in mind constructors are applied on buffers at *slab* + creation, not at object allocation + so if you allocate a single slab with, say, 50 or 100 objects per + slab, kmem_alloc_wired would be called that number of times + why kentry_dealloc_cache can create recursion? kentry_dealloc_cache + is called only by mem_cache_reap. + right + but are you totally sure mem_cache_reap() can't be called by + kmem_free() ? + i think you're right, it probably can't + + +# IRC, freenode, #hurd, 2011-09-25 + + braunr: hello. I rewrote constructor for kernel entries and seems + that it works fine. I think that this was last milestone. Only moving of + memory allocator sources to more appropriate place and merge with main + branch left. + mcsim: it needs renaming and reindenting too + for reindenting C-x h Tab in emacs will be enough? + mcsim: make sure which style must be used first + and what should I rename and where better to place allocator? For + example, there is no lib directory, like in x15. Should I create it and + move list.* and rbtree.* to lib/ or move these files to util/ or + something else? + mcsim: i told you balloc isn't a good name before, use something + more meaningful (kmem is already used in gnumach unfortunately if i'm + right) + you can put the support files in kern/ + what about vm_alloc? 
+ you should prefix it with vm_ + shouldn't + it's a top level allocator + on top of the vm system + maybe mcache + hm no + maybe just km_ + kern/km_alloc.*? + no + just km + ok. + + +# IRC, freenode, #hurd, 2011-09-27 + + braunr: hello. When I've tried to speed of new allocator and bad + I've removed function mem_cpu_pool_fill. But you've said to undo this. I + don't understand why this function is necessary. Can you explain it, + please? + When I've tried to compare speed of new allocator and old* + i'm not sure i said that + i said the performance overhead is negligible + so it's better to leave the cpu pool layer in place, as it almost + doesn't hurt + you can implement the KMEM_CF_NO_CPU_POOL I added in the x15 mach + version + so that cpu pools aren't used by default, but the code is present + in case smp is implemented + I didn't remove cpu pool layer. I've just removed filling of cpu + pool during creation of slab. + how do you fill the cpu pools then ? + If object is freed than it is added to cpu poll + so you don't fill/drain the pools ? + you try to get/put an object and if it fails you directly fall + back to the slab layer ? + I drain them during garbage collection + oh + yes + you shouldn't touch the cpu layer during gc + the number of objects should be small enough so that we don't care + much + ok. I can drain cpu pool at any other time if it is prohibited to + in mem_gc. + But why do we need to fill cpu poll during slab creation? + In this case allocation consist of: get object from slab -> put it + to cpu pool -> get it from cpu pool + I've just remove last to stages + hm cpu pools aren't filled at slab creation + they're filled when they're empty, and drained when they're full + so that the number of objects they contain is increased/reduced to + a value suitable for the next allocations/frees + the idea is to fall back as little as possible to the slab layer + because it requires the acquisition of the cache lock + oh. You're right. I'm really sorry. The point is that if cpu pool + is empty we don't need to fill it first + uh, yes we do :) + Why cache locking is so undesirable? If we have free objects in + slabs locking will not take a lot if time. + mcsim: it's undesirable on a smp system + ok. + mcsim: and spin locks are normally noops on a up system + which is the case in gnumach, hence the slightly better + performances without the cpu layer + but i designed this allocator for x15, which only supports mp + systems :) + mcsim: sorry i couldn't look at your code, sick first, busy with + server migration now (new server almost ready for xen hurds :)) + ok. + I ended with allocator if didn't miss anything important:) + i'll have a look soon i hope :) + + +# IRC, freenode, #hurd, 2011-09-27 + + braunr: would it be realistic/useful to check during GC whether + all "used" objects are actually in a CPU pool, and if so, destroy them so + the slab can be freed?... + mcsim: BTW, did you ever do any measurements of memory + use/fragmentation? + antrik: I couldn't do this for zalloc + oh... why not? + (BTW, I would be interested in a comparision between using the CPU + layer, and bare slab allocation without CPU layer) + Result I've got were strange. It wasn't even aligned to page size. + Probably is it better to look into /proc/vmstat? + Because I put hooks in the code and probably I missed something + mcsim: I doubt vmstat would give enough information to make any + useful comparision... + antrik: isn't this draining cpu pools at gc time ? 
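The per-CPU pool layer under discussion works roughly like this (a minimal C sketch; the names are modeled on the balloc/x15 design rather than copied from the actual code, and the lock primitives and slab-layer helper are assumptions):

    struct mem_cpu_pool {
        simple_lock_data_t lock;
        int nr_objs;            /* objects currently cached in this pool */
        void **objs;            /* array of cached objects */
    };

    /*
     * Allocation fast path: take an object from the current processor's
     * pool when one is available, and fall back to the slab layer (which
     * requires the cache-wide lock) only when the pool is empty.
     */
    void *
    mem_cache_alloc(struct mem_cache *cache)
    {
        struct mem_cpu_pool *pool = mem_cpu_pool_get(cache);
        void *obj;

        simple_lock(&pool->lock);

        if (pool->nr_objs > 0) {
            /* Fast path: no contention on the cache lock.  */
            obj = pool->objs[--pool->nr_objs];
            simple_unlock(&pool->lock);
            return obj;
        }

        simple_unlock(&pool->lock);

        /*
         * Slow path: go through the slab layer.  A fill policy would
         * normally transfer a whole batch of objects into the CPU pool
         * here, so that subsequent allocations hit the fast path again.
         */
        simple_lock(&cache->lock);
        obj = mem_cache_alloc_from_slab(cache);
        simple_unlock(&cache->lock);
        return obj;
    }

On a uniprocessor kernel the spin locks compile to no-ops, which is why bypassing the fill/drain steps changed the measurements above only slightly.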
+ antrik: the cpu layer was found to add a slight overhead compared + to always falling back to the slab layer + braunr: my idea is only to drop entries from the CPU cache if they + actually prevent slabs from being freed... if other objects in the slab + are really in use, there is no point in flushing them from the CPU cache + braunr: I meant comparing the fragmentation with/without CPU + layer. the difference in CPU usage is probably negligable anyways... + you might remember that I was (and still am) sceptical about CPU + layer, as I suspect it worsens the good fragmentation properties of the + pure slab allocator -- but it would be nice to actually check this :-) + antrik: right + antrik: the more i think about it, the more i consider slqb to be + a better solution ...... :> + an idea for when there's time + eh + hehe :-) diff --git a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn index d069641e..93055b77 100644 --- a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn +++ b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -8,19 +8,49 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]] -bug-hurd discussion. +[[!tag open_issue_glibc open_issue_hurd]] ---- +[[!toc]] -IRC, #hurd, 2010-08-12 - Looking at hurd.git, shouldn't {hurd,include}/Makefile's "all" target do something, and shouldn't pretty much everything depend on them? As it stands it seems that the system headers are used and the potentially newer ones never get built, except maybe on "install" (which is seemingly never called from the top-level Makefile) - I would fix it, but something tells me that maybe it's a feature :-) +# bug-hurd discussion. + + +# IRC, freenode, #hurd, 2010-08-12 + + Looking at hurd.git, shouldn't {hurd,include}/Makefile's "all" + target do something, and shouldn't pretty much everything depend on them? + As it stands it seems that the system headers are used and the + potentially newer ones never get built, except maybe on "install" (which + is seemingly never called from the top-level Makefile) + I would fix it, but something tells me that maybe it's a feature + :-) jkoenig: the headers are provided by glibc, along with the stubs - antrik, you mean, even those built from the .defs files in hurd/ ? + antrik, you mean, even those built from the .defs files in hurd/ + ? yes oh, ok then. - as glibc provides the stubs (in libhurduser), the headers also have to come from there, or they would get out of sync - hmm, shouldn't glibc also provide /usr/share/msgids/hurd.msgids, then? - jkoenig: not necessarily. the msgids describe what the servers actually understand. if the stubs are missing from libhurduser, that's no reason to leave out the msgids... + as glibc provides the stubs (in libhurduser), the headers also + have to come from there, or they would get out of sync + hmm, shouldn't glibc also provide /usr/share/msgids/hurd.msgids, + then? + jkoenig: not necessarily. the msgids describe what the servers + actually understand. if the stubs are missing from libhurduser, that's no + reason to leave out the msgids... 
ok this makes sense + + +# IRC, OFTC, #debian-hurd, 2011-09-29 + + pinotree: I don't like their existence. IMO (but I haven't + researched this in very much detail), every user of RPC stubs should + generated them for themselves (and glibc should directly include the + stubs it uses internally). + sounds fair + maybe they could be moved from glibc to hurd? + pinotree: Yeah; someone needs to research why we have them (or + if it's only convenience), and whether we want to keep them. + you could move them to hurd, leaving them unaltered, so binary + compatibility with eventual 3rd party users is not broken + but those using them, other than hurd itself, won't compile + anymore, so you fix them progressively diff --git a/open_issues/mach-defpager_vs_defpager.mdwn b/open_issues/mach-defpager_vs_defpager.mdwn index d6976706..f03bc67f 100644 --- a/open_issues/mach-defpager_vs_defpager.mdwn +++ b/open_issues/mach-defpager_vs_defpager.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -10,16 +10,24 @@ License|/fdl]]."]]"""]] [[!tag open_issue_gnumach open_issue_hurd]] -\#hurd, 2010, end of May / beginning of June +IRC, freenode, #hurd, end of May/beginning of June 2010 whats the difference between mach-defpager and defpager? - i'm guessing defpager is a hurdish version that uses libstore but was never finished or something - found an interesting thread about it: http://mirror.libre.fm/hurd/list/msg01232.html + i'm guessing defpager is a hurdish version that uses libstore + but was never finished or something + found an interesting thread about it: + http://mirror.libre.fm/hurd/list/msg01232.html antrik: an interesting thread, indeed :-) - slpz: btw is mach-defpager linked statically but not called mach-defpager.static on purpose? - antrik: also, I can confirm that mach-defpager needs a complete rewrite ;-) + slpz: btw is mach-defpager linked statically but not called + mach-defpager.static on purpose? 
+ antrik: also, I can confirm that mach-defpager needs a complete + rewrite ;-) pochu: I think the original defpager was launched by serverboot pochu: that could be the reason to have it static, like ext2fs - and since there's no need to execute it again during the normal operation of the system, they probably decided to not create a dynamically linked version + and since there's no need to execute it again during the normal + operation of the system, they probably decided to not create a + dynamically linked version (but I'm just guessing) - of perhaps they wanted to prevent mach-defpager from the need of reading libraries, since it's used when memory is really scarce (guessing again) + of perhaps they wanted to prevent mach-defpager from the need of + reading libraries, since it's used when memory is really scarce (guessing + again) diff --git a/open_issues/mach_vm_pageout.mdwn b/open_issues/mach_vm_pageout.mdwn new file mode 100644 index 00000000..dac7fe28 --- /dev/null +++ b/open_issues/mach_vm_pageout.mdwn @@ -0,0 +1,19 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + +IRC, freenode, #hurd, 2011-09-09 + + It's amazing how broken some parts of Mach's VM are + currently, it doesn't even keep track of the number of external + pages in the lists + and vm_pageout_scan produces a hang if want_pages == FALSE (which + never is, because vm_page_external_count is always 0) diff --git a/open_issues/osf_mach.mdwn b/open_issues/osf_mach.mdwn new file mode 100644 index 00000000..d689bfcb --- /dev/null +++ b/open_issues/osf_mach.mdwn @@ -0,0 +1,237 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_glibc open_issue_gnumach open_issue_hurd]] + +IRC, freenode, #hurd, 2011-09-07 + + tschwinge: do you think that should be possible/convenient to + maintain hurd and glibc versions for OSF Mach as branches in the offical + git repo? + Is OSF Mach the MkLinux one? + Yes, it is + slpz: If there's a suitable license, then yes, of course! + Unless there is a proper upstream, of course. + But I don't assume there is? + slpz: What is interesting for us about OSF Mach? + tschwinge: Peter Bruin and Jose Marchesi did a gnuified version some + time ago (gnu-osfmach), so I suppose the license is not a problem. But + I'm going to check it, though + OSF Mach has a number of interesting features + like migrating threads, advisory pageout, clustered pageout, kernel + loaded tasks, short circuited RPC... + Oh! + Good. 
+ right now I'm testing if it's really worth the effort + Yes. + But if the core codebase is the same (is it?) it may be + possible to merge some things? + If the changes can be identified reasonably... + comparing performance of the specialized RPC of OSF Mach with + generic IPC + That was my first intention, but I think that porting all those + features will be much more work than porting Hurd/glibc to it + slpz: ipc performance currently matters less than clustered + pageouts + slpz: i'm really not sure .. + i'd personnally adapt the kernel + braunr: well, clustered pageouts is one of the changes that can be + easily ported + braunr: We can consider OSF Mach code as reasonably stable, and + porting its features to GNU Mach will take us to the point of having to + debug all that code again + probably, the hardest feature to be ported is migrating threads + isn't that what was tried for gnu mach 2 ? or was it only about + oskit ? + IIRC only oskit + slpz: But there have been some advancements in GNU Mach, too. + For example the Xen port. + But wen can experiment with it, of course. + tschwinge: I find easier to move the Xen support from GNU Mach to + OSF Mach, than porting MT in the other direction + slpz: And I think MkLinux is a single-server, so I don't this + they used IPC as much as we did? + slpz: OK, I see. + slpz: MT aren't as needed as clustered pageouts :p + gnumach already has ipc handoff, so MT would just consume less + stack space, and only slightly improve raw ipc performance + slpz: But we will surely accept patches that get the Hurd/glibc + ported to OSF Mach, no question. + (it's required for other issues we discussed already, but not a + priority imo) + tschwinge: MkLinux makes heavy use of IPC, but it tries to + "short-circuit" it when running as a kernel loaded task + And it's obviously best to keep it in one place. Luckily it's + not CVS branches anymore... :-) + braunr: well, I'm a bit obsessed with IPC peformance, if the RPC on + OSF Mach really makes a difference, I want it for Hurd right now + braunr: clustered pages can be implemented at any time :-) + tschwinge: great! + slpz: In fact, haven'T there already been some Savannah + repositories created, several (five?) years ago? + slpz: the biggest performance issue on the hurd is I/O + and the easiest way to improve that is better VM transfers + tschwinge: yes, the HARD project, but I think it wasn't too well + received... + slpz: Quite some things changed since then, I'd say. + braunr: I agree, but IPC is the hardest part to optimize + braunr: If we have a fast IPC, the rest of improvements are way + easier + slpz: i don't see how faster IPC makes I/O faster :( + slpz: read + http://www.sceen.net/~rbraun/the_increasing_irrelevance_of_ipc_performance_for_microkernel_based_operating_systems.pdf + again :) + braunr: IPC puts the upper limit of how fast I/O could be + the abstract for my thesis on x15 mach was that the ipc code was + the most focused part of the kernel + so my approach was to optimize everything *else* + the improvements in UVM (and most notably clustered page + transfers) show global system improvements up to 30% in netbsd + we should really focus on the VM first (which btw, is a pain in + the ass with the crappy panicking swap code in place) + and then complete the I/O system + braunr: If a system can't transfer data between translators faster + than 100 MB/s, faster devices doesn't make much sense + has anyone considered switching the syscalls to use + sysenter/syscall instead of soft interrupts? 
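For reference, the "absolute upper limit" argument can be made concrete with a raw IPC microbenchmark: time a send/receive loop of header-only messages on a local port. This is a hedged sketch (no error handling, and it exercises only the small-message path, not the OOL transfers discussed below):

    #include <stdio.h>
    #include <time.h>
    #include <mach.h>

    int
    main(void)
    {
        mach_port_t port;
        mach_msg_header_t msg;
        enum { N = 100000 };

        /* A receive right, plus a send right so we can talk to ourselves.  */
        mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE, &port);
        mach_port_insert_right(mach_task_self(), port, port,
                               MACH_MSG_TYPE_MAKE_SEND);

        clock_t start = clock();

        for (int i = 0; i < N; i++) {
            msg.msgh_bits = MACH_MSGH_BITS(MACH_MSG_TYPE_COPY_SEND, 0);
            msg.msgh_size = sizeof msg;
            msg.msgh_remote_port = port;
            msg.msgh_local_port = MACH_PORT_NULL;
            msg.msgh_id = 1234;

            /* Enqueue a header-only message, then dequeue it again.  */
            mach_msg(&msg, MACH_SEND_MSG, sizeof msg, 0, MACH_PORT_NULL,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
            mach_msg(&msg, MACH_RCV_MSG, 0, sizeof msg, port,
                     MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
        }

        printf("%.2f us per send+receive\n",
               1e6 * (double) (clock() - start) / CLOCKS_PER_SEC / N);
        return 0;
    }

Numbers like these bound anything a translator-based I/O path can achieve, which is slpz's point; braunr's counterpoint is that current workloads are nowhere near that bound.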
+ braunr: but I agree on the VM part + guillem: it's in my thesis .. but only there :) + slpz: let's reach 100 MiB/s first, then improve IPC + guillem: that's a must do, also moving to 64 bits :-) + guillem: there are many tiny observations in it, like the use of + global page table entries, which was added by youpi around that time + slpz: I wanted to fix all warnings first before sending my first + batch of 64 bit fixes, but I think I'll just send them after checking + they don't introduce regressions on i386 + braunr: interesting I think I might have skimmed over your + thesis, maybe I should read it properly some time :) + braunr: I see exactly as the opposite. First push IPC to its limit, + then improve devices/VM + guillem: that's great :-) + slpz: improving ipc now will bring *nothing*, whereas improving + vm/io now will make the system considerably more useable + but then fixing 64-bit issues in the Linux code is pretty + annoying given that the latest code from upstream has that already fixed, + and we are “supposed” to drop the linux code from gnumach at some point + :) + slpz: that's a basic principle in profiling, improve what brings + the best gains + braunr: I'm not thinking about today, I'm thinking about how fast + Hurd could be when running on Mach. And, as I said, IPC is the absolute + upper limit. + i'm really not convinced + there are that many tasks making extensive use of IPCs + most are cpu/IO bound + but I have to acknowledge that this concern has been really + aliviated by the EPT improvement discovery + there aren't* that many tasks + braunr: create a ramdisk an write some files on it + braunr: there's no I/O in that case, an performance it's really low + too + well, ramdisks don't even work correctly iirc + I must say that I consider improvements in OOL data moving as if it + were in IPC itself + braunr: you can simulate one with storeio + slpz: then measure what's slow + slpz: it couldn't simply be the vm layer + braunr: + http://www.gnu.org/s/hurd/hurd/libstore/examples/ramdisk.html + ok, it's not a true ramdisk + it's a stack of a ramdisk and extfs servers + ext2fs* + i was thinking about tmpfs + True, but one of Hurd main advantages is the ability of doing that + kind of things + so they must work with a reasonable performance + other systems can too .. + anyway + i get your point, you want faster IPCs, like everyone does + braunr: yes, and I also want to know how fast could be, to have a + reference when profiling complex services + slpz: really improving IPC performance probably requires changing + the semantics... but we don't know which semantics we want until we have + actually tried fixing the existing bottlenecks + well, not only bottlenecks... also other issues such as resource + management + antrik: I think fixing bottlenecks would probably require changes in + some Mach interfaces, not in the IPC subsystem + antrik: I mean, IPC semantics just provide the basis for messaging, + I don't think we will need to change them further + slpz: right, but only once we have addressed the bottlenecks (and + other major shortcomings), we will know how the IPC mechanisms needs to + change to get further improvements... + of course improving Mach IPC performance is interesting too -- if + nothing else, then to see how much of a difference it really makes... 
I + just don't think it should be considered an overriding priority :-) + slpz: I agree with braunr, I don't think improving IPC will bring + much on the short term + the buildds are slow mostly because of bad VM + like lack of read-ahead, the randomness of object cache pageout, + etc. + that doesn't mean IPC shouldn't be improved of course + but we have a big margin for iow + s/iow/now + youpi: I agree with you and with braunr in that regard. I'm not + looking for an inmediate improvement, I just want to see how fast the IPC + (specially, OOL data transfers) could be. + also, migrating threads will help to fix some problems related with + resource management + slpz: BTW, what about Apple's Mach? isn't it essentialy OSF Mach + with some further improvements?... + antrik: IPC is an area with very little room for improvement, so I + don't we will fix that bottlenecks by applying some changes there + well, for large OOL transfers, the limiting facter is certainly + also VM rather than the thread model?... + antrik: yes, but I think is encumbered with the APPLv2 license + ugh + antrik: for OOL transfers, VM plays a big role, but IPC also has + great deal of responsibility + as for resource management, migrating threads do not really help + much IMHO, as they only affect CPU scheduling. memory usage is a much + more pressing issue + BTW, I have thought about passive objects in the past, but didn't + reach any conclusion... so I'm a bit ambivalent about migrating threads + :-) + As an example, in Hurd on GNU Mach, an io_read can't take advantage + from copy-on-write, as buffers from the translator always arrive outside + user's buffer + antrik: well, I think cpu scheduling is a big deal ;-) + antrik: and for memory management, until a better design is + implemented, some fixes could be applied to get us to the same level as a + monolithic kernel + to get even close to monolithic systems, we need either a way to + account server resources used on client's behalf, or to make servers use + client-provided resources. both require changes in the IPC mechanism I + think... + (though *if* we go for the latter option, the CPU scheduling + changes of migrating threads would of course be necessary, in addition to + any changes regarding memory management...) + slpz: BTW, I didn't get the point about io_read and COW... + antrik: AFAIK, the FS cache (which is our primary concern) in most + monolithic system is agnostic with respect the users, and only deals with + absolute numbers. In our case we can do almost the same by combining Mach + and pagers knowledege. + slpz: my primary concern is that anything program having a hiccup + crashes the system... and I'm not sure this can be properly fixed without + working memory accounting + (I guess in can be worked around to some extent by introducing + various static limits on processes... but I'm not sure how well) + it can + antrik: monolithic system also suffer that problem (remember fork + bombs) and it's "solved" by imposing static limits to user processes + (ulimit). + antrik: we do have more problems due to port management, but I think + some degree of control can be archieved with a reasonably amount of + changes. + slpz: in a client-server architecture static limits are much less + effective... that problem exists on traditional systems too, but only in + some specific cases (such as X server); while on a microkernel system + it's ubiquitous... 
that's why we need a *better* solution to this problem + to get anywhere close to monolithic systems diff --git a/open_issues/performance/degradation.mdwn b/open_issues/performance/degradation.mdwn index db759308..8c9a087c 100644 --- a/open_issues/performance/degradation.mdwn +++ b/open_issues/performance/degradation.mdwn @@ -10,8 +10,12 @@ License|/fdl]]."]]"""]] [[!meta title="Degradation of GNU/Hurd ``system performance''"]] -Email, *id:"87mxg2ahh8.fsf@kepler.schwinge.homeip.net"* (bug-hurd, 2011-07-25, -Thomas Schwinge) +[[!tag open_issue_gnumach open_issue_hurd]] + +[[!toc]] + + +# Email, `id:"87mxg2ahh8.fsf@kepler.schwinge.homeip.net"` (bug-hurd, 2011-07-25, Thomas Schwinge) > Building a certain GCC configuration on a freshly booted system: 11 h. > Remove build tree, build it again (2nd): 12 h 50 min. Huh. Remove build @@ -27,9 +31,8 @@ IRC, freenode, #hurd, 2011-07-23: are some serious fragmentation issues < braunr> antrik: both could be induced by fragmentation ---- -During [[IPC_virtual_copy]] testing: +# During [[IPC_virtual_copy]] testing IRC, freenode, #hurd, 2011-09-02: @@ -38,3 +41,8 @@ IRC, freenode, #hurd, 2011-09-02: 800 fifteen minutes ago) manuel: i observed the same behaviour [...] + + +# IRC, freenode, #hurd, 2011-09-22 + +See [[/open_issues/pagers]], IRC, freenode, #hurd, 2011-09-22. diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn index 9e20f8e1..a3baf30d 100644 --- a/open_issues/performance/io_system/clustered_page_faults.mdwn +++ b/open_issues/performance/io_system/clustered_page_faults.mdwn @@ -137,3 +137,26 @@ License|/fdl]]."]]"""]] where the pager interface needs to be modified, not the Mach one?... antrik: would be nice wouldn't it ? :) antrik: more probably the page fault handler + + +# IRC, freenode, #hurd, 2011-09-28 + + antrik: I've just recovered part of my old multipage I/O work + antrik: I intend to clean and submit it after finishing the changes + to the pageout system. + slpz: oh, great! + didn't know you worked on multipage I/O + slpz: BTW, have you checked whether any of the work done for GSoC + last year is any good?... + (apart from missing copyright assignments, which would be a + serious problem for the Hurd parts...) + antrik: It was seven years ago, but I did: + http://www.mail-archive.com/bug-hurd@gnu.org/msg10285.html :-) + antrik: Sincerely, I don't think the quality of that code is good + enough to be considered... but I think it was my fault as his mentor for + not correcting him soon enough... + slpz: I see + TBH, I feel guilty myself, for not asking about the situation + immediately when he stopped attending meetings... + slpz: oh, you even already looked into vm_pageout_scan() back then + :-) diff --git a/open_issues/performance/ipc_virtual_copy.mdwn b/open_issues/performance/ipc_virtual_copy.mdwn index 00fa7180..9708ab96 100644 --- a/open_issues/performance/ipc_virtual_copy.mdwn +++ b/open_issues/performance/ipc_virtual_copy.mdwn @@ -356,3 +356,40 @@ IRC, freenode, #hurd, 2011-09-06: in PV it does not make sense: the guest already provides the translated page table which is just faster than anything else + +IRC, freenode, #hurd, 2011-09-09: + + oh BTW, for another data point: dd zero->null gets around 225 MB/s + on my lowly 1 GHz Pentium3, with a blocksize of 32k + (but only half of that with 256k blocksize, and even less with 1M) + the system has been up for a while... 
don't know whether it's + faster on a freshly booted one + +IRC, freenode, #hurd, 2011-09-15: + + + http://www.reddit.com/r/gnu/comments/k68mb/how_intelamd_inadvertently_fixed_gnu_hurd/ + so is the dd command pointed to by that article a measure of io + performance? + sudoman: no, not really + it's basically the baseline of what is possible -- but the actual + slowness we experience is more due to very unoptimal disk access patterns + though using KVM with writeback caching does actually help with + that... + also note that the title of this post really makes no + sense... nested page tables should provide similar improvements for *any* + guest system doing VM manipulation -- it's not Hurd-specific at all + ok, that makes sense. thanks :) + +IRC, freenode, #hurd, 2011-09-16: + + antrik: I wrote that article (the one about How AMD/Intel fixed...) + antrik: It's obviously a bit of an exaggeration, but it's true that + nested pages supposes a great improvement in the performance of Hurd + running on virtual machines + antrik: and it's Hurd specific, as this system is more affected by + the cost of page faults + antrik: and as the impact of virtualization on the performance is + much higher than (almost) any other OS. + antrik: also, dd from /dev/zero to /dev/null it's a measure on how + fast OOL IPC is. diff --git a/open_issues/resource_management_problems.mdwn b/open_issues/resource_management_problems.mdwn index 1558bebb..8f752d61 100644 --- a/open_issues/resource_management_problems.mdwn +++ b/open_issues/resource_management_problems.mdwn @@ -77,6 +77,10 @@ IRC, freenode, #hurd, 2011-07-31 # Further Examples + * [[hurd/critique]] + * [[IO_accounting]] + * [[translators_set_up_by_untrusted_users]], and [[pagers]] + * [[configure max command line length]] diff --git a/open_issues/resource_management_problems/pagers.mdwn b/open_issues/resource_management_problems/pagers.mdwn new file mode 100644 index 00000000..4c36703c --- /dev/null +++ b/open_issues/resource_management_problems/pagers.mdwn @@ -0,0 +1,322 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + +[[!toc]] + + +# IRC, freenode, #hurd, 2011-09-14 + +Coming from [[translators_set_up_by_untrusted_users]], 2011-09-14 discussion: + + antrik: I think a tunable option for preventing non-root users from + creating pagers and attaching translators could also be desirable + slpz: why would you want to prevent creating pagers and attaching + translators? + Preventing resource exhaustion, I guess. + antrik: security and (as tschwinge says) for prevent a rouge pager + from exhausting the system. + antrik: without the ability to use translators for non-root users, + Hurd can provide (almost) the same level of resource protection than + other *nixes + +See also: [[translators_set_up_by_untrusted_users]], +[[hurd/translator/tmpfs/tmpfs_vs_defpager]]. 
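The exhaustion concern is not specific to pagers: as noted in the tmpfs discussion above, any unprivileged task can already drain swap with plain anonymous memory. A minimal illustration (hedged sketch; intentionally abusive, so not something to run on a machine you care about):

    #include <mach.h>

    /*
     * Touch ever more anonymous memory until vm_allocate() fails.
     * Every touched page must eventually be backed by the default
     * pager, draining swap for the whole system.
     */
    static void
    eat_swap(void)
    {
        vm_address_t addr;
        vm_size_t off;

        while (vm_allocate(mach_task_self(), &addr,
                           1024 * 1024, TRUE) == KERN_SUCCESS) {
            for (off = 0; off < 1024 * 1024; off += vm_page_size)
                *(volatile char *) (addr + off) = 1;
        }
    }

Limiting translators or pagers alone therefore would not close the hole, which is the point made just below about more basic limits being missing.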
+ + the hurd is about that though + there should be also a limit on the number of outstanding requests + that a task can have, and some other easily traceable values + port messages queues have limits + slpz: anything can exhaust the system. there are much more basic + limits that are missing... and I don't see how translators or pagers are + special in that regard + braunr: that's what I said tunable. If I don't share my computer + with untrusted users, I want full functionality. Otherwise, I can enable + that limitation + braunr: but I think those limits are on reception + that's a wrong solution + antrik: because pagers are external memory objects, and those are + treated differently + compared to what ? + and yes, the limit is on the message queue, on reception + why is that a problem ? + antrik: forbidding the use of translator was for security, to avoid + the problem of traversing an untrusted FS + braunr: compared to anonymous memory + braunr: because if the limit is on reception, a task can easily do a + DoS against a server + hm actually, the problems we have with swap handling is that + anonymous memory is handled in a very similar way as other objects + braunr: I want to limit the number of outstanding (unprocessed + messages in queues) requests + slpz: the solution isn't about forbidding the use of translators, + but changing common code (libc i guess) not to use them, they can still + run beside + braunr: that's because, currently, the external page limit is not + enforced + i'm also not sure about DoS attacks + if i'm right, there is often one port for each managed object, + which usually exist per client + braunr: yes, that could an option too (for translators, not for + pagers) + i don't see how pagers wouldn't be translators on the hurd + braunr: all pagers are translators, but not all translators are + pagers ;-) + so if it works for translators, it also works for pagers + braunr: it would fix the security issue, but not the resource + exhaustion problem, with only affects to pagers + i just don't see a point in implementing resource limits before + even fixing other fundamental issues + the only way to avoid resource exhaustion is resource limits + slpz: just not following untrusted translators is much more useful + than forbidding them alltogether + and the main problem of mach is resource accounting + so first, fix that, using the critique as a starting point + +[[hurd/critique]]. + + braunr: i'm not saying that this should be implemented right now, + i'm just pointing out this possibility + i think we're all mostly aware of it + braunr: resource accounting, as it's expressed in the critique, + would be wonderful, but it's just too complex IMHO + it requires carefully designed changes to the interface yes + to the interface, to the internals, to user space tasks... + the internals wouldn't be impacted that much + user space tasks would mostly include hurd servers + if the changes are centralized in libraries, it should be easy to + provide to the servers + + +# IRC, freenode, #hurd, 2011-09-22 + + antrik: I've also implemented a simple resource control on dirty + pages and changed pageout_scan to free external pages, and only touch + anonymous memory if it's really needed + antrik: those combined make the system work better under heavy load + antrik: 1.5 GB of RAM and another 1.5 GB of swap helps a lot, too + :-) + hm... I'm not sure what these things mean exactly TBH... 
+
+
+# IRC, freenode, #hurd, 2011-09-22
+
+    <slpz> antrik: I've also implemented a simple resource control on
+      dirty pages and changed pageout_scan to free external pages, and
+      only touch anonymous memory if it's really needed
+    <slpz> antrik: those combined make the system work better under heavy
+      load
+    <slpz> antrik: 1.5 GB of RAM and another 1.5 GB of swap help a lot,
+      too :-)
+    <antrik> hm... I'm not sure what these things mean exactly TBH... but
+      I wonder whether some of these could fix the performance degradation
+      (and ultimate crash) I described recently...
+
+[[/open_issues/default_pager]], [[system performance degradation
+(?)|performance/degradation]].
+
+    <antrik> care to explain them to a noob like me?
+    <slpz> probably not. During my tests, I've noticed that, at some
+      points, the system performance starts to degrade, and this doesn't
+      change until it's restarted
+    <slpz> but I wasn't able to create a test case to reproduce the bug...
+    <slpz> antrik: Sure. First, I've changed GNU Mach to:
+    <slpz> - Classify all pages from data_supply as external, and count
+      them in vm_page_external_count (previously, this variable was always
+      zero)
+
+[[/open_issues/mach_vm_pageout]]
+
+    <slpz> - Count all pages for which a data_unlock has been requested as
+      potentially dirty pages
+    <antrik> there is one important bit I forgot to mention in my recent
+      report: one "reliable" way to cause growing swap usage is simply
+      installing a lot of debian packages (e.g. running an apt-get
+      upgrade)
+    <antrik> some other kinds of I/O also seem to have such an effect, but
+      I wasn't able to pinpoint specific situations
+    <slpz> - Establish a limit on how many potentially dirty pages are
+      allowed. If it's reached, a notification (right now it's just a
+      bogus m_o_data_unlock, to avoid implementing a new RPC) is sent to
+      the pager which has generated the page fault
+    <slpz> - Establish a hard limit on those dirty pages. If it's reached,
+      threads asking for a data_unlock are blocked until someone cleans
+      some pages. This should be improved with a forced pageout, if
+      needed.
+    <slpz> - And finally, in vm_pageout_scan, run over the inactive queue
+      searching for clean, external pages, freeing them. If it's not
+      possible to free enough pages, or if vm_page_external_count is less
+      than 10% of the system's memory, the "normal" pageout is used.
+    <slpz> I need to clean up things a little, but I want to send a
+      preliminary patch to bug-hurd ASAP, to have more people testing it.
+    <slpz> antrik: Do you think that performance degradation can be
+      related to the number of threads of your ext2fs translators?
+    <antrik> slpz: hm... I didn't watch that recently; but in the past, I
+      observed that the thread count is pretty constant after it reaches
+      something like 14000 on heavy load...
+    <antrik> err... wait, 14000 was ports :-)
+    <antrik> I doubt my system would survive 14000 threads ;-)
+    <antrik> don't remember thread count... I guess I should start
+      watching this again
+    <slpz> antrik: I was thinking that 14000 threads sounded like a lot
+      :-)
+    <slpz> what I know for sure is that when operating with large files,
+      the deactivation of all pages of the memory object, which is done
+      after every operation, really hurts performance
+    <antrik> right now my root FS has 5100 ports and a mere 71 threads...
+      but then, it's almost freshly booted :-)
+    <slpz> that's why I've just commented out that operation in my code,
+      since it's not really needed anymore :-)
+    <slpz> anyway, after submitting all my pending mails to bug-hurd, I'll
+      try to hunt that bug. Sounds fun.
+    <antrik> regarding your explanation, I'm still trying to wrap my head
+      around some of the details. I must admit that I don't remember what
+      data_unlock does... or maybe I never fully understood it
+    <antrik> the limit on dirty pages is global?
+    <slpz> yes, right now it's global
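+
+The pageout policy described above, as an editorial toy model (a
+hypothetical sketch, not slpz's actual patch): clean external pages can be
+reclaimed without any writeback, so the scan frees those first, and the
+caller falls back to the regular pageout -- which may push anonymous
+memory to the default pager -- only if that is not enough:
+
+    #include <stdbool.h>
+    #include <stddef.h>
+    #include <stdio.h>
+
+    struct page
+    {
+      bool external;  /* backed by a memory manager (e.g. ext2fs) */
+      bool dirty;     /* potentially modified since data_supply */
+      bool resident;
+    };
+
+    /* Scan the inactive queue, freeing clean external pages.  Returns
+       the number of pages freed; the caller falls back to the normal
+       pageout if this is less than TARGET.  */
+    static unsigned int
+    reclaim_clean_external (struct page *inactive, size_t n,
+                            unsigned int target)
+    {
+      unsigned int freed = 0;
+
+      for (size_t i = 0; i < n && freed < target; i++)
+        if (inactive[i].resident && inactive[i].external
+            && !inactive[i].dirty)
+          {
+            inactive[i].resident = false;  /* no writeback needed */
+            freed++;
+          }
+      return freed;
+    }
+
+    int
+    main (void)
+    {
+      struct page q[] = {
+        { true, false, true }, { true, true, true },
+        { false, false, true }, { true, false, true },
+      };
+      unsigned int freed = reclaim_clean_external (q, 4, 3);
+
+      if (freed < 3)
+        printf ("freed only %u, falling back to normal pageout\n", freed);
+      else
+        printf ("freed %u clean external pages\n", freed);
+      return 0;
+    }
+
+The soft and hard dirty-page limits from the list above would sit in front
+of this, throttling writers before the scan becomes necessary.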
+
+    <marcusb> I'll try to find the old discussion of the thread storm
+      stuff
+    <marcusb> there was some concern about deadlocks
+    <slpz> marcusb: yes, because we were talking about putting a static
+      limit on the server threads of a translator
+    <slpz> marcusb: and that was wrong (my fault, I was even dumber back
+      then :-P)
+    <marcusb> oh boy, digging in old mail is no fun. first I see mistakes
+      in my english. then I see quite complicated pager stuff I don't ever
+      remember touching. but there is a patch, and it has my name on it
+    <marcusb> I think I lost a couple of the early years of my hurd
+      hacking :)
+    <antrik> hm... I reread the chapter on locking, and it's still above
+      me :-(
+    <marcusb> not sure what you are talking about, but if there are any
+      specific questions...
+    <antrik> marcusb: external pager interface
+
+[[microkernel/mach/external_pager_mechanism]].
+
+    <marcusb> uuuuh ;)
+    <antrik> memory_object_lock_request(), memory_object_lock_completed(),
+      memory_object_data_unlock()
+    <marcusb> is that from the mach manual?
+    <antrik> yes
+    <antrik> I didn't really understand that part when I first read it a
+      couple of years ago, and I still don't understand it now :-(
+    <marcusb> I am sure I didn't understand it either
+    <marcusb> and maybe I missed my window :)
+    <marcusb> let's see
+    <antrik> hehe
+    <antrik> slpz: what exactly do you mean by "the pager which has
+      generated the page fault"?
+    <antrik> marcusb: essentially I'm trying to understand the explanation
+      of the changes slpz did, but there are several bits totally obscure
+      to me :-(
+    <slpz> antrik: when an I/O operation is requested of ext2fs, it maps
+      the object in question into its own space, and then memcpy's from/to
+      there
+    <slpz> antrik: so the translator (which is also a pager) is the one
+      who generates the page fault
+    <marcusb> yeah
+    <marcusb> antrik: it's important to understand which messages are sent
+      by the kernel to the manager and which are sent the other way
+    <marcusb> if the dest port is memory_object_t, that indicates a msg
+      from kernel to manager. if it is memory_object_control_t, it's a msg
+      from manager to kernel
+    <slpz> antrik: m_o_lock_request is used by the pager to "settle" the
+      status of a memory object, m_o_lock_completed is the answer from the
+      kernel when the lock has been completed (only if the client has
+      requested to be notified), and m_o_data_unlock is a request from the
+      kernel to change the level of protection for a page (it's called
+      from vm_fault.c)
+    <marcusb> slpz: but it's not pagers generating page faults, but users
+      of the memory object on the other side
+    <antrik> marcusb: well, I think the direction is clear to me... but
+      the purpose not really :-)
+    <marcusb> ie a client that mapped a file
+    <slpz> antrik: in ext2fs, all pages are initially provided to the
+      kernel (via data_supply) write protected. When a write operation is
+      done over one of those pages, a page fault is generated, which sends
+      an m_o_data_unlock to the pager, which answers (if convenient) with
+      a page_lock decreasing the protection level
+    <marcusb> antrik: one use of lock_request is when you want to shut
+      down cleanly and want to get the dirty pages written back to you
+      from the kernel.
+    <marcusb> antrik: the other thing may be COW strategies
+    <slpz> marcusb: well, pagers and clients are in the same task for most
+      translators, like ext2fs
+    <marcusb> slpz: oh.
+    <slpz> marcusb: but yes, a read operation on an mmap'ed file would
+      trigger the fault in a client user task
+    <marcusb> slpz: I think I forgot everything about pagers :)
+    <slpz> marcusb: pager-memcpy.c is the key :-)
+    <marcusb> slpz: what becomes of the fault then? the kernel sees it's a
+      mapped memory object. will it then talk to the manager or to a
+      pager?
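+
+For reference, the unlock dance slpz describes, as a hypothetical handler
+sketch (the Hurd's real counterpart lives in [[hurd/libpager]]; the
+function name here is made up, but memory_object_lock_request() is the
+genuine Mach call): the kernel sends m_o_data_unlock for a page it was
+given write-protected, and the pager grants the write by requesting that
+all access restrictions be dropped:
+
+    #include <mach.h>
+    #include <mach/memory_object.h>
+
+    kern_return_t
+    my_memory_object_data_unlock (mach_port_t object, mach_port_t control,
+                                  vm_offset_t offset, vm_size_t length,
+                                  vm_prot_t desired_access)
+    {
+      /* Decide here whether the write may proceed (quotas, read-only
+         media, ...).  Granting it means asking the kernel to stop
+         restricting access to the page: lock_value is the set of
+         accesses to *deny*, so VM_PROT_NONE denies nothing.  */
+      return memory_object_lock_request (control, offset, length,
+                                         MEMORY_OBJECT_RETURN_NONE,
+                                         FALSE,  /* don't flush */
+                                         VM_PROT_NONE, MACH_PORT_NULL);
+    }
+
+Answering with VM_PROT_WRITE as the lock value instead would keep the page
+read-only, which is how a dirty-page limit could throttle writers.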
+
+    <antrik> slpz: the translator causes the faults itself when it handles
+      io_read()/io_write() requests I suppose, as opposed to clients
+      accessing mmap()ed objects, which then generate the faults
+      themselves?...
+    <antrik> ah, that's actually what you already said above :-)
+    <slpz> marcusb: I'm not sure what you mean by "manager"...
+    <marcusb> manager == memory object
+    <slpz> mh
+    <slpz> marcusb: for all external objects, it will ask their current
+      pager
+    <marcusb> slpz: I think I am missing a couple of details, so
+      nevermind.
+    <marcusb> It's starting to come back to me, but I am a bit afraid of
+      that ;)
+    <marcusb> what I love about the Hurd is how damn readable the code is
+    <marcusb> considering it's an object system, it's so much nicer to
+      read than gtk stuff
+    <slpz> when you get the big picture, it's actually somewhat fun to see
+      how data moves around just to fulfill a simple read()
+    <marcusb> you should make a diagram!
+    <marcusb> bonus points for animated video ;)
+
+[[hurd/IO_path]].
+
+    <slpz> marcusb: heh, take a look at the hurd-specific parts of
+      glibc... I cry in pain every time I do that...
+    <marcusb> slpz: oh yeah, rdwr-internal.
+    <marcusb> oh man
+    <marcusb> slpz: funny thing, I just looked at them the other day
+      because of the security issue
+    <slpz> marcusb: I think there was one, maybe a slide from someone's
+      presentation...
+    <marcusb> I think I was always confused about the pager/memobj/kernel
+      interactions
+    <slpz> marcusb: I'm barely able to read Roland's glibc code. I think
+      it's out of my reach.
+    <antrik> marcusb: I think part of the problem is confusing terminology
+    <marcusb> it's good that you are instrumenting the mach kernel to see
+      what's actually going on in there. it was a closed book for me, but
+      neal took a peek and got a much better understanding of the
+      performance issues than I ever did
+    <antrik> when talking about "pager", we usually mean the process doing
+      the paging; but in mach terminology this actually seems to be the
+      "manager", while a "pager" is an individual object in the manager
+      process... or something like that ;-)
+    <marcusb> antrik: I just never took a look at the big picture. I look
+      at the parts
+    <marcusb> I knew the tail, ears, and legs of the elephant.
+    <marcusb> it's a lot of code for a beginner
+    <antrik> I never understood the distinction between "pager" and
+      "memory object" though...
+    <antrik> maybe "pager" refers to the object in the external pager,
+      while "memory object" is the part managed in Mach itself?...
+    <marcusb> memory object is a real object, to which you can send
+      messages. it's implemented in the server
+    <antrik> hm... maybe it's the other way around then ;-)
+    <marcusb> there is also the default pager
+    <marcusb> I think the pager is just another name for the process that
+      serves the memory object (default pager == memory object for
+      anonymous memory == swap)
+    <marcusb> but!
+    <marcusb> there is also libpager
+
+[[hurd/libpager]]
+
+    <marcusb> and that's a more complicated beast
+    <antrik> actually, the correct term seems to be "default memory
+      manager"...
+    <marcusb> yeah
+    <marcusb> from mach's pov
+    <marcusb> we always called it default pager in the Hurd
+    <antrik> marcusb: problem is that "pager" is sometimes used in the
+      Mach documentation to refer to memory object ports IIRC
+    <slpz> isn't it the defpager executable?
+    <marcusb> could be
+    <marcusb> it's the same thing, really
+    <antrik> indeed, the program implementing the default memory manager
+      is called "default pager"... so the terminology is really
+      inconsistent
+    <marcusb> the hurd's pager library is a high-level abstraction for
+      mach's external memory object interface.
+    <marcusb> i wouldn't worry about it too much
+    <antrik> I never looked at libpager
+    <marcusb> you should!
+    <marcusb> it's an important beast
+    <antrik> never seemed relevant to anything I did so far... though
+      maybe it would help understanding
+    <marcusb> it's related to what you are looking at now :)
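+
+To make the libpager side concrete, an editorial skeleton of the upcalls a
+translator defines (assuming the declarations in <hurd/pager.h>; see that
+header for the authoritative list and signatures, which also includes a
+few more callbacks such as pager_report_extent): libpager speaks the raw
+memory_object_* protocol above and turns it into these per-page calls:
+
+    #include <hurd/pager.h>
+    #include <errno.h>
+
+    /* Supply the page's contents.  Setting *WRITE_LOCK makes the kernel
+       map the page read-only, so the first write faults and ends up in
+       pager_unlock_page below.  */
+    error_t
+    pager_read_page (struct user_pager_info *pager, vm_offset_t page,
+                     vm_address_t *buf, int *write_lock)
+    {
+      *write_lock = 1;
+      return EIO;  /* sketch only */
+    }
+
+    /* The m_o_data_unlock case discussed above: grant or refuse write
+       access to a previously write-protected page.  */
+    error_t
+    pager_unlock_page (struct user_pager_info *pager, vm_offset_t page)
+    {
+      return 0;
+    }
+
+    /* Write back a dirty page on eviction or sync.  */
+    error_t
+    pager_write_page (struct user_pager_info *pager, vm_offset_t page,
+                      vm_address_t buf)
+    {
+      return EIO;  /* sketch only */
+    }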
diff --git a/open_issues/rework_gnumach_ipc_spaces.mdwn b/open_issues/rework_gnumach_ipc_spaces.mdwn
index b3d1b4a4..7c66776b 100644
--- a/open_issues/rework_gnumach_ipc_spaces.mdwn
+++ b/open_issues/rework_gnumach_ipc_spaces.mdwn
@@ -10,7 +10,7 @@ License|/fdl]]."]]"""]]
 
 [[!tag open_issue_gnumach]]
 
-[[!toc]
+[[!toc]]
 
 
 # IRC, freenode, #hurd, 2011-05-07
diff --git a/open_issues/translators_set_up_by_untrusted_users.mdwn b/open_issues/translators_set_up_by_untrusted_users.mdwn
index 36fe5438..97f48bba 100644
--- a/open_issues/translators_set_up_by_untrusted_users.mdwn
+++ b/open_issues/translators_set_up_by_untrusted_users.mdwn
@@ -324,3 +324,24 @@ do bear some similarity with the issue we're discussing here.
     it should be one's normal right to change the view one has of it
     we discussed that once actually I believe... err... private namespaces
     I mean
+
+IRC, freenode, #hurd, 2011-09-10:
+
+    <cjuner> I am rereading Neal Walfield's and Marcus Brinkmann's
+      critique of the hurd on mach. One of the arguments is that a file
+      system may be malicious (by DoSing its clients with infinitely deep
+      directory hierarchies). Is there an answer to that that does not
+      require programs to be programmed defensively against such
+      possibilities?
+
+IRC, freenode, #hurd, 2011-09-14:
+
+    <antrik> cjuner: regarding malicious filesystems: the answer is to do
+      exactly the same as FUSE on Linux: don't follow translators set up
+      by untrusted users by default
+    <cjuner> antrik, but are legacy programs somehow protected? What about
+      executing `find`? Or is GNU's find somehow protected from that?
+    <antrik> cjuner: I'm talking about a global policy
+    <cjuner> antrik, and who would implement that policy?
+    <antrik> cjuner: either glibc or the parent translators
+
+Continued at [[resource_management_problems/pagers]].
-- 
cgit v1.2.3