From 8cee055ec4fac00e59f19620ab06e2b30dccee3c Mon Sep 17 00:00:00 2001 From: Thomas Schwinge Date: Wed, 11 Jul 2012 22:39:59 +0200 Subject: IRC. --- hurd/debugging/rpctrace.mdwn | 80 +- hurd/translator/ext2fs.mdwn | 44 + hurd/translator/procfs/jkoenig/discussion.mdwn | 53 +- microkernel/mach.mdwn | 6 +- microkernel/mach/deficiencies.mdwn | 260 +++++ microkernel/mach/gnumach/memory_management.mdwn | 35 +- open_issues/binutils_gold.mdwn | 181 +-- open_issues/code_analysis.mdwn | 17 +- open_issues/dde.mdwn | 10 + open_issues/fcntl_locking_dev_null.mdwn | 38 + open_issues/gcc.mdwn | 54 + open_issues/gdb.mdwn | 2 +- open_issues/gdb_attach.mdwn | 41 + open_issues/glibc.mdwn | 2 + open_issues/glibc/mremap.mdwn | 221 ++++ open_issues/gnumach_i686.mdwn | 26 + open_issues/gnumach_integer_overflow.mdwn | 17 + open_issues/gnumach_page_cache_policy.mdwn | 589 ++++++++++ open_issues/gnumach_tick.mdwn | 35 + open_issues/gnumach_vm_map_red-black_trees.mdwn | 20 + .../gnumach_vm_object_resident_page_count.mdwn | 22 + open_issues/libpthread_CLOCK_MONOTONIC.mdwn | 24 +- open_issues/low_memory.mdwn | 113 ++ open_issues/mach-defpager_swap.mdwn | 20 + open_issues/metadata_caching.mdwn | 31 + open_issues/multithreading.mdwn | 15 +- open_issues/nfs_trailing_slash.mdwn | 36 + open_issues/page_cache.mdwn | 10 +- open_issues/performance.mdwn | 16 +- open_issues/performance/io_system/read-ahead.mdwn | 1176 ++++++++++++++++++++ open_issues/pfinet_vs_system_time_changes.mdwn | 24 +- open_issues/qemu_writeback.mdwn | 18 + open_issues/strict_aliasing.mdwn | 21 + 33 files changed, 3059 insertions(+), 198 deletions(-) create mode 100644 microkernel/mach/deficiencies.mdwn create mode 100644 open_issues/fcntl_locking_dev_null.mdwn create mode 100644 open_issues/gdb_attach.mdwn create mode 100644 open_issues/glibc/mremap.mdwn create mode 100644 open_issues/gnumach_i686.mdwn create mode 100644 open_issues/gnumach_integer_overflow.mdwn create mode 100644 open_issues/gnumach_tick.mdwn create mode 100644 open_issues/gnumach_vm_object_resident_page_count.mdwn create mode 100644 open_issues/low_memory.mdwn create mode 100644 open_issues/mach-defpager_swap.mdwn create mode 100644 open_issues/metadata_caching.mdwn create mode 100644 open_issues/nfs_trailing_slash.mdwn create mode 100644 open_issues/qemu_writeback.mdwn create mode 100644 open_issues/strict_aliasing.mdwn diff --git a/hurd/debugging/rpctrace.mdwn b/hurd/debugging/rpctrace.mdwn index fd24f081..df6290f7 100644 --- a/hurd/debugging/rpctrace.mdwn +++ b/hurd/debugging/rpctrace.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2007, 2008, 2009, 2010, 2011 Free Software +[[!meta copyright="Copyright © 2007, 2008, 2009, 2010, 2011, 2012 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable @@ -89,6 +89,84 @@ See `rpctrace --help` about how to use it. braunr: the output of rpctrace --help should tell the default dir for msgids +* IRC, freenode, #hurd, 2012-06-30 + + hello. Has anyone faced with problem when translator works + fine, but when it is started via rpctrace it hangs? Probably you know + what can cause this? + mcsim: rpctrace itself is quite buggy + zhengda once did a number of improvements, but they never went + upstream... + well, he never explained how his fixes worked :) + GNU/Hurd is no different from other projects in that regard: if + you don't explain how your patches work, there's low chance that they + are applied + unless the maintainer has time to dive himself, which we don't + "it compiles, ship it!" 
+ pinotree: i guess the hurd is different in that particular + regard :p + not different from linux + eh, they include staging drivers now :) + we have a sort-of staging tree as well, with netdde + we don't really care about stability there + youpi: actually, I think by now (and not to a small part + because of this episode) that we are too strict about patch + submission + well, review really is needed, otherwise source gets into a bad + shape + while zhengda's variant might not have been ideal (nobody of + us understands the workings of rpctrace enough to tell), I have + little doubt that it would be an improvement... + it happened quite a few times that a fix revealed to be + actually bogus + in that particular case, I agree + the problem is that usually what happens is that questions are + asked + and the answers never happen + and thus the patch gets lost + after all, when he when he submitted that patch, he had a much + better understanding of rpctrace than any of us... + sure + Linus is actually quite pragmatic about that. from what I've + seen, if he can be convinced that something is *probably* an + improvement over the previous status, he will usually merge it, even + if he has some qualms + when there is a maintainer, he usually requires his approval, + doesn't he? + in particular, for code that is new or has been in a very bad + shape before, standards shouldn't be as high as for changes to known + good code. and quite frankly, large parts of the Hurd code base + aren't all that good to begin with... + sure + well, sure. in this case, we should have just appointed + zhengda to be the rpctrace maintainer :-) + BTW, as his version is quite fundamentally different, perhaps + instead of merging the very large patch, perhaps we should just ship + both versions, and perhaps drop the old one at some point if the new + one turns out to work well... + (and perhaps I overused the word perhaps in that sentence + perhaps ;-) ) + about that particular patch, you had needed raised a few bits + and there was no answers + the patch is still in my mbox, far away + so it was *not* technically lost + it's just that as usual we lack manpower + yeah, I know. but many of the things I raised were mostly + formalisms, which might be helpful for maintaining high-quality code, + but probably were just a waste of time and effort in this case... I'm + not surprised that zhengda lost motivation to pursue this further :-( + it would help a lot to get the ton of patches in the debian + packages upstream :) + braunr: there aren't many, and usually for a good reason + some of them are in debian for testing, and can probably be + commited at some point + youpi: we could mark (with dep3 headers) the ones which are + meant to be debian-specific + sure + well, there are also a few patches that are not exactly + Debian-specific, but not ready for upstream either... + antrik: yes + # See Also diff --git a/hurd/translator/ext2fs.mdwn b/hurd/translator/ext2fs.mdwn index ad79c7b9..8e15d1c7 100644 --- a/hurd/translator/ext2fs.mdwn +++ b/hurd/translator/ext2fs.mdwn @@ -18,6 +18,8 @@ License|/fdl]]."]]"""]] * [[Page_cache]] + * [[metadata_caching]] + ## Large Stores @@ -43,6 +45,48 @@ Smaller block sizes are commonly automatically selected by `mke2fs` when using small backend stores, like floppy devices. 
+#### IRC, freenode, #hurd, 2012-06-30 + + at least having the same api in the debian package and the git + source would be great (in reference to the large store patch ofc) + braunr: the api part could be merged perhaps + it's very small apparently + braunr: the large store patch is a sad story. when it was first + submitted, one of the maintainers raised some concerns. the other didn't + share these (don't remember who is who), but the concerned one never + followed up with details. so it has been in limbo ever since. tschwinge + once promised to take it up, but didn't get around to it so far. plus, + the original author himself mentioned once that he didn't consider it + finished... + antrik: it's clearly not finished + there are XXXs here and there + it's called an RC1 and RC2 is mentioned in the release notes + youpi: well, that doesn't stop most other projects from commiting + stuff... including most emphatically the original Hurd code :-) + what do you refer to my "that" ? :) + "XXX" + right + at the time it made sense to delay applying + but I guess by nowadays standard we should just as well commit it + it works enough for Debian, already + there is just one bug I nkow about + the apt database file keeps haveing the wrong size, fixed by e2fsck + youpi: remember that patch should be fixed in the offset + declaration in diskfs.h + I don't remember about that + did we fix it in the debian package? + nope + you had issues when fixing it, didn't you? + (I don't remember where I can find the details about this) + i changed it, recompiled hurd and installed it, started a perl + rebuild and when running one of the two lfs tests it hard locked the vm + after ext2fs was taking 100% cpu for a bit + i don't exclude i could have done something stupid on my side + though + or there could just be actual issues, uncovered here + which can be quite probable + + # Documentation * diff --git a/hurd/translator/procfs/jkoenig/discussion.mdwn b/hurd/translator/procfs/jkoenig/discussion.mdwn index e7fdf46e..182b438b 100644 --- a/hurd/translator/procfs/jkoenig/discussion.mdwn +++ b/hurd/translator/procfs/jkoenig/discussion.mdwn @@ -68,7 +68,7 @@ IRC, #hurd, around October 2010 owner, but always with root group -# `/proc/$pid/stat` being 400 and not 444, and some more +# `/proc/[PID]/stat` being 400 and not 444, and some more IRC, freenode, #hurd, 2011-03-27 @@ -187,7 +187,7 @@ IRC, freenode, #hurd, 2011-07-22 server anyway, I think. -# `/proc/mounts`, `/proc/$pid/mounts` +# `/proc/mounts`, `/proc/[PID]/mounts` IRC, freenode, #hurd, 2011-07-25 @@ -277,3 +277,52 @@ Needed by glibc's `pldd` tool (commit it's very weird for example for fd connected to files that have been unlinked. it looks like a broken symlink, but when dereferencing (e.g. with cp), you get the actual file contents... + + +# `/proc/[PID]/maps` + +## IRC, OFTC, #debian-hurd, 2012-06-20 + + bdefreese: the two elfutils tests fail because there are no + /proc/$pid/maps files + that code is quite relying on linux features, like locating the + linux kernel executables and their modules, etc + (see eg libdwfl/linux-kernel-modules.c) + refactor elfutils to have the linux parts executed only on linux + :D + Oh yeah, the maintainer already seems really thrilled about + Hurd.. Did you see + http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=662041 ? 
+ kurt is generally helpful with us (= hurd) + most probably there he is complaining that we let elfutils build + with nocheck (ie skipping the test suite run) instead of investigate and + report why the test suite failed + + +# IRC, freenode, #hurd, 2011-06-19 + + jkoenig: procfs question: in process.c, process_lookup_pid, why + is the entries[2].hook line repeated twice? + pinotree, let me check + pinotree, it's probably just a mistake, there's no way the second + one has any effect + jkoenig: i see, it looked like you c&p'd that code accidentally + pinotree, it's probably what happened, yes. + + +# IRC, freenode, #hurd, 2012-06-30 + + btw, what do you think about making jkoening's procfs master the + real master? + probably a good idea + it does work quite well, except a few pidof hangs + surely better than the old one :) + yes :) + + +# `/proc/[PID]/cwd` + +## IRC, freenode, #hurd, 2012-06-30 + + * pinotree has a local work to add the /proc/$pid/cwd symlink, but relying + on "internal" (but exported) glibc functions diff --git a/microkernel/mach.mdwn b/microkernel/mach.mdwn index deaf6788..02627766 100644 --- a/microkernel/mach.mdwn +++ b/microkernel/mach.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2007, 2008, 2010 Free Software Foundation, +[[!meta copyright="Copyright © 2007, 2008, 2010, 2012 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable @@ -14,6 +14,8 @@ microkernel currently used by the [[Hurd]]. * [[Concepts]] + * [[Deficiencies]] + * [[Documentation]] * [[History]] @@ -30,6 +32,8 @@ microkernel currently used by the [[Hurd]]. ([API](http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/index.html)) (**non-free**) + * [[open_issues/OSF_Mach]] + # Related diff --git a/microkernel/mach/deficiencies.mdwn b/microkernel/mach/deficiencies.mdwn new file mode 100644 index 00000000..f2f49975 --- /dev/null +++ b/microkernel/mach/deficiencies.mdwn @@ -0,0 +1,260 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_documentation open_issue_gnumach]] + + +# IRC, freenode, #hurd, 2012-06-29 + + I do not understand what are the deficiencies of Mach, the + content I find on this is vague... 
+ the major problems are that the IPC architecture offers poor + performance; and that resource usage can not be properly accounted to the + right parties + antrik: the more i study it, the more i think ipc isn't the + problem when it comes to performance, not directly + i mean, the implementation is a bit heavy, yes, but it's fine + the problems are resource accounting/scheduling and still too much + stuff inside kernel space + and with a very good implementation, the performance problem would + come from crossing address spaces + (and even more on SMP, i've been thinking about it lately, since + it would require syncing mmu state on each processor currently using an + address space being modified) + braunr: the problem with Mach IPC is that it requires too many + indirections to ever be performant AIUI + antrik: can you mention them ? + the semantics are generally quite complex, compared to Coyotos for + example, or even Viengoos + antrik: the semantics are related to the message format, which can + be simplified + i think everybody agrees on that + i'm more interested in the indirections + but then it's not Mach IPC anymore :-) + right + 22:03 < braunr> i mean, the implementation is a bit heavy, yes, + but it's fine + that's not an implementation issue + that's what i meant by heavy :) + well, yes and no + Mach IPC have changed over time + it would be newer Mach IPC ... :) + the fact that data types are (supposed to be) transparent to the + kernel is a major part of the concept, not just an implementation detail + but it's not just the message format + transparent ? + but they're not :/ + the option to buffer in the kernel also adds a lot of complexity + buffer in the kernel ? + ah you mean message queues + yes + braunr: eh? the kernel parses all the type headers during transfer + yes, so it's not transparent at all + maybe you have a different understanding of "transparent" ;-) + i guess + I think most of the other complex semantics are kinda related to + the in-kernel buffering... + i fail to see why :/ + well, it allows ports rights to be destroyed while a message is in + transfer. a lot of semantics revolve around what happens in that case + yes but it doesn't affect performance a lot + sure it does. it requires a lot of extra code and indirections + not a lot of it + "a lot" is quite a relative term :-) + compared to L4 for example, it *is* a lot + and those indirections (i think you refer to more branching here) + are taken only when appropriate, and can be isolated, improved through + locality, etc.. + the features they add are also huge + L4 is clearly insufficient + all current L4 forks have added capabilities .. + (that, with the formal verification, make se4L one of the + "hottest" recent system projects) + seL4* + yes, but with very few extra indirection I think... similar to + EROS (which claims to have IPC almost as efficient as the original L4) + possibly + I still fail to see much real benefit in formal verification :-) + but compared to other problems, this added code is negligible + antrik: for a microkernel, me too :/ + the kernel is already so small you can simply audit it :) + no, it's not neglible, if you go from say two cache lines touched + per IPC (original L4) to dozens (Mach) + every additional variable that needs to be touched to resolve some + indirection, check some condition adds significant overhead + if you compare the dozens to the huge amount of inter processor + interrupt you get each time you change the kernel map, it's next to + nothing .. 
+ change the kernel map? not sure what you mean + syncing address spaces on hundreds of processors each time you + send a message is a real scalability issue here (as an example), where + Mach to L4 IPC seem like microoptimization + braunr: modify, you mean? + yes + (not switchp + ) + but that's only one example + yes, modify, not switch + also, we could easily get rid of the ihash library + making the message provide the address of the object associated to + a receive right + so the only real indirection is the capability, like in other + systems, and yes, buffering adds a bit of complexity + there are other optimizations that could be made in mach, like + merging structures to improve locality + "locality"? + having rights close to their target port when there are only a few + pinotree: locality of reference + for cache efficiency + hundreds of processors? let's stay realistic here :-) + i am .. + a microkernel based system is also a very good environment for RCU + (i yet have to understand how liburcu actually works on linux) + I'm not interested in systems for supercomputers. and I doubt + desktop machines will get that many independant cores any time soon. we + still lack software that could even romotely exploit that + hum, the glibc build system ? :> + lol + we have done a survey over the nix linux distribution + quite few packages actually benefit from a lot of cores + and we already know them :) + what i'm trying to say is that, whenever i think or even measure + system performance, both of the hurd and others, i never actually see the + IPC as being the real performance problem + there are many other sources of overhead to overcome before + getting to IPC + I completely agree + and with the advent of SMP, it's even more important to focus on + contention + (also, 8 cores aren't exactly a lot...) + antrik: s/8/7/ , or even 6 ;) + braunr: it depends a lot on the use case. most of the problems we + see in the Hurd are probably not directly related to IPC performance; but + I pretty sure some are + (such as X being hardly usable with UNIX domain sockets) + antrik: these have more to do with the way mach blocks than IPC + itself + similar to the ext2 "sleep storm" + a lot of overhead comes from managing ports (for for example), + which also mostly comes down to IPC performance + antrik: yes, that's the main indirection + antrik: but you need such management, and the related semantics in + the kernel interface + (although i wonder if those should be moved away from the message + passing call) + you mean a different interface for kernel calls than for IPC to + other processes? that would break transparency in a major way. not sure + we really want that... + antrik: no + antrik: i mean calls specific to right management + admittedly, transparency for port management is only useful in + special cases such as rpctrace, and that probably could be served better + with dedicated debugging interfaces... + antrik: i.e. not passing rights inside messages + passing rights inside messages is quite essential for a capability + system. the problem with Mach IPC in regard to that is that the message + format allows way more flexibility than necessary in that regard... + antrik: right + antrik: i don't understand why passing rights inside messages is + important though + antrik: essential even + braunr: I guess he means you need at least one way to pass rights + braunr: well, for one, you need to pass a reply port with each RPC + request... 
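+
+For illustration, here is roughly what naming a reply port looks like at the
+raw `mach_msg` level (a minimal, hypothetical sketch; the routine id and the
+ports are made up, and a real RPC as generated by MIG would append typed
+arguments after the header, which is part of the "overpowered" message format
+discussed here):
+
+    /* Hypothetical sketch: send a bodyless request and name a reply port
+       in the message header.  */
+    #include <mach.h>
+
+    mach_msg_return_t
+    send_request (mach_port_t server, mach_port_t reply)
+    {
+      mach_msg_header_t head = { 0 };  /* zero the fields we do not set */
+
+      head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND,
+                                       MACH_MSG_TYPE_MAKE_SEND_ONCE);
+      head.msgh_remote_port = server;  /* destination of the request */
+      head.msgh_local_port = reply;    /* where the server sends the reply */
+      head.msgh_size = sizeof (head);
+      head.msgh_id = 12345;            /* hypothetical routine id */
+
+      return mach_msg (&head, MACH_SEND_MSG, sizeof (head), 0,
+                       MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
+    }
+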
+ youpi: well, as he put, the message passing call is overpowered, + and this leads to many branches in the code + antrik: the reply port is obvious, and can be optimized + antrik: but the case i worry about is passing references to + objects between tasks + antrik: rights and identities with the auth server for example + antrik: well ok forget it, i just recall how it actually works :) + antrik: don't forget we lack thread migration + antrik: you may not think it's important, but to me, it's a major + improvement for RPC performance + braunr: how can seL4 be the most interesting microkernel + then?... ;-) + antrik: hm i don't know the details, but if it lacks thread + migration, something is wrong :p + antrik: they should work on viengoos :) + (BTW, AIUI thread migration is quite related to passive objects -- + something Hurd folks never dared seriously consider...) + i still don't know what passive objects are, or i have forgotten + it :/ + no own control threads + hm, i'm still missing something + what do you refer to by control thread ? + with* + i.e. no main loop etc.; only activated by incoming calls + ok + well, if i'm right, thomas bushnel himself wrote (recently) that + the ext2 "sleep" performance issue was expected to be solved with thread + migration + so i guess they definitely considered having it + braunr: don't know what the "sleep peformance issue" is... + http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00032.html + antrik: also, the last message in the thread, + http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00050.html + antrik: do you consider having a reply port being an avoidable + overhead ? + braunr: not sure. I don't remember hearing of any capability + system doing this kind of optimisation though; so I guess there are + reasons for that... + antrik: yes me too, even more since neal talked about it on + viengoos + I wonder whether thread management is also such a large overhead + with fully sync IPC, on L4 or EROS for example... + antrik: it's still a very handy optimization for thread scheduling + antrik: it makes solving priority inversions a lot easier + actually, is thread scheduling a problem at all with a thread + activation approach like in Viengoos? + antrik: thread activation is part of thread migration + antrik: actually, i'd say they both refer to the same thing + err... scheduler activation was the term I wanted to use + same + well + scheduler activation is too vague to assert that + antrik: do you refer to scheduler activations as described in + http://en.wikipedia.org/wiki/Scheduler_activations ? + my understanding was that Viengoos still has traditional threads; + they just can get scheduled directly on incoming IPC + braunr: that Wikipedia article is strange. it seems to use + "scheduler activations" as a synonym for N:M multithreading, which is not + at all how I understood it + antrik: I used to try to keep a look at those pages, to fix such + wrong things, but left it + antrik: that's why i ask + IIRC Viengoos has a thread associated with each receive + buffer. after copying the message, the kernel would activate the + processes activation handler, which in turn could decide to directly + schedule the thead associated with the buffer + or something along these lines + antrik: that's similar to mach handoff + antrik: generally enough, all the thread-related pages on wikipedia + are quite bogus + nah, handoff just schedules the process; which is not useful, if + the right thread isn't activated in turn... 
+ antrik: but i think it's more than that, even in viengoos + for instance, the french "thread" page was basically saying that + they were invented for GUIs to overlap computation with user interaction + .. :) + youpi: good to know... + antrik: the "misunderstanding" comes from the fact that scheduler + activations is the way N:M threading was implemented on netbsd + youpi: that's a refreshing take on the matter... ;-) + antrik: i'll read the critique and viengoos doc/source again to be + sure about what we're talking :) + antrik: as threading is a major issue in mach, and one of the + things i completely changed (and intend to change) in x15, whenever i get + to work on that again ..... :) + antrik: interestingly, the paper about scheduler activations was + written (among others) by brian bershad, in 92, when he was actively + working on research around mach + braunr: BTW, I have little doubt that making RPC first-class would + solve a number of problems... I just wonder how many others it would open diff --git a/microkernel/mach/gnumach/memory_management.mdwn b/microkernel/mach/gnumach/memory_management.mdwn index ca2f42c4..c630af05 100644 --- a/microkernel/mach/gnumach/memory_management.mdwn +++ b/microkernel/mach/gnumach/memory_management.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -8,9 +8,12 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]] -[[!tag open_issue_documentation]] +[[!tag open_issue_documentation open_issue_gnumach]] -IRC, freenode, #hurd, 2011-02-15 +[[!toc]] + + +# IRC, freenode, #hurd, 2011-02-15 etenil: originally, mach had its own virtual space (the kernel space) @@ -37,14 +40,15 @@ IRC, freenode, #hurd, 2011-02-15 lage - pages without resetting the mmu often thanks to global pages, but that didn't exist at the time) -IRC, freenode, #hurd, 2011-02-15 + +# IRC, freenode, #hurd, 2011-02-15 however, the kernel won't work in 64 bit mode without some changes to physical memory management and mmu management (but maybe that's what you meant by physical memory) -IRC, freenode, #hurd, 2011-02-16 +## IRC, freenode, #hurd, 2011-02-16 antrik: youpi added it for xen, yes antrik: but you're right, since mach uses a direct mapped kernel @@ -52,9 +56,7 @@ IRC, freenode, #hurd, 2011-02-16 which isn't required if the kernel space is really virtual ---- - -IRC, freenode, #hurd, 2011-06-09 +# IRC, freenode, #hurd, 2011-06-09 btw, how can gnumach use 1 GiB of RAM ? did you lower the user/kernel boundary address ? @@ -82,7 +84,7 @@ IRC, freenode, #hurd, 2011-06-09 RAM to fill the kernel space with struct page entries -IRC, freenode, #hurd, 2011-11-12 +# IRC, freenode, #hurd, 2011-11-12 well, the Hurd doesn't "artificially" limits itself to 1.5GiB memory @@ -102,3 +104,18 @@ IRC, freenode, #hurd, 2011-11-12 kernel space is what determines how much physical memory you can address unless using the linux-said-awful "bigmem" support + + +# IRC, freenode, #hurd, 2012-07-05 + + hm i got an address space exhaustion while building eglibc :/ + we really need the 3/1 split back with a 64-bits kernel + 3/1? 
+ 3 GiB userspace, 1 GiB kernel + ah + the debian gnumach package is patched to use a 2/2 split + and 2 GiB is really small for some needs + on the bright side, the machine didn't crash + there is issue with watch ./slabinfo which turned in a infinite + loop, but it didn't affect the stability of the system + actually with a 64-bits kernel, we could use a 4/x split diff --git a/open_issues/binutils_gold.mdwn b/open_issues/binutils_gold.mdwn index aa6843a3..9eeebf2d 100644 --- a/open_issues/binutils_gold.mdwn +++ b/open_issues/binutils_gold.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -8,180 +9,8 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]] -[[!tag open_issue_binutils]] +[[!tag open_issue_binutils open_issue_porting]] -Have a look at GOLD / port as needed. +Have a look at gold / port as needed. - -# teythoon's try / `mremap` issue - -IRC, #hurd, 2011-01-12 - - I've been looking into building gold on hurd and it built fine - with one minor tweak - and it's working fine according to its test suite - the only problem is that the build system is failing to detect - the hurdish mremap which lives in libmemusage - on linux it is in the libc so the check succeeds - any hints on how to fix this properly? - hm... it's strange that it's a different library on the Hurd - are the implementations compatible? - antrik: it seems so, though the declarations differ slightly - I guess the best thing is to ask on the appropriate list(s) why - they are different... - teythoon@ganymede:~/build/gold/binutils-2.21/gold$ grep -A1 - mremap /usr/include/sys/mman.h - extern void *mremap (void *__addr, size_t __old_len, size_t - __new_len, int __flags, ...) __THROW; - vs - of course it would be possible to modify the configure script to - check for the Hurd variant too; but first we should establish whether - here is actually any reason for being different, or it's just some - historical artefact that should be fixed... - teythoon@ganymede:~/build/gold/binutils-2.21/gold$ fgrep 'extern - void *mremap' mremap.c - extern void *mremap (void *, size_t, size_t, int, ...); - the problem is that the test fails to link due to the fact that - mremap isn't in the libc on hurd - yeah, it would be possible for the configure script to check - whether it works when the hurdish extra library is added explicitely - but again, I don't see any good reason for being different here in - the first place... - so should I create a patch to move mremap? - if it's not too complicated, that would be nice... it's always - easier to discuss when you already have code :-) - OTOH, asking first might spare you some useless work if it turns - out there *is* some reason for being different after all... - so where is the right place to discuss this? - bug-hurd mailing list and/or glibc mailing list. not sure which - one is better -- I guess it doesn't hurt to crosspost... - -[[mailing_lists/libc-alpha]] is the correct list, and cross-posting to -[[mailing_lists/bug-hurd]] would be fine, too. 
- - antrik: some further digging revealed that mremap belongs to - /lib/libmemusage.so on both hurd and linux - the only difference is that on linux there is a weak reference - to that function in /lib/libc-2.11.2.so - $ objdump -T /lib/libc-2.11.2.so | fgrep mremap - 00000000000cf7e0 w DF .text 0000000000000028 GLIBC_2.2.5 - mremap - ah, it's probably simply a bug that we don't have this weak - reference too - IIRC we had similar bugs before - teythoon: can you provide a patch for that? - antrik: unfortunately I have no idea how that weak ref ended up - there - - teythoon: also the libmemusage.s seems to be just a debugging - library to be used by LD_PRELOAD or similar - which override those memory functions - the libc should provide actual code for those functions, even if - the symbol is declared weak (so overridable) - teythoon: are you sure that's the actual problem? can you paste - somewhere the build logs with the error? - guillem: sure - http://paste.debian.net/104437/ - that's the part of config.log that shows the detection (or the - failure to detect it) of mremap - this results in HAVE_MREMAP not being defined - as a consequence it is declared in gold.h and this declaration - conflicts with the one from sys/mman.h http://paste.debian.net/104438/ - on linux the test for mremap succeeds - teythoon: hmm oh I guess it's just what that, mremap is linux - specific so it's not available on the hurd - teythoon: I just checked glibc and seems to confirm that - CONFORMING TO This call is Linux-specific, and should not be used - in programs intended to be portable. - ah okay - so I guess we shouldn't ship an header with that declaration... - teythoon: yeah :/ good luck telling that to drepper :) - teythoon: I guess he'll suggest that everyone else needs to get - our own copy of sys/mman.h - s/our/their/ - hm, so how should I proceed? - what's your goal ? - detecting mremap ? - making binutils/gold compile ootb on hurd - I picked it from the open issues page ;) - well, if there is no mremap, you need a replacement - gold has a replacement - ok - so your problem is fixing the detection of mremap right ? - yes - ok, that's a build system question then :/ - you need to ask an autotools guy - well, actually the build system correctly detects the absence of - mremap - (gold does use the autotools right ?) - yes - oh, i'm lost now (i admit i didn't read the whole issue :/) - it is just that the declaration in sys/mman.h conflicts with - their own declaration - ah - so in the absence of mremap, they use their own builtin function - yes - and according to the test suite it is working perfectly - gold that is - the declaration in mman.h has an extra __THROW - a workaround would be to rename gold's mremap to something else, - gold_mremap for example - that's really the kind of annoying issue - you either have to change glibc, or gold - yeah - you'll face difficulty changing glibc, as guillem told you - the correct solution though IMO is to fix glibc - but this may be true for gold too - guillem: i agree - maybe it would be easiest actually to implement mremap()?... - but as this is something quite linux specific, it makes sense to - use another internal name, and wrap that to the linux mremap if it's - detected - antrik: i'm nto sure - braunr: I don't think using such workarounds is a good - idea. 
clearly there would be no issue if the header file wouldn't be - incorrect on Hurd - antrik: that's why i said i agree with guillem when he says "the - correct solution though IMO is to fix glibc" - what exactly is the problem with getting a patch into glibc? - the people involved - teythoon: and touching a generic header file - but feel free to try, you could be lucky - but glibc is not an linux specific piece of software, right? - teythoon: no, it's not - erm... - teythoon: but in practice, it is - supposedly not :) - braunr: BTW, by "easiest" I don't mean coding alone, but - coding+pushing upstream :-) - so the problem is, misc/sys/mman.h should be a generic header and - as such not include linux specific parts, which are not present on hurd, - kfreebsd, etc etc - antrik: yes, that's why guillem and i suggested the workaround - thing in gold - that also requires pushing upstream. and quite frankly, if I were - the gold maintainer, I wouldn't accept it. - but the easiest (and wrong) solution in glibc to avoid maintainer - conflict will probably be copying that file under hurd's glibc tree and - install that instead - antrik: implementing mremap could be relatively easy to do - actually - antrik: IIRC, vm_map() supports overlapping - well, actually the easiest solution would be to create a patch - that never goes upstream but is included in Debian, like many - others... but that's obviously not a good long-term plan - braunr: yes, I think so too - braunr: haven't checked, but I have a vague recollection that the - fundamentals are pretty much there - teythoon: so, apart from an ugly workaround in gold, there are - essentially three options: 1. implement mremap; 2. make parts of mman.h - conditional; 3. use our own copy of mman.h - 1. would be ideal, but might be non-trivial; 2. would might be - tricky to get right, and even more tricky to get upstream; 3. would be - simple, but a maintenance burden in the long term - looking at golds replacement code (mmap & memcpy) 1 sounds like - the best option performance wise - -[[!taglink open_issue_glibc]]: check if it is possible to implement `mremap`. -[[I|tschwinge]] remember some discussion about this, but have not yet worked on -locating it. [[Talk to me|tschwinge]] if you'd like to have a look at this. +Apparently it needs [[glibc/mremap]]. diff --git a/open_issues/code_analysis.mdwn b/open_issues/code_analysis.mdwn index d776d81a..00915651 100644 --- a/open_issues/code_analysis.mdwn +++ b/open_issues/code_analysis.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -110,6 +111,20 @@ There is a [[!FF_project 276]][[!tag bounty]] on some of these tasks. glibc's heap structure. its kinda handy, might help? MALLOC_CHECK_ was the envvar you want, sorry. + * In context of [[!message-id + "1341350006-2499-1-git-send-email-rbraun@sceen.net"]]/the `alloca` issue + mentioned in [[gnumach_page_cache_policy]]: + + IRC, freenode, #hurd, 2012-07-08: + + braunr: there's actually already an ifdef REDZONE in libthreads + + It's `RED_ZONE`. 
+ + except it seems clumsy :) + ah, no, the libthreads code properly sets the guard, just for + grow-up stacks + * Input fuzzing Not a new topic; has been used (and a paper published) for early UNIX diff --git a/open_issues/dde.mdwn b/open_issues/dde.mdwn index 725af646..aff988d5 100644 --- a/open_issues/dde.mdwn +++ b/open_issues/dde.mdwn @@ -451,3 +451,13 @@ At the microkernel davroom at [[community/meetings/FOSDEM_2012]]: any movement in that regard :-( wasn't it needed for dde ? hm... good point + + +# virtio + + +## IRC, freenode, #hurd, 2012-07-01 + + hm, i haven't looked but, does someone know if virtio is included + in netdde ? + braunr: nope, there's an underlying virtio layer needed before diff --git a/open_issues/fcntl_locking_dev_null.mdwn b/open_issues/fcntl_locking_dev_null.mdwn new file mode 100644 index 00000000..4c65a5c4 --- /dev/null +++ b/open_issues/fcntl_locking_dev_null.mdwn @@ -0,0 +1,38 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!meta title="fcntl locking /dev/null"]] + +[[!tag open_issue_hurd]] + + +# IRC, OFTC, #debian-hurd, 2012-07-06 + + regarding the libwibble failure (which holds libbuffy → + libbuffy-bindings), the failing test happens because it logs to /dev/null + as test file, + and while doing that, it wants to lock it first, having a + ENOTSUP in return + oh + locking null, how interesting + what is that supposed to do ? :o) + from what i was reading posix, it would seem that such object is + considered a "File" + is it our unimplemented record lock, or just the lock operation + that /dev/null doesn't support ? + what size is null supposed to be? zero, right? + the latter + ah + so we can simply make lock return 0 + since there's no byte to lock? + I don't remember whether you can lock unexistant bytes + indeed, if i change the libwibble unit test to use eg /tmp/foo, + they pas + s diff --git a/open_issues/gcc.mdwn b/open_issues/gcc.mdwn index 04d399f0..9019939d 100644 --- a/open_issues/gcc.mdwn +++ b/open_issues/gcc.mdwn @@ -237,6 +237,60 @@ Last reviewed up to the [[Git mirror's 9aa4b6a8046270a9dbdf47827f1ea873217d7aa5 to find out why some stuff wasn't compiling even after kfreebsd porting patches adding preprocessors checks for __GLIBC__ + IRC, freenode, #hurd, 2012-05-25: + + Hi, looks like __GLIBC__ is not defined by default for GNU? + touch foo.h; cpp -dM foo.h|grep LIBC: empty + gnu_srs: well, this only tells your the compiler defaults + gnu_srs: See the email I just sent. + + [[!message-id "87396od3ej.fsf@schwinge.name"]] + + __GLIBC__ would probably be introduced by a glibc header + tschwinge: I saw your email. I wonder if features.h is + included in the kFreeBSD build of webkit. + It is defined in their build, but not in the Hurd build. + gcc on kfreebsd unconditionally defines __GLIBC__ + (a bit stupid choice imho, but hardly something that could + be changed now...) 
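+
+    For reference, `__GLIBC__` is defined by glibc's own headers (via
+    `<features.h>`), not predefined by the compiler on GNU/Hurd, so a check
+    like the following only works if some libc header has been included
+    first (hypothetical example, not taken from the package discussed
+    above):
+
+        /* Any glibc header pulls in <features.h>, which defines __GLIBC__;
+           without such an include the check silently fails on GNU/Hurd.  */
+        #include <features.h>
+
+        #if defined (__linux__) || defined (__GLIBC__)
+        /* glibc-specific (or Linux-specific) code path */
+        #endif
+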
+ :/ + personally i don't consider this only "a bit" stupid, as + kfreebsd is one of the various efforts pushing towards portability + and using such hacks actually hinders portability ... + yeah don't tell me, i can remember at least half dozen of + occasions when a code wouldn't have been compiling at all on other + glibc platforms otherwise + sure, i have nothing against kfreebsd's efforts, but making + gcc define something which is proper of the libc used is stupid + it is + i spotted changes like: + -#ifdef __linux + +#if defined(__linux__) || defined(__GLIBC__) + and wondered why they wouldn't work at all for us... and + then realized there were no #include in that file before that + preprocessor check + This is even in upstream GCC gcc/config/kfreebsd-gnu.h: + #define GNU_USER_TARGET_OS_CPP_BUILTINS() \ + do \ + { \ + builtin_define ("__FreeBSD_kernel__"); \ + builtin_define ("__GLIBC__"); \ + builtin_define_std ("unix"); \ + builtin_assert ("system=unix"); \ + builtin_assert ("system=posix"); \ + } \ + while (0) + I might raise this upstream at some point. + tschwinge: i could guess the change was proposed by the + kfreebsd people, so asking them before at d-bsd@d.o would be a start + pinotree: Ack. + especially that they would need to fix stuff afterwards + imho we could propose them the change, and if they agree put + that as local patch to debian's gcc4.6/.7 after wheezy, so there is + plenty of time for them to fix stuff + what should be done first is, however, find out why that + define has been added to gcc + * [low] Does `-mcpu=native` etc. work? (For example, 2ae1f0cc764e998bfc684d662aba0497e8723e52.) diff --git a/open_issues/gdb.mdwn b/open_issues/gdb.mdwn index 2ae3518c..dae18227 100644 --- a/open_issues/gdb.mdwn +++ b/open_issues/gdb.mdwn @@ -69,7 +69,7 @@ harmonized. There are several occurences of *error: dereferencing type-punned pointer will break strict-aliasing rules* in the MIG-generated stub files; thus no `-Werror` -until that is resolved. +until that is resolved ([[strict_aliasing]]). This takes up around 140 MiB and needs roughly 6 min on kepler.SCHWINGE and 30 min on coulomb.SCHWINGE. diff --git a/open_issues/gdb_attach.mdwn b/open_issues/gdb_attach.mdwn new file mode 100644 index 00000000..4e4f2ea0 --- /dev/null +++ b/open_issues/gdb_attach.mdwn @@ -0,0 +1,41 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!meta title="GDB: attach"]] + +[[!tag open_issue_gdb]] + + +# [[gdb_thread_ids]] + + +# IRC, freenode, #hurd, 2012-06-30 + + hm, gdb isn't able to determine which thread is running when + attaching to a process + + +# IRC, freenode, #hurd, 2012-07-02 + + woah, now that's a weird message ! + when using gdb on a hanged ext2fs : + Pid 938 has an additional task suspend count of 1; clear it? 
(y or + n) + when hanged, gdb thinks the target task is already being debugged + :/ + no wonder why it's completely stuck + hm, the task_suspend might actually be the crash-dump-core server + attempting to create the core :/ + hm interesting, looks like a problem with the + diskfs_catch_exception macro + braunr: what's up with it? + pinotree: it uses setjmp + hm random corruptions :/ + definitely looks like a concurrency problem diff --git a/open_issues/glibc.mdwn b/open_issues/glibc.mdwn index 1ce47560..2dea816a 100644 --- a/open_issues/glibc.mdwn +++ b/open_issues/glibc.mdwn @@ -267,6 +267,8 @@ Last reviewed up to the [[Git mirror's d40c5d54cb551acba4ef1617464760c5b3d41a14 initialization OK, that at least matches my understanding. + * [[`mremap`|mremap]] + * `syncfs` We should be easily able to implement that one. diff --git a/open_issues/glibc/mremap.mdwn b/open_issues/glibc/mremap.mdwn new file mode 100644 index 00000000..a293eea0 --- /dev/null +++ b/open_issues/glibc/mremap.mdwn @@ -0,0 +1,221 @@ +[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_glibc]] + +[[!toc]] + + +# binutils gold + +## IRC, freenode, #hurd, 2011-01-12 + + I've been looking into building gold on hurd and it built fine + with one minor tweak + and it's working fine according to its test suite + the only problem is that the build system is failing to detect + the hurdish mremap which lives in libmemusage + on linux it is in the libc so the check succeeds + any hints on how to fix this properly? + hm... it's strange that it's a different library on the Hurd + are the implementations compatible? + antrik: it seems so, though the declarations differ slightly + I guess the best thing is to ask on the appropriate list(s) why + they are different... + teythoon@ganymede:~/build/gold/binutils-2.21/gold$ grep -A1 + mremap /usr/include/sys/mman.h + extern void *mremap (void *__addr, size_t __old_len, size_t + __new_len, int __flags, ...) __THROW; + vs + of course it would be possible to modify the configure script to + check for the Hurd variant too; but first we should establish whether + here is actually any reason for being different, or it's just some + historical artefact that should be fixed... + teythoon@ganymede:~/build/gold/binutils-2.21/gold$ fgrep 'extern + void *mremap' mremap.c + extern void *mremap (void *, size_t, size_t, int, ...); + the problem is that the test fails to link due to the fact that + mremap isn't in the libc on hurd + yeah, it would be possible for the configure script to check + whether it works when the hurdish extra library is added explicitely + but again, I don't see any good reason for being different here in + the first place... + so should I create a patch to move mremap? + if it's not too complicated, that would be nice... it's always + easier to discuss when you already have code :-) + OTOH, asking first might spare you some useless work if it turns + out there *is* some reason for being different after all... + so where is the right place to discuss this? 
+ bug-hurd mailing list and/or glibc mailing list. not sure which + one is better -- I guess it doesn't hurt to crosspost... + +[[mailing_lists/libc-alpha]] is the correct list, and cross-posting to +[[mailing_lists/bug-hurd]] would be fine, too. + + antrik: some further digging revealed that mremap belongs to + /lib/libmemusage.so on both hurd and linux + the only difference is that on linux there is a weak reference + to that function in /lib/libc-2.11.2.so + $ objdump -T /lib/libc-2.11.2.so | fgrep mremap + 00000000000cf7e0 w DF .text 0000000000000028 GLIBC_2.2.5 + mremap + ah, it's probably simply a bug that we don't have this weak + reference too + IIRC we had similar bugs before + teythoon: can you provide a patch for that? + antrik: unfortunately I have no idea how that weak ref ended up + there + + teythoon: also the libmemusage.s seems to be just a debugging + library to be used by LD_PRELOAD or similar + which override those memory functions + the libc should provide actual code for those functions, even if + the symbol is declared weak (so overridable) + teythoon: are you sure that's the actual problem? can you paste + somewhere the build logs with the error? + guillem: sure + http://paste.debian.net/104437/ + that's the part of config.log that shows the detection (or the + failure to detect it) of mremap + this results in HAVE_MREMAP not being defined + as a consequence it is declared in gold.h and this declaration + conflicts with the one from sys/mman.h http://paste.debian.net/104438/ + on linux the test for mremap succeeds + teythoon: hmm oh I guess it's just what that, mremap is linux + specific so it's not available on the hurd + teythoon: I just checked glibc and seems to confirm that + CONFORMING TO This call is Linux-specific, and should not be used + in programs intended to be portable. + ah okay + so I guess we shouldn't ship an header with that declaration... + teythoon: yeah :/ good luck telling that to drepper :) + teythoon: I guess he'll suggest that everyone else needs to get + our own copy of sys/mman.h + s/our/their/ + hm, so how should I proceed? + what's your goal ? + detecting mremap ? + making binutils/gold compile ootb on hurd + I picked it from the open issues page ;) + well, if there is no mremap, you need a replacement + gold has a replacement + ok + so your problem is fixing the detection of mremap right ? + yes + ok, that's a build system question then :/ + you need to ask an autotools guy + well, actually the build system correctly detects the absence of + mremap + (gold does use the autotools right ?) + yes + oh, i'm lost now (i admit i didn't read the whole issue :/) + it is just that the declaration in sys/mman.h conflicts with + their own declaration + ah + so in the absence of mremap, they use their own builtin function + yes + and according to the test suite it is working perfectly + gold that is + the declaration in mman.h has an extra __THROW + a workaround would be to rename gold's mremap to something else, + gold_mremap for example + that's really the kind of annoying issue + you either have to change glibc, or gold + yeah + you'll face difficulty changing glibc, as guillem told you + the correct solution though IMO is to fix glibc + but this may be true for gold too + guillem: i agree + maybe it would be easiest actually to implement mremap()?... 
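+
+A very rough sketch of what a "grow by moving" replacement could look like on
+top of the Mach VM interface is below.  It still copies the data (so it is no
+better than gold's mmap/memcpy fallback mentioned at the end of this log);
+avoiding the copy would mean remapping the backing object, e.g. with
+`vm_map`, as suggested further down.  Names and error handling are purely
+illustrative, this is not the glibc interface:
+
+    /* Illustration only: naive "always move, copy the data" reallocation of
+       a mapping, similar to mremap with MREMAP_MAYMOVE.  Addresses and sizes
+       are assumed to be page-aligned, as for a real mremap.  */
+    #include <mach.h>
+    #include <stddef.h>
+
+    void *
+    remap_by_copy (void *old_addr, size_t old_size, size_t new_size)
+    {
+      vm_address_t new_addr = 0;
+      vm_size_t copy_size = old_size < new_size ? old_size : new_size;
+
+      if (vm_allocate (mach_task_self (), &new_addr, new_size, TRUE)
+          != KERN_SUCCESS)
+        return NULL;
+
+      if (vm_copy (mach_task_self (), (vm_address_t) old_addr, copy_size,
+                   new_addr) != KERN_SUCCESS)
+        {
+          vm_deallocate (mach_task_self (), new_addr, new_size);
+          return NULL;
+        }
+
+      vm_deallocate (mach_task_self (), (vm_address_t) old_addr, old_size);
+      return (void *) new_addr;
+    }
+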
+ but as this is something quite linux specific, it makes sense to + use another internal name, and wrap that to the linux mremap if it's + detected + antrik: i'm nto sure + braunr: I don't think using such workarounds is a good + idea. clearly there would be no issue if the header file wouldn't be + incorrect on Hurd + antrik: that's why i said i agree with guillem when he says "the + correct solution though IMO is to fix glibc" + what exactly is the problem with getting a patch into glibc? + the people involved + teythoon: and touching a generic header file + but feel free to try, you could be lucky + but glibc is not an linux specific piece of software, right? + teythoon: no, it's not + erm... + teythoon: but in practice, it is + supposedly not :) + braunr: BTW, by "easiest" I don't mean coding alone, but + coding+pushing upstream :-) + so the problem is, misc/sys/mman.h should be a generic header and + as such not include linux specific parts, which are not present on hurd, + kfreebsd, etc etc + antrik: yes, that's why guillem and i suggested the workaround + thing in gold + that also requires pushing upstream. and quite frankly, if I were + the gold maintainer, I wouldn't accept it. + but the easiest (and wrong) solution in glibc to avoid maintainer + conflict will probably be copying that file under hurd's glibc tree and + install that instead + antrik: implementing mremap could be relatively easy to do + actually + antrik: IIRC, vm_map() supports overlapping + well, actually the easiest solution would be to create a patch + that never goes upstream but is included in Debian, like many + others... but that's obviously not a good long-term plan + braunr: yes, I think so too + braunr: haven't checked, but I have a vague recollection that the + fundamentals are pretty much there + teythoon: so, apart from an ugly workaround in gold, there are + essentially three options: 1. implement mremap; 2. make parts of mman.h + conditional; 3. use our own copy of mman.h + 1. would be ideal, but might be non-trivial; 2. would might be + tricky to get right, and even more tricky to get upstream; 3. would be + simple, but a maintenance burden in the long term + looking at golds replacement code (mmap & memcpy) 1 sounds like + the best option performance wise + +[[!taglink open_issue_glibc]]: check if it is possible to implement `mremap`. +[[I|tschwinge]] remember some discussion about this, but have not yet worked on +locating it. [[Talk to me|tschwinge]] if you'd like to have a look at this. + + +# IRC, OFTC, #debian-hurd, 2012-06-19 + + OK, how the heck do you get an undefined reference to mremap? + simply because we don't have it + mremap exists only on linux + It's in sys/mman.h + on linux? + No, on GNU/Hurd + /usr/include/i386-gnu/sys/mman.h + that's just the common file with linux + containing just the prototype + that doesn't mean there's an implementation behind + youpi: hm no, linux has an own version + uh + Ah, aye, I didn't look at the implementation.. :( + it's then odd that it was added to the generic sys/mman.h :) + Just another stub? + ah, only few linux archs have own versions + for the macro values I guess + http://paste.debian.net/175173/ on glibc/master + Hmm, so where is MREMAP_MAYMOVE coming in from? + rgrep on a linux box ;) + + but that's again linuxish + Aye but with us having that in the header it is causing some + code to be run which utilizes mremap. If that wasn't defined we wouldn't + be calling it. 
+ ah + we could try to remove it indeed + Should I change the code to #ifdef MREMAP_MAYMOVE & !defined + __GNU__? + no, I said we could remove the definition of MREMAP_MAYMOVE itself diff --git a/open_issues/gnumach_i686.mdwn b/open_issues/gnumach_i686.mdwn new file mode 100644 index 00000000..b34df73b --- /dev/null +++ b/open_issues/gnumach_i686.mdwn @@ -0,0 +1,26 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + + +# IRC, freenode, #hurd, 2012-07-05 + + we could use a gnumach-i686 too + how would you compile gnumach as i686 variant btw? add + -march=.. or something like that in CFLAGS? + yes + at least we'll get some cmovs :) + + +## IRC, freenode, #hurd, 2012-07-07 + + it was rejected in the past because we didn't think it would bring + real performance benefit, but it actually may diff --git a/open_issues/gnumach_integer_overflow.mdwn b/open_issues/gnumach_integer_overflow.mdwn new file mode 100644 index 00000000..2166e591 --- /dev/null +++ b/open_issues/gnumach_integer_overflow.mdwn @@ -0,0 +1,17 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + + +# IRC, freenode, #hurd, 2012-07-04 + + yes, we have integer overflows on resident_page_count, but + luckily, the member is rarely used diff --git a/open_issues/gnumach_page_cache_policy.mdwn b/open_issues/gnumach_page_cache_policy.mdwn index 75fcdd88..6f51d713 100644 --- a/open_issues/gnumach_page_cache_policy.mdwn +++ b/open_issues/gnumach_page_cache_policy.mdwn @@ -10,6 +10,11 @@ License|/fdl]]."]]"""]] [[!tag open_issue_gnumach]] +[[!toc]] + + +# [[page_cache]] + # IRC, freenode, #hurd, 2012-04-26 @@ -33,3 +38,587 @@ License|/fdl]]."]]"""]] have either lots of free pages because tha max limit is reached, or lots of pressure and system freezes :/ yes + + +## IRC, freenode, #hurd, 2012-06-17 + + youpi: i don't understand your patch :/ + arf +  which part don't you understand? + the global idea :/ + first, drop the limit on number of objects + you added a new collect call at pageout time + (i.e. 
here, hack overflow into 0) + yes + obviously + but then the cache keeps filling up with objects + which sooner or later become empty + thus the collect, which is supposed to look for empty objects, and + just drop them + but not at the right time + objects should be collected as soon as their ref count drops to 0 + err + now, the code of the collect is just a crude attempt without + knowing much about the vm + when their resident page count drops to 0 + so don't necessarily read it :) + ok + i've begin playing with the vm recently + the limits (arbitrary, and very old obviously) seem far too low + for current resources + (e.g. the threshold on free pages is 50 iirc ...) + yes + i'll probably use a different approach + the one i mentioned (collecting one object at a time - or pushing + them on a list for bursts - when they become empty) + this should relax the kernel allocator more + (since there will be less empty vm_objects remaining until the + next global collecttion) + + +## IRC, freenode, #hurd, 2012-06-30 + + the threshold values of the page cache seem quite enough actually + braunr: ah + youpi: yes, it seems the problems are in ext2, not in the VM + k + the page cache limitation still doesn't help :) + the problem in the VM is the recycling of vm_objects, which aren't + freed once empty + but it only wastes some of the slab memory, it doesn't prevent + correct processing + braunr: thus the limitation, right? + no + well + that's the policy they chose at the time + for what reason .. i can't tell + ok, but I mean + we can't remove the policy because of the non-free of empty objects + we must remove vm_objects at some point + but even without it, it makes no sense to disable the limit while + ext2 is still unstable + also, i noticed that the page count in vm_objects never actually + drop to 0 ... + you mean the limit permits to avoid going into the buggy scenarii + too often? + yes + k + at least, that's my impression + my test case is tar xf files.tar.gz, which contains 50000 files of + 12k random data + i'll try with other values + i get crashes, deadlocks, livelocks, and it's not pretty :) + and always in ext2, mach doesn't seem affected by the issue, other + than the obvious + (well i get the usual "deallocating an invalid port", but as + mentioned, it's "most probably a bug", which is the case here :) + braunr: looks coherent with the hangs I get on the buildds + youpi: so that's the nasty bug i have to track now + though I'm also still getting some out of memory from gnumach + sometimes + the good thing is i can reproduce it very quickly + a dump from the allocator to know which zone took all the room + might help + youpi: yes i promised that too + although that's probably related with ext2 issues :) + youpi: can you send me the panic message so i can point the code + which must output the allocator state please ? + next time I get it, sure :) + braunr: you could implement a /proc/slabinfo :) + pinotree: yes but when a panic happens, it's too late + http://git.sceen.net/rbraun/slabinfo.git/ btw + although it's not part of procfs + and the mach_debug interface isn't provided :( + + +## IRC, freenode, #hurd, 2012-07-03 + + it looks like pagers create a thread per memory object ... + braunr: oh. so if I open a lot of files, ext2fs will *inevitably* + have lots of threads?... 
+ antrik: i'm not sure + it may only be required to flush them + but when there are lots of them, the threads could run slowly, + giving the impression there is one per object + in sync mode i don't see many threads + and i don't get the bug either for now + while i can see physical memory actually being used + (and the bug happens before there is any memory pressure in the + kernel) + so it definitely looks like a corruption in ext2fs + and i have an idea .... :> + hm no, i thought an alloca with a big size parameter could erase + memory outside the stack, but it's something else + (although alloca should really be avoided) + arg, the problem seems to be in diskfs_sync_everything -> + ports_bucket_iterate (pager_bucket, sync_one); :/ + :( + looks like the ext2 problem is triggered by calling pager_sync + from diskfs_sync_everything + and is possibly related to + http://lists.gnu.org/archive/html/bug-hurd/2010-03/msg00127.html + (and for reference, the rest of the discussion + http://lists.gnu.org/archive/html/bug-hurd/2010-04/msg00012.html) + multithreading in libpager is scary :/ + braunr: s/in libpager/ ;-) + antrik: right + omg the ugliness :/ + ok i found a bug + a real one :) + (but not sure it's the only one since i tried that before) + 01:38 < braunr> hm no, i thought an alloca with a big size + parameter could erase memory outside the stack, but it's something else + turns out alloca is sometimes used for 64k+ allocations + which explains the stack corruptions + ouch + as it's used to duplicate the node table before traversing it, it + also explains why the cache limit affects the frequency of the bug + now the fun part, write the patch following GNU protocol .. :) + +[[!message-id "1341350006-2499-1-git-send-email-rbraun@sceen.net"]] + + if someone feels like it, there are a bunch of alloca calls in the + hurd (like around 30 if i'm right) + most of them look safe, but some could trigger that same problem + in other servers + ok so far, no problem with the upstream ext2fs code :) + 20 loops of tar xf / rm -rf consuming all free memory as cache :) + the hurd uses far too much cpu time for no valid reason in many + places :/ + * braunr happy + my hurd is completely using its ram :) + Meaning, the bug is solved? Congrats if so :) + well, ext2fs looks way more stable now + i haven't had a single issue since the change, so i guess i messed + something with my previous test + and the Mach VM cache implementation looks good enough + now the only thing left is to detect unused objects and release + them + which is actually the core of my work :) + but i'm glad i could polish ext2fs + with luck, this is the issue that was striking during "thread + storms" in the past + * pinotree hugs braunr + i'm also very happy to see the slab allocator reacting well upon + memory pressure :> + braunr: Why alloca corrupted memory diskfs_node_iterate? Was + temporary node to big to keep it in stack? + mcsim: yes + 17:54 < braunr> turns out alloca is sometimes used for 64k+ + allocations + and i wouldn't be surprised if our thread stacks are + simplecontiguous 64k mappings of zero-filled memory + (as Mach only provides bottom-up allocation) + our thread implementation should leave unmapped areas between + thread stacks, to easily catch such overflows + braunr: wouldn't also fatfs/inode.c and tmpfs/node.c need the + same fix? 
+ pinotree: possibly + i haven't looked + more than 300 loops of tar xf / rm -rf on an archive of 20000 + files of 12 KiB each, without any issue, still going on :) + braunr: yay + + +## [[!message-id "20120703121820.GA30902@mail.sceen.net"]], 2012-07-03 + + +## IRC, freenode, #hurd, 2012-07-04 + + mach is so good it caches objects which *no* page in physical + memory + hm i think i have a working and not too dirty vm cache :> + braunr: congrats :) + kilobug: hey :) + the dangerous side effect is the increased swappiness + we'll have to monitor that on the buildds + otherwise the cache is effectively used, and the slab allocator + reports reasonable amounts of objects, not increasing once the ram is + full + let's see what happens with 1.8 GiB of RAM now + damn glibc is really long to build :) + and i fear my vm cache patch makes non scalable algorithms negate + some of its benefits :/ + 72 tasks, 2090 threads + we need the ability to monitor threads somewhere + + +## IRC, freenode, #hurd, 2012-07-05 + + hm i get kernel panics when not using the host cache :/ + no virtual memory for stack allocations + that's scary + ? + i guess the lack of host cache makes I/O slow enough to create a + big thread storm + that completely exhausts the kernel space + my patch challenges scalability :) + and not having a zalloc zone anymore, instead of getting a nice + panic when trying to allocate yet another thread, you get an address + space exhaustion on an unrelated event instead. I see ;-) + thread stacks are not allocated from a zone/cache + also, the panic concerned aligned memory, but i don't think that + matters + the kernel panic clearly mentions it's about thread stack + allocation + oh, by "stack allocations" you actually mean allocating a stack + for a new thread... + yes + that's not what I normally understand when reading "stack + allocations" :-) + user stacks are simple zero filled memory objects + so we usually get a deadlock on them :> + i wonder if making ports_manage_port_operations_multithread limit + the number of threads would be a good thing to do + braunr: last time slpz did that, it turned out that it causes + deadlocks in at least one (very specific) situation + ok + I think you were actually active at the time slpz proposed the + patch (and it was added to Debian) -- though probably not at the time + where youpi tracked it down as the cause of certain lockups, so it was + dropped again... + what seems very weird though is that we're normally using + continuations + braunr: you mean in the kernel? how is that relevant to the topic + at hand?... + antrik: continuations have been designed to reduce the number of + stacks to one per cpu :/ + but they're not used everywhere + they are not used *anywhere* in the Hurd... + antrik: continuations are supposed to be used by kernel code + braunr: not sure what you are getting at. of course we should use + some kind of continuations in the Hurd instead of having an active thread + for every single request in flight -- but that's not something that could + be done easily... 
+ antrik: oh no, i don't want to use continuations at all + i just want to use less threads :) + my panic definitely looks like a thread storm + i guess increasing the kmem_map will help for the time bein + g + (it's not the whole kernel space that gets filled up actually) + also, stacks are kept on a local cache until there is memory + pressure oO + their slab cache can fill the backing map before there is any + pressure + and it makes a two level cache, i'll have to remove that + well, how do you reduce the number of threads? apart from + optimising scheduling (so requests are more likely to be completed before + new ones are handled), the only way to reduce the number of threads is to + avoid having a thread per request + exactly + so instead the state of each request being handled has to be + explicitly stored... + i.e. continuations + hm actually, no + you use thread migration :) + i don't want to artificially use the number of kernel threads + the hurd should be revamped not to use that many threads + but it looks like a hard task + well, thread migration would reduce the global number of threads + in the system... it wouldn't prevent a server from having thousands of + threads + threads would allready be allocated before getting in the server + again, the only way not to use a thread for each outstanding + request is having some explicit request state management, + i.e. continuations + hm right + but we can nonetheless reduce the number of threads + i wonder if the sync threads are created on behalf of the pagers + or the kernel + one good thing is that i can already feel better performance + without using the host cache until the panic happens + the tricky bit about that is that I/O can basically happen at any + point during handling a request, by hitting a page fault. so we need to + be able to continue with some other request at any point... + yes + actually, readahead should help a lot in reducing the number of + request and thus threads... still will be quite a lot though + we should have a bunch of pageout threads handling requests + asynchronously + it depends on the implementation + consider readahead detects that, in the next 10 pages, 3 are not + resident, then 1 is, then 3 aren't, then 1 is again, and the last two + aren't + how is this solved ? :) + about the stack allocation issue, i actually think it's very + simple to solv + the code is a remnant of the old BSD days, when processes were + heavily swapped + so when a thread is created, its stack isn't allocated + the allocation happens when the thread is dispatched, and the + scheduler finds it's swapped (which is the initial state) + the stack is allocated, and the operation is assumed to succeed, + which is why failure produces a panic + well, actually, not just readahead... clustered paging in + general. the thread storms happen mostly on write not read AIUI + changing that to allocate at thread creation time will allow a + cleaner error handling + antrik: yes, at writeback + antrik: so i guess even when some physical pages are already + present, we should aim at larger sizes for fewer I/O requests + not sure that would be worthwhile... probably doesn't happen all + that often. and if some of the pages are dirty, we would have to make + sure that they are ignored although they were part of the request... + yes + so one request per missing area ? 
+ the opposite might be a good idea though -- if every other page is + dirty, it *might* indeed be preferable to do a single request rewriting + even the clean ones in between... + yes + i personally think one request, then replace only what was + missing, is simpler and preferable + OTOH, rewriting clean pages might considerably increase write time + (and wear) on SSDs + why ? + I doubt the controller is smart enough to recognies if a page + doesn't really need rewriting + so it will actually allocate and write a new cluster + no but it won't spread writes on different internal sectors, will + it ? + sectors are usually really big + "sectors" is not a term used in SSDs :-) + they'll be erased completely whatever the amount of data at some + point if i'm right + ah + need to learn more about that + i thought their internal hardware was much like nand flash + admittedly I don't remember the correct terminology either... + they *are* NAND flash + writing is actually not the problem -- it can happen in small + chunks. the problem is erasing, which is only possible in large blocks + yes + so having larger requests doesn't seem like a problem to me + because of that + thus smart controllers (which pretty much all SSD nowadays have, + and apparently even SD cards) do not actually overwrite. instead, writes + always happen to clean portions, and erasing only happens when a block is + mostly clean + (after relocating the remaining used parts to other clean areas) + braunr: the problem is not having larger requests. the problem is + rewriting clusters that don't really need rewriting. it means the dist + performs unnecessary writing actions. + it doesn't hurt for magnetic disks, as the head has to pass over + the unchanged sectors anyways; and rewriting the unnecessarily doesn't + increase wear + but it's different for SSDs + each write has a penalty there + i thought only erases were the real penalty + well, erase happens in the background with modern controllers; so + it has no direct penalty. the write has a direct performance penalty when + saturating the bandwith, and always has a direct wear penalty + can't controllers handle 32k requests ? like everything does ? :/ + sure they can. but that's beside the point... + if they do, they won't mind the clean data inside such large + blocks + apparently we are talking past each other + i must be missing something important about SSD + braunr: the point is, the controller doesn't *know* it's clean + data; so it will actually write it just like the really unclean data + yes + and it will choose an already clean sector for that (previously + erased), so writing larger blocks shouldn't hurt + there will be a slight increase in bandwidth usage, but that's + pretty much all of it + isn't it ? + well, writing always happens to clean blocks. but writing more + blocks obviously needs more time, and causes more wear... + aiui, blocks are always far larger than the amount of pages we + want to writeback in one request + the only way to use more than one is crossing a boundary + no. again, the blocks that can be *written* are actually quite + small. IIRC most SSDs use 4k nowadays + ok + only erasing operates on much larger blocks + so writing is a problem too + i didn't think it would cause wear leveling to happen + well, I'm not sure whether the wear actually happens on write or + on erase... but that doesn't matter, as the number of blocks that need to + be erased is equivalent to the number of blocks written... 
+ sorry, i'm really not sure + if you erase one sector, then write the first and third block, + it's clearly not equivalent + i mean + let's consider two kinds of pageout requests + 1/ a big one including clean pages + 2/ several ones for dirty pages only + let's assume they both need an erase when they happen + what's the actual difference between them ? + wear will increase only if the controller handle it on writes, if + i'm right + but other than that, it's just bandwidth + strictly speaking erase is only *necessary* when there are no + clean blocks anymore. but modern controllers will try to perform erase of + unused blocks in the background, so it doesn't delay actual writes + i agree on that + but the point is that for each 16 pages (or so) written, we need + to erase one block so we get 16 clean pages to write... + yes + which is about the size of a request for the sequential policy + so it fits + just to be clear: it doesn't matter at all how the pages + "fit". the controller will reallocate them anyways + what matters is how many pages you write + ah + i thought it would just put the whole request in a single sector + (or two) + I'm not sure what you mean by "sector". as I said, it's not a term + used in SSD technology + so do you imply that writes can actually get spread over different + sectors ? + the sector is the unit at the nand flash level, its size is the + erase size + actually, I used the right terminology... the erase unit is the + block; the write unit is the page + sector is a synonym of block + never seen it. and it's very confusing, as it isn't in any way + similar to sectors in magnetic disks... + http://en.wikipedia.org/wiki/Flash_memory#NAND_flash + it's actually in the NOR part right before, paragraph "Erasing" + "Modern NOR flash memory chips are divided into erase segments + (often called blocks or sectors)." + ah. I skipped the NOR part :-) + i've only heard sector where i worked, but i don't consider french + computer engineers to be authorities on the matter :) + hehe + let's call them block + so, thread stacks are allocated out of the kernel map + this is already a bad thing (which is probably why there is a + local cache btw) + anyways, yes. modern controllers might split a contiguous write + request onto several blocks, as well as put writes to completely + different logical pages into one block. the association between addresses + and actual blocks is completely free + now i wonder why the kernel map is so slow, as the panic happens + at about 3k threads, so about 11M of thread stacks + antrik: ok + antrik: well then it makes sense to send only dirty pages + s/slow/low/ + it's different for raw flash (using MTD subsystem in Linux) -- but + I don't think this is something we should consider any time soon :-) + (also, raw flash is only really usable with specialised + filesystems anyways) + yes + are the thread stacks really only 4k? I would expect them to be + larger in many cases... 
+ youpi reduced them some time ago, yes + they're 4k on xen + uh, 16k + damn, i'm wondering why i created separate submaps for the slab + allocator :/ + probably because that's how it was done by the zone allocator + before + but that's stupid :/ + hm the stack issue is actually more complicated than i thought + because of interrupt priority levels + i increased the kernel map size to avoid the panic instead + now libc0.3 seems to build fine + and there seems to be a clear decrease of I/O :) + + +### IRC, freenode, #hurd, 2012-07-06 + + braunr: there is a submap for the slab allocator? that's strange + indeed. I know we talked about this; and I am pretty sure we agreed + removing the submap would actually be among the major benefits of a new + allocator... + antrik: a submap is a good idea anyway + antrik: it avoids fragmenting the kernel space too much + it also breaks down locking + but we could consider it + as a first step, i'll merge the kmem and kalloc submaps (the ones + used for the slab caches and the malloc-like allocations respectively) + then i'll change the allocation of thread stacks to use a slab + cache + and i'll also remove the thread swapping stuff + it will take some time, but by the end we should be able to + allocate tens of thousands of threads, and suffer no panic when the limit + is reached + braunr: I'm not sure "no panic" is really a worthwhile goal in + such a situation... + antrik: uh ?N + antrik: it only means the system won't allow the creation of + threads until there is memory available + from my pov, the microkernel should never fail up to a point it + can't continue its job + braunr: the system won't be able to recover from such a situation + anyways. without actual resource management/priorisation, not having a + panic is not really helpful. it only makes it harder to guess what + happened I fear... + i don't see why it couldn't recover :/ + + +## IRC, freenode, #hurd, 2012-07-07 + + grmbl, there are a lot of issues with making the page cache larger + :( + it actually makes the system slower in half of my tests + we have to test that on real hardware + unfortunately my current results seem to indicate there is no + clear benefit from my patch + the current limit of 4000 objects creates a good balance between + I/O and cpu time + with the previous limit of 200, I/O is often extreme + with my patch, either the working set is less than 4k objects, so + nothing is gained, or the lack of scalability of various parts of the + system add overhead that affect processing speed + also, our file systems are cached, but our block layer isn't + which means even when accessing data from the cache, accesses + still cause some I/O for metadata + + +## IRC, freenode, #hurd, 2012-07-08 + + youpi: basically, it works fine, but exposes scalability issues, + and increases swapiness + so it doens't help with stability? + hum, that was never the goal :) + the goal was to reduce I/O, and increase performance + sure + but does it at least not lower stability too much? 
+ not too much, no + k + most of the issues i found could be reproduced without the patch + ah + then fine :) + random deadlocks on heavy loads + youpi: but i'm not sure it helps with performance + youpi: at least not when emulated, and the host cache is used + that's not very surprising + it does help a lot when there is no host cache and the working set + is greater (or far less) than 4k objects + ok + the amount of vm_object and ipc_port is gracefully adjusted + that'd help us with not having to tell people to use the complex + -drive option :) + so you can easily run a hurd with 128 MiB with decent performance + and no leak in ext2fs + yes + for example + braunr: I'd say we should just try it on buildds + (it's not finished yet, i'd like to work more on reducing + swapping) + (though they're really not busy atm, so the stability change can't + really be measured) + when building the hurd, which takes about 10 minutes in my kvm + instances, there is only a 30 seconds difference between using the host + cache and not using it + this is already the case with the current kernel, since the + working set is less than 4k objects + while with the previous limit of 200 objects, it took 50 minutes + without host cache, and 15 with it + so it's a clear benefit for most uses, except my virtual machines + :) + heh + because there, the amount of ram means a lot of objects can be + cached, and i can measure an increase in cpu usage + slight, but present + youpi: isn't it a good thing that buildds are resting a bit ? :) + on one hand, yes + but on the other hand, that doesn't permit to continue + stress-testing the Hurd :) + we're not in a hurry for this patch + because using it really means you're tickling the pageout daemon a + lot :) + + +## [[metadata_caching]] diff --git a/open_issues/gnumach_tick.mdwn b/open_issues/gnumach_tick.mdwn new file mode 100644 index 00000000..eed447f6 --- /dev/null +++ b/open_issues/gnumach_tick.mdwn @@ -0,0 +1,35 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + + +# IRC, freenode, #hurd, 2012-07-05 + + braunr: wrt to mach: it seems to me it ticks every 10ms or so, + it is true? + yes + and it's not preemptible + braunr: that means a gnumach kernel currently has a maximum + uptime of almost 500 days + pinotree: what do you mean ? 
+ there's an int (or uint, i don't remember) variable that keeps + the tick count + yes the tick variable should probably be a 64-bits type + or a struct + but the tick count should only be used for computation on "short" + delays + and it should be safe to use it even when it overflows + it's not the wall clock + i found that when investigating why the maximum timeout for a + mach_msg is like INT_MAX >> 2 (or 4) or something like that, also due to + the tick count + iirc, in linux, they mostly use the lower 32-bits on 32-bits + architecture, updating the 32 upper only when necessary diff --git a/open_issues/gnumach_vm_map_red-black_trees.mdwn b/open_issues/gnumach_vm_map_red-black_trees.mdwn index 17263099..d7407bfe 100644 --- a/open_issues/gnumach_vm_map_red-black_trees.mdwn +++ b/open_issues/gnumach_vm_map_red-black_trees.mdwn @@ -152,3 +152,23 @@ License|/fdl]]."]]"""]] entries) [[glibc/fork]]. + + +## IRC, freenode, #hurdfr, 2012-06-02 + + braunr: oh, un bug de rbtree + Assertion `diff != 0' failed in file "vm/vm_map.c", line 1002 + c'est dans rbtree_insert() + vm_map_enter (vm/vm_map.c:1002). + vm_map (vm/vm_user.c:373). + syscall_vm_map (kern/ipc_mig.c:657). + erf j'ai tué mon débuggueur, je ne peux pas inspecter plus + le peu qui me reste c'est qu'apparemment target_map == 1, size == + 0, mask == 0 + anywhere = 1 + youpi: ça signifie sûrement que des adresses overlappent + je rejetterai un coup d'oeil sur le code demain + (si ça se trouve c'est un bug rare de la vm, le genre qui fait + crasher le noyau) + (enfin jveux dire, qui faisait crasher le noyau de façon très + obscure avant le patch rbtree) diff --git a/open_issues/gnumach_vm_object_resident_page_count.mdwn b/open_issues/gnumach_vm_object_resident_page_count.mdwn new file mode 100644 index 00000000..cc1b8897 --- /dev/null +++ b/open_issues/gnumach_vm_object_resident_page_count.mdwn @@ -0,0 +1,22 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + + +# IRC, freenode, #hurd, 2012-07-03 + + omg the ugliness + the number of pages in physical memory for on object is a short + ... which limits the amount to .. 128 MiB + * braunr cries + luckily, this should be easy to solve + +`vm/vm_object.h:vm_object:resident_page_count`. 
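+
+The arithmetic behind the 128 MiB figure, assuming 4 KiB pages and a signed
+`short` counter as described above (the declaration below is only
+indicative; check the actual definition in `vm/vm_object.h`):
+
+    /* SHRT_MAX (32767) resident pages * 4096 bytes per page is just under
+       128 MiB; one more resident page overflows the counter.  */
+    struct vm_object {
+            /* ... other members ... */
+            short   resident_page_count;
+            /* ... other members ... */
+    };
+
+    /* Widening the field to a natural-width integer, e.g.
+       `int resident_page_count;', removes the limit for any realistic
+       amount of physical memory, which is presumably the "easy to solve"
+       fix mentioned above.  */
+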
diff --git a/open_issues/libpthread_CLOCK_MONOTONIC.mdwn b/open_issues/libpthread_CLOCK_MONOTONIC.mdwn index f9195540..2c8f10f8 100644 --- a/open_issues/libpthread_CLOCK_MONOTONIC.mdwn +++ b/open_issues/libpthread_CLOCK_MONOTONIC.mdwn @@ -15,7 +15,7 @@ License|/fdl]]."]]"""]] [[!message-id "201204220058.37328.toscano.pino@tiscali.it"]] -# IRC, freenode, #hurd- 2012-04-22 +# IRC, freenode, #hurd, 2012-04-22 youpi: what i thought would be creating a glib/hurd/hurdtime.{c,h}, adding _hurd_gettimeofday and @@ -34,7 +34,7 @@ License|/fdl]]."]]"""]] (and others) -## IRC, freenode, #hurd- 2012-04-23 +## IRC, freenode, #hurd, 2012-04-23 pinotree: about librt vs libpthread, don't worry too much for now libpthread can lib against the already-installed librt @@ -56,3 +56,23 @@ License|/fdl]]."]]"""]] at all pinotree: yes, things work even with -lrt wow + + +## IRC, OFTC, #debian-hurd, 2012-06-04 + + pinotree: -lrt in libpthread is what is breaking glib2.0 + during ./configure it makes clock_gettime linked in, while at + library link it doesn't + probably for obscure reasons + I'll have to disable it in debian + + +### IRC, OFTC, #debian-hurd, 2012-06-05 + + youpi: i saw your message about the linking issues with + pthread/rt; do you want me to provide a patch to switch clock_gettime → + gettimeofday in libpthread? + you mean switch it back as it was previously? + kind of, yes + I have reverted the change in libc for now + ok diff --git a/open_issues/low_memory.mdwn b/open_issues/low_memory.mdwn new file mode 100644 index 00000000..22470c65 --- /dev/null +++ b/open_issues/low_memory.mdwn @@ -0,0 +1,113 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach open_issue_glibc open_issue_hurd]] + +Issues relating to system behavior under memory pressure. + +[[!toc]] + + +# [[gnumach_page_cache_policy]] + + +# IRC, freenode, #hurd, 2012-07-08 + + am i mistaken or is the default pager simply not vm privileged ? + (which would explain the hangs when memory is very low) + no idea + but that's very possible + we start it by hand from the init scripts + actually, i see no way provided by mach to set that + i'd assume it would set the property when a thread would register + itself as the default pager, but it doesn't + i'll check at runtime and see if fixing helps + thread_wire(host, thread, 1) ? + ./hurd/mach-defpager/wiring.c: kr = + thread_wire(priv_host_port, + no + look in cprocs.c + iir + iirc + iiuc, it sets a 1:1 kernel/user mapping + ?? + thread_wire, not cthread_wire + ah + right, i'm getting tired + youpi: do you understand the comment in default_pager_thread() ? + well, I'm not sure to know what external vs internal is + i'm almost sure the default pager is blocked because of a relation + with an unprivlege thread + +d + when hangs happen, the pageout daemon is still running, waiting + for an event so he can continue + it* + + all right, our pageout stuff completely sucks + when you think the system is hanged, it's actually not + and what's happening instead? 
+ instead, it seems it's in a very complex resursive state which + ends in the slab allocator not being able to allocate kernel map entries + recursive* + the pageout daemon, unable to continue, progressively slows + in hope the default pager is able to service the pageout requests, + but it's not + probably the most complicated deadlock i've seen :) + luckily ! + i've been playing with some tunables involved in waking up the + pageout daemon + and got good results so far + (although it's clearly not a proper solution) + one thing the kernel lacks is a way to separate clean from dirty + pages + this stupid kernel doesn't try to free clean pages first .. :) + hm + now i can see the system recover, but some applications are still + stuck :( + (but don't worry, my tests are rather aggressive) + what i mean by aggressive is several builds and various dd of a + few hundred MiB in parallel, on various file systems + so far the file systems have been very resilient + ok, let's try running the hurd with 64 MiB of RAM + after some initial swapping, it runs smoothly :) + uh ? + ah no, i'm still doing my parallel builds + although less + gcc: internal compiler error: Resource lost (program as) + arg + lol + the file system crashed under the compiler + too much memory required during linking? or ram+swap should have + been enough? + there is a lot of swap, i doubt it + the hurd is such a dumb and impressive system at the same time + pinotree: what does this tell you ? + git: hurdsig.c:948: post_signal: Unexpected error: (os/kern) + failure. + something samuel spots often during the builds of haskell + packages + +Probably also the *sigpost* case mentioned in [[!message-id +"87bol6aixd.fsf@schwinge.name"]]. + + actually i should be asking jkoenig + it seems the lack of memory has a strong impact on signal delivery + which is bad + braunr: I have a vague recollection of slpz also saying something + about missing dirty page tracking a while back... I might be confusing + stuff though + pinotree: yes it happens often during links + which makes sense + braunr: "happens often" == "hurdsig.c:948: post_signal: ..."? + yes + if you can reproduce it often, what about debugging it? :P + i mean, the few times i got it, it was often during a link :p + i'd rather debug the pageout deadlock :( + but it's hard diff --git a/open_issues/mach-defpager_swap.mdwn b/open_issues/mach-defpager_swap.mdwn new file mode 100644 index 00000000..7d3b001c --- /dev/null +++ b/open_issues/mach-defpager_swap.mdwn @@ -0,0 +1,20 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. 
A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + +[[!toc]] + + +# IRC, OFTC, #debian-hurd, 2012-06-16 + + I allocated a 5GB partition as swap, but hurd only found 1GB + use 2GiB swaps only, >2Gib are not supported + (and apparently it just truncates the size, to be investigated) diff --git a/open_issues/metadata_caching.mdwn b/open_issues/metadata_caching.mdwn new file mode 100644 index 00000000..f7f4cb53 --- /dev/null +++ b/open_issues/metadata_caching.mdwn @@ -0,0 +1,31 @@ +[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach open_issue_hurd]] + +[[!toc]] + + +# IRC, freenode, #hurd, 2012-07-08 + + youpi: there is still quite a lot of I/O even for cached objects + youpi: i strongly suspect these are for the metadata + i.e. we don't have a "buffer cache", only a file cache + (gnu is really not unix lol) + doesn't ext2fs cache these? + (as long as the corresponding object is cached + ) + i didn't look too much, but if it does, it does a bad job + i would guess it does, but possibly only writethrough + iirc it does writeback + there's a sorta "node needs written" flag somewhere iirc + but that's for the files, not the metadata + I mean the metadata of the node + then i have no idea what happens diff --git a/open_issues/multithreading.mdwn b/open_issues/multithreading.mdwn index 0f6b9f19..5924d3f9 100644 --- a/open_issues/multithreading.mdwn +++ b/open_issues/multithreading.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -36,6 +37,18 @@ Control*](http://soft.vub.ac.be/~tvcutsem/talks/presentations/T37_nobackground.p Tom Van Cutsem, 2009. 
+## IRC, freenode, #hurd, 2012-07-08 + + braunr: about limiting number of threads, IIRC the problem is that + for some threads, completing their work means triggering some action in + the server itself, and waiting for it (with, unfortunately, some lock + held), which never terminates when we can't create new threads any more + youpi: the number of threads should be limited, but not globally + by libports + pagers should throttle their writeback requests + right + + # Alternative approaches: * diff --git a/open_issues/nfs_trailing_slash.mdwn b/open_issues/nfs_trailing_slash.mdwn new file mode 100644 index 00000000..90f138e3 --- /dev/null +++ b/open_issues/nfs_trailing_slash.mdwn @@ -0,0 +1,36 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_glibc open_issue_hurd]] + + +# IRC, freenode, #hurd, 2012-05-27 + + ok, on nfs "mkdir dir0" succeeds, "mkdir dir0/" fails. RPC struct is bad + + +## IRC, freenode, #hurd, 2012-05-27 + + 150->dir_mkdir ("foo1/" 493) = 0x40000048 (RPC struct is bad) + task2876->mach_port_deallocate (pn{ 18}) = 0 + mkdir: 136->io_write_request ("mkdir: " -1) = 0 7 + cannot create directory `/nfsroot/foo1/' 136->io_write_request + ("cannot create directory `/nfsroot/foo1/'" -1) = 0 40 + : RPC struct is bad 136->io_write_request (": RPC struct is bad" -1) + = 0 19 + 136->io_write_request (" + " -1) = 0 1 + gg0: Yes, I think we knew about this before. Nobody felt like + working on it yet. Might be a nfs, libnetfs, glibc issue. + gg0: If you want to work on it, please ask here or on bug-hurd + if you need some guidance. + yeah found this thread + http://lists.gnu.org/archive/html/bug-hurd/2008-04/msg00069.html I don't + think I'll work on it diff --git a/open_issues/page_cache.mdwn b/open_issues/page_cache.mdwn index 062fb8d6..fd503fdc 100644 --- a/open_issues/page_cache.mdwn +++ b/open_issues/page_cache.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -10,7 +10,10 @@ License|/fdl]]."]]"""]] [[!tag open_issue_gnumach]] -IRC, freenode, #hurd, 2011-11-28: +[[!toc]] + + +# IRC, freenode, #hurd, 2011-11-28 youpi: would you find it reasonable to completely disable the page cache in gnumach ? 
@@ -71,3 +74,6 @@ IRC, freenode, #hurd, 2011-11-28: restarting them every few days is ok so I'd rather keep the performance :) ok + + +# [[gnumach_page_cache_policy]] diff --git a/open_issues/performance.mdwn b/open_issues/performance.mdwn index 2fd34621..8dbe1160 100644 --- a/open_issues/performance.mdwn +++ b/open_issues/performance.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -38,3 +39,16 @@ call|/glibc/fork]]'s case. * [[microbenchmarks]] * [[microkernel_multi-server]] + + * [[gnumach_page_cache_policy]] + + * [[metadata_caching]] + +--- + + +# IRC, freenode, #hurd, 2012-07-05 + + the more i study the code, the more i think a lot of time is + wasted on cpu, unlike the common belief of the lack of performance being + only due to I/O diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn index d6a98070..710c746b 100644 --- a/open_issues/performance/io_system/read-ahead.mdwn +++ b/open_issues/performance/io_system/read-ahead.mdwn @@ -16,6 +16,9 @@ License|/fdl]]."]]"""]] # [[community/gsoc/project_ideas/disk_io_performance]] +# [[gnumach_page_cache_policy]] + + # 2011-02 [[Etenil]] has been working in this area. @@ -389,3 +392,1176 @@ License|/fdl]]."]]"""]] with appropriate frame size. Is that right? question of taste, better ask on the list ok + + +## IRC, freenode, #hurd, 2012-06-09 + + hello. What fictitious pages in gnumach are needed for? + I mean why real page couldn't be grabbed straight, but in sometimes + fictitious page is grabbed first and than converted to real? + mcsim: iirc, fictitious pages are needed by device pagers which + must comply with the vm pager interface + mcsim: specifically, they must return a vm_page structure, but + this vm_page describes device memory + mcsim: and then, it must not be treated like normal vm_page, which + can be added to page queues (e.g. page cache) + + +## IRC, freenode, #hurd, 2012-06-22 + + braunr: Ah. Patch for large storages introduced new callback + pager_notify_evict. User had to define this callback on his own as + pager_dropweak, for instance. But neal's patch change this. Now all + callbacks could have any name, but user defines structure with pager ops + and supplies it in pager_create. + So, I just changed notify_evict to confirm it to new style. + braunr: I want to changed interface of mo_change_attributes and + test my changes with real partitions. For both these I have to update + ext2fs translator, but both partitions I have are bigger than 2Gb, that's + why I need apply this patch.z + But what to do with mo_change_attributes? I need somehow inform + kernel about page fault policy. + When I change mo_ interface in kernel I have to update all programs + that use this interface and ext2fs is one of them. + + braunr: Who do you think better to inform kernel about fault + policy? At the moment I've added fault_strategy parameter that accepts + following strategies: randow, sequential with single page cluster, + sequential with double page cluster and sequential with quad page + cluster. OSF/mach has completely another interface of + mo_change_attributes. In OSF/mach mo_change_attributes accepts structure + of parameter. 
This structure could have different formats depending o + This rpc could be useful because it is not very handy to update + mo_change_attributes for kernel, for hurd libs and for glibc. Instead of + this kernel will accept just one more structure format. + well, like i wrote on the mailing list several weeks ago, i don't + think the policy selection is of concern currently + you should focus on the implementation of page clustering and + readahead + concerning the interface, i don't think it's very important + also, i really don't like the fact that the policy is per object + it should be per map entry + i think it mentioned that in my mail too + i really think you're wasting time on this + http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00064.html + http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00029.html + mcsim: any reason you completely ignored those ? + braunr: Ok. I'll do clustering for map entries. + no it's not about that either :/ + clustering is grouping several pages in the same transfer between + kernel and pager + the *policy* is held in map entries + mcsim: I'm not sure I properly understand your question about the + policy interface... but if I do, it's IMHO usually better to expose + individual parameters as RPC arguments explicitly, rather than hiding + them in an opaque structure... + (there was quite some discussion about that with libburn guy) + antrik: Following will be ok? kern_return_t vm_advice(map, address, + length, advice, cluster_size) + Where advice will be either random or sequential + looks fine to me... but then, I'm not an expert on this stuff :-) + perhaps "policy" would be clearer than "advice"? + madvise has following prototype: int madvise(void *addr, size_t + len, int advice); + hmm... looks like I made a typo. Or advi_c_e is ok too? + advise is a verb; advice a noun... there is a reason why both + forms show up in the madvise prototype :-) + so final variant should be kern_return_t vm_advise(map, address, + length, policy, cluster_size)? + mcsim: nah, you are probably right that its better to keep + consistency with madvise, even if the name of the "advice" parameter + there might not be ideal... + BTW, where does cluster_size come from? from the filesystem? + I see merits both to naming the parameter "policy" (clearer) or + "advice" (more consistent) -- you decide :-) + antrik: also there is variant strategy, like with inheritance :) + I'll choose advice for now. + What do you mean under "where does cluster_size come from"? + well, madvise doesn't have this parameter; so the value must come + from a different source? + in madvise implementation it could fixed value or somehow + calculated basing on size of memory range. In OSF/mach cluster size is + supplied too (via mo_change_attributes). + ah, so you don't really know either :-) + well, my guess is that it is derived from the cluster size used by + the filesystem in question + so for us it would always be 4k for now + (and thus you can probably leave it out alltogether...) + well, fatfs can use larger clusters + I would say, implement it only if it's very easy to do... if it's + extra effort, it's probably not worth it + There is sense to make cluster size bigger for ext2 too, since most + likely consecutive clusters will be within same group. + But anyway I'll handle this later. 
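+
+For reference, this is how an application uses the POSIX-level call the
+discussion takes as a model; the file name is only a placeholder and error
+checking is omitted:
+
+    #include <fcntl.h>
+    #include <sys/mman.h>
+    #include <sys/stat.h>
+    #include <unistd.h>
+
+    int
+    main (void)
+    {
+      int fd = open ("/some/large/file", O_RDONLY);
+      struct stat st;
+      void *p;
+
+      fstat (fd, &st);
+      p = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+
+      /* Hint that the mapping will be read from start to end, so the VM
+         may read ahead aggressively; MADV_RANDOM would instead disable
+         readahead for this range.  */
+      madvise (p, st.st_size, MADV_SEQUENTIAL);
+
+      /* ... process p sequentially ... */
+
+      munmap (p, st.st_size);
+      close (fd);
+      return 0;
+    }
+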
+ well, I don't know what cluster_size does exactly; but by the + sound of it, I'd guess it makes an assumption that it's *always* better + to read in this cluster size, even for random access -- which would be + simply wrong for 4k filesystem clusters... + BTW, I agree with braunr that madvice() is optional -- it is way + way more important to get readahead working as a default policy first + + +## IRC, freenode, #hurd, 2012-07-01 + + youpi: Do you think you could review my code? + sure, just post it to the list + make sure to break it down into logical pieces + youpi: I pushed it my branch at gnumach repository + youpi: or it is still better to post changes to list? + posting to the list would permit feedback from other people too + mcsim: posix distinguishes normal, sequential and random + we should probably too + the system call should probably be named "vm_advise", to be a verb + like allocate etc. + youpi: ok. A have a talk with antrik regarding naming, I'll change + this later because compiling of glibc take a lot of time. + mcsim: I find it odd that vm_for_every_page allocates non-existing + pages + there should probably be at least a flag to request it or not + youpi: normal policy is synonym to default. And this could be + treated as either random or sequential, isn't it? + mcsim: normally, no + yes, the normal policy would be the default + it doesn't mean random or sequential + it's just to be a compromise between both + random is meant to make no read-ahead, since that'd be spurious + anyway + while by default we should make readahead + and sequential makes even more aggressive readahead, which usually + implies a greater number of pages to fetch + that's all + yes + well, that part is handled by the cluster_size parameter actually + what about reading pages preceding the faulted paged ? + Shouldn't sequential clean some pages (if they, for example, are + not precious) that are placed before fault page? + ? + that could make sense, yes + you lost me + and something that you wouldn't to with the normal policy + braunr: clear what has been read previously + ? + since the access is supposed to be sequential + oh + the application will proabably not re-read what was already read + you mean to avoid caching it ? + yes + inactive memory is there for that + while with the normal policy you'd assume that the application + might want to go back etc. + yes, but you can help it + yes + instead of making other pages compete with it + but then, it's for precious pages + I have to say I don't know what a precious page it + s + does it mean dirty pages? + no + precious means cached pages + "If precious is FALSE, the kernel treats the data as a temporary + and may throw it away if it hasn't been changed. If the precious value is + TRUE, the kernel treats its copy as a data repository and promises to + return it to the manager; the manager may tell the kernel to throw it + away instead by flushing and not cleaning the data" + hm no + precious means the kernel must keep it + youpi: According to vm_for_every_page. What kind of flag do you + suppose? If object is internal, I suppose not to cross the bound of + object, setting in_end appropriately in vm_calculate_clusters. + If object is external we don't know its actual size, so we should + make mo request first. And for this we should create fictitious pages. + mcsim: but how would you implement this "cleaning" with sequential + ? 
+ mcsim: ah, ok, I thought you were allocating memory, but it's just + fictitious pages + comment "Allocate a new page" should be fixed :) + braunr: I don't now how I will implement this specifically (haven't + tried yet), but I don't think that this is impossible + braunr: anyway it's useful as an example where normal and + sequential would be different + if it can be done simply + because i can see more trouble than gains in there :) + braunr: ok :) + mcsim: hm also, why fictitious pages ? + fictitious pages should normally be used only when dealing with + memory mapped physically which is not real physical memory, e.g. device + memory + but vm_fault could occur when object represent some device memory. + that's exactly why there are fictitious pages + at the moment of allocating of fictitious page it is not know what + backing store of object is. + really ? + damn, i've got used to UVM too much :/ + braunr: I said something wrong? + no no + it's just that sometimes, i'm confusing details about the various + BSD implementations i've studied + out-of-gsoc-topic question: besides network drivers, do you think + we'll have other drivers that will run in userspace and have to implement + memory mapping ? like framebuffers ? + or will there be a translation layer such as storeio that will + handle mapping ? + framebuffers typically will, yes + that'd be antrik's work on drm + hmm + ok + mcsim: so does the implementation work, and do you see performance + improvement? + youpi: I haven't tested it yet with large ext2 :/ + youpi: I'm going to finish now moving of ext2 to new interface, + than other translators in hurd repository and than finish memory policies + in gnumach. Is it ok? + which new interface? + Written by neal. I wrote some temporary code to make ext2 work with + it, but I'm going to change this now. + you mean the old unapplied patch? + yes + did you have a look at Karim's work? + (I have to say I never found the time to check how it related with + neal's patch) + I found only his work in kernel. I didn't see his work in applying + of neal's patch. + ok + how do they relate with each other? + (I have never actually looked at either of them :/) + his work in kernel and neal's patch? + yes + They do not correlate with each other. + ah, I must be misremembering what each of them do + in kam's patch was changes to support sequential reading in reverse + order (as in OSF/Mach), but posix does not support such behavior, so I + didn't implement this either. + I can't find the pointer to neal's patch, do you have it off-hand? + http://comments.gmane.org/gmane.os.hurd.bugs/351 + thx + I think we are not talking about the same patch from Karim + I mean lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html + I mean this patch: + http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00024.html + Oh. + ok + seems, this is just the same + yes + from a non-expert view, I would have thought these patches play + hand in hand, do they really? + this patch is completely for kernel and neal's one is completely + for libpager. + i.e. neal's fixes libpager, and karim's fixes the kernel + yes + ending up with fixing the whole path? + AIUI, karim's patch will be needed so that your increased readahead + will end up with clustered page request? + I will not use kam's patch + is it not needed to actually get pages in together? + how do you tell libpager to fetch pages together? 
+ about the cluster size, I'd say it shouldn't be specified at + vm_advise() level + in other OSes, it is usually automatically tuned + by ramping it up to a maximum readahead size (which, however, could + be specified) + that's important for the normal policy, where there are typically + successive periods of sequential reads, but you don't know in advance for + how long + braunr said that there are legal issues with his code, so I cannot + use it. + did i ? + mcsim: can you give me a link to the code again please ? + see above :) + which one ? + both + they only differ by a typo + mcsim: i don't remember saying that, do you have any link ? + or log ? + sorry, can you rephrase "ending up with fixing the whole path"? + cluster_size in vm_advise also could be considered as advise + no + it must be the third time we're talking about this + mcsim: I mean both parts would be needed to actually achieve + clustered i/o + again, why make cluster_size a per object attribute ? :( + wouldn't some objects benefit from bigger cluster sizes, while + others wouldn't? + but again, I believe it should rather be autotuned + (for each object) + if we merely want posix compatibility (and for a first attempt, + it's quite enough), vm_advise is good, and the kernel selects the + implementation (and thus the cluster sizes) + if we want finer grained control, perhaps a per pager cluster_size + would be good, although its efficiency depends on several parameters + (e.g. where the page is in this cluster) + but a per object cluster size is a large waste of memory + considering very few applications (if not none) would use the "feature" + .. + (if any*) + there must be a misunderstanding + why would it be a waste of memory? + "per object" + so? + there can be many memory objects in the kernel + so? + so such an overhead must be useful to accept it + in my understanding, a cluster size per object is just a mere + integer for each object + what overhead? + yes + don't we have just thousands of objects? + for now + remember we're trying to remove the page cache limit :) + that still won't be more than tens of thousands of objects + times an integer + that's completely neglectible + braunr: Strange, Can't find in logs. Weird things are happening in + my memory :/ Sorry. + mcsim: i'm almost sure i never said that :/ + but i don't trust my memory too much either + youpi: depends + mcsim: I mean both parts would be needed to actually achieve + clustered i/o + braunr: I made I call vm_advise that applies policy to memory range + (vm_map_entry to be specific) + mcsim: good + actually the cluster size should even be per memory range + youpi: In this sense, yes + k + sorry, Internet connection lags + when changing a structure used to create many objects, keep in + mind one thing + if its size gets larger than a threshold (currently, powers of + two), the cache used by the slab allocator will allocate twice the + necessary amount + sure + this is the case with most object caching allocators, although + some can have specific caches for common sizes such as 96k which aren't + powers of two + anyway, an integer is negligible, but the final structure size + must be checked + (for both 32 and 64 bits) + braunr: ok. + But I didn't understand what should be done with cluster size in + vm_advise? Should I delete it? 
+ to me, the cluster size is a pager property + to me, the cluster size is a map property + whereas vm_advise indicates what applications want + you could have several process accessing the same file in different + ways + youpi: that's why there is a policy + isn't cluster_size part of the policy? + but if the pager abilities are limited, it won't change much + i'm not sure + cluster_size is the amount of readahead, isn't it? + no, it's the amount of data in a single transfer + Yes, it is. + ok, i'll have to check your code + shouldn't transfers permit unbound amounts of data? + braunr: than I misunderstand what readahead is + well then cluster size is per policy :) + e.g. random => 0, normal => 3, sequential => 15 + why make it per map entry ? + because it depends on what the application doezs + let me check the code + if it's accessing randomly, no need for big transfers + just page transfers will be fine + if accessing sequentially, rather use whole MiB of transfers + and these behavior can be for the same file + mcsim: the call is vm_advi*s*e + mcsim: the call is vm_advi_s_e + not advice + yes, he agreed earlier + ok + cluster_size is the amount of data that I try to read at one time. + at singe mo_data_request + *single + which, to me, will depend on the actual map + ok so it is the transfer size + and should be autotuned, especially for normal behavior + youpi: it makes no sense to have both the advice and the actual + size per map entry + to get big readahead with all apps + braunr: the size is not only dependent on the advice, but also on + the application behavior + youpi: how does this application tell this ? + even for sequential, you shouldn't necessarily use very big amounts + of transfers + there is no need for the advice if there is a cluster size + there can be, in the case of sequential, as we said, to clear + previous pages + but otherwise, indeed + but for me it's the converse + the cluster size should be tuned anyway + and i'm against giving the cluster size in the advise call, as we + may want to prefetch previous data as well + I don't see how that collides + well, if you consider it's the transfer size, it doesn't + to me cluster size is just the size of a window + if you consider it's the amount of pages following a faulted page, + it will + also, if your policy says e.g. "3 pages before, 10 after", and + your cluster size is 2, what happens ? + i would find it much simpler to do what other VM variants do: + compute the I/O sizes directly from the policy + don't they autotune, and use the policy as a maximum ? + depends on the implementations + ok, but yes I agree + although casting the size into stone in the policy looks bogus to + me + but making cluster_size part of the kernel interface looks way too + messy + it is + that's why i would have thought it as part of the pager properties + the pager is the true component besides the kernel that is + actually involved in paging ... + well, for me the flexibility should still be per application + by pager you mean the whole pager, not each file, right? + if a pager can page more because e.g. it's a file system with big + block sizes, why not fetch more ? + yes + it could be each file + but only if we have use for it + and i don't see that currently + well, posix currently doesn't provide a way to set it + so it would be useless atm + i was thinking about our hurd pagers + could we perhaps say that the policy maximum could be a fraction of + available memory? + why would we want that ? 
+ (total memory, I mean) + to make it not completely cast into stone + as have been in the past in gnumach + i fail to understand :/ + there must be a misunderstanding then + (pun not intended) + why do you want to limit the policy maximum ? + how to decide it? + the pager sets it + actually I don't see how a pager could decide it + on what ground does it make the decision? + readahead should ideally be as much as 1MiB + 02:02 < braunr> if a pager can page more because e.g. it's a file + system with big block sizes, why not fetch more ? + is the example i have in mind + otherwise some default values + that's way smaller than 1MiB, isn't it? + yes + and 1 MiB seems a lot to me :) + for readahead, not really + maybe for sequential + that's what we care about! + ah, i thought we cared about normal + "as much as 1MiB", I said + I don't mean normal :) + right + but again, why limit ? + we could have 2 or more ? + at some point you don't get more efficiency + but eat more memory + having the pager set the amount allows us to easily adjust it over + time + braunr: Do you think that readahead should be implemented in + libpager? + than needed + mcsim: no + mcsim: err + mcsim: can't answer + mcsim: do you read the log of what you have missed during + disconnection? + i'm not sure about what libpager does actually + yes + for me it's just mutualisation of code used by pagers + i don't know the details + youpi: yes + youpi: that's why we want these values not hardcoded in the kernel + youpi: so that they can be adjusted by our shiny user space OS + (btw apparently linux uses minimum 16k, maximum 128 or 256k) + that's more reasonable + that's just 4 times less :) + braunr: You say that pager should decide how much data should be + read ahead, but each pager can't implement it on it's own as there will + be too much overhead. So the only way is to implement this in libpager. + mcsim: gni ? + why couldn't they ? + mcsim: he means the size, not the actual implementation + the maximum size, actually + actually, i would imagine it as the pager giving per policy + parameters + right + like how many before and after + I agree, then + the kernel could limit, sure, to avoid letting pagers use + completely insane values + (and that's just a max, the kernel autotunes below that) + why not + that kernel limit could be a fraction of memory, then? + it could, yes + i see what you mean now + mcsim: did you understand our discussion? + don't hesitate to ask for clarification + I supposed cluster_size to be such parameter. And advice will help + to interpret this parameter (whether data should be read after fault page + or some data should be cleaned before) + mcsim: we however believe that it's rather the pager than the + application that would tell that + at least for the default values + posix doesn't have a way to specify it, and I don't think it will + in the future + and i don't think our own hurd-specific programs will need more + than that + if they do, we can slightly change the interface to make it a per + object property + i've checked the slab properties, and it seems we can safely add + it per object + cf http://www.sceen.net/~rbraun/slabinfo.out + so it would still be set by the pager, but if depending on the + object, the pager could set different values + youpi: do you think the pager should just provide one maximum size + ? or per policy sizes ? 
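To make the two options in that question concrete before the answer below, here is what the "per policy sizes" variant could look like: the pager hands the kernel one read-behind/read-ahead pair per policy, and the kernel clamps whatever it is given to a global maximum (the "fraction of memory" idea). Everything in this sketch is invented for illustration; none of it exists in gnumach, and the divisor is an arbitrary example rather than a proposed value.

    enum vm_advice_kind
    {
      VM_ADVICE_RANDOM,
      VM_ADVICE_NORMAL,
      VM_ADVICE_SEQUENTIAL,
      VM_NADVICE
    };

    /* What a pager could register, per policy.  */
    struct pager_paging_params
    {
      unsigned int pages_before[VM_NADVICE];   /* read-behind */
      unsigned int pages_after[VM_NADVICE];    /* read-ahead  */
    };

    /* Kernel-side clamp: never honour more than some fraction of physical
       memory, whatever the pager asked for.  */
    static unsigned int
    clamp_cluster (unsigned int requested_pages, unsigned int total_pages)
    {
      unsigned int max = total_pages / 256;    /* example fraction only */

      return requested_pages < max ? requested_pages : max;
    }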
+ I'd say per policy size + so people can increase sequential size like crazy when they know + their sequential applications need it, without disturbing the normal + behavior + right + so the last decision is per pager or per object + mcsim: i'd say whatever makes your implementation simpler :) + braunr: how kernel knows that object are created by specific pager? + that's the kind of things i'm referring to with "whatever makes + your implementation simpler" + but usually, vm_objects have an ipc port and some properties + relatedto their pagers + -usually + the problem i had in mind was the locking protocol but our spin + locks are noops, so it will be difficult to detect deadlocks + braunr: and for every policy there should be variable in vm_object + structure with appropriate cluster_size? + if you want it per object, yes + although i really don't think we want it + better keep it per pager for now + let's imagine youpi finishes his 64-bits support, and i can + successfully remove the page cache limit + we'd jump from 1.8 GiB at most to potentially dozens of GiB of RAM + and 1.8, mostly unused + to dozens almost completely used, almost all the times for the + most interesting use cases + we may have lots and lots of objects to keep around + so if noone really uses the feature ... there is no point + but also lots and lots of memory to spend on it :) + a lot of objects are just one page, but a lof of them are not + sure + we wouldn't be doing that otherwise :) + i'm just saying there is no reason to add the overhead of several + integers for each object if they're simply not used at all + hmm, 64-bits, better page cache, clustered paging I/O :> + (and readahead included in the last ofc) + good night ! + than, probably, make system-global max-cluster_size? This will save + some memory. Also there is usually no sense in reading really huge chunks + at once. + but that'd be tedious to set + there are only a few pagers, that's no wasted memory + the user being able to set it for his own pager is however a very + nice feature, which can be very useful for databases, image processing, + etc. + In conclusion I have to implement following: 3 memory policies per + object and per vm_map_entry. Max cluster size for every policy should be + set per pager. + So, there should be 2 system calls for setting memory policy and + one for setting cluster sizes. + Also amount of data to transfer should be tuned automatically by + every page fault. + youpi: Correct me, please, if I'm wrong. + I believe that's what we ended up to decide, yes + + +## IRC, freenode, #hurd, 2012-07-02 + + is it safe to say that all memory objects implemented by external + pagers have "file" semantics ? + i wonder if the current memory manager interface is suitable for + device pagers + braunr: What does "file" semantics mean? + mcsim: anonymous memory doesn't have the same semantics as a file + for example + anonymous memory that is discontiguous in physical memory can be + contiguous in swap + and its location can change with time + whereas with a memory object, the data exchanged with pagers is + identified with its offset + in (probably) all other systems, this way of specifying data is + common to all files, whatever the file system + linux uses the struct vm_file name, while in BSD/Solaris they are + called vnodes (the link between a file system inode and virtual memory) + my question is : can we implement external device pagers with the + current interface, or is this interface really meant for files ? 
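As an aside on the "file semantics" question just raised, a purely conceptual sketch of the distinction being drawn (these structures exist nowhere; they only restate the point in code): a file-backed page is permanently named by a (memory object, offset) pair, whichever filesystem implements the pager, while an anonymous page is named by whatever swap location the default pager currently gives it, which may move over time.

    struct file_backed_page_name
    {
      mach_port_t memory_object;   /* the pager's object port */
      vm_offset_t offset;          /* fixed for the page's lifetime */
    };

    struct anonymous_page_name
    {
      unsigned long swap_slot;     /* assigned by the default pager,
                                      may change on every pageout */
    };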
+ also + mcsim: something about what you said yesterday + 02:39 < mcsim> In conclusion I have to implement following: 3 + memory policies per object and per vm_map_entry. Max cluster size for + every policy should be set per pager. + not per object + one policy per map entry + transfer parameters (pages before and after the faulted page) per + policy, defined by pagers + 02:39 < mcsim> So, there should be 2 system calls for setting + memory policy and one for setting cluster sizes. + adding one call for vm_advise is good because it mirrors the posix + call + but for the parameters, i'd suggest changing an already existing + call + not sure which one though + braunr: do you know how mo_change_attributes implemented in + OSF/Mach? + after a quick reading of the reference manual, i think i + understand why they made it per object + mcsim: no + did they change the call to include those paging parameters ? + it accept two parameters: flavor and pointer to structure with + parameters. + flavor determines semantics of structure with parameters. + + http://www.darwin-development.org/cgi-bin/cvsweb/osfmk/src/mach_kernel/vm/memory_object.c?rev=1.1 + structure can have 3 different views and what exect view will be is + determined by value of flavor + So, I thought about implementing similar call that could be used + for various purposes. + like ioctl + "pointer to structure with parameters" <= which one ? + mcsim: don't model anything anywhere like ioctl please + memory_object_info_t attributes + ioctl is the very thing we want NOT to have on the hurd + ok attributes + and what are the possible values of flavour, and what kinds of + attributes ? + and then appears something like this on each case: behave = + (old_memory_object_behave_info_t) attributes; + ok i see + flavor could be OLD_MEMORY_OBJECT_BEHAVIOR_INFO, + MEMORY_OBJECT_BEHAVIOR_INFO, MEMORY_OBJECT_PERFORMANCE_INFO etc + i don't really see the point of flavour here, other than + compatibility + having attributes is nice, but you should probably add it as a + call parameter, not inside a structure + as a general rule, we don't like passing structures too much + to/from the kernel, because handling them with mig isn't very clean + ok + What policy parameters should be defined by pager? + i'd say number of pages to page-in before and after the faulted + page + Only pages before and after the faulted page? + for me yes + youpi might have different things in mind + the page cleaning in sequential mode is something i wouldn't do + 1/ applications might want data read sequentially to remain in the + cache, for other sequential accesses + 2/ applications that really don't want to cache anything should + use O_DIRECT + 3/ it's complicated, and we're in july + i'd rather have a correct and stable result than too many unused + features + braunr: MADV_SEQUENTIAL Expect page references in sequential order. + (Hence, pages in the given range can be aggressively read ahead, and may + be freed soon after they are accessed.) + this is from linux man + braunr: Can I at least make keeping in mind that it could be + implemented? + I mean future rpc interface + braunr: From behalf of kernel pager is just a port. 
+ That's why it is not clear for me how I can make in kernel + per-pager policy + mcsim: you can't + 15:19 < braunr> after a quick reading of the reference manual, i + think i understand why they made it per object + + http://pubs.opengroup.org/onlinepubs/009695399/functions/posix_madvise.html + POSIX_MADV_SEQUENTIAL + Specifies that the application expects to access the specified + range sequentially from lower addresses to higher addresses. + linux might free pages after their access, why not, but this is + entirely up to the implementation + I know, when but applications might want data read sequentially to + remain in the cache, for other sequential accesses this kind of access + could be treated rather normal or random + we can do differently + mcsim: no + sequential means the access will be sequential + so aggressive readahead (e.g. 0 pages before, many after), should + be used + for better performance + from my pov, it has nothing to do with caching + i actually sometimes expect data to remain in cache + e.g. before playing a movie from sshfs, i sometimes prefetch it + using dd + then i use mplayer + i'd be very disappointed if my data didn't remain in the cache :) + At least these pages could be placed into inactive list to be first + candidates for pageout. + that's what will happen by default + mcsim: if we need more properties for memory objects, we'll adjust + the call later, when we actually implement them + so, first call is vm_advise and second is changed + mo_change_attributes? + yes + there will appear 3 new parameters in mo_c_a: policy, pages before + and pages after? + braunr: With vm_advise I didn't understand one thing. This call is + defined in defs file, so that should mean that vm_advise is ordinal rpc + call. But on the same time it is defined as syscall in mach internals (in + mach_trap_table). + mcsim: what ? + were is it "defined" ? (it doesn't exit in gnumach currently) + Ok, let consider vm_map + I define it both in mach_trap_table and in defs file. + But why? + uh ? + let me see + Why defining in defs file is not enough? + and previous question: there will appear 3 new parameters in + mo_c_a: policy, pages before and pages after? + mcsim: give me the exact file paths please + mcsim: we'll discuss the new parameters after + kern/syscall_sw.c + right i see + here mach_trap_table in defined + i think they're not used + they were probably introduced for performance + and ./include/mach/mach.defs + don't bother adding vm_advise as a syscall + about the parameters, it's a bit more complicated + you should add 6 parameters + before and after, for the 3 policies + but + as seen in the posix page, there could be more policies .. + ok forget what i said, it's stupid + yes, the 3 parameters you had in mind are correct + don't forget a "don't change" value for the policy though, so the + kernel ignores the before/after values if we don't want to change that + ok + mcsim: another reason i asked about "file semantics" is the way we + handle the cache + mcsim: file semantics imply data is cached, whereas anonymous and + device memory usually isn't + (although having the cache at the vm layer instead of the pager + layer allows nice things like the swap cache) + But this shouldn't affect possibility of implementing of device + pager. 
+ yes it may + consider how a fault is actually handled by a device + mach must use weird fictitious pages for that + whereas it would be better to simply let the pager handle the + fault as it sees fit + setting may_cache to false should resolve the issue + for the caching problem, yes + which is why i still think it's better to handle the cache at the + vm layer, unlike UVM which lets the vnode pager handle its own cache, and + removes the vm cache completely + The only issue with pager interface I see is implementing of + scatter-gather DMA (as current interface does not support non-consecutive + access) + right + but that's a performance issue + my problem with device pagers is correctness + currently, i think the kernel just asks pagers for "data" + whereas a device pager should really map its device memory where + the fault happen + braunr: You mean that every access to memory should cause page + fault? + I mean mapping of device memory + no + i mean a fault on device mapped memory should directly access a + shared region + whereas file pagers only implement backing store + let me explain a bit more + here is what happens with file mapped memory + you map it, access it (some I/O is done to get the page content in + physical memory), then later it's flushed back + whereas with device memory, there shouldn't be any I/O, the device + memory should directly be mapped (well, some devices need the same + caching behaviour, while others provide direct access) + one of the obvious consequences is that, when you map device + memory (e.g. a framebuffer), you expect changes in your mapped memory to + be effective right away + while with file mapped memory, you need to msync() it + (some framebuffers also need to be synced, which suggests greater + control is needed for external pagers) + Seems that I understand you. But how it is implemented in other + OS'es? Do they set something in mmu? + mcsim: in netbsd, pagers have a fault operatin in addition to get + and put + the device pager sets get and put to null and implements fault + only + the fault callback then calls the d_mmap callback of the specific + driver + which usually results in the mmu being programmed directly + (e.g. pmap_enter or similar) + in linux, i think raw device drivers, being implemented as + character device files, must provide raw read/write/mmap/etc.. functions + so it looks pretty much similar + i'd say our current external pager interface is insufficient for + device pagers + but antrik may know more since he worked on ggi + antrik: ^ + braunr: Seems he used io_map + mcsim: where ar eyou looking at ? the incubator ? + his master's thesis + ah the thesis + but where ? :) + I'll give you a link + http://dl.dropbox.com/u/36519904/kgi_on_hurd.pdf + thanks + see p 158 + arg, more than 200 pages, and he says he's lazy :/ + mcsim: btw, have a look at m_o_ready + braunr: This is old form of mo_change attributes + I'm not going to change it + mcsim: these are actually the default object parameters right ? + mcsim: if you don't change it, it means the kernel must set + default values until the pager changes them, if it does + yes. + mcsim: madvise() on Linux has a separate flag to indicate that + pages won't be reused. thus I think it would *not* be a good idea to + imply it in SEQUENTIAL + braunr: yes, my KMS code relies on mapping memory objects for the + framebuffer + (it should be noted though that on "modern" hardware, mapping + graphics memory directly usually gives very poor performance, and drivers + tend to avoid it...) 
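A rough user-space sketch of the framebuffer mapping antrik refers to here, which he describes in more detail below: a memory object covering a physical range is obtained from the kernel "mem" device with device_map(), then mapped with vm_map(). The RPC names and parameter lists follow the GNU Mach reference manual, but the helper, its arguments and the header choices are assumptions, and error handling is omitted.

    #include <mach.h>
    #include <device/device.h>        /* header names are approximate */

    static vm_address_t
    map_framebuffer (mach_port_t device_master,
                     vm_offset_t phys_base, vm_size_t fb_size)
    {
      mach_port_t dev, pager;
      vm_address_t addr = 0;

      /* The "mem" device exposes physical address space.  */
      device_open (device_master, D_READ | D_WRITE, "mem", &dev);

      /* A memory object backed by [phys_base, phys_base + fb_size).  */
      device_map (dev, VM_PROT_READ | VM_PROT_WRITE, phys_base, fb_size,
                  &pager, 0);

      /* Map it anywhere in our address space; stores then reach the device
         memory directly, with no msync() involved.  */
      vm_map (mach_task_self (), &addr, fb_size, 0, TRUE, pager, 0, FALSE,
              VM_PROT_READ | VM_PROT_WRITE, VM_PROT_READ | VM_PROT_WRITE,
              VM_INHERIT_NONE);

      return addr;
    }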
+ mcsim: BTW, it was most likely me who warned about legal issues + with KAM's work. AFAIK he never managed to get the copyright assignment + done :-( + (that's not really mandatory for the gnumach work though... only + for the Hurd userspace parts) + also I'd like to point out again that the cluster_size argument + from OSF Mach was probably *not* meant for advice from application + programs, but rather was supposed to reflect the cluster size of the + filesystem in question. at least that sounds much more plausible to me... + braunr: I have no idea whay you mean by "device pager". device + memory is mapped once when the VM mapping is established; there is no + need for any fault handling... + mcsim: to be clear, I think the cluster_size parameter is mostly + orthogonal to policy... and probably not very useful at all, as ext2 + almost always uses page-sized clusters. I'm strongly advise against + bothering with it in the initial implementation + mcsim: to avoid confusion, better use a completely different name + for the policy-decided readahead size + antrik: ok + braunr: well, yes, the thesis report turned out HUGE; but the + actual work I did on the KGI port is fairly tiny (not more than a few + weeks of actual hacking... everything else was just brooding) + braunr: more importantly, it's pretty much the last (and only + non-trivial) work I did on the Hurd :-( + (also, I don't think I used the word "lazy"... my problem is not + laziness per se; but rather inability to motivate myself to do anything + not providing near-instant gratification...) + antrik: right + antrik: i shouldn't consider myself lazy either + mcsim: i agree with antrik, as i told you weeks ago + about + 21:45 < antrik> mcsim: to be clear, I think the cluster_size + parameter is mostly orthogonal to policy... and probably not very useful + at all, as ext2 almost always uses page-sized clusters. I'm strongly + advise against bothering with it + in the initial implementation + antrik: but how do you actually map device memory ? + also, strangely enough, here is the comment in dragonflys + madvise(2) + 21:45 < antrik> mcsim: to be clear, I think the cluster_size + parameter is mostly orthogonal to policy... and probably not very useful + at all, as ext2 almost always uses page-sized clusters. I'm strongly + advise against bothering with it + in the initial implementation + arg + MADV_SEQUENTIAL Causes the VM system to depress the priority of + pages immediately preceding a given page when it is faulted in. + braunr: interesting... + (about SEQUENTIAL on dragonfly) + as for mapping device memory, I just use to device_map() on the + mem device to map the physical address space into a memory object, and + then through vm_map into the driver (and sometimes application) address + space + formally, there *is* a pager involved of course (implemented + in-kernel by the mem device), but it doesn't really do anything + interesting + thinking about it, there *might* actually be page faults involved + when the address ranges are first accessed... but even then, the handling + is really trivial and not terribly interesting + antrik: it does the most interesting part, create the physical + mapping + and as trivial as it is, it requires a special interface + i'll read about device_map again + but yes, the fact that it's in-kernel is what solves the problem + here + what i'm interested in is to do it outside the kernel :) + why would you want to do that? 
+ there is no policy involved in doing an MMIO mapping + you ask for the pysical memory region you are interested in, and + that's it + whether the kernel adds the page table entries immediately or on + faults is really an implementation detail + braunr: ^ + yes it's a detail + but do we currently have the interface to make such mappings from + userspace ? + and i want to do that because i'd like as many drivers as possible + outside the kernel of course + again, the userspace driver asks the kernel to establish the + mapping (through device_map() and then vm_map() on the resulting memory + object) + hm i'm missing something + + http://www.gnu.org/software/hurd/gnumach-doc/Device-Map.html#Device-Map + <= this one ? + yes, this one + but this implies the device is implemented by the kernel + the mem device is, yes + but that's not a driver + ah + it's just the interface for doing MMIO + (well, any physical mapping... but MMIO is probably the only real + use case for that) + ok + i was thinking about completely removing the device interface from + the kernel actually + but it makes sense to have such devices there + well, in theory, specific kernel drivers can expose their own + device_map() -- but IIRC the only one that does (besides mem of course) + is maptime -- which is not a real driver either... + oh btw, i didn't know you had a blog :) + well, it would be possible to replace the device interface by + specific interfaces for the generic pseudo devices... I'm not sure how + useful that would be + there are lots of interesting stuff there + hehe... another failure ;-) + failure ? + well, when I realized that I'm speding a lot of time pondering + things, and never can get myself to actually impelemnt any of them, I had + the idea that if I write them down, there might at least be *some* good + from it... + unfortunately it turned out that I need so much effort to write + things down, that most of the time I can't get myself to do that either + :-( + i see + well it's still nice to have it + (notice that the latest entry is two years old... and I haven't + even started describing most of my central ideas :-( ) + antrik: i tried to create a blog once, and found what i wrote so + stupid i immediately removed it + hehe + actually some of my entries seem silly in retrospect as well... + but I guess that's just the way it is ;-) + :) + i'm almost sure other people would be interested in what i had to + say + BTW, I'm actually not sure whether the Mach interfaces are + sufficient to implement GEM/TTM... we would certainly need kernel support + for GART (as for any other kind IOMMU in fact); but beyond that it's not + clear to me + GEM ? TTM ? GART ? + GEM = Graphics Execution Manager. part of the "new" DRM interface, + closely tied with KMS + TTM = Translation Table Manager. does part of the background work + for most of the GEM drivers + "The Graphics Execution Manager (GEM) is a computer software + system developed by Intel to do memory management for device drivers for + graphics chipsets." hmm + (in fact it was originally meant to provide the actual interface; + but the Inter folks decided that it's not useful for their UMA graphics) + GART = Graphics Aperture + kind of an IOMMU for graphics cards + allowing the graphics card to work with virtual mappings of main + memory + (i.e. allowing safe DMA) + ok + all this graphics stuff looks so complex :/ + it is + I have a whole big chapter on that in my thesis... 
and I'm not + even sure I got everything right + what is nvidia using/doing (except for getting the finger) ? + flushing out all the details for KMS, GEM etc. took the developers + like two years (even longer if counting the history of TTM) + Nvidia's proprietary stuff uses a completely own kernel interface, + which is of course not exposed or docuemented in any way... but I guess + it's actually similar in what it does) + ok + (you could ask the nouveau guys if you are truly + interested... they are doing most of their reverse engineering at the + kernel interface level) + it seems graphics have very special needs, and a lot of them + and the interfaces are changing often + so it's not that much interesting currently + it just means we'll probably have to change the mach interface too + like you said + so the answer to my question, which was something like "do mach + external pagers only implement files ?", is likely yes + well, KMS/GEM had reached some stability; but now there are + further changes ahead with the embedded folks coming in with all their + dedicated hardware, calling for unified buffer management across the + whole pipeline (from capture to output) + and yes: graphics hardware tends to be much more complex regarding + the interface than any other hardware. that's because it's a combination + of actual I/O (like most other devices) with a very powerful coprocessor + and the coprocessor part is pretty much unique amongst peripherial + devices + (actually, the I/O part is also much more complex than most other + hardware... but that alone would only require a more complex driver, not + special interfaces) + embedded hardware makes it more interesting in that the I/O + part(s) are separate from the coprocessor ones; and that there are often + several separate specialised ones of each... the DRM/KMS stuff is not + prepared to deal with this + v4l over time has evolved to cover such things; but it's not + really the right place to implement graphics drivers... which is why + there are not efforts to unify these frameworks. funny times... + + +## IRC, freenode, #hurd, 2012-07-03 + + mcsim: vm_for_every_page should be static + braunr: ok + mcsim: see http://gcc.gnu.org/onlinedocs/gcc/Inline.html + and it looks big enough that you shouldn't make it inline + let the compiler decide for you (which is possible only if the + function is static) + (otherwise a global symbol needs to exist) + mcsim: i don't know where you copied that comment from, but you + should review the description of the vm_advice call in mach.Defs + braunr: I see + braunr: It was vm_inherit :) + mcsim: why isn't NORMAL defined in vm_advise.h ? + mcsim: i figured actually ;) + braunr: I was going to do it later when. + mcsim: for more info on inline, see + http://www.kernel.org/doc/Documentation/CodingStyle + arg that's an old one + braunr: I know that I do not follow coding style + mcsim: this one is about linux :p + mcsim: http://lxr.linux.no/linux/Documentation/CodingStyle should + have it + mcsim: "Chapter 15: The inline disease" + I was going to fix it later during refactoring when I'll merge + mplaneta/gsoc12/working to mplaneta/gsoc12/master + be sure not to forget :p + and the best not to forget is to do it asap + +way + As to inline. I thought that even if I specify function as inline + gcc makes final decision about it. + There was a specifier that made function always inline, AFAIR. 
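The specifier being recalled here is GCC's always_inline function attribute: plain inline is only a hint, which the compiler may ignore (especially for large functions), whereas always_inline forces inlining or fails the build. A minimal example — the function itself is made up, not gnumach code:

    static inline __attribute__ ((always_inline)) int
    page_aligned (unsigned long addr)
    {
      return (addr & (4096UL - 1)) == 0;
    }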
+ gcc can force a function not to be inline, yes + but inline is still considered as a strong hint + + +## IRC, freenode, #hurd, 2012-07-05 + + braunr: hello. You've said that pager has to supply 2 values to + kernel to give it an advice how execute page fault. These two values + should be number of pages before and after the page where fault + occurred. But for sequential policy number of pager before makes no + sense. For random policy too. For normal policy it would be sane to make + readahead symmetric. Probably it would be sane to make pager supply + cluster_size (if it is necessary to supply any) that w + *that will be advice for kernel of least sane value? And maximal + value will be f(free_memory, map_entry_size)? + mcsim1: I doubt symmetric readahead would be a good default + policy... while it's hard to estimate an optimum over all typical use + cases, I'm pretty sure most situtations will benefit almost exclusively + from reading following pages, not preceeding ones + I'm not even sure it's useful to read preceding pages at all in + the default policy -- the use cases are probably so rare that the penalty + in all other use cases is not justified. I might be wrong on that + though... + I wonder how other systems handle that + antrik: if there is a mismatch between pages and the underlying + store, like why changing small bits of data on an ssd is slow? + mcsim1: i don't see why not + antrik: netbsd reads a few pages before too + actually, what netbsd does vary on the version, some only mapped + in resident pages, later versions started asynchronous transfers in the + hope those pages would be there + LarstiQ: not sure what you are trying to say + in linux : + 321 * MADV_NORMAL - the default behavior is to read clusters. + This + 322 * results in some read-ahead and read-behind. + not sure if it's actually what the implementation does + well, right -- it's probably always useful to read whole clusters + at a time, especially if they are the same size as pages... that doesn't + mean it always reads preceding pages; only if the read is in the middle + of the cluster AIUI + antrik: basically what braunr just pasted + and in most cases, we will want to read some *following* clusters + as well, but probably not preceding ones + * LarstiQ nods + antrik: the default policy is usually rather sequential + here are the numbers for netbsd + 166 static struct uvm_advice uvmadvice[] = { + 167 { MADV_NORMAL, 3, 4 }, + 168 { MADV_RANDOM, 0, 0 }, + 169 { MADV_SEQUENTIAL, 8, 7}, + 170 }; + struct uvm_advice { + int advice; + int nback; + int nforw; + }; + surprising isn't it ? + they may suggest sequential may be backwards too + makes sense + braunr: what are these numbers? pages? + yes + braunr: I suspect the idea behind SEQUENTIAL is that with typical + sequential access patterns, you will start at one end of the file, and + then go towards the other end -- so the extra clusters in the "wrong" + direction do not actually come into play + only situation where some extra clusters are actually read is when + you start in the middle of a file, and thus do not know yet in which + direction the sequential read will go... + yes, there are similar comments in the linux code + mcsim1: so having before and after numbers seems both + straightforward and in par with other implementations + I'm still surprised about the almost symmetrical policy for NORMAL + though + BTW, is it common to use heuristics for automatically recognizing + random and sequential patterns in the absence of explicit madise? 
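To make that last question concrete: such heuristics usually amount to remembering where the previous fault (or read) landed and widening the readahead window while accesses stay contiguous, roughly what Linux's file readahead does. A toy sketch, with invented names and no relation to existing gnumach code:

    struct ra_state
    {
      unsigned long prev_page;   /* page index of the previous fault */
      unsigned int window;       /* current readahead window, in pages */
    };

    static unsigned int
    update_window (struct ra_state *ra, unsigned long page,
                   unsigned int max_window)
    {
      if (page == ra->prev_page + 1)
        {
          /* Contiguous with the last fault: assume sequential, ramp up.  */
          ra->window = ra->window ? 2 * ra->window : 1;
          if (ra->window > max_window)
            ra->window = max_window;
        }
      else
        ra->window = 0;          /* looks random: no readahead */

      ra->prev_page = page;
      return ra->window;
    }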
+ i don't know + netbsd doesn't use any, linux seems to have different behaviours + for anonymous and file memory + when KAM was working on this stuff, someone suggested that... + there is a file_ra_state struct in linux, for per file read-ahead + policy + now the structure is of course per file system, since they all use + the same address + (which is why i wanted it to be per pager in the first place) + mcsim1: as I said before, it might be useful for the pager to + supply cluster size, if it's different than page size. but right now I + don't think this is something worth bothering with... + I seriously doubt it would be useful for the pager to supply any + other kind of policy + braunr: I don't understand your remark about using the same + address... + braunr: pre-mapping seems the obvious way to implement readahead + policy + err... per-mapping + the ra_state (read ahead state) isn't the policy + the policy is per mapping, parts of the implementation of the + policy is per file system + braunr: How do you look at following implementation of NORMAL + policy: We have fault page that is current. Than we have maximal size of + readahead block. First we find first absent pages before and after + current. Than we try to fit block that will be readahead into this + range. Here could be following situations: in range RBS/2 (RBS -- size of + readahead block) there is no any page, so readahead will be symmetric; if + current page is first absent page than all + RBS block will consist of pages that are after current; on the + contrary if current page is last absent than readahead will go backwards. + Additionally if current page is approximately in the middle of the + range we can decrease RBS, supposing that access is random. + mcsim1: i think your gsoc project is about readahead, we're in + july, and you need to get the job done + mcsim1: grab one policy that works, pages before and after are + good enough + use sane default values, let the pagers decide if they want + something else + and concentrate on the real work now + braunr: I still don't see why pagers should mess with that... only + complicates matters IMHO + antrik: probably, since they almost all use the default + implementation + mcsim1: just use sane values inside the kernel :p + this simplifies things by only adding the new vm_advise call and + not change the existing external pager interface diff --git a/open_issues/pfinet_vs_system_time_changes.mdwn b/open_issues/pfinet_vs_system_time_changes.mdwn index 513cbc73..46705047 100644 --- a/open_issues/pfinet_vs_system_time_changes.mdwn +++ b/open_issues/pfinet_vs_system_time_changes.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -58,3 +59,24 @@ IRC, freenode, #hurd, 2011-10-27: it's really fascinating that only the pfinet on the Hurd instance where I set the date is affected, and not the pfinet in the other instance + +IRC, freenode, #hurd, 2012-06-28: + + great, now setting the date/time fucked my machine + yes, we lack a monotonic clock + there are select() loops that use gettimeofday to determine how + much time to wait + thus if the time changes (eg goes back), the calculation goes + crazy + pinotree: didn't you implement a monotonic clock?... 
+ started to + bddebian: did it really fuck the machine? normally it only resets + TCP connections... + yeah, i remember such gettimeofday-based select-loops at least in + pfinet + I don't think it's a loop. it just drops the connections, + believing they have timed out + antrik: Well in this case I don't know because I am at work but + it fucked me because I now cannot get to it.. :) + bddebian: that's odd... you should be able to just log in again + IIRC diff --git a/open_issues/qemu_writeback.mdwn b/open_issues/qemu_writeback.mdwn new file mode 100644 index 00000000..ab881705 --- /dev/null +++ b/open_issues/qemu_writeback.mdwn @@ -0,0 +1,18 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_documentation]] + + +# IRC, freenode, #hurdfr, 2012-07-01 + + replace "-hda file.img" with "-drive + cache=writeback,index=0,media=disk,file=file.img" + you will notice the difference immediately diff --git a/open_issues/strict_aliasing.mdwn b/open_issues/strict_aliasing.mdwn new file mode 100644 index 00000000..01019372 --- /dev/null +++ b/open_issues/strict_aliasing.mdwn @@ -0,0 +1,21 @@ +[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_glibc open_issue_gnumach open_issue_hurd open_issue_mig]] + + +# IRC, freenode, #hurd, 2012-07-04 + + we should perhaps build the hurd with -fno-strict-aliasing, + considering the number of warnings i can see during the build :/ + braunr: wouldn't it be better to "just" fix the mig-generated stubs + instead? + pinotree: if we can rely on gcc for the warnings, yes + but i suspect there might be other silent issues in very old code -- cgit v1.2.3