author    Thomas Schwinge <tschwinge@gnu.org>    2012-07-11 22:39:59 +0200
committer Thomas Schwinge <tschwinge@gnu.org>    2012-07-11 22:39:59 +0200
commit    8cee055ec4fac00e59f19620ab06e2b30dccee3c (patch)
tree      6cd7ca1b8ce7eba1820fdbd31ee5755ed33dabe2
parent    b75e038615d51cb62c200e336e59202519db8cae (diff)
IRC.
-rw-r--r--  hurd/debugging/rpctrace.mdwn                              |   80
-rw-r--r--  hurd/translator/ext2fs.mdwn                               |   44
-rw-r--r--  hurd/translator/procfs/jkoenig/discussion.mdwn            |   53
-rw-r--r--  microkernel/mach.mdwn                                     |    6
-rw-r--r--  microkernel/mach/deficiencies.mdwn                        |  260
-rw-r--r--  microkernel/mach/gnumach/memory_management.mdwn           |   35
-rw-r--r--  open_issues/binutils_gold.mdwn                            |  181
-rw-r--r--  open_issues/code_analysis.mdwn                            |   17
-rw-r--r--  open_issues/dde.mdwn                                      |   10
-rw-r--r--  open_issues/fcntl_locking_dev_null.mdwn                   |   38
-rw-r--r--  open_issues/gcc.mdwn                                      |   54
-rw-r--r--  open_issues/gdb.mdwn                                      |    2
-rw-r--r--  open_issues/gdb_attach.mdwn                               |   41
-rw-r--r--  open_issues/glibc.mdwn                                    |    2
-rw-r--r--  open_issues/glibc/mremap.mdwn                             |  221
-rw-r--r--  open_issues/gnumach_i686.mdwn                             |   26
-rw-r--r--  open_issues/gnumach_integer_overflow.mdwn                 |   17
-rw-r--r--  open_issues/gnumach_page_cache_policy.mdwn                |  589
-rw-r--r--  open_issues/gnumach_tick.mdwn                             |   35
-rw-r--r--  open_issues/gnumach_vm_map_red-black_trees.mdwn           |   20
-rw-r--r--  open_issues/gnumach_vm_object_resident_page_count.mdwn    |   22
-rw-r--r--  open_issues/libpthread_CLOCK_MONOTONIC.mdwn               |   24
-rw-r--r--  open_issues/low_memory.mdwn                               |  113
-rw-r--r--  open_issues/mach-defpager_swap.mdwn                       |   20
-rw-r--r--  open_issues/metadata_caching.mdwn                         |   31
-rw-r--r--  open_issues/multithreading.mdwn                           |   15
-rw-r--r--  open_issues/nfs_trailing_slash.mdwn                       |   36
-rw-r--r--  open_issues/page_cache.mdwn                               |   10
-rw-r--r--  open_issues/performance.mdwn                              |   16
-rw-r--r--  open_issues/performance/io_system/read-ahead.mdwn         | 1176
-rw-r--r--  open_issues/pfinet_vs_system_time_changes.mdwn            |   24
-rw-r--r--  open_issues/qemu_writeback.mdwn                           |   18
-rw-r--r--  open_issues/strict_aliasing.mdwn                          |   21
33 files changed, 3059 insertions, 198 deletions
diff --git a/hurd/debugging/rpctrace.mdwn b/hurd/debugging/rpctrace.mdwn
index fd24f081..df6290f7 100644
--- a/hurd/debugging/rpctrace.mdwn
+++ b/hurd/debugging/rpctrace.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2007, 2008, 2009, 2010, 2011 Free Software
+[[!meta copyright="Copyright © 2007, 2008, 2009, 2010, 2011, 2012 Free Software
Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
@@ -89,6 +89,84 @@ See `rpctrace --help` about how to use it.
<pinotree> braunr: the output of rpctrace --help should tell the
default dir for msgids
+* IRC, freenode, #hurd, 2012-06-30
+
+ <mcsim> hello. Has anyone faced the problem where a translator works
+ fine, but hangs when it is started via rpctrace? Do you know what can
+ cause this?
+ <antrik> mcsim: rpctrace itself is quite buggy
+ <antrik> zhengda once did a number of improvements, but they never went
+ upstream...
+ <youpi> well, he never explained how his fixes worked :)
+ <youpi> GNU/Hurd is no different from other projects in that regard: if
+ you don't explain how your patches work, there's low chance that they
+ are applied
+ <youpi> unless the maintainer has time to dive himself, which we don't
+ <pinotree> "it compiles, ship it!"
+ <braunr> pinotree: i guess the hurd is different in that particular
+ regard :p
+ <youpi> not different from linux
+ <braunr> eh, they include staging drivers now :)
+ <youpi> we have a sort-of staging tree as well, with netdde
+ <youpi> we don't really care about stability there
+ <antrik> youpi: actually, I think by now (and not to a small part
+ because of this episode) that we are too strict about patch
+ submission
+ <youpi> well, review really is needed, otherwise source gets into a bad
+ shape
+ <antrik> while zhengda's variant might not have been ideal (nobody of
+ us understands the workings of rpctrace enough to tell), I have
+ little doubt that it would be an improvement...
+ <youpi> it happened quite a few times that a fix revealed to be
+ actually bogus
+ <youpi> in that particular case, I agree
+ <youpi> the problem is that usually what happens is that questions are
+ asked
+ <youpi> and the answers never happen
+ <youpi> and thus the patch gets lost
+ <antrik> after all, when he submitted that patch, he had a much
+ better understanding of rpctrace than any of us...
+ <youpi> sure
+ <antrik> Linus is actually quite pragmatic about that. from what I've
+ seen, if he can be convinced that something is *probably* an
+ improvement over the previous status, he will usually merge it, even
+ if he has some qualms
+ <youpi> when there is a maintainer, he usually requires his approval,
+ doesn't he?
+ <antrik> in particular, for code that is new or has been in a very bad
+ shape before, standards shouldn't be as high as for changes to known
+ good code. and quite frankly, large parts of the Hurd code base
+ aren't all that good to begin with...
+ <youpi> sure
+ <antrik> well, sure. in this case, we should have just appointed
+ zhengda to be the rpctrace maintainer :-)
+ <antrik> BTW, as his version is quite fundamentally different, perhaps
+ instead of merging the very large patch, perhaps we should just ship
+ both versions, and perhaps drop the old one at some point if the new
+ one turns out to work well...
+ <antrik> (and perhaps I overused the word perhaps in that sentence
+ perhaps ;-) )
+ <youpi> about that particular patch, you had raised a few bits
+ <youpi> and there were no answers
+ <youpi> the patch is still in my mbox, far away
+ <youpi> so it was *not* technically lost
+ <youpi> it's just that as usual we lack manpower
+ <antrik> yeah, I know. but many of the things I raised were mostly
+ formalisms, which might be helpful for maintaining high-quality code,
+ but probably were just a waste of time and effort in this case... I'm
+ not surprised that zhengda lost motivation to pursue this further :-(
+ <braunr> it would help a lot to get the ton of patches in the debian
+ packages upstream :)
+ <youpi> braunr: there aren't many, and usually for a good reason
+ <youpi> some of them are in debian for testing, and can probably be
+ committed at some point
+ <pinotree> youpi: we could mark (with dep3 headers) the ones which are
+ meant to be debian-specific
+ <youpi> sure
+ <antrik> well, there are also a few patches that are not exactly
+ Debian-specific, but not ready for upstream either...
+ <youpi> antrik: yes
+
# See Also
diff --git a/hurd/translator/ext2fs.mdwn b/hurd/translator/ext2fs.mdwn
index ad79c7b9..8e15d1c7 100644
--- a/hurd/translator/ext2fs.mdwn
+++ b/hurd/translator/ext2fs.mdwn
@@ -18,6 +18,8 @@ License|/fdl]]."]]"""]]
* [[Page_cache]]
+ * [[metadata_caching]]
+
## Large Stores
@@ -43,6 +45,48 @@ Smaller block sizes are commonly automatically selected by `mke2fs` when using
small backend stores, like floppy devices.
+#### IRC, freenode, #hurd, 2012-06-30
+
+ <braunr> at least having the same api in the debian package and the git
+ source would be great (in reference to the large store patch ofc)
+ <youpi> braunr: the api part could be merged perhaps
+ <youpi> it's very small apparently
+ <antrik> braunr: the large store patch is a sad story. when it was first
+ submitted, one of the maintainers raised some concerns. the other didn't
+ share these (don't remember who is who), but the concerned one never
+ followed up with details. so it has been in limbo ever since. tschwinge
+ once promised to take it up, but didn't get around to it so far. plus,
+ the original author himself mentioned once that he didn't consider it
+ finished...
+ <youpi> antrik: it's clearly not finished
+ <youpi> there are XXXs here and there
+ <braunr> it's called an RC1 and RC2 is mentioned in the release notes
+ <antrik> youpi: well, that doesn't stop most other projects from committing
+ stuff... including most emphatically the original Hurd code :-)
+ <youpi> what do you refer to by "that"? :)
+ <braunr> "XXX"
+ <youpi> right
+ <youpi> at the time it made sense to delay applying
+ <youpi> but I guess by nowadays standard we should just as well commit it
+ <youpi> it works enough for Debian, already
+ <youpi> there is just one bug I know about
+ <youpi> the apt database file keeps having the wrong size, fixed by e2fsck
+ <pinotree> youpi: remember that the offset declaration in diskfs.h should
+ be fixed in that patch
+ <youpi> I don't remember about that
+ <youpi> did we fix it in the debian package?
+ <pinotree> nope
+ <youpi> you had issues when fixing it, didn't you?
+ <youpi> (I don't remember where I can find the details about this)
+ <pinotree> i changed it, recompiled hurd and installed it, started a perl
+ rebuild and when running one of the two lfs tests it hard locked the vm
+ after ext2fs was taking 100% cpu for a bit
+ <pinotree> i don't exclude i could have done something stupid on my side
+ though
+ <youpi> or there could just be actual issues, uncovered here
+ <youpi> which can be quite probable
+
+
# Documentation
* <http://e2fsprogs.sourceforge.net/ext2.html>
diff --git a/hurd/translator/procfs/jkoenig/discussion.mdwn b/hurd/translator/procfs/jkoenig/discussion.mdwn
index e7fdf46e..182b438b 100644
--- a/hurd/translator/procfs/jkoenig/discussion.mdwn
+++ b/hurd/translator/procfs/jkoenig/discussion.mdwn
@@ -68,7 +68,7 @@ IRC, #hurd, around October 2010
owner, but always with root group
-# `/proc/$pid/stat` being 400 and not 444, and some more
+# `/proc/[PID]/stat` being 400 and not 444, and some more
IRC, freenode, #hurd, 2011-03-27
@@ -187,7 +187,7 @@ IRC, freenode, #hurd, 2011-07-22
server anyway, I think.
-# `/proc/mounts`, `/proc/$pid/mounts`
+# `/proc/mounts`, `/proc/[PID]/mounts`
IRC, freenode, #hurd, 2011-07-25
@@ -277,3 +277,52 @@ Needed by glibc's `pldd` tool (commit
<antrik> it's very weird for example for fd connected to files that have
been unlinked. it looks like a broken symlink, but when dereferencing
(e.g. with cp), you get the actual file contents...
+
+
+# `/proc/[PID]/maps`
+
+## IRC, OFTC, #debian-hurd, 2012-06-20
+
+ <pinotree> bdefreese: the two elfutils tests fail because there are no
+ /proc/$pid/maps files
+ <pinotree> that code relies heavily on linux features, like locating the
+ linux kernel executables and their modules, etc
+ <pinotree> (see eg libdwfl/linux-kernel-modules.c)
+ <pinotree> refactor elfutils to have the linux parts executed only on linux
+ :D
+ <bdefreese> Oh yeah, the maintainer already seems really thrilled about
+ Hurd.. Did you see
+ http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=662041 ?
+ <pinotree> kurt is generally helpful with us (= hurd)
+ <pinotree> most probably he is complaining there that we let elfutils build
+ with nocheck (ie skipping the test suite run) instead of investigating and
+ reporting why the test suite failed
+
+
+# IRC, freenode, #hurd, 2011-06-19
+
+ <pinotree> jkoenig: procfs question: in process.c, process_lookup_pid, why
+ is the entries[2].hook line repeated twice?
+ <jkoenig> pinotree, let me check
+ <jkoenig> pinotree, it's probably just a mistake, there's no way the second
+ one has any effect
+ <pinotree> jkoenig: i see, it looked like you c&p'd that code accidentally
+ <jkoenig> pinotree, it's probably what happened, yes.
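+
+A hypothetical sketch (names invented; not the actual procfs source) of the
+copy-and-paste pattern described above, a duplicated assignment of which only
+the second takes effect:
+
+    /* entries[] describes the nodes of a /proc/[PID] directory. */
+    entries[2].hook = process_stat_make_node;
+    entries[2].hook = process_stat_make_node;  /* duplicated line; most
+                                                  likely entries[3] was
+                                                  meant */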
+
+
+# IRC, freenode, #hurd, 2012-06-30
+
+ <pinotree> btw, what do you think about making jkoenig's procfs master the
+ real master?
+ <youpi> probably a good idea
+ <youpi> it does work quite well, except a few pidof hangs
+ <pinotree> surely better than the old one :)
+ <youpi> yes :)
+
+
+# `/proc/[PID]/cwd`
+
+## IRC, freenode, #hurd, 2012-06-30
+
+ * pinotree has a local work to add the /proc/$pid/cwd symlink, but relying
+ on "internal" (but exported) glibc functions
diff --git a/microkernel/mach.mdwn b/microkernel/mach.mdwn
index deaf6788..02627766 100644
--- a/microkernel/mach.mdwn
+++ b/microkernel/mach.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2007, 2008, 2010 Free Software Foundation,
+[[!meta copyright="Copyright © 2007, 2008, 2010, 2012 Free Software Foundation,
Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
@@ -14,6 +14,8 @@ microkernel currently used by the [[Hurd]].
* [[Concepts]]
+ * [[Deficiencies]]
+
* [[Documentation]]
* [[History]]
@@ -30,6 +32,8 @@ microkernel currently used by the [[Hurd]].
([API](http://developer.apple.com/documentation/Darwin/Conceptual/KernelProgramming/index.html))
(**non-free**)
+ * [[open_issues/OSF_Mach]]
+
# Related
diff --git a/microkernel/mach/deficiencies.mdwn b/microkernel/mach/deficiencies.mdwn
new file mode 100644
index 00000000..f2f49975
--- /dev/null
+++ b/microkernel/mach/deficiencies.mdwn
@@ -0,0 +1,260 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_documentation open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-06-29
+
+ <henrikcozza> I do not understand what the deficiencies of Mach are; the
+ content I find on this is vague...
+ <antrik> the major problems are that the IPC architecture offers poor
+ performance; and that resource usage can not be properly accounted to the
+ right parties
+ <braunr> antrik: the more i study it, the more i think ipc isn't the
+ problem when it comes to performance, not directly
+ <braunr> i mean, the implementation is a bit heavy, yes, but it's fine
+ <braunr> the problems are resource accounting/scheduling and still too much
+ stuff inside kernel space
+ <braunr> and with a very good implementation, the performance problem would
+ come from crossing address spaces
+ <braunr> (and even more on SMP, i've been thinking about it lately, since
+ it would require syncing mmu state on each processor currently using an
+ address space being modified)
+ <antrik> braunr: the problem with Mach IPC is that it requires too many
+ indirections to ever be performant AIUI
+ <braunr> antrik: can you mention them ?
+ <antrik> the semantics are generally quite complex, compared to Coyotos for
+ example, or even Viengoos
+ <braunr> antrik: the semantics are related to the message format, which can
+ be simplified
+ <braunr> i think everybody agrees on that
+ <braunr> i'm more interested in the indirections
+ <antrik> but then it's not Mach IPC anymore :-)
+ <braunr> right
+ <braunr> 22:03 < braunr> i mean, the implementation is a bit heavy, yes,
+ but it's fine
+ <antrik> that's not an implementation issue
+ <braunr> that's what i meant by heavy :)
+ <braunr> well, yes and no
+ <braunr> Mach IPC has changed over time
+ <braunr> it would be newer Mach IPC ... :)
+ <antrik> the fact that data types are (supposed to be) transparent to the
+ kernel is a major part of the concept, not just an implementation detail
+ <antrik> but it's not just the message format
+ <braunr> transparent ?
+ <braunr> but they're not :/
+ <antrik> the option to buffer in the kernel also adds a lot of complexity
+ <braunr> buffer in the kernel ?
+ <braunr> ah you mean message queues
+ <braunr> yes
+ <antrik> braunr: eh? the kernel parses all the type headers during transfer
+ <braunr> yes, so it's not transparent at all
+ <antrik> maybe you have a different understanding of "transparent" ;-)
+ <braunr> i guess
+ <antrik> I think most of the other complex semantics are kinda related to
+ the in-kernel buffering...
+ <braunr> i fail to see why :/
+ <antrik> well, it allows port rights to be destroyed while a message is in
+ transfer. a lot of semantics revolve around what happens in that case
+ <braunr> yes but it doesn't affect performance a lot
+ <antrik> sure it does. it requires a lot of extra code and indirections
+ <braunr> not a lot of it
+ <antrik> "a lot" is quite a relative term :-)
+ <antrik> compared to L4 for example, it *is* a lot
+ <braunr> and those indirections (i think you refer to more branching here)
+ are taken only when appropriate, and can be isolated, improved through
+ locality, etc..
+ <braunr> the features they add are also huge
+ <braunr> L4 is clearly insufficient
+ <braunr> all current L4 forks have added capabilities ..
+ <braunr> (that, with the formal verification, make se4L one of the
+ "hottest" recent system projects)
+ <braunr> seL4*
+ <antrik> yes, but with very few extra indirections I think... similar to
+ EROS (which claims to have IPC almost as efficient as the original L4)
+ <braunr> possibly
+ <antrik> I still fail to see much real benefit in formal verification :-)
+ <braunr> but compared to other problems, this added code is negligible
+ <braunr> antrik: for a microkernel, me too :/
+ <braunr> the kernel is already so small you can simply audit it :)
+ <antrik> no, it's not negligible, if you go from say two cache lines touched
+ per IPC (original L4) to dozens (Mach)
+ <antrik> every additional variable that needs to be touched to resolve some
+ indirection or check some condition adds significant overhead
+ <braunr> if you compare the dozens to the huge number of inter-processor
+ interrupts you get each time you change the kernel map, it's next to
+ nothing ..
+ <antrik> change the kernel map? not sure what you mean
+ <braunr> syncing address spaces on hundreds of processors each time you
+ send a message is a real scalability issue here (as an example), compared
+ to which Mach-to-L4 IPC differences seem like micro-optimization
+ <youpi> braunr: modify, you mean?
+ <braunr> yes
+ <youpi> (not switch)
+ <braunr> but that's only one example
+ <braunr> yes, modify, not switch
+ <braunr> also, we could easily get rid of the ihash library
+ <braunr> making the message provide the address of the object associated
+ with a receive right
+ <braunr> so the only real indirection is the capability, like in other
+ systems, and yes, buffering adds a bit of complexity
+ <braunr> there are other optimizations that could be made in mach, like
+ merging structures to improve locality
+ <pinotree> "locality"?
+ <braunr> having rights close to their target port when there are only a few
+ <braunr> pinotree: locality of reference
+ <youpi> for cache efficiency
+ <antrik> hundreds of processors? let's stay realistic here :-)
+ <braunr> i am ..
+ <braunr> a microkernel based system is also a very good environment for RCU
+ <braunr> (i have yet to understand how liburcu actually works on linux)
+ <antrik> I'm not interested in systems for supercomputers. and I doubt
+ desktop machines will get that many independant cores any time soon. we
+ still lack software that could even romotely exploit that
+ <braunr> hum, the glibc build system ? :>
+ <braunr> lol
+ <youpi> we have done a survey over the nix linux distribution
+ <youpi> quite few packages actually benefit from a lot of cores
+ <youpi> and we already know them :)
+ <braunr> what i'm trying to say is that, whenever i think or even measure
+ system performance, both of the hurd and others, i never actually see the
+ IPC as being the real performance problem
+ <braunr> there are many other sources of overhead to overcome before
+ getting to IPC
+ <youpi> I completely agree
+ <braunr> and with the advent of SMP, it's even more important to focus on
+ contention
+ <antrik> (also, 8 cores aren't exactly a lot...)
+ <youpi> antrik: s/8/7/ , or even 6 ;)
+ <antrik> braunr: it depends a lot on the use case. most of the problems we
+ see in the Hurd are probably not directly related to IPC performance; but
+ I'm pretty sure some are
+ <antrik> (such as X being hardly usable with UNIX domain sockets)
+ <braunr> antrik: these have more to do with the way mach blocks than IPC
+ itself
+ <braunr> similar to the ext2 "sleep storm"
+ <antrik> a lot of overhead comes from managing ports (for example),
+ which also mostly comes down to IPC performance
+ <braunr> antrik: yes, that's the main indirection
+ <braunr> antrik: but you need such management, and the related semantics in
+ the kernel interface
+ <braunr> (although i wonder if those should be moved away from the message
+ passing call)
+ <antrik> you mean a different interface for kernel calls than for IPC to
+ other processes? that would break transparency in a major way. not sure
+ we really want that...
+ <braunr> antrik: no
+ <braunr> antrik: i mean calls specific to right management
+ <antrik> admittedly, transparency for port management is only useful in
+ special cases such as rpctrace, and that probably could be served better
+ with dedicated debugging interfaces...
+ <braunr> antrik: i.e. not passing rights inside messages
+ <antrik> passing rights inside messages is quite essential for a capability
+ system. the problem with Mach IPC in regard to that is that the message
+ format allows way more flexibility than necessary in that regard...
+ <braunr> antrik: right
+ <braunr> antrik: i don't understand why passing rights inside messages is
+ important though
+ <braunr> antrik: essential even
+ <youpi> braunr: I guess he means you need at least one way to pass rights
+ <antrik> braunr: well, for one, you need to pass a reply port with each RPC
+ request...
+ <braunr> youpi: well, as he put it, the message passing call is overpowered,
+ and this leads to many branches in the code
+ <braunr> antrik: the reply port is obvious, and can be optimized
+ <braunr> antrik: but the case i worry about is passing references to
+ objects between tasks
+ <braunr> antrik: rights and identities with the auth server for example
+ <braunr> antrik: well ok forget it, i just recall how it actually works :)
+ <braunr> antrik: don't forget we lack thread migration
+ <braunr> antrik: you may not think it's important, but to me, it's a major
+ improvement for RPC performance
+ <antrik> braunr: how can seL4 be the most interesting microkernel
+ then?... ;-)
+ <braunr> antrik: hm i don't know the details, but if it lacks thread
+ migration, something is wrong :p
+ <braunr> antrik: they should work on viengoos :)
+ <antrik> (BTW, AIUI thread migration is quite related to passive objects --
+ something Hurd folks never dared seriously consider...)
+ <braunr> i still don't know what passive objects are, or i have forgotten
+ it :/
+ <antrik> no control threads of their own
+ <braunr> hm, i'm still missing something
+ <braunr> what do you refer to by control thread ?
+ <braunr> with*
+ <antrik> i.e. no main loop etc.; only activated by incoming calls
+ <braunr> ok
+ <braunr> well, if i'm right, thomas bushnell himself wrote (recently) that
+ the ext2 "sleep" performance issue was expected to be solved with thread
+ migration
+ <braunr> so i guess they definitely considered having it
+ <antrik> braunr: don't know what the "sleep performance issue" is...
+ <braunr> http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00032.html
+ <braunr> antrik: also, the last message in the thread,
+ http://lists.gnu.org/archive/html/bug-hurd/2011-12/msg00050.html
+ <braunr> antrik: do you consider having a reply port being an avoidable
+ overhead ?
+ <antrik> braunr: not sure. I don't remember hearing of any capability
+ system doing this kind of optimisation though; so I guess there are
+ reasons for that...
+ <braunr> antrik: yes me too, even more since neal talked about it on
+ viengoos
+ <antrik> I wonder whether thread management is also such a large overhead
+ with fully sync IPC, on L4 or EROS for example...
+ <braunr> antrik: it's still a very handy optimization for thread scheduling
+ <braunr> antrik: it makes solving priority inversions a lot easier
+ <antrik> actually, is thread scheduling a problem at all with a thread
+ activation approach like in Viengoos?
+ <braunr> antrik: thread activation is part of thread migration
+ <braunr> antrik: actually, i'd say they both refer to the same thing
+ <antrik> err... scheduler activation was the term I wanted to use
+ <braunr> same
+ <braunr> well
+ <braunr> scheduler activation is too vague to assert that
+ <braunr> antrik: do you refer to scheduler activations as described in
+ http://en.wikipedia.org/wiki/Scheduler_activations ?
+ <antrik> my understanding was that Viengoos still has traditional threads;
+ they just can get scheduled directly on incoming IPC
+ <antrik> braunr: that Wikipedia article is strange. it seems to use
+ "scheduler activations" as a synonym for N:M multithreading, which is not
+ at all how I understood it
+ <youpi> antrik: I used to try to keep a look at those pages, to fix such
+ wrong things, but left it
+ <braunr> antrik: that's why i ask
+ <antrik> IIRC Viengoos has a thread associated with each receive
+ buffer. after copying the message, the kernel would activate the
+ process's activation handler, which in turn could decide to directly
+ schedule the thread associated with the buffer
+ <antrik> or something along these lines
+ <braunr> antrik: that's similar to mach handoff
+ <youpi> antrik: generally enough, all the thread-related pages on wikipedia
+ are quite bogus
+ <antrik> nah, handoff just schedules the process; which is not useful, if
+ the right thread isn't activated in turn...
+ <braunr> antrik: but i think it's more than that, even in viengoos
+ <youpi> for instance, the french "thread" page was basically saying that
+ they were invented for GUIs to overlap computation with user interaction
+ <braunr> .. :)
+ <antrik> youpi: good to know...
+ <braunr> antrik: the "misunderstanding" comes from the fact that scheduler
+ activations is the way N:M threading was implemented on netbsd
+ <antrik> youpi: that's a refreshing take on the matter... ;-)
+ <braunr> antrik: i'll read the critique and viengoos doc/source again to be
+ sure about what we're talking :)
+ <braunr> antrik: as threading is a major issue in mach, and one of the
+ things i completely changed (and intend to change) in x15, whenever i get
+ to work on that again ..... :)
+ <braunr> antrik: interestingly, the paper about scheduler activations was
+ written (among others) by brian bershad, in 92, when he was actively
+ working on research around mach
+ <antrik> braunr: BTW, I have little doubt that making RPC first-class would
+ solve a number of problems... I just wonder how many others it would open
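+
+To make the "kernel parses all the type headers" point above concrete, here
+is a rough sketch of the typed message format, with declarations abridged
+from Mach's <mach/message.h> (the exact field layout may differ in GNU Mach):
+
+    /* Every data item in a message body is preceded by a descriptor
+       that the kernel inspects while the message is in transit. */
+    typedef struct {
+        unsigned int msgt_name : 8,     /* MACH_MSG_TYPE_INTEGER_32,
+                                           MACH_MSG_TYPE_PORT_SEND, ... */
+                     msgt_size : 8,     /* size of one element, in bits */
+                     msgt_number : 12,  /* number of elements */
+                     msgt_inline : 1,
+                     msgt_longform : 1,
+                     msgt_deallocate : 1,
+                     msgt_unused : 1;
+    } mach_msg_type_t;
+
+    /* A message carrying a single 32-bit value. */
+    struct example_message {
+        mach_msg_header_t head;        /* ports, size, message id */
+        mach_msg_type_t   value_type;  /* parsed by the kernel */
+        int               value;
+    };
+
+Whenever a descriptor names a port right, the kernel has to translate port
+names between tasks (and honour `msgt_deallocate` and out-of-line data), which
+is where much of the buffering complexity and extra cache-line traffic
+discussed above comes from.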
diff --git a/microkernel/mach/gnumach/memory_management.mdwn b/microkernel/mach/gnumach/memory_management.mdwn
index ca2f42c4..c630af05 100644
--- a/microkernel/mach/gnumach/memory_management.mdwn
+++ b/microkernel/mach/gnumach/memory_management.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -8,9 +8,12 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
-[[!tag open_issue_documentation]]
+[[!tag open_issue_documentation open_issue_gnumach]]
-IRC, freenode, #hurd, 2011-02-15
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2011-02-15
<braunr> etenil: originally, mach had its own virtual space (the kernel
space)
@@ -37,14 +40,15 @@ IRC, freenode, #hurd, 2011-02-15
lage - pages without resetting the mmu often thanks to global pages, but
that didn't exist at the time)
-IRC, freenode, #hurd, 2011-02-15
+
+# IRC, freenode, #hurd, 2011-02-15
<antrik> however, the kernel won't work in 64 bit mode without some changes
to physical memory management
<braunr> and mmu management
<braunr> (but maybe that's what you meant by physical memory)
-IRC, freenode, #hurd, 2011-02-16
+## IRC, freenode, #hurd, 2011-02-16
<braunr> antrik: youpi added it for xen, yes
<braunr> antrik: but you're right, since mach uses a direct mapped kernel
@@ -52,9 +56,7 @@ IRC, freenode, #hurd, 2011-02-16
<braunr> which isn't required if the kernel space is really virtual
----
-
-IRC, freenode, #hurd, 2011-06-09
+# IRC, freenode, #hurd, 2011-06-09
<braunr> btw, how can gnumach use 1 GiB of RAM ? did you lower the
user/kernel boundary address ?
@@ -82,7 +84,7 @@ IRC, freenode, #hurd, 2011-06-09
RAM to fill the kernel space with struct page entries
-IRC, freenode, #hurd, 2011-11-12
+# IRC, freenode, #hurd, 2011-11-12
<youpi> well, the Hurd doesn't "artificially" limit itself to 1.5GiB
memory
@@ -102,3 +104,18 @@ IRC, freenode, #hurd, 2011-11-12
<youpi> kernel space is what determines how much physical memory you can
address
<youpi> unless using the linux-said-awful "bigmem" support
+
+
+# IRC, freenode, #hurd, 2012-07-05
+
+ <braunr> hm i got an address space exhaustion while building eglibc :/
+ <braunr> we really need the 3/1 split back with a 64-bits kernel
+ <pinotree> 3/1?
+ <braunr> 3 GiB userspace, 1 GiB kernel
+ <pinotree> ah
+ <braunr> the debian gnumach package is patched to use a 2/2 split
+ <braunr> and 2 GiB is really small for some needs
+ <braunr> on the bright side, the machine didn't crash
+ <braunr> there is an issue with watch ./slabinfo which turned into an
+ infinite loop, but it didn't affect the stability of the system
+ <braunr> actually with a 64-bits kernel, we could use a 4/x split
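+
+A sketch of where that boundary lives, assuming names modelled on gnumach's
+i386 `vm_param.h` (check the real header; the values here are illustrative):
+
+    /* 3/1 split: user space gets [0, 3 GiB), the kernel the last GiB. */
+    #define VM_MAX_USER_ADDRESS   0xc0000000UL
+    /* Debian's patched gnumach uses a 2/2 split instead: */
+    /* #define VM_MAX_USER_ADDRESS   0x80000000UL */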
diff --git a/open_issues/binutils_gold.mdwn b/open_issues/binutils_gold.mdwn
index aa6843a3..9eeebf2d 100644
--- a/open_issues/binutils_gold.mdwn
+++ b/open_issues/binutils_gold.mdwn
@@ -1,4 +1,5 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -8,180 +9,8 @@ Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
is included in the section entitled [[GNU Free Documentation
License|/fdl]]."]]"""]]
-[[!tag open_issue_binutils]]
+[[!tag open_issue_binutils open_issue_porting]]
-Have a look at GOLD / port as needed.
+Have a look at gold / port as needed.
-
-# teythoon's try / `mremap` issue
-
-IRC, #hurd, 2011-01-12
-
- <teythoon> I've been looking into building gold on hurd and it built fine
- with one minor tweak
- <teythoon> and it's working fine according to its test suite
- <teythoon> the only problem is that the build system is failing to detect
- the hurdish mremap which lives in libmemusage
- <teythoon> on linux it is in the libc so the check succeeds
- <teythoon> any hints on how to fix this properly?
- <antrik> hm... it's strange that it's a different library on the Hurd
- <antrik> are the implementations compatible?
- <teythoon> antrik: it seems so, though the declarations differ slightly
- <antrik> I guess the best thing is to ask on the appropriate list(s) why
- they are different...
- <teythoon> teythoon@ganymede:~/build/gold/binutils-2.21/gold$ grep -A1
- mremap /usr/include/sys/mman.h
- <teythoon> extern void *mremap (void *__addr, size_t __old_len, size_t
- __new_len, int __flags, ...) __THROW;
- <teythoon> vs
- <antrik> of course it would be possible to modify the configure script to
- check for the Hurd variant too; but first we should establish whether
- here is actually any reason for being different, or it's just some
- historical artefact that should be fixed...
- <teythoon> teythoon@ganymede:~/build/gold/binutils-2.21/gold$ fgrep 'extern
- void *mremap' mremap.c
- <teythoon> extern void *mremap (void *, size_t, size_t, int, ...);
- <teythoon> the problem is that the test fails to link due to the fact that
- mremap isn't in the libc on hurd
- <antrik> yeah, it would be possible for the configure script to check
- whether it works when the hurdish extra library is added explicitly
- <antrik> but again, I don't see any good reason for being different here in
- the first place...
- <teythoon> so should I create a patch to move mremap?
- <antrik> if it's not too complicated, that would be nice... it's always
- easier to discuss when you already have code :-)
- <antrik> OTOH, asking first might spare you some useless work if it turns
- out there *is* some reason for being different after all...
- <teythoon> so where is the right place to discuss this?
- <antrik> bug-hurd mailing list and/or glibc mailing list. not sure which
- one is better -- I guess it doesn't hurt to crosspost...
-
-[[mailing_lists/libc-alpha]] is the correct list, and cross-posting to
-[[mailing_lists/bug-hurd]] would be fine, too.
-
- <teythoon> antrik: some further digging revealed that mremap belongs to
- /lib/libmemusage.so on both hurd and linux
- <teythoon> the only difference is that on linux there is a weak reference
- to that function in /lib/libc-2.11.2.so
- <teythoon> $ objdump -T /lib/libc-2.11.2.so | fgrep mremap
- <teythoon> 00000000000cf7e0 w DF .text 0000000000000028 GLIBC_2.2.5
- mremap
- <antrik> ah, it's probably simply a bug that we don't have this weak
- reference too
- <antrik> IIRC we had similar bugs before
- <antrik> teythoon: can you provide a patch for that?
- <teythoon> antrik: unfortunately I have no idea how that weak ref ended up
- there
-
- <guillem> teythoon: also the libmemusage.so seems to be just a debugging
- library to be used by LD_PRELOAD or similar
- <guillem> which override those memory functions
- <guillem> the libc should provide actual code for those functions, even if
- the symbol is declared weak (so overridable)
- <guillem> teythoon: are you sure that's the actual problem? can you paste
- somewhere the build logs with the error?
- <teythoon> guillem: sure
- <teythoon> http://paste.debian.net/104437/
- <teythoon> that's the part of config.log that shows the detection (or the
- failure to detect it) of mremap
- <teythoon> this results in HAVE_MREMAP not being defined
- <teythoon> as a consequence it is declared in gold.h and this declaration
- conflicts with the one from sys/mman.h http://paste.debian.net/104438/
- <teythoon> on linux the test for mremap succeeds
- <guillem> teythoon: hmm, oh I guess it's just that: mremap is linux
- specific so it's not available on the hurd
- <guillem> teythoon: I just checked glibc and seems to confirm that
- <braunr> CONFORMING TO This call is Linux-specific, and should not be used
- in programs intended to be portable.
- <teythoon> ah okay
- <teythoon> so I guess we shouldn't ship a header with that declaration...
- <guillem> teythoon: yeah :/ good luck telling that to drepper :)
- <guillem> teythoon: I guess he'll suggest that everyone else needs to get
- our own copy of sys/mman.h
- <guillem> s/our/their/
- <teythoon> hm, so how should I proceed?
- <braunr> what's your goal ?
- <braunr> detecting mremap ?
- <teythoon> making binutils/gold compile ootb on hurd
- <teythoon> I picked it from the open issues page ;)
- <braunr> well, if there is no mremap, you need a replacement
- <teythoon> gold has a replacement
- <braunr> ok
- <braunr> so your problem is fixing the detection of mremap right ?
- <teythoon> yes
- <braunr> ok, that's a build system question then :/
- <braunr> you need to ask an autotools guy
- <teythoon> well, actually the build system correctly detects the absence of
- mremap
- <braunr> (gold does use the autotools right ?)
- <teythoon> yes
- <braunr> oh, i'm lost now (i admit i didn't read the whole issue :/)
- <teythoon> it is just that the declaration in sys/mman.h conflicts with
- their own declaration
- <braunr> ah
- <braunr> so in the absence of mremap, they use their own builtin function
- <teythoon> yes
- <teythoon> and according to the test suite it is working perfectly
- <teythoon> gold that is
- <teythoon> the declaration in mman.h has an extra __THROW
- <guillem> a workaround would be to rename gold's mremap to something else,
- gold_mremap for example
- <braunr> that's really the kind of annoying issue
- <braunr> you either have to change glibc, or gold
- <guillem> yeah
- <braunr> you'll face difficulty changing glibc, as guillem told you
- <guillem> the correct solution though IMO is to fix glibc
- <braunr> but this may be true for gold too
- <braunr> guillem: i agree
- <antrik> maybe it would be easiest actually to implement mremap()?...
- <braunr> but as this is something quite linux specific, it makes sense to
- use another internal name, and wrap that to the linux mremap if it's
- detected
- <braunr> antrik: i'm not sure
- <antrik> braunr: I don't think using such workarounds is a good
- idea. clearly there would be no issue if the header file wouldn't be
- incorrect on Hurd
- <braunr> antrik: that's why i said i agree with guillem when he says "the
- correct solution though IMO is to fix glibc"
- <teythoon> what exactly is the problem with getting a patch into glibc?
- <braunr> the people involved
- <guillem> teythoon: and touching a generic header file
- <braunr> but feel free to try, you could be lucky
- <teythoon> but glibc is not a linux-specific piece of software, right?
- <braunr> teythoon: no, it's not
- <guillem> erm...
- <braunr> teythoon: but in practice, it is
- <guillem> supposedly not :)
- <antrik> braunr: BTW, by "easiest" I don't mean coding alone, but
- coding+pushing upstream :-)
- <guillem> so the problem is, misc/sys/mman.h should be a generic header and
- as such not include linux specific parts, which are not present on hurd,
- kfreebsd, etc etc
- <braunr> antrik: yes, that's why guillem and i suggested the workaround
- thing in gold
- <antrik> that also requires pushing upstream. and quite frankly, if I were
- the gold maintainer, I wouldn't accept it.
- <guillem> but the easiest (and wrong) solution in glibc to avoid maintainer
- conflict will probably be copying that file under hurd's glibc tree and
- install that instead
- <braunr> antrik: implementing mremap could be relatively easy to do
- actually
- <braunr> antrik: IIRC, vm_map() supports overlapping
- <antrik> well, actually the easiest solution would be to create a patch
- that never goes upstream but is included in Debian, like many
- others... but that's obviously not a good long-term plan
- <antrik> braunr: yes, I think so too
- <antrik> braunr: haven't checked, but I have a vague recollection that the
- fundamentals are pretty much there
- <antrik> teythoon: so, apart from an ugly workaround in gold, there are
- essentially three options: 1. implement mremap; 2. make parts of mman.h
- conditional; 3. use our own copy of mman.h
- <antrik> 1. would be ideal, but might be non-trivial; 2. might be
- tricky to get right, and even more tricky to get upstream; 3. would be
- simple, but a maintenance burden in the long term
- <teythoon> looking at gold's replacement code (mmap & memcpy) 1 sounds like
- the best option performance wise
-
-[[!taglink open_issue_glibc]]: check if it is possible to implement `mremap`.
-[[I|tschwinge]] remember some discussion about this, but have not yet worked on
-locating it. [[Talk to me|tschwinge]] if you'd like to have a look at this.
+Apparently it needs [[glibc/mremap]].
diff --git a/open_issues/code_analysis.mdwn b/open_issues/code_analysis.mdwn
index d776d81a..00915651 100644
--- a/open_issues/code_analysis.mdwn
+++ b/open_issues/code_analysis.mdwn
@@ -1,4 +1,5 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -110,6 +111,20 @@ There is a [[!FF_project 276]][[!tag bounty]] on some of these tasks.
glibc's heap structure. its kinda handy, might help?
<vsrinivas> MALLOC_CHECK_ was the envvar you want, sorry.
+ * In context of [[!message-id
+ "1341350006-2499-1-git-send-email-rbraun@sceen.net"]]/the `alloca` issue
+ mentioned in [[gnumach_page_cache_policy]]:
+
+ IRC, freenode, #hurd, 2012-07-08:
+
+ <youpi> braunr: there's actually already an ifdef REDZONE in libthreads
+
+ It's `RED_ZONE`.
+
+ <youpi> except it seems clumsy :)
+ <youpi> ah, no, the libthreads code properly sets the guard, just for
+ grow-up stacks
+
* Input fuzzing
Not a new topic; has been used (and a paper published) for early UNIX
diff --git a/open_issues/dde.mdwn b/open_issues/dde.mdwn
index 725af646..aff988d5 100644
--- a/open_issues/dde.mdwn
+++ b/open_issues/dde.mdwn
@@ -451,3 +451,13 @@ At the microkernel davroom at [[community/meetings/FOSDEM_2012]]:
any movement in that regard :-(
<braunr> wasn't it needed for dde ?
<antrik> hm... good point
+
+
+# virtio
+
+
+## IRC, freenode, #hurd, 2012-07-01
+
+ <braunr> hm, i haven't looked but, does someone know if virtio is included
+ in netdde ?
+ <youpi> braunr: nope, there's an underlying virtio layer needed before
diff --git a/open_issues/fcntl_locking_dev_null.mdwn b/open_issues/fcntl_locking_dev_null.mdwn
new file mode 100644
index 00000000..4c65a5c4
--- /dev/null
+++ b/open_issues/fcntl_locking_dev_null.mdwn
@@ -0,0 +1,38 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!meta title="fcntl locking /dev/null"]]
+
+[[!tag open_issue_hurd]]
+
+
+# IRC, OFTC, #debian-hurd, 2012-07-06
+
+ <pinotree> regarding the libwibble failure (which holds up libbuffy →
+ libbuffy-bindings), the failing test happens because it logs to /dev/null
+ as a test file,
+ <pinotree> and while doing that, it wants to lock it first, getting
+ ENOTSUP in return
+ <youpi> oh
+ <youpi> locking null, how interesting
+ <youpi> what is that supposed to do ? :o)
+ <pinotree> from my reading of POSIX, it would seem that such an object is
+ considered a "File"
+ <youpi> is it our unimplemented record lock, or just the lock operation
+ that /dev/null doesn't support ?
+ <youpi> what size is null supposed to be? zero, right?
+ <pinotree> the latter
+ <youpi> ah
+ <youpi> so we can simply make lock return 0
+ <youpi> since there's no byte to lock?
+ <youpi> I don't remember whether you can lock nonexistent bytes
+ <pinotree> indeed, if i change the libwibble unit test to use eg /tmp/foo,
+ they pass
diff --git a/open_issues/gcc.mdwn b/open_issues/gcc.mdwn
index 04d399f0..9019939d 100644
--- a/open_issues/gcc.mdwn
+++ b/open_issues/gcc.mdwn
@@ -237,6 +237,60 @@ Last reviewed up to the [[Git mirror's 9aa4b6a8046270a9dbdf47827f1ea873217d7aa5
to find out why some stuff wasn't compiling even after kfreebsd
porting patches adding preprocessors checks for __GLIBC__
+ IRC, freenode, #hurd, 2012-05-25:
+
+ <gnu_srs> Hi, looks like __GLIBC__ is not defined by default for GNU?
+ <gnu_srs> touch foo.h; cpp -dM foo.h|grep LIBC: empty
+ <braunr> gnu_srs: well, this only tells you the compiler defaults
+ <tschwinge> gnu_srs: See the email I just sent.
+
+ [[!message-id "87396od3ej.fsf@schwinge.name"]]
+
+ <braunr> __GLIBC__ would probably be introduced by a glibc header
+ <gnu_srs> tschwinge: I saw your email. I wonder if features.h is
+ included in the kFreeBSD build of webkit.
+ <gnu_srs> It is defined in their build, but not in the Hurd build.
+ <pinotree> gcc on kfreebsd unconditionally defines __GLIBC__
+ <pinotree> (a bit stupid choice imho, but hardly something that could
+ be changed now...)
+ <braunr> :/
+ <braunr> personally i don't consider this only "a bit" stupid, as
+ kfreebsd is one of the various efforts pushing towards portability
+ <braunr> and using such hacks actually hinders portability ...
+ <pinotree> yeah, don't tell me, i can remember at least half a dozen
+ occasions when code wouldn't have compiled at all on other
+ glibc platforms otherwise
+ <pinotree> sure, i have nothing against kfreebsd's efforts, but making
+ gcc define something which belongs to the libc used is stupid
+ <braunr> it is
+ <pinotree> i spotted changes like:
+ <pinotree> -#ifdef __linux
+ <pinotree> +#if defined(__linux__) || defined(__GLIBC__)
+ <pinotree> and wondered why they wouldn't work at all for us... and
+ then realized there were no #include in that file before that
+ preprocessor check
+ <tschwinge> This is even in upstream GCC gcc/config/kfreebsd-gnu.h:
+ <tschwinge> #define GNU_USER_TARGET_OS_CPP_BUILTINS() \
+ <tschwinge> do \
+ <tschwinge> { \
+ <tschwinge> builtin_define ("__FreeBSD_kernel__"); \
+ <tschwinge> builtin_define ("__GLIBC__"); \
+ <tschwinge> builtin_define_std ("unix"); \
+ <tschwinge> builtin_assert ("system=unix"); \
+ <tschwinge> builtin_assert ("system=posix"); \
+ <tschwinge> } \
+ <tschwinge> while (0)
+ <tschwinge> I might raise this upstream at some point.
+ <pinotree> tschwinge: i could guess the change was proposed by the
+ kfreebsd people, so asking them first at d-bsd@d.o would be a start
+ <tschwinge> pinotree: Ack.
+ <pinotree> especially as they would need to fix stuff afterwards
+ <pinotree> imho we could propose the change to them, and if they agree put
+ it as a local patch in debian's gcc4.6/.7 after wheezy, so there is
+ plenty of time for them to fix stuff
+ <pinotree> what should be done first is, however, find out why that
+ define has been added to gcc
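+
+ A portable check (a sketch; `features.h` is the header mentioned above and
+ is pulled in by any glibc header) avoids relying on the compiler to
+ predefine `__GLIBC__`:
+
+     #include <features.h>  /* defines __GLIBC__ when building on glibc */
+
+     #if defined (__linux__) || defined (__GLIBC__)
+     /* glibc-specific code path */
+     #endif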
+
* [low] Does `-mcpu=native` etc. work? (For example,
2ae1f0cc764e998bfc684d662aba0497e8723e52.)
diff --git a/open_issues/gdb.mdwn b/open_issues/gdb.mdwn
index 2ae3518c..dae18227 100644
--- a/open_issues/gdb.mdwn
+++ b/open_issues/gdb.mdwn
@@ -69,7 +69,7 @@ harmonized.
There are several occurences of *error: dereferencing type-punned pointer will
break strict-aliasing rules* in the MIG-generated stub files; thus no `-Werror`
-until that is resolved.
+until that is resolved ([[strict_aliasing]]).
This takes up around 140 MiB and needs roughly 6 min on kepler.SCHWINGE and 30
min on coulomb.SCHWINGE.
diff --git a/open_issues/gdb_attach.mdwn b/open_issues/gdb_attach.mdwn
new file mode 100644
index 00000000..4e4f2ea0
--- /dev/null
+++ b/open_issues/gdb_attach.mdwn
@@ -0,0 +1,41 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!meta title="GDB: attach"]]
+
+[[!tag open_issue_gdb]]
+
+
+# [[gdb_thread_ids]]
+
+
+# IRC, freenode, #hurd, 2012-06-30
+
+ <braunr> hm, gdb isn't able to determine which thread is running when
+ attaching to a process
+
+
+# IRC, freenode, #hurd, 2012-07-02
+
+ <braunr> woah, now that's a weird message !
+ <braunr> when using gdb on a hung ext2fs :
+ <braunr> Pid 938 has an additional task suspend count of 1; clear it? (y or
+ n)
+ <braunr> when hung, gdb thinks the target task is already being debugged
+ :/
+ <braunr> no wonder why it's completely stuck
+ <braunr> hm, the task_suspend might actually be the crash-dump-core server
+ attempting to create the core :/
+ <braunr> hm interesting, looks like a problem with the
+ diskfs_catch_exception macro
+ <pinotree> braunr: what's up with it?
+ <braunr> pinotree: it uses setjmp
+ <braunr> hm random corruptions :/
+ <braunr> definitely looks like a concurrency problem
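+
+A hypothetical illustration (not the actual diskfs code) of the kind of
+hazard suspected above: setjmp-based exception handling with state shared
+between threads.
+
+    #include <setjmp.h>
+
+    static jmp_buf handler;    /* shared by all threads */
+
+    void worker (void)
+    {
+      if (setjmp (handler) == 0)
+        {
+          /* Touch memory that may fault; a fault handler would
+             longjmp (handler, 1).  */
+        }
+      else
+        {
+          /* Recovery path.  If two threads race on the same jmp_buf,
+             the longjmp can restore the *other* thread's context,
+             giving exactly the kind of random corruption observed
+             above.  */
+        }
+    }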
diff --git a/open_issues/glibc.mdwn b/open_issues/glibc.mdwn
index 1ce47560..2dea816a 100644
--- a/open_issues/glibc.mdwn
+++ b/open_issues/glibc.mdwn
@@ -267,6 +267,8 @@ Last reviewed up to the [[Git mirror's d40c5d54cb551acba4ef1617464760c5b3d41a14
initialization
<tschwinge> OK, that at least matches my understanding.
+ * [[`mremap`|mremap]]
+
* `syncfs`
We should be easily able to implement that one.
diff --git a/open_issues/glibc/mremap.mdwn b/open_issues/glibc/mremap.mdwn
new file mode 100644
index 00000000..a293eea0
--- /dev/null
+++ b/open_issues/glibc/mremap.mdwn
@@ -0,0 +1,221 @@
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_glibc]]
+
+[[!toc]]
+
+
+# binutils gold
+
+## IRC, freenode, #hurd, 2011-01-12
+
+ <teythoon> I've been looking into building gold on hurd and it built fine
+ with one minor tweak
+ <teythoon> and it's working fine according to its test suite
+ <teythoon> the only problem is that the build system is failing to detect
+ the hurdish mremap which lives in libmemusage
+ <teythoon> on linux it is in the libc so the check succeeds
+ <teythoon> any hints on how to fix this properly?
+ <antrik> hm... it's strange that it's a different library on the Hurd
+ <antrik> are the implementations compatible?
+ <teythoon> antrik: it seems so, though the declarations differ slightly
+ <antrik> I guess the best thing is to ask on the appropriate list(s) why
+ they are different...
+ <teythoon> teythoon@ganymede:~/build/gold/binutils-2.21/gold$ grep -A1
+ mremap /usr/include/sys/mman.h
+ <teythoon> extern void *mremap (void *__addr, size_t __old_len, size_t
+ __new_len, int __flags, ...) __THROW;
+ <teythoon> vs
+ <antrik> of course it would be possible to modify the configure script to
+ check for the Hurd variant too; but first we should establish whether
+ here is actually any reason for being different, or it's just some
+ historical artefact that should be fixed...
+ <teythoon> teythoon@ganymede:~/build/gold/binutils-2.21/gold$ fgrep 'extern
+ void *mremap' mremap.c
+ <teythoon> extern void *mremap (void *, size_t, size_t, int, ...);
+ <teythoon> the problem is that the test fails to link due to the fact that
+ mremap isn't in the libc on hurd
+ <antrik> yeah, it would be possible for the configure script to check
+ whether it works when the hurdish extra library is added explicitly
+ <antrik> but again, I don't see any good reason for being different here in
+ the first place...
+ <teythoon> so should I create a patch to move mremap?
+ <antrik> if it's not too complicated, that would be nice... it's always
+ easier to discuss when you already have code :-)
+ <antrik> OTOH, asking first might spare you some useless work if it turns
+ out there *is* some reason for being different after all...
+ <teythoon> so where is the right place to discuss this?
+ <antrik> bug-hurd mailing list and/or glibc mailing list. not sure which
+ one is better -- I guess it doesn't hurt to crosspost...
+
+[[mailing_lists/libc-alpha]] is the correct list, and cross-posting to
+[[mailing_lists/bug-hurd]] would be fine, too.
+
+ <teythoon> antrik: some further digging revealed that mremap belongs to
+ /lib/libmemusage.so on both hurd and linux
+ <teythoon> the only difference is that on linux there is a weak reference
+ to that function in /lib/libc-2.11.2.so
+ <teythoon> $ objdump -T /lib/libc-2.11.2.so | fgrep mremap
+ <teythoon> 00000000000cf7e0 w DF .text 0000000000000028 GLIBC_2.2.5
+ mremap
+ <antrik> ah, it's probably simply a bug that we don't have this weak
+ reference too
+ <antrik> IIRC we had similar bugs before
+ <antrik> teythoon: can you provide a patch for that?
+ <teythoon> antrik: unfortunately I have no idea how that weak ref ended up
+ there
+
+ <guillem> teythoon: also the libmemusage.so seems to be just a debugging
+ library to be used by LD_PRELOAD or similar
+ <guillem> which override those memory functions
+ <guillem> the libc should provide actual code for those functions, even if
+ the symbol is declared weak (so overridable)
+ <guillem> teythoon: are you sure that's the actual problem? can you paste
+ somewhere the build logs with the error?
+ <teythoon> guillem: sure
+ <teythoon> http://paste.debian.net/104437/
+ <teythoon> that's the part of config.log that shows the detection (or the
+ failure to detect it) of mremap
+ <teythoon> this results in HAVE_MREMAP not being defined
+ <teythoon> as a consequence it is declared in gold.h and this declaration
+ conflicts with the one from sys/mman.h http://paste.debian.net/104438/
+ <teythoon> on linux the test for mremap succeeds
+ <guillem> teythoon: hmm, oh I guess it's just that: mremap is linux
+ specific so it's not available on the hurd
+ <guillem> teythoon: I just checked glibc and seems to confirm that
+ <braunr> CONFORMING TO This call is Linux-specific, and should not be used
+ in programs intended to be portable.
+ <teythoon> ah okay
+ <teythoon> so I guess we shouldn't ship a header with that declaration...
+ <guillem> teythoon: yeah :/ good luck telling that to drepper :)
+ <guillem> teythoon: I guess he'll suggest that everyone else needs to get
+ our own copy of sys/mman.h
+ <guillem> s/our/their/
+ <teythoon> hm, so how should I proceed?
+ <braunr> what's your goal ?
+ <braunr> detecting mremap ?
+ <teythoon> making binutils/gold compile ootb on hurd
+ <teythoon> I picked it from the open issues page ;)
+ <braunr> well, if there is no mremap, you need a replacement
+ <teythoon> gold has a replacement
+ <braunr> ok
+ <braunr> so your problem is fixing the detection of mremap right ?
+ <teythoon> yes
+ <braunr> ok, that's a build system question then :/
+ <braunr> you need to ask an autotools guy
+ <teythoon> well, actually the build system correctly detects the absence of
+ mremap
+ <braunr> (gold does use the autotools right ?)
+ <teythoon> yes
+ <braunr> oh, i'm lost now (i admit i didn't read the whole issue :/)
+ <teythoon> it is just that the declaration in sys/mman.h conflicts with
+ their own declaration
+ <braunr> ah
+ <braunr> so in the absence of mremap, they use their own builtin function
+ <teythoon> yes
+ <teythoon> and according to the test suite it is working perfectly
+ <teythoon> gold that is
+ <teythoon> the declaration in mman.h has an extra __THROW
+ <guillem> a workaround would be to rename gold's mremap to something else,
+ gold_mremap for example
+ <braunr> that's really the kind of annoying issue
+ <braunr> you either have to change glibc, or gold
+ <guillem> yeah
+ <braunr> you'll face difficulty changing glibc, as guillem told you
+ <guillem> the correct solution though IMO is to fix glibc
+ <braunr> but this may be true for gold too
+ <braunr> guillem: i agree
+ <antrik> maybe it would be easiest actually to implement mremap()?...
+ <braunr> but as this is something quite linux specific, it makes sense to
+ use another internal name, and wrap that to the linux mremap if it's
+ detected
+ <braunr> antrik: i'm not sure
+ <antrik> braunr: I don't think using such workarounds is a good
+ idea. clearly there would be no issue if the header file wouldn't be
+ incorrect on Hurd
+ <braunr> antrik: that's why i said i agree with guillem when he says "the
+ correct solution though IMO is to fix glibc"
+ <teythoon> what exactly is the problem with getting a patch into glibc?
+ <braunr> the people involved
+ <guillem> teythoon: and touching a generic header file
+ <braunr> but feel free to try, you could be lucky
+ <teythoon> but glibc is not a linux-specific piece of software, right?
+ <braunr> teythoon: no, it's not
+ <guillem> erm...
+ <braunr> teythoon: but in practice, it is
+ <guillem> supposedly not :)
+ <antrik> braunr: BTW, by "easiest" I don't mean coding alone, but
+ coding+pushing upstream :-)
+ <guillem> so the problem is, misc/sys/mman.h should be a generic header and
+ as such not include linux specific parts, which are not present on hurd,
+ kfreebsd, etc etc
+ <braunr> antrik: yes, that's why guillem and i suggested the workaround
+ thing in gold
+ <antrik> that also requires pushing upstream. and quite frankly, if I were
+ the gold maintainer, I wouldn't accept it.
+ <guillem> but the easiest (and wrong) solution in glibc to avoid maintainer
+ conflict will probably be copying that file under hurd's glibc tree and
+ install that instead
+ <braunr> antrik: implementing mremap could be relatively easy to do
+ actually
+ <braunr> antrik: IIRC, vm_map() supports overlapping
+ <antrik> well, actually the easiest solution would be to create a patch
+ that never goes upstream but is included in Debian, like many
+ others... but that's obviously not a good long-term plan
+ <antrik> braunr: yes, I think so too
+ <antrik> braunr: haven't checked, but I have a vague recollection that the
+ fundamentals are pretty much there
+ <antrik> teythoon: so, apart from an ugly workaround in gold, there are
+ essentially three options: 1. implement mremap; 2. make parts of mman.h
+ conditional; 3. use our own copy of mman.h
+ <antrik> 1. would be ideal, but might be non-trivial; 2. might be
+ tricky to get right, and even more tricky to get upstream; 3. would be
+ simple, but a maintenance burden in the long term
+ <teythoon> looking at gold's replacement code (mmap & memcpy) 1 sounds like
+ the best option performance-wise
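+
+A sketch of the renaming workaround suggested above (the `gold_mremap` name
+follows guillem's suggestion; the `HAVE_MREMAP` configure macro is an
+assumption for illustration): gold's internal callers would use the wrapper,
+which forwards to the system `mremap` where configure detects one, and
+otherwise falls back to gold's existing mmap-and-memcpy replacement.
+
+    /* Hypothetical wrapper; HAVE_MREMAP assumed to be set by configure.  */
+    #define _GNU_SOURCE
+    #include <string.h>
+    #include <sys/mman.h>
+
+    static void *
+    gold_mremap (void *old_address, size_t old_size, size_t new_size)
+    {
+    #ifdef HAVE_MREMAP
+      return mremap (old_address, old_size, new_size, MREMAP_MAYMOVE);
+    #else
+      /* Map a new region and copy, as gold's replacement does.  */
+      void *new_address = mmap (NULL, new_size, PROT_READ | PROT_WRITE,
+                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+      if (new_address == MAP_FAILED)
+        return MAP_FAILED;
+      memcpy (new_address, old_address,
+              old_size < new_size ? old_size : new_size);
+      munmap (old_address, old_size);
+      return new_address;
+    #endif
+    }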
+
+[[!taglink open_issue_glibc]]: check if it is possible to implement `mremap`.
+[[I|tschwinge]] remember some discussion about this, but have not yet worked on
+locating it. [[Talk to me|tschwinge]] if you'd like to have a look at this.
+
+
+# IRC, OFTC, #debian-hurd, 2012-06-19
+
+ <bdefreese> OK, how the heck do you get an undefined reference to mremap?
+ <youpi> simply because we don't have it
+ <pinotree> mremap exists only on linux
+ <bdefreese> It's in sys/mman.h
+ <pinotree> on linux?
+ <bdefreese> No, on GNU/Hurd
+ <bdefreese> /usr/include/i386-gnu/sys/mman.h
+ <youpi> that's just the common file with linux
+ <youpi> containing just the prototype
+ <youpi> that doesn't mean there's an implementation behind
+ <pinotree> youpi: hm no, linux has an own version
+ <youpi> uh
+ <bdefreese> Ah, aye, I didn't look at the implementation.. :(
+ <youpi> it's then odd that it was added to the generic sys/mman.h :)
+ <bdefreese> Just another stub?
+ <pinotree> ah, only few linux archs have own versions
+ <youpi> for the macro values I guess
+ <pinotree> http://paste.debian.net/175173/ on glibc/master
+ <bdefreese> Hmm, so where is MREMAP_MAYMOVE coming in from?
+ <youpi> rgrep on a linux box ;)
+ <youpi> <bits/mman.h>
+ <youpi> but that's again linuxish
+ <bdefreese> Aye but with us having that in the header it is causing some
+ code to be run which utilizes mremap. If that wasn't defined we wouldn't
+ be calling it.
+ <youpi> ah
+ <youpi> we could try to remove it indeed
+ <bdefreese> Should I change the code to #ifdef MREMAP_MAYMOVE & !defined
+ __GNU__?
+ <youpi> no, I said we could remove the definition of MREMAP_MAYMOVE itself
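+
+For reference, a sketch of the macro-guarded pattern bdefreese and youpi
+discuss above; removing `MREMAP_MAYMOVE` from the generic header would make
+such code take its portable fallback path automatically:
+
+    /* Typical guard in portable code; not a quote from any package.  */
+    #ifdef MREMAP_MAYMOVE
+      p = mremap (p, old_size, new_size, MREMAP_MAYMOVE);
+    #else
+      /* No usable mremap: allocate a new region, copy, unmap.  */
+    #endif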
diff --git a/open_issues/gnumach_i686.mdwn b/open_issues/gnumach_i686.mdwn
new file mode 100644
index 00000000..b34df73b
--- /dev/null
+++ b/open_issues/gnumach_i686.mdwn
@@ -0,0 +1,26 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-07-05
+
+ <braunr> we could use a gnumach-i686 too
+ <pinotree> how would you compile gnumach as i686 variant btw? add
+ -march=.. or something like that in CFLAGS?
+ <braunr> yes
+ <braunr> at least we'll get some cmovs :)
+
+
+## IRC, freenode, #hurd, 2012-07-07
+
+ <braunr> it was rejected in the past because we didn't think it would bring
+ real performance benefit, but it actually may
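+
+The change under discussion amounts to a pair of build flags; a hypothetical
+invocation (the exact flags and triplet are illustrative):
+
+    $ ../configure --host=i686-gnu CFLAGS='-O2 -march=i686'
+    $ make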
diff --git a/open_issues/gnumach_integer_overflow.mdwn b/open_issues/gnumach_integer_overflow.mdwn
new file mode 100644
index 00000000..2166e591
--- /dev/null
+++ b/open_issues/gnumach_integer_overflow.mdwn
@@ -0,0 +1,17 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-07-04
+
+ <braunr> yes, we have integer overflows on resident_page_count, but
+ luckily, the member is rarely used
diff --git a/open_issues/gnumach_page_cache_policy.mdwn b/open_issues/gnumach_page_cache_policy.mdwn
index 75fcdd88..6f51d713 100644
--- a/open_issues/gnumach_page_cache_policy.mdwn
+++ b/open_issues/gnumach_page_cache_policy.mdwn
@@ -10,6 +10,11 @@ License|/fdl]]."]]"""]]
[[!tag open_issue_gnumach]]
+[[!toc]]
+
+
+# [[page_cache]]
+
# IRC, freenode, #hurd, 2012-04-26
@@ -33,3 +38,587 @@ License|/fdl]]."]]"""]]
have either lots of free pages because the max limit is reached, or lots
of pressure and system freezes :/
<youpi> yes
+
+
+## IRC, freenode, #hurd, 2012-06-17
+
+ <braunr> youpi: i don't understand your patch :/
+ <youpi> arf
<youpi> which part don't you understand?
+ <braunr> the global idea :/
+ <youpi> first, drop the limit on number of objects
+ <braunr> you added a new collect call at pageout time
+ <youpi> (i.e. here, hack overflow into 0)
+ <braunr> yes
+ <braunr> obviously
+ <youpi> but then the cache keeps filling up with objects
+ <youpi> which sooner or later become empty
+ <youpi> thus the collect, which is supposed to look for empty objects, and
+ just drop them
+ <braunr> but not at the right time
+ <braunr> objects should be collected as soon as their ref count drops to 0
+ <braunr> err
+ <youpi> now, the code of the collect is just a crude attempt without
+ knowing much about the vm
+ <braunr> when their resident page count drops to 0
+ <youpi> so don't necessarily read it :)
+ <braunr> ok
+ <braunr> i've begun playing with the vm recently
+ <braunr> the limits (arbitrary, and very old obviously) seem far too low
+ for current resources
+ <braunr> (e.g. the threshold on free pages is 50 iirc ...)
+ <youpi> yes
+ <braunr> i'll probably use a different approach
+ <braunr> the one i mentioned (collecting one object at a time - or pushing
+ them on a list for bursts - when they become empty)
+ <braunr> this should relax the kernel allocator more
+ <braunr> (since there will be less empty vm_objects remaining until the
+ next global collecttion)
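+
+A rough sketch of that per-object approach (the function, queue, and flag
+names here are made up for illustration, not actual gnumach code):
+
+    /* Hypothetical: queue an object for collection as soon as its last
+       resident page is removed, instead of waiting for a global scan.  */
+    static void
+    vm_object_page_removed (vm_object_t object)
+    {
+      if (object->resident_page_count == 0 && object->ref_count == 0)
+        {
+          /* Push on a queue so objects can be destroyed in bursts.  */
+          queue_enter (&vm_object_collect_queue, object,
+                       vm_object_t, cached_list);
+          vm_object_collection_requested = TRUE;
+        }
+    }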
+
+
+## IRC, freenode, #hurd, 2012-06-30
+
+ <braunr> the threshold values of the page cache seem quite enough actually
+ <youpi> braunr: ah
+ <braunr> youpi: yes, it seems the problems are in ext2, not in the VM
+ <youpi> k
+ <youpi> the page cache limitation still doesn't help :)
+ <braunr> the problem in the VM is the recycling of vm_objects, which aren't
+ freed once empty
+ <braunr> but it only wastes some of the slab memory, it doesn't prevent
+ correct processing
+ <youpi> braunr: thus the limitation, right?
+ <braunr> no
+ <braunr> well
+ <braunr> that's the policy they chose at the time
+ <braunr> for what reason .. i can't tell
+ <youpi> ok, but I mean
+ <youpi> we can't remove the policy because of the non-free of empty objects
+ <braunr> we must remove vm_objects at some point
+ <braunr> but even without it, it makes no sense to disable the limit while
+ ext2 is still unstable
+ <braunr> also, i noticed that the page count in vm_objects never actually
+ drop to 0 ...
+ <youpi> you mean the limit permits to avoid going into the buggy scenarii
+ too often?
+ <braunr> yes
+ <youpi> k
+ <braunr> at least, that's my impression
+ <braunr> my test case is tar xf files.tar.gz, which contains 50000 files of
+ 12k random data
+ <braunr> i'll try with other values
+ <braunr> i get crashes, deadlocks, livelocks, and it's not pretty :)
+ <braunr> and always in ext2, mach doesn't seem affected by the issue, other
+ than the obvious
+ <braunr> (well i get the usual "deallocating an invalid port", but as
+ mentioned, it's "most probably a bug", which is the case here :)
+ <youpi> braunr: looks consistent with the hangs I get on the buildds
+ <braunr> youpi: so that's the nasty bug i have to track now
+ <youpi> though I'm also still getting some out of memory from gnumach
+ sometimes
+ <braunr> the good thing is i can reproduce it very quickly
+ <youpi> a dump from the allocator to know which zone took all the room
+ might help
+ <braunr> youpi: yes i promised that too
+ <youpi> although that's probably related with ext2 issues :)
+ <braunr> youpi: can you send me the panic message so i can point the code
+ which must output the allocator state please ?
+ <youpi> next time I get it, sure :)
+ <pinotree> braunr: you could implement a /proc/slabinfo :)
+ <braunr> pinotree: yes but when a panic happens, it's too late
+ <braunr> http://git.sceen.net/rbraun/slabinfo.git/ btw
+ <braunr> although it's not part of procfs
+ <braunr> and the mach_debug interface isn't provided :(
+
+
+## IRC, freenode, #hurd, 2012-07-03
+
+ <braunr> it looks like pagers create a thread per memory object ...
+ <antrik> braunr: oh. so if I open a lot of files, ext2fs will *inevitably*
+ have lots of threads?...
+ <braunr> antrik: i'm not sure
+ <braunr> it may only be required to flush them
+ <braunr> but when there are lots of them, the threads could run slowly,
+ giving the impression there is one per object
+ <braunr> in sync mode i don't see many threads
+ <braunr> and i don't get the bug either for now
+ <braunr> while i can see physical memory actually being used
+ <braunr> (and the bug happens before there is any memory pressure in the
+ kernel)
+ <braunr> so it definitely looks like a corruption in ext2fs
+ <braunr> and i have an idea .... :>
+ <braunr> hm no, i thought an alloca with a big size parameter could erase
+ memory outside the stack, but it's something else
+ <braunr> (although alloca should really be avoided)
+ <braunr> arg, the problem seems to be in diskfs_sync_everything ->
+ ports_bucket_iterate (pager_bucket, sync_one); :/
+ <braunr> :(
+ <braunr> looks like the ext2 problem is triggered by calling pager_sync
+ from diskfs_sync_everything
+ <braunr> and is possibly related to
+ http://lists.gnu.org/archive/html/bug-hurd/2010-03/msg00127.html
+ <braunr> (and for reference, the rest of the discussion
+ http://lists.gnu.org/archive/html/bug-hurd/2010-04/msg00012.html)
+ <braunr> multithreading in libpager is scary :/
+ <antrik> braunr: s/in libpager/ ;-)
+ <braunr> antrik: right
+ <braunr> omg the ugliness :/
+ <braunr> ok i found a bug
+ <braunr> a real one :)
+ <braunr> (but not sure it's the only one since i tried that before)
+ <braunr> 01:38 < braunr> hm no, i thought an alloca with a big size
+ parameter could erase memory outside the stack, but it's something else
+ <braunr> turns out alloca is sometimes used for 64k+ allocations
+ <braunr> which explains the stack corruptions
+ <pinotree> ouch
+ <braunr> as it's used to duplicate the node table before traversing it, it
+ also explains why the cache limit affects the frequency of the bug
+ <braunr> now the fun part, write the patch following GNU protocol .. :)
+
+[[!message-id "1341350006-2499-1-git-send-email-rbraun@sceen.net"]]
+
+ <braunr> if someone feels like it, there are a bunch of alloca calls in the
+ hurd (like around 30 if i'm right)
+ <braunr> most of them look safe, but some could trigger that same problem
+ in other servers
+ <braunr> ok so far, no problem with the upstream ext2fs code :)
+ <braunr> 20 loops of tar xf / rm -rf consuming all free memory as cache :)
+ <braunr> the hurd uses far too much cpu time for no valid reason in many
+ places :/
+ * braunr happy
+ <braunr> my hurd is completely using its ram :)
+ <gnu_srs> Meaning, the bug is solved? Congrats if so :)
+ <braunr> well, ext2fs looks way more stable now
+ <braunr> i haven't had a single issue since the change, so i guess i messed
+ something with my previous test
+ <braunr> and the Mach VM cache implementation looks good enough
+ <braunr> now the only thing left is to detect unused objects and release
+ them
+ <braunr> which is actually the core of my work :)
+ <braunr> but i'm glad i could polish ext2fs
+ <braunr> with luck, this is the issue that was striking during "thread
+ storms" in the past
+ * pinotree hugs braunr
+ <braunr> i'm also very happy to see the slab allocator reacting well upon
+ memory pressure :>
+ <mcsim> braunr: Why did alloca corrupt memory in diskfs_node_iterate? Was
+ the temporary node table too big to keep on the stack?
+ <braunr> mcsim: yes
+ <braunr> 17:54 < braunr> turns out alloca is sometimes used for 64k+
+ allocations
+ <braunr> and i wouldn't be surprised if our thread stacks are
+ simply contiguous 64k mappings of zero-filled memory
+ <braunr> (as Mach only provides bottom-up allocation)
+ <braunr> our thread implementation should leave unmapped areas between
+ thread stacks, to easily catch such overflows
+ <pinotree> braunr: wouldn't also fatfs/inode.c and tmpfs/node.c need the
+ same fix?
+ <braunr> pinotree: possibly
+ <braunr> i haven't looked
+ <braunr> more than 300 loops of tar xf / rm -rf on an archive of 20000
+ files of 12 KiB each, without any issue, still going on :)
+ <youpi> braunr: yay
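+
+The class of fix involved, roughly (a sketch of the idea, not the committed
+patch): replace the unbounded `alloca` of the duplicated node table with a
+heap allocation whose failure can be reported.
+
+    /* Sketch: duplicate a table of num_nodes pointers on the heap instead
+       of the stack, since the table can exceed the 64k thread stacks.  */
+    struct node **table = malloc (num_nodes * sizeof *table);
+    if (table == NULL)
+      return ENOMEM;
+    /* ... copy the node table and iterate over it ...  */
+    free (table);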
+
+
+## [[!message-id "20120703121820.GA30902@mail.sceen.net"]], 2012-07-03
+
+
+## IRC, freenode, #hurd, 2012-07-04
+
+ <braunr> mach is so good it caches objects with *no* page in physical
+ memory
+ <braunr> hm i think i have a working and not too dirty vm cache :>
+ <kilobug> braunr: congrats :)
+ <braunr> kilobug: hey :)
+ <braunr> the dangerous side effect is the increased swappiness
+ <braunr> we'll have to monitor that on the buildds
+ <braunr> otherwise the cache is effectively used, and the slab allocator
+ reports reasonable amounts of objects, not increasing once the ram is
+ full
+ <braunr> let's see what happens with 1.8 GiB of RAM now
+ <braunr> damn glibc is really long to build :)
+ <braunr> and i fear my vm cache patch makes non scalable algorithms negate
+ some of its benefits :/
+ <braunr> 72 tasks, 2090 threads
+ <braunr> we need the ability to monitor threads somewhere
+
+
+## IRC, freenode, #hurd, 2012-07-05
+
+ <braunr> hm i get kernel panics when not using the host cache :/
+ <braunr> no virtual memory for stack allocations
+ <braunr> that's scary
+ <antrik> ?
+ <braunr> i guess the lack of host cache makes I/O slow enough to create a
+ big thread storm
+ <braunr> that completely exhausts the kernel space
+ <braunr> my patch challenges scalability :)
+ <antrik> and not having a zalloc zone anymore, instead of getting a nice
+ panic when trying to allocate yet another thread, you get an address
+ space exhaustion on an unrelated event instead. I see ;-)
+ <braunr> thread stacks are not allocated from a zone/cache
+ <braunr> also, the panic concerned aligned memory, but i don't think that
+ matters
+ <braunr> the kernel panic clearly mentions it's about thread stack
+ allocation
+ <antrik> oh, by "stack allocations" you actually mean allocating a stack
+ for a new thread...
+ <braunr> yes
+ <antrik> that's not what I normally understand when reading "stack
+ allocations" :-)
+ <braunr> user stacks are simple zero filled memory objects
+ <braunr> so we usually get a deadlock on them :>
+ <braunr> i wonder if making ports_manage_port_operations_multithread limit
+ the number of threads would be a good thing to do
+ <antrik> braunr: last time slpz did that, it turned out that it causes
+ deadlocks in at least one (very specific) situation
+ <braunr> ok
+ <antrik> I think you were actually active at the time slpz proposed the
+ patch (and it was added to Debian) -- though probably not at the time
+ where youpi tracked it down as the cause of certain lockups, so it was
+ dropped again...
+ <braunr> what seems very weird though is that we're normally using
+ continuations
+ <antrik> braunr: you mean in the kernel? how is that relevant to the topic
+ at hand?...
+ <braunr> antrik: continuations have been designed to reduce the number of
+ stacks to one per cpu :/
+ <braunr> but they're not used everywhere
+ <antrik> they are not used *anywhere* in the Hurd...
+ <braunr> antrik: continuations are supposed to be used by kernel code
+ <antrik> braunr: not sure what you are getting at. of course we should use
+ some kind of continuations in the Hurd instead of having an active thread
+ for every single request in flight -- but that's not something that could
+ be done easily...
+ <braunr> antrik: oh no, i don't want to use continuations at all
+ <braunr> i just want to use less threads :)
+ <braunr> my panic definitely looks like a thread storm
+ <braunr> i guess increasing the kmem_map will help for the time being
+ <braunr> (it's not the whole kernel space that gets filled up actually)
+ <braunr> also, stacks are kept on a local cache until there is memory
+ pressure oO
+ <braunr> their slab cache can fill the backing map before there is any
+ pressure
+ <braunr> and it makes a two level cache, i'll have to remove that
+ <antrik> well, how do you reduce the number of threads? apart from
+ optimising scheduling (so requests are more likely to be completed before
+ new ones are handled), the only way to reduce the number of threads is to
+ avoid having a thread per request
+ <braunr> exactly
+ <antrik> so instead the state of each request being handled has to be
+ explicitly stored...
+ <antrik> i.e. continuations
+ <braunr> hm actually, no
+ <braunr> you use thread migration :)
+ <braunr> i don't want to artificially limit the number of kernel threads
+ <braunr> the hurd should be revamped not to use that many threads
+ <braunr> but it looks like a hard task
+ <antrik> well, thread migration would reduce the global number of threads
+ in the system... it wouldn't prevent a server from having thousands of
+ threads
+ <braunr> threads would already be allocated before getting in the server
+ <antrik> again, the only way not to use a thread for each outstanding
+ request is having some explicit request state management,
+ i.e. continuations
+ <braunr> hm right
+ <braunr> but we can nonetheless reduce the number of threads
+ <braunr> i wonder if the sync threads are created on behalf of the pagers
+ or the kernel
+ <braunr> one good thing is that i can already feel better performance
+ without using the host cache until the panic happens
+ <antrik> the tricky bit about that is that I/O can basically happen at any
+ point during handling a request, by hitting a page fault. so we need to
+ be able to continue with some other request at any point...
+ <braunr> yes
+ <antrik> actually, readahead should help a lot in reducing the number of
+ requests and thus threads... still will be quite a lot though
+ <braunr> we should have a bunch of pageout threads handling requests
+ asynchronously
+ <braunr> it depends on the implementation
+ <braunr> consider readahead detects that, in the next 10 pages, 3 are not
+ resident, then 1 is, then 3 aren't, then 1 is again, and the last two
+ aren't
+ <braunr> how is this solved ? :)
+ <braunr> about the stack allocation issue, i actually think it's very
+ simple to solve
+ <braunr> the code is a remnant of the old BSD days, when processes were
+ heavily swapped
+ <braunr> so when a thread is created, its stack isn't allocated
+ <braunr> the allocation happens when the thread is dispatched, and the
+ scheduler finds it's swapped (which is the initial state)
+ <braunr> the stack is allocated, and the operation is assumed to succeed,
+ which is why failure produces a panic
+ <antrik> well, actually, not just readahead... clustered paging in
+ general. the thread storms happen mostly on write not read AIUI
+ <braunr> changing that to allocate at thread creation time will allow a
+ cleaner error handling
+ <braunr> antrik: yes, at writeback
+ <braunr> antrik: so i guess even when some physical pages are already
+ present, we should aim at larger sizes for fewer I/O requests
+ <antrik> not sure that would be worthwhile... probably doesn't happen all
+ that often. and if some of the pages are dirty, we would have to make
+ sure that they are ignored although they were part of the request...
+ <braunr> yes
+ <braunr> so one request per missing area ?
+ <antrik> the opposite might be a good idea though -- if every other page is
+ dirty, it *might* indeed be preferable to do a single request rewriting
+ even the clean ones in between...
+ <braunr> yes
+ <braunr> i personally think one request, then replace only what was
+ missing, is simpler and preferable
+ <antrik> OTOH, rewriting clean pages might considerably increase write time
+ (and wear) on SSDs
+ <braunr> why ?
+ <antrik> I doubt the controller is smart enough to recognise if a page
+ doesn't really need rewriting
+ <antrik> so it will actually allocate and write a new cluster
+ <braunr> no but it won't spread writes on different internal sectors, will
+ it ?
+ <braunr> sectors are usually really big
+ <antrik> "sectors" is not a term used in SSDs :-)
+ <braunr> they'll be erased completely whatever the amount of data at some
+ point if i'm right
+ <braunr> ah
+ <braunr> need to learn more about that
+ <braunr> i thought their internal hardware was much like nand flash
+ <antrik> admittedly I don't remember the correct terminology either...
+ <antrik> they *are* NAND flash
+ <antrik> writing is actually not the problem -- it can happen in small
+ chunks. the problem is erasing, which is only possible in large blocks
+ <braunr> yes
+ <braunr> so having larger requests doesn't seem like a problem to me
+ <braunr> because of that
+ <antrik> thus smart controllers (which pretty much all SSD nowadays have,
+ and apparently even SD cards) do not actually overwrite. instead, writes
+ always happen to clean portions, and erasing only happens when a block is
+ mostly clean
+ <antrik> (after relocating the remaining used parts to other clean areas)
+ <antrik> braunr: the problem is not having larger requests. the problem is
+ rewriting clusters that don't really need rewriting. it means the disk
+ performs unnecessary writing actions.
+ <antrik> it doesn't hurt for magnetic disks, as the head has to pass over
+ the unchanged sectors anyways; and rewriting them unnecessarily doesn't
+ increase wear
+ <antrik> but it's different for SSDs
+ <antrik> each write has a penalty there
+ <braunr> i thought only erases were the real penalty
+ <antrik> well, erase happens in the background with modern controllers; so
+ it has no direct penalty. the write has a direct performance penalty when
+ saturating the bandwidth, and always has a direct wear penalty
+ <braunr> can't controllers handle 32k requests ? like everything does ? :/
+ <antrik> sure they can. but that's beside the point...
+ <braunr> if they do, they won't mind the clean data inside such large
+ blocks
+ <antrik> apparently we are talking past each other
+ <braunr> i must be missing something important about SSD
+ <antrik> braunr: the point is, the controller doesn't *know* it's clean
+ data; so it will actually write it just like the really unclean data
+ <braunr> yes
+ <braunr> and it will choose an already clean sector for that (previously
+ erased), so writing larger blocks shouldn't hurt
+ <braunr> there will be a slight increase in bandwidth usage, but that's
+ pretty much all of it
+ <braunr> isn't it ?
+ <antrik> well, writing always happens to clean blocks. but writing more
+ blocks obviously needs more time, and causes more wear...
+ <braunr> aiui, blocks are always far larger than the amount of pages we
+ want to writeback in one request
+ <braunr> the only way to use more than one is crossing a boundary
+ <antrik> no. again, the blocks that can be *written* are actually quite
+ small. IIRC most SSDs use 4k nowadays
+ <braunr> ok
+ <antrik> only erasing operates on much larger blocks
+ <braunr> so writing is a problem too
+ <braunr> i didn't think it would cause wear leveling to happen
+ <antrik> well, I'm not sure whether the wear actually happens on write or
+ on erase... but that doesn't matter, as the number of blocks that need to
+ be erased is equivalent to the number of blocks written...
+ <braunr> sorry, i'm really not sure
+ <braunr> if you erase one sector, then write the first and third block,
+ it's clearly not equivalent
+ <braunr> i mean
+ <braunr> let's consider two kinds of pageout requests
+ <braunr> 1/ a big one including clean pages
+ <braunr> 2/ several ones for dirty pages only
+ <braunr> let's assume they both need an erase when they happen
+ <braunr> what's the actual difference between them ?
+ <braunr> wear will increase only if the controller handle it on writes, if
+ i'm right
+ <braunr> but other than that, it's just bandwidth
+ <antrik> strictly speaking erase is only *necessary* when there are no
+ clean blocks anymore. but modern controllers will try to perform erase of
+ unused blocks in the background, so it doesn't delay actual writes
+ <braunr> i agree on that
+ <antrik> but the point is that for each 16 pages (or so) written, we need
+ to erase one block so we get 16 clean pages to write...
+ <braunr> yes
+ <braunr> which is about the size of a request for the sequential policy
+ <braunr> so it fits
+ <antrik> just to be clear: it doesn't matter at all how the pages
+ "fit". the controller will reallocate them anyways
+ <antrik> what matters is how many pages you write
+ <braunr> ah
+ <braunr> i thought it would just put the whole request in a single sector
+ (or two)
+ <antrik> I'm not sure what you mean by "sector". as I said, it's not a term
+ used in SSD technology
+ <braunr> so do you imply that writes can actually get spread over different
+ sectors ?
+ <braunr> the sector is the unit at the nand flash level, its size is the
+ erase size
+ <antrik> actually, I used the right terminology... the erase unit is the
+ block; the write unit is the page
+ <braunr> sector is a synonym of block
+ <antrik> never seen it. and it's very confusing, as it isn't in any way
+ similar to sectors in magnetic disks...
+ <braunr> http://en.wikipedia.org/wiki/Flash_memory#NAND_flash
+ <braunr> it's actually in the NOR part right before, paragraph "Erasing"
+ <braunr> "Modern NOR flash memory chips are divided into erase segments
+ (often called blocks or sectors)."
+ <antrik> ah. I skipped the NOR part :-)
+ <braunr> i've only heard sector where i worked, but i don't consider french
+ computer engineers to be authorities on the matter :)
+ <antrik> hehe
+ <braunr> let's call them block
+ <braunr> so, thread stacks are allocated out of the kernel map
+ <braunr> this is already a bad thing (which is probably why there is a
+ local cache btw)
+ <antrik> anyways, yes. modern controllers might split a contiguous write
+ request onto several blocks, as well as put writes to completely
+ different logical pages into one block. the association between addresses
+ and actual blocks is completely free
+ <braunr> now i wonder why the kernel map is so slow, as the panic happens
+ at about 3k threads, so about 11M of thread stacks
+ <braunr> antrik: ok
+ <braunr> antrik: well then it makes sense to send only dirty pages
+ <braunr> s/slow/low/
+ <antrik> it's different for raw flash (using MTD subsystem in Linux) -- but
+ I don't think this is something we should consider any time soon :-)
+ <antrik> (also, raw flash is only really usable with specialised
+ filesystems anyways)
+ <braunr> yes
+ <antrik> are the thread stacks really only 4k? I would expect them to be
+ larger in many cases...
+ <braunr> youpi reduced them some time ago, yes
+ <braunr> they're 4k on xen
+ <braunr> uh, 16k
+ <braunr> damn, i'm wondering why i created separate submaps for the slab
+ allocator :/
+ <braunr> probably because that's how it was done by the zone allocator
+ before
+ <braunr> but that's stupid :/
+ <braunr> hm the stack issue is actually more complicated than i thought
+ because of interrupt priority levels
+ <braunr> i increased the kernel map size to avoid the panic instead
+ <braunr> now libc0.3 seems to build fine
+ <braunr> and there seems to be a clear decrease of I/O :)
+
+
+### IRC, freenode, #hurd, 2012-07-06
+
+ <antrik> braunr: there is a submap for the slab allocator? that's strange
+ indeed. I know we talked about this; and I am pretty sure we agreed
+ removing the submap would actually be among the major benefits of a new
+ allocator...
+ <braunr> antrik: a submap is a good idea anyway
+ <braunr> antrik: it avoids fragmenting the kernel space too much
+ <braunr> it also breaks down locking
+ <braunr> but we could consider it
+ <braunr> as a first step, i'll merge the kmem and kalloc submaps (the ones
+ used for the slab caches and the malloc-like allocations respectively)
+ <braunr> then i'll change the allocation of thread stacks to use a slab
+ cache
+ <braunr> and i'll also remove the thread swapping stuff
+ <braunr> it will take some time, but by the end we should be able to
+ allocate tens of thousands of threads, and suffer no panic when the limit
+ is reached
+ <antrik> braunr: I'm not sure "no panic" is really a worthwhile goal in
+ such a situation...
+ <braunr> antrik: uh ?
+ <braunr> antrik: it only means the system won't allow the creation of
+ threads until there is memory available
+ <braunr> from my pov, the microkernel should never fail to the point where
+ it can't continue its job
+ <antrik> braunr: the system won't be able to recover from such a situation
+ anyways. without actual resource management/prioritisation, not having a
+ panic is not really helpful. it only makes it harder to guess what
+ happened I fear...
+ <braunr> i don't see why it couldn't recover :/
+
+
+## IRC, freenode, #hurd, 2012-07-07
+
+ <braunr> grmbl, there are a lot of issues with making the page cache larger
+ :(
+ <braunr> it actually makes the system slower in half of my tests
+ <braunr> we have to test that on real hardware
+ <braunr> unfortunately my current results seem to indicate there is no
+ clear benefit from my patch
+ <braunr> the current limit of 4000 objects creates a good balance between
+ I/O and cpu time
+ <braunr> with the previous limit of 200, I/O is often extreme
+ <braunr> with my patch, either the working set is less than 4k objects, so
+ nothing is gained, or the lack of scalability of various parts of the
+ system adds overhead that affects processing speed
+ <braunr> also, our file systems are cached, but our block layer isn't
+ <braunr> which means even when accessing data from the cache, accesses
+ still cause some I/O for metadata
+
+
+## IRC, freenode, #hurd, 2012-07-08
+
+ <braunr> youpi: basically, it works fine, but exposes scalability issues,
+ and increases swappiness
+ <youpi> so it doesn't help with stability?
+ <braunr> hum, that was never the goal :)
+ <braunr> the goal was to reduce I/O, and increase performance
+ <youpi> sure
+ <youpi> but does it at least not lower stability too much?
+ <braunr> not too much, no
+ <youpi> k
+ <braunr> most of the issues i found could be reproduced without the patch
+ <youpi> ah
+ <youpi> then fine :)
+ <braunr> random deadlocks on heavy loads
+ <braunr> youpi: but i'm not sure it helps with performance
+ <braunr> youpi: at least not when emulated, and the host cache is used
+ <youpi> that's not very surprising
+ <braunr> it does help a lot when there is no host cache and the working set
+ is greater (or far less) than 4k objects
+ <youpi> ok
+ <braunr> the amount of vm_object and ipc_port is gracefully adjusted
+ <youpi> that'd help us with not having to tell people to use the complex
+ -drive option :)
+ <braunr> so you can easily run a hurd with 128 MiB with decent performance
+ and no leak in ext2fs
+ <braunr> yes
+ <braunr> for example
+ <youpi> braunr: I'd say we should just try it on buildds
+ <braunr> (it's not finished yet, i'd like to work more on reducing
+ swapping)
+ <youpi> (though they're really not busy atm, so the stability change can't
+ really be measured)
+ <braunr> when building the hurd, which takes about 10 minutes in my kvm
+ instances, there is only a 30 seconds difference between using the host
+ cache and not using it
+ <braunr> this is already the case with the current kernel, since the
+ working set is less than 4k objects
+ <braunr> while with the previous limit of 200 objects, it took 50 minutes
+ without host cache, and 15 with it
+ <braunr> so it's a clear benefit for most uses, except my virtual machines
+ :)
+ <youpi> heh
+ <braunr> because there, the amount of ram means a lot of objects can be
+ cached, and i can measure an increase in cpu usage
+ <braunr> slight, but present
+ <braunr> youpi: isn't it a good thing that buildds are resting a bit ? :)
+ <youpi> on one hand, yes
+ <youpi> but on the other hand, that doesn't let us continue
+ stress-testing the Hurd :)
+ <braunr> we're not in a hurry for this patch
+ <braunr> because using it really means you're tickling the pageout daemon a
+ lot :)
+
+
+## [[metadata_caching]]
diff --git a/open_issues/gnumach_tick.mdwn b/open_issues/gnumach_tick.mdwn
new file mode 100644
index 00000000..eed447f6
--- /dev/null
+++ b/open_issues/gnumach_tick.mdwn
@@ -0,0 +1,35 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-07-05
+
+ <pinotree> braunr: wrt mach: it seems to me it ticks every 10ms or so,
+ is that true?
+ <braunr> yes
+ <braunr> and it's not preemptible
+ <pinotree> braunr: that means a gnumach kernel currently has a maximum
+ uptime of almost 500 days
+ <braunr> pinotree: what do you mean ?
+ <pinotree> there's an int (or uint, i don't remember) variable that keeps
+ the tick count
+ <braunr> yes the tick variable should probably be a 64-bit type
+ <braunr> or a struct
+ <braunr> but the tick count should only be used for computation on "short"
+ delays
+ <braunr> and it should be safe to use it even when it overflows
+ <braunr> it's not the wall clock
+ <pinotree> i found that when investigating why the maximum timeout for a
+ mach_msg is like INT_MAX >> 2 (or 4) or something like that, also due to
+ the tick count
+ <braunr> iirc, in linux, they mostly use the lower 32 bits on 32-bit
+ architectures, updating the upper 32 only when necessary
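+
+The "almost 500 days" figure follows from an unsigned 32-bit tick counter
+incremented every 10 ms:
+
+    /* 2^32 ticks times 10 ms per tick, expressed in days.  */
+    #include <stdio.h>
+
+    int
+    main (void)
+    {
+      double seconds = 4294967296.0 * 0.010;
+      printf ("%.0f days\n", seconds / 86400.0);   /* prints "497 days" */
+      return 0;
+    }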
diff --git a/open_issues/gnumach_vm_map_red-black_trees.mdwn b/open_issues/gnumach_vm_map_red-black_trees.mdwn
index 17263099..d7407bfe 100644
--- a/open_issues/gnumach_vm_map_red-black_trees.mdwn
+++ b/open_issues/gnumach_vm_map_red-black_trees.mdwn
@@ -152,3 +152,23 @@ License|/fdl]]."]]"""]]
entries)
[[glibc/fork]].
+
+
+## IRC, freenode, #hurdfr, 2012-06-02
+
+ <youpi> braunr: oh, an rbtree bug
+ <youpi> Assertion `diff != 0' failed in file "vm/vm_map.c", line 1002
+ <youpi> it's in rbtree_insert()
+ <youpi> vm_map_enter (vm/vm_map.c:1002).
+ <youpi> vm_map (vm/vm_user.c:373).
+ <youpi> syscall_vm_map (kern/ipc_mig.c:657).
+ <youpi> erf, I killed my debugger, I can't inspect any further
+ <youpi> the little I have left is that apparently target_map == 1, size ==
+ 0, mask == 0
+ <youpi> anywhere = 1
+ <braunr> youpi: that most likely means some addresses overlap
+ <braunr> i'll take another look at the code tomorrow
+ <braunr> (it may well be a rare vm bug, the kind that makes the kernel
+ crash)
+ <braunr> (well i mean, that used to crash the kernel in a very obscure way
+ before the rbtree patch)
diff --git a/open_issues/gnumach_vm_object_resident_page_count.mdwn b/open_issues/gnumach_vm_object_resident_page_count.mdwn
new file mode 100644
index 00000000..cc1b8897
--- /dev/null
+++ b/open_issues/gnumach_vm_object_resident_page_count.mdwn
@@ -0,0 +1,22 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+
+# IRC, freenode, #hurd, 2012-07-03
+
+ <braunr> omg the ugliness
+ <braunr> the number of pages in physical memory for an object is a short
+ ... which limits the amount to .. 128 MiB
+ * braunr cries
+ <braunr> luckily, this should be easy to solve
+
+`vm/vm_object.h:vm_object:resident_page_count`.
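+
+The arithmetic behind the 128 MiB figure, assuming a signed 16-bit `short`
+and 4 KiB pages:
+
+    /* SHRT_MAX resident pages, times 4 KiB per page: just under 128 MiB.  */
+    #include <limits.h>
+    #include <stdio.h>
+
+    int
+    main (void)
+    {
+      unsigned long bytes = (unsigned long) SHRT_MAX * 4096;
+      printf ("%lu MiB\n", bytes / (1024 * 1024));   /* prints "127 MiB" */
+      return 0;
+    }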
diff --git a/open_issues/libpthread_CLOCK_MONOTONIC.mdwn b/open_issues/libpthread_CLOCK_MONOTONIC.mdwn
index f9195540..2c8f10f8 100644
--- a/open_issues/libpthread_CLOCK_MONOTONIC.mdwn
+++ b/open_issues/libpthread_CLOCK_MONOTONIC.mdwn
@@ -15,7 +15,7 @@ License|/fdl]]."]]"""]]
[[!message-id "201204220058.37328.toscano.pino@tiscali.it"]]
-# IRC, freenode, #hurd- 2012-04-22
+# IRC, freenode, #hurd, 2012-04-22
<pinotree> youpi: what i thought would be creating a
glib/hurd/hurdtime.{c,h}, adding _hurd_gettimeofday and
@@ -34,7 +34,7 @@ License|/fdl]]."]]"""]]
<youpi> (and others)
-## IRC, freenode, #hurd- 2012-04-23
+## IRC, freenode, #hurd, 2012-04-23
<youpi> pinotree: about librt vs libpthread, don't worry too much for now
<youpi> libpthread can lib against the already-installed librt
@@ -56,3 +56,23 @@ License|/fdl]]."]]"""]]
at all
<youpi> pinotree: yes, things work even with -lrt
<pinotree> wow
+
+
+## IRC, OFTC, #debian-hurd, 2012-06-04
+
+ <youpi> pinotree: -lrt in libpthread is what is breaking glib2.0
+ <youpi> during ./configure it makes clock_gettime linked in, while at
+ library link it doesn't
+ <youpi> probably for obscure reasons
+ <youpi> I'll have to disable it in debian
+
+
+### IRC, OFTC, #debian-hurd, 2012-06-05
+
+ <pinotree> youpi: i saw your message about the linking issues with
+ pthread/rt; do you want me to provide a patch to switch clock_gettime →
+ gettimeofday in libpthread?
+ <youpi> you mean switch it back as it was previously?
+ <pinotree> kind of, yes
+ <youpi> I have reverted the change in libc for now
+ <pinotree> ok
diff --git a/open_issues/low_memory.mdwn b/open_issues/low_memory.mdwn
new file mode 100644
index 00000000..22470c65
--- /dev/null
+++ b/open_issues/low_memory.mdwn
@@ -0,0 +1,113 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach open_issue_glibc open_issue_hurd]]
+
+Issues relating to system behavior under memory pressure.
+
+[[!toc]]
+
+
+# [[gnumach_page_cache_policy]]
+
+
+# IRC, freenode, #hurd, 2012-07-08
+
+ <braunr> am i mistaken or is the default pager simply not vm privileged ?
+ <braunr> (which would explain the hangs when memory is very low)
+ <youpi> no idea
+ <youpi> but that's very possible
+ <youpi> we start it by hand from the init scripts
+ <braunr> actually, i see no way provided by mach to set that
+ <braunr> i'd assume it would set the property when a thread would register
+ itself as the default pager, but it doesn't
+ <braunr> i'll check at runtime and see if fixing helps
+ <youpi> thread_wire(host, thread, 1) ?
+ <youpi> ./hurd/mach-defpager/wiring.c: kr =
+ thread_wire(priv_host_port,
+ <braunr> no
+ <braunr> look in cprocs.c
+ <braunr> iirc
+ <braunr> iiuc, it sets a 1:1 kernel/user mapping
+ <youpi> ??
+ <youpi> thread_wire, not cthread_wire
+ <braunr> ah
+ <braunr> right, i'm getting tired
+ <braunr> youpi: do you understand the comment in default_pager_thread() ?
+ <youpi> well, I'm not sure to know what external vs internal is
+ <braunr> i'm almost sure the default pager is blocked because of a relation
+ with an unprivileged thread
+ <braunr> when hangs happen, the pageout daemon is still running, waiting
+ for an event so it can continue
+
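+For reference, the wiring call discussed above, as mach-defpager uses it; a
+minimal sketch assuming the standard Mach `thread_wire` interface, which
+marks a thread as VM-privileged so it can keep making progress under memory
+pressure:
+
+    /* Wire the calling thread using the privileged host port.  */
+    #include <mach.h>
+
+    kern_return_t
+    wire_this_thread (mach_port_t priv_host_port)
+    {
+      return thread_wire (priv_host_port, mach_thread_self (), TRUE);
+    }
+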
+ <braunr> all right, our pageout stuff completely sucks
+ <braunr> when you think the system is hanged, it's actually not
+ <pinotree> and what's happening instead?
+ <braunr> instead, it seems it's in a very complex recursive state which
+ ends in the slab allocator not being able to allocate kernel map entries
+ <braunr> the pageout daemon, unable to continue, progressively slows
+ <braunr> in hope the default pager is able to service the pageout requests,
+ but it's not
+ <braunr> probably the most complicated deadlock i've seen :)
+ <braunr> luckily !
+ <braunr> i've been playing with some tunables involved in waking up the
+ pageout daemon
+ <braunr> and got good results so far
+ <braunr> (although it's clearly not a proper solution)
+ <braunr> one thing the kernel lacks is a way to separate clean from dirty
+ pages
+ <braunr> this stupid kernel doesn't try to free clean pages first .. :)
+ <braunr> hm
+ <braunr> now i can see the system recover, but some applications are still
+ stuck :(
+ <braunr> (but don't worry, my tests are rather aggressive)
+ <braunr> what i mean by aggressive is several builds and various dd of a
+ few hundred MiB in parallel, on various file systems
+ <braunr> so far the file systems have been very resilient
+ <braunr> ok, let's try running the hurd with 64 MiB of RAM
+ <braunr> after some initial swapping, it runs smoothly :)
+ <braunr> uh ?
+ <braunr> ah no, i'm still doing my parallel builds
+ <braunr> although less
+ <braunr> gcc: internal compiler error: Resource lost (program as)
+ <braunr> arg
+ <braunr> lol
+ <braunr> the file system crashed under the compiler
+ <pinotree> too much memory required during linking? or ram+swap should have
+ been enough?
+ <braunr> there is a lot of swap, i doubt it
+ <braunr> the hurd is such a dumb and impressive system at the same time
+ <braunr> pinotree: what does this tell you ?
+ <braunr> git: hurdsig.c:948: post_signal: Unexpected error: (os/kern)
+ failure.
+ <pinotree> something samuel spots often during the builds of haskell
+ packages
+
+Probably also the *sigpost* case mentioned in [[!message-id
+"87bol6aixd.fsf@schwinge.name"]].
+
+ <braunr> actually i should be asking jkoenig
+ <braunr> it seems the lack of memory has a strong impact on signal delivery
+ <braunr> which is bad
+ <antrik> braunr: I have a vague recollection of slpz also saying something
+ about missing dirty page tracking a while back... I might be confusing
+ stuff though
+ <braunr> pinotree: yes it happens often during links
+ <braunr> which makes sense
+ <pinotree> braunr: "happens often" == "hurdsig.c:948: post_signal: ..."?
+ <braunr> yes
+ <pinotree> if you can reproduce it often, what about debugging it? :P
+ <braunr> i mean, the few times i got it, it was often during a link :p
+ <braunr> i'd rather debug the pageout deadlock :(
+ <braunr> but it's hard
diff --git a/open_issues/mach-defpager_swap.mdwn b/open_issues/mach-defpager_swap.mdwn
new file mode 100644
index 00000000..7d3b001c
--- /dev/null
+++ b/open_issues/mach-defpager_swap.mdwn
@@ -0,0 +1,20 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+[[!toc]]
+
+
+# IRC, OFTC, #debian-hurd, 2012-06-16
+
+ <lifeng> I allocated a 5GB partition as swap, but hurd only found 1GB
+ <youpi> use 2GiB swaps only, >2GiB are not supported
+ <youpi> (and apparently it just truncates the size, to be investigated)
diff --git a/open_issues/metadata_caching.mdwn b/open_issues/metadata_caching.mdwn
new file mode 100644
index 00000000..f7f4cb53
--- /dev/null
+++ b/open_issues/metadata_caching.mdwn
@@ -0,0 +1,31 @@
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2012-07-08
+
+ <braunr> youpi: there is still quite a lot of I/O even for cached objects
+ <braunr> youpi: i strongly suspect these are for the metadata
+ <braunr> i.e. we don't have a "buffer cache", only a file cache
+ <braunr> (gnu is really not unix lol)
+ <youpi> doesn't ext2fs cache these?
+ <youpi> (as long as the corresponding object is cached)
+ <braunr> i didn't look too much, but if it does, it does a bad job
+ <braunr> i would guess it does, but possibly only writethrough
+ <youpi> iirc it does writeback
+ <youpi> there's a sorta "node needs written" flag somewhere iirc
+ <braunr> but that's for the files, not the metadata
+ <youpi> I mean the metadata of the node
+ <braunr> then i have no idea what happens
diff --git a/open_issues/multithreading.mdwn b/open_issues/multithreading.mdwn
index 0f6b9f19..5924d3f9 100644
--- a/open_issues/multithreading.mdwn
+++ b/open_issues/multithreading.mdwn
@@ -1,4 +1,5 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -36,6 +37,18 @@ Control*](http://soft.vub.ac.be/~tvcutsem/talks/presentations/T37_nobackground.p
Tom Van Cutsem, 2009.
+## IRC, freenode, #hurd, 2012-07-08
+
+ <youpi> braunr: about limiting number of threads, IIRC the problem is that
+ for some threads, completing their work means triggering some action in
+ the server itself, and waiting for it (with, unfortunately, some lock
+ held), which never terminates when we can't create new threads any more
+ <braunr> youpi: the number of threads should be limited, but not globally
+ by libports
+ <braunr> pagers should throttle their writeback requests
+ <youpi> right
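+
+A sketch of the kind of per-pager throttling meant here (hypothetical, not an
+existing libports or libpager interface): cap the number of in-flight
+writeback requests with a counting semaphore, so a pager blocks briefly
+instead of spawning an unbounded number of worker threads.
+
+    #include <semaphore.h>
+
+    #define MAX_INFLIGHT_WRITEBACKS 64   /* arbitrary cap, for illustration */
+
+    static sem_t writeback_slots;
+
+    void
+    pager_writeback_init (void)
+    {
+      sem_init (&writeback_slots, 0, MAX_INFLIGHT_WRITEBACKS);
+    }
+
+    /* Bracket each writeback request with these two calls.  */
+    void pager_writeback_start (void) { sem_wait (&writeback_slots); }
+    void pager_writeback_done (void)  { sem_post (&writeback_slots); }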
+
+
# Alternative approaches:
* <http://www.concurrencykit.org/>
diff --git a/open_issues/nfs_trailing_slash.mdwn b/open_issues/nfs_trailing_slash.mdwn
new file mode 100644
index 00000000..90f138e3
--- /dev/null
+++ b/open_issues/nfs_trailing_slash.mdwn
@@ -0,0 +1,36 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_glibc open_issue_hurd]]
+
+
+# IRC, freenode, #hurd, 2012-05-27
+
+ <gg0> ok, on nfs "mkdir dir0" succeeds, "mkdir dir0/" fails. RPC struct is bad
+
+
+## IRC, freenode, #hurd, 2012-05-27
+
+ <gg0> 150->dir_mkdir ("foo1/" 493) = 0x40000048 (RPC struct is bad)
+ <gg0> task2876->mach_port_deallocate (pn{ 18}) = 0
+ <gg0> mkdir: 136->io_write_request ("mkdir: " -1) = 0 7
+ <gg0> cannot create directory `/nfsroot/foo1/' 136->io_write_request
+ ("cannot create directory `/nfsroot/foo1/'" -1) = 0 40
+ <gg0> : RPC struct is bad 136->io_write_request (": RPC struct is bad" -1)
+ = 0 19
+ <gg0> 136->io_write_request ("
+ <gg0> " -1) = 0 1
+ <tschwinge> gg0: Yes, I think we knew about this before. Nobody felt like
+ working on it yet. Might be a nfs, libnetfs, glibc issue.
+ <tschwinge> gg0: If you want to work on it, please ask here or on bug-hurd
+ if you need some guidance.
+ <gg0> yeah found this thread
+ http://lists.gnu.org/archive/html/bug-hurd/2008-04/msg00069.html I don't
+ think I'll work on it
diff --git a/open_issues/page_cache.mdwn b/open_issues/page_cache.mdwn
index 062fb8d6..fd503fdc 100644
--- a/open_issues/page_cache.mdwn
+++ b/open_issues/page_cache.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -10,7 +10,10 @@ License|/fdl]]."]]"""]]
[[!tag open_issue_gnumach]]
-IRC, freenode, #hurd, 2011-11-28:
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2011-11-28
<braunr> youpi: would you find it reasonable to completely disable the page
cache in gnumach ?
@@ -71,3 +74,6 @@ IRC, freenode, #hurd, 2011-11-28:
<youpi> restarting them every few days is ok
<youpi> so I'd rather keep the performance :)
<braunr> ok
+
+
+# [[gnumach_page_cache_policy]]
diff --git a/open_issues/performance.mdwn b/open_issues/performance.mdwn
index 2fd34621..8dbe1160 100644
--- a/open_issues/performance.mdwn
+++ b/open_issues/performance.mdwn
@@ -1,4 +1,5 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -38,3 +39,16 @@ call|/glibc/fork]]'s case.
* [[microbenchmarks]]
* [[microkernel_multi-server]]
+
+ * [[gnumach_page_cache_policy]]
+
+ * [[metadata_caching]]
+
+---
+
+
+# IRC, freenode, #hurd, 2012-07-05
+
+ <braunr> the more i study the code, the more i think a lot of time is
+ wasted on cpu, unlike the common belief of the lack of performance being
+ only due to I/O
diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
index d6a98070..710c746b 100644
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -16,6 +16,9 @@ License|/fdl]]."]]"""]]
# [[community/gsoc/project_ideas/disk_io_performance]]
+# [[gnumach_page_cache_policy]]
+
+
# 2011-02
[[Etenil]] has been working in this area.
@@ -389,3 +392,1176 @@ License|/fdl]]."]]"""]]
with appropriate frame size. Is that right?
<youpi> question of taste, better ask on the list
<mcsim> ok
+
+
+## IRC, freenode, #hurd, 2012-06-09
+
+ <mcsim> hello. What are fictitious pages in gnumach needed for?
+ <mcsim> I mean, why can't a real page be grabbed straight away? Sometimes a
+ fictitious page is grabbed first and then converted to a real one.
+ <braunr> mcsim: iirc, fictitious pages are needed by device pagers which
+ must comply with the vm pager interface
+ <braunr> mcsim: specifically, they must return a vm_page structure, but
+ this vm_page describes device memory
+ <braunr> mcsim: and then, it must not be treated like normal vm_page, which
+ can be added to page queues (e.g. page cache)
+
+
+## IRC, freenode, #hurd, 2012-06-22
+
+ <mcsim> braunr: Ah. The patch for large storages introduced a new callback,
+ pager_notify_evict. The user had to define this callback on his own, as
+ pager_dropweak, for instance. But neal's patch changes this. Now all
+ callbacks can have any name, but the user defines a structure with pager
+ ops and supplies it in pager_create.
+ <mcsim> So, I just changed notify_evict to conform to the new style.
+ <mcsim> braunr: I want to change the interface of mo_change_attributes and
+ test my changes with real partitions. For both of these I have to update
+ the ext2fs translator, but both partitions I have are bigger than 2 GB,
+ that's why I need to apply this patch.
+ <mcsim> But what to do with mo_change_attributes? I need to somehow inform
+ the kernel about the page fault policy.
+ <mcsim> When I change the mo_ interface in the kernel I have to update all
+ programs that use this interface, and ext2fs is one of them.
+
+ <mcsim> braunr: What do you think is the best way to inform the kernel
+ about the fault policy? At the moment I've added a fault_strategy
+ parameter that accepts the following strategies: random, sequential with
+ single page cluster, sequential with double page cluster and sequential
+ with quad page cluster. OSF/mach has a completely different interface for
+ mo_change_attributes. In OSF/mach, mo_change_attributes accepts a
+ structure of parameters. This structure could have different formats
+ depending on
+ <mcsim> This rpc could be useful because it is not very handy to update
+ mo_change_attributes for the kernel, for the hurd libs and for glibc.
+ Instead of this the kernel will accept just one more structure format.
+ <braunr> well, like i wrote on the mailing list several weeks ago, i don't
+ think the policy selection is of concern currently
+ <braunr> you should focus on the implementation of page clustering and
+ readahead
+ <braunr> concerning the interface, i don't think it's very important
+ <braunr> also, i really don't like the fact that the policy is per object
+ <braunr> it should be per map entry
+ <braunr> i think it mentioned that in my mail too
+ <braunr> i really think you're wasting time on this
+ <braunr> http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00064.html
+ <braunr> http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00029.html
+ <braunr> mcsim: any reason you completely ignored those ?
+ <mcsim> braunr: Ok. I'll do clustering for map entries.
+ <braunr> no it's not about that either :/
+ <braunr> clustering is grouping several pages in the same transfer between
+ kernel and pager
+ <braunr> the *policy* is held in map entries
+ <antrik> mcsim: I'm not sure I properly understand your question about the
+ policy interface... but if I do, it's IMHO usually better to expose
+ individual parameters as RPC arguments explicitly, rather than hiding
+ them in an opaque structure...
+ <antrik> (there was quite some discussion about that with libburn guy)
+ <mcsim> antrik: Following will be ok? kern_return_t vm_advice(map, address,
+ length, advice, cluster_size)
+ <mcsim> Where advice will be either random or sequential
+ <antrik> looks fine to me... but then, I'm not an expert on this stuff :-)
+ <antrik> perhaps "policy" would be clearer than "advice"?
+ <mcsim> madvise has following prototype: int madvise(void *addr, size_t
+ len, int advice);
+ <mcsim> hmm... looks like I made a typo. Or advi_c_e is ok too?
+ <antrik> advise is a verb; advice a noun... there is a reason why both
+ forms show up in the madvise prototype :-)
+ <mcsim> so final variant should be kern_return_t vm_advise(map, address,
+ length, policy, cluster_size)?
+ <antrik> mcsim: nah, you are probably right that its better to keep
+ consistency with madvise, even if the name of the "advice" parameter
+ there might not be ideal...
+ <antrik> BTW, where does cluster_size come from? from the filesystem?
+ <antrik> I see merits both to naming the parameter "policy" (clearer) or
+ "advice" (more consistent) -- you decide :-)
+ <mcsim> antrik: also there is variant strategy, like with inheritance :)
+ I'll choose advice for now.
+ <mcsim> What do you mean by "where does cluster_size come from"?
+ <antrik> well, madvise doesn't have this parameter; so the value must come
+ from a different source?
+ <mcsim> in the madvise implementation it could be a fixed value or somehow
+ calculated based on the size of the memory range. In OSF/mach the cluster
+ size is supplied too (via mo_change_attributes).
+ <antrik> ah, so you don't really know either :-)
+ <antrik> well, my guess is that it is derived from the cluster size used by
+ the filesystem in question
+ <antrik> so for us it would always be 4k for now
+    <antrik> (and thus you can probably leave it out altogether...)
+ <antrik> well, fatfs can use larger clusters
+ <antrik> I would say, implement it only if it's very easy to do... if it's
+ extra effort, it's probably not worth it
+    <mcsim> It would also make sense to use a bigger cluster size for
+      ext2, since consecutive clusters will most likely be within the same
+      group.
+ <mcsim> But anyway I'll handle this later.
+ <antrik> well, I don't know what cluster_size does exactly; but by the
+ sound of it, I'd guess it makes an assumption that it's *always* better
+ to read in this cluster size, even for random access -- which would be
+ simply wrong for 4k filesystem clusters...
+    <antrik> BTW, I agree with braunr that madvise() is optional -- it is
+      way way more important to get readahead working as a default policy
+      first
+
+
+## IRC, freenode, #hurd, 2012-07-01
+
+ <mcsim> youpi: Do you think you could review my code?
+ <youpi> sure, just post it to the list
+ <youpi> make sure to break it down into logical pieces
+ <mcsim> youpi: I pushed it my branch at gnumach repository
+ <mcsim> youpi: or it is still better to post changes to list?
+ <youpi> posting to the list would permit feedback from other people too
+ <youpi> mcsim: posix distinguishes normal, sequential and random
+ <youpi> we should probably too
+ <youpi> the system call should probably be named "vm_advise", to be a verb
+ like allocate etc.
+    <mcsim> youpi: ok. I had a talk with antrik regarding naming; I'll
+      change this later because compiling glibc takes a lot of time.
+ <youpi> mcsim: I find it odd that vm_for_every_page allocates non-existing
+ pages
+ <youpi> there should probably be at least a flag to request it or not
+    <mcsim> youpi: the normal policy is a synonym for default. And this
+      could be treated as either random or sequential, couldn't it?
+ <braunr> mcsim: normally, no
+ <youpi> yes, the normal policy would be the default
+ <youpi> it doesn't mean random or sequential
+ <youpi> it's just to be a compromise between both
+ <youpi> random is meant to make no read-ahead, since that'd be spurious
+ anyway
+ <youpi> while by default we should make readahead
+ <braunr> and sequential makes even more aggressive readahead, which usually
+ implies a greater number of pages to fetch
+ <braunr> that's all
+ <youpi> yes
+ <youpi> well, that part is handled by the cluster_size parameter actually
+    <braunr> what about reading pages preceding the faulted page ?
+    <mcsim> Shouldn't sequential clean some pages (if they, for example,
+      are not precious) that are placed before the fault page?
+ <braunr> ?
+ <youpi> that could make sense, yes
+ <braunr> you lost me
+    <youpi> and something that you wouldn't do with the normal policy
+ <youpi> braunr: clear what has been read previously
+ <braunr> ?
+ <youpi> since the access is supposed to be sequential
+ <braunr> oh
+    <youpi> the application will probably not re-read what was already read
+ <braunr> you mean to avoid caching it ?
+ <youpi> yes
+ <braunr> inactive memory is there for that
+ <youpi> while with the normal policy you'd assume that the application
+ might want to go back etc.
+ <youpi> yes, but you can help it
+ <braunr> yes
+ <youpi> instead of making other pages compete with it
+ <braunr> but then, it's for precious pages
+    <youpi> I have to say I don't know what a precious page is
+ <youpi> does it mean dirty pages?
+ <braunr> no
+ <braunr> precious means cached pages
+ <braunr> "If precious is FALSE, the kernel treats the data as a temporary
+ and may throw it away if it hasn't been changed. If the precious value is
+ TRUE, the kernel treats its copy as a data repository and promises to
+ return it to the manager; the manager may tell the kernel to throw it
+ away instead by flushing and not cleaning the data"
+ <braunr> hm no
+ <braunr> precious means the kernel must keep it
+    <mcsim> youpi: Regarding vm_for_every_page: what kind of flag do you
+      propose? If the object is internal, I suppose we should not cross
+      the bounds of the object, setting in_end appropriately in
+      vm_calculate_clusters.
+    <mcsim> If the object is external we don't know its actual size, so we
+      should make a mo request first. And for this we should create
+      fictitious pages.
+ <braunr> mcsim: but how would you implement this "cleaning" with sequential
+ ?
+ <youpi> mcsim: ah, ok, I thought you were allocating memory, but it's just
+ fictitious pages
+ <youpi> comment "Allocate a new page" should be fixed :)
+    <mcsim> braunr: I don't know how I will implement this specifically
+      (haven't tried yet), but I don't think that this is impossible
+ <youpi> braunr: anyway it's useful as an example where normal and
+ sequential would be different
+ <braunr> if it can be done simply
+ <braunr> because i can see more trouble than gains in there :)
+ <mcsim> braunr: ok :)
+ <braunr> mcsim: hm also, why fictitious pages ?
+ <braunr> fictitious pages should normally be used only when dealing with
+ memory mapped physically which is not real physical memory, e.g. device
+ memory
+    <mcsim> but a vm_fault could occur when the object represents some
+      device memory.
+ <braunr> that's exactly why there are fictitious pages
+    <mcsim> at the moment a fictitious page is allocated it is not known
+      what the backing store of the object is.
+ <braunr> really ?
+ <braunr> damn, i've got used to UVM too much :/
+ <mcsim> braunr: I said something wrong?
+ <braunr> no no
+ <braunr> it's just that sometimes, i'm confusing details about the various
+ BSD implementations i've studied
+ <braunr> out-of-gsoc-topic question: besides network drivers, do you think
+ we'll have other drivers that will run in userspace and have to implement
+ memory mapping ? like framebuffers ?
+ <braunr> or will there be a translation layer such as storeio that will
+ handle mapping ?
+ <youpi> framebuffers typically will, yes
+ <youpi> that'd be antrik's work on drm
+ <braunr> hmm
+ <braunr> ok
+ <youpi> mcsim: so does the implementation work, and do you see performance
+ improvement?
+ <mcsim> youpi: I haven't tested it yet with large ext2 :/
+    <mcsim> youpi: I'm now going to finish moving ext2 to the new
+      interface, then the other translators in the hurd repository, and
+      then finish the memory policies in gnumach. Is that ok?
+ <youpi> which new interface?
+ <mcsim> Written by neal. I wrote some temporary code to make ext2 work with
+ it, but I'm going to change this now.
+ <youpi> you mean the old unapplied patch?
+ <mcsim> yes
+ <youpi> did you have a look at Karim's work?
+ <youpi> (I have to say I never found the time to check how it related with
+ neal's patch)
+    <mcsim> I only found his work in the kernel. I didn't see any work of
+      his on applying neal's patch.
+ <youpi> ok
+ <youpi> how do they relate with each other?
+ <youpi> (I have never actually looked at either of them :/)
+ <mcsim> his work in kernel and neal's patch?
+ <youpi> yes
+ <mcsim> They do not correlate with each other.
+ <youpi> ah, I must be misremembering what each of them do
+    <mcsim> kam's patch contained changes to support sequential reading in
+      reverse order (as in OSF/Mach), but posix does not support such
+      behavior, so I didn't implement this either.
+ <youpi> I can't find the pointer to neal's patch, do you have it off-hand?
+ <mcsim> http://comments.gmane.org/gmane.os.hurd.bugs/351
+ <youpi> thx
+ <youpi> I think we are not talking about the same patch from Karim
+ <youpi> I mean lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html
+ <mcsim> I mean this patch:
+ http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00024.html
+ <mcsim> Oh.
+ <youpi> ok
+ <mcsim> seems, this is just the same
+ <youpi> yes
+ <youpi> from a non-expert view, I would have thought these patches play
+ hand in hand, do they really?
+    <mcsim> this patch is entirely for the kernel and neal's one is
+      entirely for libpager.
+ <youpi> i.e. neal's fixes libpager, and karim's fixes the kernel
+ <mcsim> yes
+ <youpi> ending up with fixing the whole path?
+ <youpi> AIUI, karim's patch will be needed so that your increased readahead
+ will end up with clustered page request?
+ <mcsim> I will not use kam's patch
+ <youpi> is it not needed to actually get pages in together?
+ <youpi> how do you tell libpager to fetch pages together?
+ <youpi> about the cluster size, I'd say it shouldn't be specified at
+ vm_advise() level
+ <youpi> in other OSes, it is usually automatically tuned
+ <youpi> by ramping it up to a maximum readahead size (which, however, could
+ be specified)
+ <youpi> that's important for the normal policy, where there are typically
+ successive periods of sequential reads, but you don't know in advance for
+ how long
+ <mcsim> braunr said that there are legal issues with his code, so I cannot
+ use it.
+ <braunr> did i ?
+ <braunr> mcsim: can you give me a link to the code again please ?
+ <youpi> see above :)
+ <braunr> which one ?
+ <youpi> both
+ <youpi> they only differ by a typo
+ <braunr> mcsim: i don't remember saying that, do you have any link ?
+ <braunr> or log ?
+ <mcsim> sorry, can you rephrase "ending up with fixing the whole path"?
+    <mcsim> cluster_size in vm_advise could also be considered as advice
+ <braunr> no
+ <braunr> it must be the third time we're talking about this
+ <youpi> mcsim: I mean both parts would be needed to actually achieve
+ clustered i/o
+ <braunr> again, why make cluster_size a per object attribute ? :(
+ <youpi> wouldn't some objects benefit from bigger cluster sizes, while
+ others wouldn't?
+ <youpi> but again, I believe it should rather be autotuned
+ <youpi> (for each object)
+ <braunr> if we merely want posix compatibility (and for a first attempt,
+ it's quite enough), vm_advise is good, and the kernel selects the
+ implementation (and thus the cluster sizes)
+ <braunr> if we want finer grained control, perhaps a per pager cluster_size
+ would be good, although its efficiency depends on several parameters
+ <braunr> (e.g. where the page is in this cluster)
+ <braunr> but a per object cluster size is a large waste of memory
+ considering very few applications (if not none) would use the "feature"
+ ..
+ <braunr> (if any*)
+ <youpi> there must be a misunderstanding
+ <youpi> why would it be a waste of memory?
+ <braunr> "per object"
+ <youpi> so?
+ <braunr> there can be many memory objects in the kernel
+ <youpi> so?
+ <braunr> so such an overhead must be useful to accept it
+ <youpi> in my understanding, a cluster size per object is just a mere
+ integer for each object
+ <youpi> what overhead?
+ <braunr> yes
+ <youpi> don't we have just thousands of objects?
+ <braunr> for now
+ <braunr> remember we're trying to remove the page cache limit :)
+ <youpi> that still won't be more than tens of thousands of objects
+ <youpi> times an integer
+ <youpi> that's completely neglectible
+    <mcsim> braunr: Strange, I can't find it in the logs. Weird things are
+      happening in my memory :/ Sorry.
+ <braunr> mcsim: i'm almost sure i never said that :/
+ <braunr> but i don't trust my memory too much either
+ <braunr> youpi: depends
+ <youpi> mcsim: I mean both parts would be needed to actually achieve
+ clustered i/o
+    <mcsim> braunr: I made a vm_advise call that applies the policy to a
+      memory range (a vm_map_entry to be specific)
+ <braunr> mcsim: good
+ <youpi> actually the cluster size should even be per memory range
+ <mcsim> youpi: In this sense, yes
+ <youpi> k
+ <mcsim> sorry, Internet connection lags
+ <braunr> when changing a structure used to create many objects, keep in
+ mind one thing
+ <braunr> if its size gets larger than a threshold (currently, powers of
+ two), the cache used by the slab allocator will allocate twice the
+ necessary amount
+ <youpi> sure
+ <braunr> this is the case with most object caching allocators, although
+ some can have specific caches for common sizes such as 96k which aren't
+ powers of two
+ <braunr> anyway, an integer is negligible, but the final structure size
+ must be checked
+ <braunr> (for both 32 and 64 bits)
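+
+(As a hedged illustration of the size-class concern just described, with a
+hypothetical helper: under power-of-two rounding, a structure that grows
+slightly past a boundary nearly doubles its per-object cost.)
+
+    /* Memory actually consumed per object in a power-of-two object
+       cache: e.g. a 68-byte structure costs 128 bytes per allocation.  */
+    static unsigned long
+    p2_alloc_size (unsigned long size)
+    {
+        unsigned long p2 = 1;
+
+        while (p2 < size)
+            p2 <<= 1;
+
+        return p2;
+    }
+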
+ <mcsim> braunr: ok.
+ <mcsim> But I didn't understand what should be done with cluster size in
+ vm_advise? Should I delete it?
+ <braunr> to me, the cluster size is a pager property
+ <youpi> to me, the cluster size is a map property
+ <braunr> whereas vm_advise indicates what applications want
+    <youpi> you could have several processes accessing the same file in
+      different ways
+ <braunr> youpi: that's why there is a policy
+ <youpi> isn't cluster_size part of the policy?
+ <braunr> but if the pager abilities are limited, it won't change much
+ <braunr> i'm not sure
+ <youpi> cluster_size is the amount of readahead, isn't it?
+ <braunr> no, it's the amount of data in a single transfer
+ <mcsim> Yes, it is.
+ <braunr> ok, i'll have to check your code
+ <youpi> shouldn't transfers permit unbound amounts of data?
+    <mcsim> braunr: then I misunderstand what readahead is
+ <braunr> well then cluster size is per policy :)
+ <braunr> e.g. random => 0, normal => 3, sequential => 15
+ <braunr> why make it per map entry ?
+    <youpi> because it depends on what the application does
+ <braunr> let me check the code
+ <youpi> if it's accessing randomly, no need for big transfers
+ <youpi> just page transfers will be fine
+ <youpi> if accessing sequentially, rather use whole MiB of transfers
+ <youpi> and these behavior can be for the same file
+    <braunr> mcsim: the call is vm_advi*s*e
+    <braunr> not advice
+ <youpi> yes, he agreed earlier
+ <braunr> ok
+    <mcsim> cluster_size is the amount of data that I try to read at one
+      time, at a single mo_data_request
+ <youpi> which, to me, will depend on the actual map
+ <braunr> ok so it is the transfer size
+ <youpi> and should be autotuned, especially for normal behavior
+ <braunr> youpi: it makes no sense to have both the advice and the actual
+ size per map entry
+ <youpi> to get big readahead with all apps
+ <youpi> braunr: the size is not only dependent on the advice, but also on
+ the application behavior
+    <braunr> youpi: how does the application tell this ?
+ <youpi> even for sequential, you shouldn't necessarily use very big amounts
+ of transfers
+ <braunr> there is no need for the advice if there is a cluster size
+ <youpi> there can be, in the case of sequential, as we said, to clear
+ previous pages
+ <youpi> but otherwise, indeed
+ <youpi> but for me it's the converse
+ <youpi> the cluster size should be tuned anyway
+ <braunr> and i'm against giving the cluster size in the advise call, as we
+ may want to prefetch previous data as well
+ <youpi> I don't see how that collides
+ <braunr> well, if you consider it's the transfer size, it doesn't
+ <youpi> to me cluster size is just the size of a window
+ <braunr> if you consider it's the amount of pages following a faulted page,
+ it will
+ <braunr> also, if your policy says e.g. "3 pages before, 10 after", and
+ your cluster size is 2, what happens ?
+ <braunr> i would find it much simpler to do what other VM variants do:
+ compute the I/O sizes directly from the policy
+ <youpi> don't they autotune, and use the policy as a maximum ?
+ <braunr> depends on the implementations
+ <youpi> ok, but yes I agree
+ <youpi> although casting the size into stone in the policy looks bogus to
+ me
+ <braunr> but making cluster_size part of the kernel interface looks way too
+ messy
+ <braunr> it is
+ <braunr> that's why i would have thought it as part of the pager properties
+ <braunr> the pager is the true component besides the kernel that is
+ actually involved in paging ...
+ <youpi> well, for me the flexibility should still be per application
+ <youpi> by pager you mean the whole pager, not each file, right?
+ <braunr> if a pager can page more because e.g. it's a file system with big
+ block sizes, why not fetch more ?
+ <braunr> yes
+ <braunr> it could be each file
+ <braunr> but only if we have use for it
+ <braunr> and i don't see that currently
+ <youpi> well, posix currently doesn't provide a way to set it
+ <youpi> so it would be useless atm
+ <braunr> i was thinking about our hurd pagers
+ <youpi> could we perhaps say that the policy maximum could be a fraction of
+ available memory?
+ <braunr> why would we want that ?
+ <youpi> (total memory, I mean)
+ <youpi> to make it not completely cast into stone
+    <youpi> as has been the case in the past in gnumach
+ <braunr> i fail to understand :/
+ <youpi> there must be a misunderstanding then
+ <youpi> (pun not intended)
+ <braunr> why do you want to limit the policy maximum ?
+ <youpi> how to decide it?
+ <braunr> the pager sets it
+ <youpi> actually I don't see how a pager could decide it
+ <youpi> on what ground does it make the decision?
+ <youpi> readahead should ideally be as much as 1MiB
+ <braunr> 02:02 < braunr> if a pager can page more because e.g. it's a file
+ system with big block sizes, why not fetch more ?
+ <braunr> is the example i have in mind
+ <braunr> otherwise some default values
+ <youpi> that's way smaller than 1MiB, isn't it?
+ <braunr> yes
+ <braunr> and 1 MiB seems a lot to me :)
+ <youpi> for readahead, not really
+ <braunr> maybe for sequential
+ <youpi> that's what we care about!
+ <braunr> ah, i thought we cared about normal
+ <youpi> "as much as 1MiB", I said
+ <youpi> I don't mean normal :)
+ <braunr> right
+ <braunr> but again, why limit ?
+ <braunr> we could have 2 or more ?
+ <youpi> at some point you don't get more efficiency
+ <youpi> but eat more memory
+ <braunr> having the pager set the amount allows us to easily adjust it over
+ time
+ <mcsim> braunr: Do you think that readahead should be implemented in
+ libpager?
+ <youpi> than needed
+ <braunr> mcsim: no
+ <braunr> mcsim: err
+ <braunr> mcsim: can't answer
+    <youpi> mcsim: did you read the log of what you missed during the
+      disconnection?
+ <braunr> i'm not sure about what libpager does actually
+ <mcsim> yes
+ <braunr> for me it's just mutualisation of code used by pagers
+ <braunr> i don't know the details
+ <braunr> youpi: yes
+ <braunr> youpi: that's why we want these values not hardcoded in the kernel
+ <braunr> youpi: so that they can be adjusted by our shiny user space OS
+ <youpi> (btw apparently linux uses minimum 16k, maximum 128 or 256k)
+ <braunr> that's more reasonable
+ <youpi> that's just 4 times less :)
+    <mcsim> braunr: You say that the pager should decide how much data
+      should be read ahead, but each pager can't implement it on its own
+      as there would be too much overhead. So the only way is to implement
+      this in libpager.
+ <braunr> mcsim: gni ?
+ <braunr> why couldn't they ?
+ <youpi> mcsim: he means the size, not the actual implementation
+ <youpi> the maximum size, actually
+ <braunr> actually, i would imagine it as the pager giving per policy
+ parameters
+ <youpi> right
+ <braunr> like how many before and after
+ <youpi> I agree, then
+ <braunr> the kernel could limit, sure, to avoid letting pagers use
+ completely insane values
+ <youpi> (and that's just a max, the kernel autotunes below that)
+ <braunr> why not
+ <youpi> that kernel limit could be a fraction of memory, then?
+ <braunr> it could, yes
+ <braunr> i see what you mean now
+ <youpi> mcsim: did you understand our discussion?
+ <youpi> don't hesitate to ask for clarification
+    <mcsim> I supposed cluster_size to be such a parameter. And the advice
+      will help to interpret this parameter (whether data should be read
+      after the fault page or some data should be cleaned before)
+ <youpi> mcsim: we however believe that it's rather the pager than the
+ application that would tell that
+ <youpi> at least for the default values
+ <youpi> posix doesn't have a way to specify it, and I don't think it will
+ in the future
+ <braunr> and i don't think our own hurd-specific programs will need more
+ than that
+ <braunr> if they do, we can slightly change the interface to make it a per
+ object property
+ <braunr> i've checked the slab properties, and it seems we can safely add
+ it per object
+ <braunr> cf http://www.sceen.net/~rbraun/slabinfo.out
+    <braunr> so it would still be set by the pager, but depending on the
+      object, the pager could set different values
+ <braunr> youpi: do you think the pager should just provide one maximum size
+ ? or per policy sizes ?
+ <youpi> I'd say per policy size
+ <youpi> so people can increase sequential size like crazy when they know
+ their sequential applications need it, without disturbing the normal
+ behavior
+ <braunr> right
+ <braunr> so the last decision is per pager or per object
+ <braunr> mcsim: i'd say whatever makes your implementation simpler :)
+    <mcsim> braunr: how does the kernel know that objects are created by a
+      specific pager?
+ <braunr> that's the kind of things i'm referring to with "whatever makes
+ your implementation simpler"
+    <braunr> but vm_objects have an ipc port and some properties related
+      to their pagers
+ <braunr> the problem i had in mind was the locking protocol but our spin
+ locks are noops, so it will be difficult to detect deadlocks
+ <mcsim> braunr: and for every policy there should be variable in vm_object
+ structure with appropriate cluster_size?
+ <braunr> if you want it per object, yes
+ <braunr> although i really don't think we want it
+ <youpi> better keep it per pager for now
+ <braunr> let's imagine youpi finishes his 64-bits support, and i can
+ successfully remove the page cache limit
+ <braunr> we'd jump from 1.8 GiB at most to potentially dozens of GiB of RAM
+ <braunr> and 1.8, mostly unused
+ <braunr> to dozens almost completely used, almost all the times for the
+ most interesting use cases
+ <braunr> we may have lots and lots of objects to keep around
+ <braunr> so if noone really uses the feature ... there is no point
+ <youpi> but also lots and lots of memory to spend on it :)
+    <youpi> a lot of objects are just one page, but a lot of them are not
+ <braunr> sure
+ <braunr> we wouldn't be doing that otherwise :)
+ <braunr> i'm just saying there is no reason to add the overhead of several
+ integers for each object if they're simply not used at all
+ <braunr> hmm, 64-bits, better page cache, clustered paging I/O :>
+ <braunr> (and readahead included in the last ofc)
+ <braunr> good night !
+    <mcsim> then, probably, make a system-global max cluster_size? This
+      will save some memory. Also there is usually no sense in reading
+      really huge chunks at once.
+ <youpi> but that'd be tedious to set
+ <youpi> there are only a few pagers, that's no wasted memory
+ <youpi> the user being able to set it for his own pager is however a very
+ nice feature, which can be very useful for databases, image processing,
+ etc.
+    <mcsim> In conclusion I have to implement the following: 3 memory
+      policies, per object and per vm_map_entry. The max cluster size for
+      every policy should be set per pager.
+ <mcsim> So, there should be 2 system calls for setting memory policy and
+ one for setting cluster sizes.
+    <mcsim> Also the amount of data to transfer should be tuned
+      automatically on every page fault.
+ <mcsim> youpi: Correct me, please, if I'm wrong.
+    <youpi> I believe that's what we ended up deciding, yes
+
+
+## IRC, freenode, #hurd, 2012-07-02
+
+ <braunr> is it safe to say that all memory objects implemented by external
+ pagers have "file" semantics ?
+ <braunr> i wonder if the current memory manager interface is suitable for
+ device pagers
+ <mcsim> braunr: What does "file" semantics mean?
+ <braunr> mcsim: anonymous memory doesn't have the same semantics as a file
+ for example
+ <braunr> anonymous memory that is discontiguous in physical memory can be
+ contiguous in swap
+ <braunr> and its location can change with time
+ <braunr> whereas with a memory object, the data exchanged with pagers is
+ identified with its offset
+ <braunr> in (probably) all other systems, this way of specifying data is
+ common to all files, whatever the file system
+ <braunr> linux uses the struct vm_file name, while in BSD/Solaris they are
+ called vnodes (the link between a file system inode and virtual memory)
+ <braunr> my question is : can we implement external device pagers with the
+ current interface, or is this interface really meant for files ?
+ <braunr> also
+ <braunr> mcsim: something about what you said yesterday
+    <braunr> 02:39 < mcsim> In conclusion I have to implement the
+      following: 3 memory policies, per object and per vm_map_entry. The
+      max cluster size for every policy should be set per pager.
+ <braunr> not per object
+ <braunr> one policy per map entry
+ <braunr> transfer parameters (pages before and after the faulted page) per
+ policy, defined by pagers
+ <braunr> 02:39 < mcsim> So, there should be 2 system calls for setting
+ memory policy and one for setting cluster sizes.
+ <braunr> adding one call for vm_advise is good because it mirrors the posix
+ call
+ <braunr> but for the parameters, i'd suggest changing an already existing
+ call
+ <braunr> not sure which one though
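+
+A minimal sketch of the direction braunr outlines here, with hypothetical
+names: the advice lives in the map entry, while the transfer parameters
+(pages before and after the faulted page) are kept per policy, with
+defaults that a pager may override.
+
+    #define VM_ADVICE_NORMAL      0
+    #define VM_ADVICE_RANDOM      1
+    #define VM_ADVICE_SEQUENTIAL  2
+
+    /* One set of transfer parameters per policy.  */
+    struct vm_advice_params {
+        unsigned int npages_before; /* pages to page in before the fault */
+        unsigned int npages_after;  /* pages to page in after the fault */
+    };
+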
+    <mcsim> braunr: do you know how mo_change_attributes is implemented in
+      OSF/Mach?
+ <braunr> after a quick reading of the reference manual, i think i
+ understand why they made it per object
+ <braunr> mcsim: no
+ <braunr> did they change the call to include those paging parameters ?
+    <mcsim> it accepts two parameters: a flavor and a pointer to a
+      structure with parameters.
+ <mcsim> flavor determines semantics of structure with parameters.
+ <mcsim>
+ http://www.darwin-development.org/cgi-bin/cvsweb/osfmk/src/mach_kernel/vm/memory_object.c?rev=1.1
+    <mcsim> the structure can have 3 different views, and which exact view
+      it will be is determined by the value of flavor
+    <mcsim> So, I thought about implementing a similar call that could be
+      used for various purposes.
+ <mcsim> like ioctl
+ <braunr> "pointer to structure with parameters" <= which one ?
+ <braunr> mcsim: don't model anything anywhere like ioctl please
+ <mcsim> memory_object_info_t attributes
+ <braunr> ioctl is the very thing we want NOT to have on the hurd
+ <braunr> ok attributes
+ <braunr> and what are the possible values of flavour, and what kinds of
+ attributes ?
+ <mcsim> and then appears something like this on each case: behave =
+ (old_memory_object_behave_info_t) attributes;
+ <braunr> ok i see
+ <mcsim> flavor could be OLD_MEMORY_OBJECT_BEHAVIOR_INFO,
+ MEMORY_OBJECT_BEHAVIOR_INFO, MEMORY_OBJECT_PERFORMANCE_INFO etc
+ <braunr> i don't really see the point of flavour here, other than
+ compatibility
+ <braunr> having attributes is nice, but you should probably add it as a
+ call parameter, not inside a structure
+ <braunr> as a general rule, we don't like passing structures too much
+ to/from the kernel, because handling them with mig isn't very clean
+ <mcsim> ok
+    <mcsim> What policy parameters should be defined by the pager?
+ <braunr> i'd say number of pages to page-in before and after the faulted
+ page
+ <mcsim> Only pages before and after the faulted page?
+ <braunr> for me yes
+ <braunr> youpi might have different things in mind
+ <braunr> the page cleaning in sequential mode is something i wouldn't do
+ <braunr> 1/ applications might want data read sequentially to remain in the
+ cache, for other sequential accesses
+ <braunr> 2/ applications that really don't want to cache anything should
+ use O_DIRECT
+ <braunr> 3/ it's complicated, and we're in july
+ <braunr> i'd rather have a correct and stable result than too many unused
+ features
+ <mcsim> braunr: MADV_SEQUENTIAL Expect page references in sequential order.
+ (Hence, pages in the given range can be aggressively read ahead, and may
+ be freed soon after they are accessed.)
+ <mcsim> this is from linux man
+    <mcsim> braunr: Can I at least keep in mind that it could be
+      implemented?
+    <mcsim> I mean in a future rpc interface
+    <mcsim> braunr: From the kernel's point of view, a pager is just a
+      port.
+    <mcsim> That's why it is not clear to me how I can implement a
+      per-pager policy in the kernel
+ <braunr> mcsim: you can't
+ <braunr> 15:19 < braunr> after a quick reading of the reference manual, i
+ think i understand why they made it per object
+ <braunr>
+ http://pubs.opengroup.org/onlinepubs/009695399/functions/posix_madvise.html
+ <braunr> POSIX_MADV_SEQUENTIAL
+ <braunr> Specifies that the application expects to access the specified
+ range sequentially from lower addresses to higher addresses.
+ <braunr> linux might free pages after their access, why not, but this is
+ entirely up to the implementation
+    <mcsim> I know, but when applications want data read sequentially to
+      remain in the cache for other sequential accesses, this kind of
+      access could rather be treated as normal or random
+ <braunr> we can do differently
+ <braunr> mcsim: no
+ <braunr> sequential means the access will be sequential
+ <braunr> so aggressive readahead (e.g. 0 pages before, many after), should
+ be used
+ <braunr> for better performance
+ <braunr> from my pov, it has nothing to do with caching
+ <braunr> i actually sometimes expect data to remain in cache
+ <braunr> e.g. before playing a movie from sshfs, i sometimes prefetch it
+ using dd
+ <braunr> then i use mplayer
+ <braunr> i'd be very disappointed if my data didn't remain in the cache :)
+    <mcsim> At least these pages could be placed into the inactive list to
+      be the first candidates for pageout.
+ <braunr> that's what will happen by default
+ <braunr> mcsim: if we need more properties for memory objects, we'll adjust
+ the call later, when we actually implement them
+    <mcsim> so, the first call is vm_advise and the second is a changed
+      mo_change_attributes?
+ <braunr> yes
+ <mcsim> there will appear 3 new parameters in mo_c_a: policy, pages before
+ and pages after?
+    <mcsim> braunr: With vm_advise I didn't understand one thing. This
+      call is defined in a defs file, so that should mean that vm_advise
+      is an ordinary rpc call. But at the same time it is defined as a
+      syscall in the mach internals (in mach_trap_table).
+ <braunr> mcsim: what ?
+    <braunr> where is it "defined" ? (it doesn't exist in gnumach
+      currently)
+    <mcsim> Ok, let's consider vm_map
+    <mcsim> It is defined both in mach_trap_table and in a defs file.
+    <mcsim> But why?
+ <braunr> uh ?
+ <braunr> let me see
+    <mcsim> Why isn't defining it in the defs file enough?
+ <mcsim> and previous question: there will appear 3 new parameters in
+ mo_c_a: policy, pages before and pages after?
+ <braunr> mcsim: give me the exact file paths please
+ <braunr> mcsim: we'll discuss the new parameters after
+ <mcsim> kern/syscall_sw.c
+ <braunr> right i see
+    <mcsim> here mach_trap_table is defined
+ <braunr> i think they're not used
+ <braunr> they were probably introduced for performance
+ <mcsim> and ./include/mach/mach.defs
+ <braunr> don't bother adding vm_advise as a syscall
+ <braunr> about the parameters, it's a bit more complicated
+ <braunr> you should add 6 parameters
+ <braunr> before and after, for the 3 policies
+ <braunr> but
+ <braunr> as seen in the posix page, there could be more policies ..
+ <braunr> ok forget what i said, it's stupid
+ <braunr> yes, the 3 parameters you had in mind are correct
+ <braunr> don't forget a "don't change" value for the policy though, so the
+ kernel ignores the before/after values if we don't want to change that
+ <mcsim> ok
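+
+A hedged sketch of the agreed change, with flat arguments rather than an
+opaque structure; the names and the "don't change" constant are
+hypothetical, and the existing arguments of the call are elided:
+
+    /* Keep the current policy; before/after values are then ignored.  */
+    #define VM_ADVICE_DONT_CHANGE -1
+
+    kern_return_t memory_object_change_attributes
+        (/* ... existing arguments ... */
+         int advice, unsigned int npages_before,
+         unsigned int npages_after);
+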
+ <braunr> mcsim: another reason i asked about "file semantics" is the way we
+ handle the cache
+ <braunr> mcsim: file semantics imply data is cached, whereas anonymous and
+ device memory usually isn't
+ <braunr> (although having the cache at the vm layer instead of the pager
+ layer allows nice things like the swap cache)
+    <mcsim> But this shouldn't affect the possibility of implementing a
+      device pager.
+ <braunr> yes it may
+ <braunr> consider how a fault is actually handled by a device
+ <braunr> mach must use weird fictitious pages for that
+ <braunr> whereas it would be better to simply let the pager handle the
+ fault as it sees fit
+ <mcsim> setting may_cache to false should resolve the issue
+ <braunr> for the caching problem, yes
+ <braunr> which is why i still think it's better to handle the cache at the
+ vm layer, unlike UVM which lets the vnode pager handle its own cache, and
+ removes the vm cache completely
+    <mcsim> The only issue I see with the pager interface is implementing
+      scatter-gather DMA (as the current interface does not support
+      non-consecutive access)
+ <braunr> right
+ <braunr> but that's a performance issue
+ <braunr> my problem with device pagers is correctness
+ <braunr> currently, i think the kernel just asks pagers for "data"
+    <braunr> whereas a device pager should really map its device memory
+      where the fault happens
+    <mcsim> braunr: You mean that every access to memory should cause a
+      page fault?
+ <mcsim> I mean mapping of device memory
+ <braunr> no
+ <braunr> i mean a fault on device mapped memory should directly access a
+ shared region
+ <braunr> whereas file pagers only implement backing store
+ <braunr> let me explain a bit more
+ <braunr> here is what happens with file mapped memory
+ <braunr> you map it, access it (some I/O is done to get the page content in
+ physical memory), then later it's flushed back
+ <braunr> whereas with device memory, there shouldn't be any I/O, the device
+ memory should directly be mapped (well, some devices need the same
+ caching behaviour, while others provide direct access)
+ <braunr> one of the obvious consequences is that, when you map device
+ memory (e.g. a framebuffer), you expect changes in your mapped memory to
+ be effective right away
+ <braunr> while with file mapped memory, you need to msync() it
+ <braunr> (some framebuffers also need to be synced, which suggests greater
+ control is needed for external pagers)
+    <mcsim> It seems that I understand you. But how is it implemented in
+      other OSes? Do they set something in the mmu?
+    <braunr> mcsim: in netbsd, pagers have a fault operation in addition
+      to get and put
+ <braunr> the device pager sets get and put to null and implements fault
+ only
+ <braunr> the fault callback then calls the d_mmap callback of the specific
+ driver
+ <braunr> which usually results in the mmu being programmed directly
+ <braunr> (e.g. pmap_enter or similar)
+ <braunr> in linux, i think raw device drivers, being implemented as
+ character device files, must provide raw read/write/mmap/etc.. functions
+ <braunr> so it looks pretty much similar
+ <braunr> i'd say our current external pager interface is insufficient for
+ device pagers
+ <braunr> but antrik may know more since he worked on ggi
+ <braunr> antrik: ^
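+
+For reference, a simplified sketch of the NetBSD arrangement braunr
+describes (types and names are illustrative; see NetBSD's uvm_pagerops
+and the drivers' d_mmap for the real interfaces):
+
+    #include <stdint.h>
+
+    struct pager_ops {
+        int (*get) (void *obj, uint64_t offset, int npages);
+        int (*put) (void *obj, uint64_t offset, int npages);
+        /* Device pagers set get/put to NULL and implement only fault,
+           whose handler maps device memory by programming the MMU
+           directly (d_mmap + pmap_enter in NetBSD).  */
+        int (*fault) (void *obj, uintptr_t vaddr, uint64_t offset);
+    };
+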
+ <mcsim> braunr: Seems he used io_map
+    <braunr> mcsim: where are you looking ? the incubator ?
+ <mcsim> his master's thesis
+ <braunr> ah the thesis
+ <braunr> but where ? :)
+ <mcsim> I'll give you a link
+ <mcsim> http://dl.dropbox.com/u/36519904/kgi_on_hurd.pdf
+ <braunr> thanks
+ <mcsim> see p 158
+ <braunr> arg, more than 200 pages, and he says he's lazy :/
+ <braunr> mcsim: btw, have a look at m_o_ready
+    <mcsim> braunr: This is the old form of mo_change_attributes
+ <mcsim> I'm not going to change it
+ <braunr> mcsim: these are actually the default object parameters right ?
+ <braunr> mcsim: if you don't change it, it means the kernel must set
+ default values until the pager changes them, if it does
+ <mcsim> yes.
+ <antrik> mcsim: madvise() on Linux has a separate flag to indicate that
+ pages won't be reused. thus I think it would *not* be a good idea to
+ imply it in SEQUENTIAL
+ <antrik> braunr: yes, my KMS code relies on mapping memory objects for the
+ framebuffer
+ <antrik> (it should be noted though that on "modern" hardware, mapping
+ graphics memory directly usually gives very poor performance, and drivers
+ tend to avoid it...)
+ <antrik> mcsim: BTW, it was most likely me who warned about legal issues
+ with KAM's work. AFAIK he never managed to get the copyright assignment
+ done :-(
+ <antrik> (that's not really mandatory for the gnumach work though... only
+ for the Hurd userspace parts)
+ <antrik> also I'd like to point out again that the cluster_size argument
+ from OSF Mach was probably *not* meant for advice from application
+ programs, but rather was supposed to reflect the cluster size of the
+ filesystem in question. at least that sounds much more plausible to me...
+    <antrik> braunr: I have no idea what you mean by "device pager". device
+ memory is mapped once when the VM mapping is established; there is no
+ need for any fault handling...
+    <antrik> mcsim: to be clear, I think the cluster_size parameter is
+      mostly orthogonal to policy... and probably not very useful at all,
+      as ext2 almost always uses page-sized clusters. I strongly advise
+      against bothering with it in the initial implementation
+ <antrik> mcsim: to avoid confusion, better use a completely different name
+ for the policy-decided readahead size
+ <mcsim> antrik: ok
+ <antrik> braunr: well, yes, the thesis report turned out HUGE; but the
+ actual work I did on the KGI port is fairly tiny (not more than a few
+ weeks of actual hacking... everything else was just brooding)
+ <antrik> braunr: more importantly, it's pretty much the last (and only
+ non-trivial) work I did on the Hurd :-(
+ <antrik> (also, I don't think I used the word "lazy"... my problem is not
+ laziness per se; but rather inability to motivate myself to do anything
+ not providing near-instant gratification...)
+ <braunr> antrik: right
+ <braunr> antrik: i shouldn't consider myself lazy either
+ <braunr> mcsim: i agree with antrik, as i told you weeks ago
+ <braunr> about
+    <braunr> 21:45 < antrik> mcsim: to be clear, I think the cluster_size
+      parameter is mostly orthogonal to policy... and probably not very
+      useful at all, as ext2 almost always uses page-sized clusters. I
+      strongly advise against bothering with it
+    <braunr> in the initial implementation
+ <braunr> antrik: but how do you actually map device memory ?
+ <braunr> also, strangely enough, here is the comment in dragonflys
+ madvise(2)
+ <braunr> MADV_SEQUENTIAL Causes the VM system to depress the priority of
+ pages immediately preceding a given page when it is faulted in.
+ <antrik> braunr: interesting...
+ <antrik> (about SEQUENTIAL on dragonfly)
+ <antrik> as for mapping device memory, I just use to device_map() on the
+ mem device to map the physical address space into a memory object, and
+ then through vm_map into the driver (and sometimes application) address
+ space
+ <antrik> formally, there *is* a pager involved of course (implemented
+ in-kernel by the mem device), but it doesn't really do anything
+ interesting
+ <antrik> thinking about it, there *might* actually be page faults involved
+ when the address ranges are first accessed... but even then, the handling
+ is really trivial and not terribly interesting
+ <braunr> antrik: it does the most interesting part, create the physical
+ mapping
+ <braunr> and as trivial as it is, it requires a special interface
+ <braunr> i'll read about device_map again
+ <braunr> but yes, the fact that it's in-kernel is what solves the problem
+ here
+ <braunr> what i'm interested in is to do it outside the kernel :)
+ <antrik> why would you want to do that?
+ <antrik> there is no policy involved in doing an MMIO mapping
+ <antrik> you ask for the pysical memory region you are interested in, and
+ that's it
+ <antrik> whether the kernel adds the page table entries immediately or on
+ faults is really an implementation detail
+ <antrik> braunr: ^
+ <braunr> yes it's a detail
+ <braunr> but do we currently have the interface to make such mappings from
+ userspace ?
+ <braunr> and i want to do that because i'd like as many drivers as possible
+ outside the kernel of course
+ <antrik> again, the userspace driver asks the kernel to establish the
+ mapping (through device_map() and then vm_map() on the resulting memory
+ object)
+ <braunr> hm i'm missing something
+ <braunr>
+ http://www.gnu.org/software/hurd/gnumach-doc/Device-Map.html#Device-Map
+ <= this one ?
+ <antrik> yes, this one
+ <braunr> but this implies the device is implemented by the kernel
+ <antrik> the mem device is, yes
+ <antrik> but that's not a driver
+ <braunr> ah
+ <antrik> it's just the interface for doing MMIO
+ <antrik> (well, any physical mapping... but MMIO is probably the only real
+ use case for that)
+ <braunr> ok
+ <braunr> i was thinking about completely removing the device interface from
+ the kernel actually
+ <braunr> but it makes sense to have such devices there
+ <antrik> well, in theory, specific kernel drivers can expose their own
+ device_map() -- but IIRC the only one that does (besides mem of course)
+ is maptime -- which is not a real driver either...
+ <braunr> oh btw, i didn't know you had a blog :)
+ <antrik> well, it would be possible to replace the device interface by
+ specific interfaces for the generic pseudo devices... I'm not sure how
+ useful that would be
+ <braunr> there are lots of interesting stuff there
+ <antrik> hehe... another failure ;-)
+ <braunr> failure ?
+    <antrik> well, when I realized that I'm spending a lot of time
+      pondering things, and never can get myself to actually implement any
+      of them, I had the idea that if I write them down, there might at
+      least be *some* good from it...
+ <antrik> unfortunately it turned out that I need so much effort to write
+ things down, that most of the time I can't get myself to do that either
+ :-(
+ <braunr> i see
+ <braunr> well it's still nice to have it
+ <antrik> (notice that the latest entry is two years old... and I haven't
+ even started describing most of my central ideas :-( )
+ <braunr> antrik: i tried to create a blog once, and found what i wrote so
+ stupid i immediately removed it
+ <antrik> hehe
+ <antrik> actually some of my entries seem silly in retrospect as well...
+ <antrik> but I guess that's just the way it is ;-)
+ <braunr> :)
+ <braunr> i'm almost sure other people would be interested in what i had to
+ say
+ <antrik> BTW, I'm actually not sure whether the Mach interfaces are
+ sufficient to implement GEM/TTM... we would certainly need kernel support
+ for GART (as for any other kind IOMMU in fact); but beyond that it's not
+ clear to me
+ <braunr> GEM ? TTM ? GART ?
+ <antrik> GEM = Graphics Execution Manager. part of the "new" DRM interface,
+ closely tied with KMS
+ <antrik> TTM = Translation Table Manager. does part of the background work
+ for most of the GEM drivers
+ <braunr> "The Graphics Execution Manager (GEM) is a computer software
+ system developed by Intel to do memory management for device drivers for
+ graphics chipsets." hmm
+    <antrik> (in fact it was originally meant to provide the actual
+      interface; but the Intel folks decided that it's not useful for
+      their UMA graphics)
+    <antrik> GART = Graphics Address Remapping Table
+ <antrik> kind of an IOMMU for graphics cards
+ <antrik> allowing the graphics card to work with virtual mappings of main
+ memory
+ <antrik> (i.e. allowing safe DMA)
+ <braunr> ok
+ <braunr> all this graphics stuff looks so complex :/
+ <antrik> it is
+ <antrik> I have a whole big chapter on that in my thesis... and I'm not
+ even sure I got everything right
+ <braunr> what is nvidia using/doing (except for getting the finger) ?
+ <antrik> flushing out all the details for KMS, GEM etc. took the developers
+ like two years (even longer if counting the history of TTM)
+    <antrik> Nvidia's proprietary stuff uses its own kernel interface,
+      which is of course not exposed or documented in any way... but I
+      guess it's actually similar in what it does)
+ <braunr> ok
+ <antrik> (you could ask the nouveau guys if you are truly
+ interested... they are doing most of their reverse engineering at the
+ kernel interface level)
+ <braunr> it seems graphics have very special needs, and a lot of them
+ <braunr> and the interfaces are changing often
+ <braunr> so it's not that much interesting currently
+ <braunr> it just means we'll probably have to change the mach interface too
+ <braunr> like you said
+ <braunr> so the answer to my question, which was something like "do mach
+ external pagers only implement files ?", is likely yes
+ <antrik> well, KMS/GEM had reached some stability; but now there are
+ further changes ahead with the embedded folks coming in with all their
+ dedicated hardware, calling for unified buffer management across the
+ whole pipeline (from capture to output)
+ <antrik> and yes: graphics hardware tends to be much more complex regarding
+ the interface than any other hardware. that's because it's a combination
+ of actual I/O (like most other devices) with a very powerful coprocessor
+    <antrik> and the coprocessor part is pretty much unique amongst peripheral
+ devices
+ <antrik> (actually, the I/O part is also much more complex than most other
+ hardware... but that alone would only require a more complex driver, not
+ special interfaces)
+ <antrik> embedded hardware makes it more interesting in that the I/O
+ part(s) are separate from the coprocessor ones; and that there are often
+ several separate specialised ones of each... the DRM/KMS stuff is not
+ prepared to deal with this
+    <antrik> v4l over time has evolved to cover such things; but it's not
+      really the right place to implement graphics drivers... which is why
+      there are now efforts to unify these frameworks. funny times...
+
+
+## IRC, freenode, #hurd, 2012-07-03
+
+ <braunr> mcsim: vm_for_every_page should be static
+ <mcsim> braunr: ok
+ <braunr> mcsim: see http://gcc.gnu.org/onlinedocs/gcc/Inline.html
+ <braunr> and it looks big enough that you shouldn't make it inline
+ <braunr> let the compiler decide for you (which is possible only if the
+ function is static)
+ <braunr> (otherwise a global symbol needs to exist)
+    <braunr> mcsim: i don't know where you copied that comment from, but
+      you should review the description of the vm_advise call in mach.defs
+ <mcsim> braunr: I see
+ <mcsim> braunr: It was vm_inherit :)
+ <braunr> mcsim: why isn't NORMAL defined in vm_advise.h ?
+ <braunr> mcsim: i figured actually ;)
+    <mcsim> braunr: I was going to do it later.
+ <braunr> mcsim: for more info on inline, see
+ http://www.kernel.org/doc/Documentation/CodingStyle
+ <braunr> arg that's an old one
+ <mcsim> braunr: I know that I do not follow coding style
+ <braunr> mcsim: this one is about linux :p
+ <braunr> mcsim: http://lxr.linux.no/linux/Documentation/CodingStyle should
+ have it
+ <braunr> mcsim: "Chapter 15: The inline disease"
+    <mcsim> I was going to fix it later during refactoring, when I merge
+      mplaneta/gsoc12/working into mplaneta/gsoc12/master
+ <braunr> be sure not to forget :p
+    <braunr> and the best way not to forget is to do it asap
+    <mcsim> As to inline: I thought that even if I specify a function as
+      inline, gcc makes the final decision about it.
+    <mcsim> There was a specifier that made a function always inline,
+      AFAIR.
+ <braunr> gcc can force a function not to be inline, yes
+ <braunr> but inline is still considered as a strong hint
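+
+In other words, the preferred pattern looks like this (hypothetical
+helper; the specifier mcsim remembers is GCC's always_inline attribute):
+
+    /* Preferred: static, without the inline keyword.  The compiler
+       inlines it on its own when profitable, and no global symbol is
+       required.  Forced inlining is available through
+       __attribute__ ((always_inline)), but is rarely a good idea.  */
+    static int
+    page_in_range (unsigned long addr, unsigned long start,
+                   unsigned long end)
+    {
+        return addr >= start && addr < end;
+    }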
+
+
+## IRC, freenode, #hurd, 2012-07-05
+
+    <mcsim1> braunr: hello. You've said that the pager has to supply 2
+      values to the kernel to advise it how to execute a page fault.
+      These two values should be the number of pages before and after the
+      page where the fault occurred. But for the sequential policy the
+      number of pages before makes no sense. For the random policy too.
+      For the normal policy it would be sane to make readahead symmetric.
+      Probably it would be sane to make the pager supply a cluster_size
+      (if it is necessary to supply any) that w
+    <mcsim1> *that will be an advice for the kernel of the least sane
+      value? And the maximal value will be f(free_memory, map_entry_size)?
+    <antrik> mcsim1: I doubt symmetric readahead would be a good default
+      policy... while it's hard to estimate an optimum over all typical
+      use cases, I'm pretty sure most situations will benefit almost
+      exclusively from reading following pages, not preceding ones
+ <antrik> I'm not even sure it's useful to read preceding pages at all in
+ the default policy -- the use cases are probably so rare that the penalty
+ in all other use cases is not justified. I might be wrong on that
+ though...
+ <antrik> I wonder how other systems handle that
+ <LarstiQ> antrik: if there is a mismatch between pages and the underlying
+ store, like why changing small bits of data on an ssd is slow?
+ <braunr> mcsim1: i don't see why not
+ <braunr> antrik: netbsd reads a few pages before too
+    <braunr> actually, what netbsd does varies with the version: some only
+      mapped in resident pages, later versions started asynchronous
+      transfers in the hope those pages would be there
+ <antrik> LarstiQ: not sure what you are trying to say
+ <braunr> in linux :
+ <braunr> 321 * MADV_NORMAL - the default behavior is to read clusters.
+ This
+ <braunr> 322 * results in some read-ahead and read-behind.
+ <braunr> not sure if it's actually what the implementation does
+ <antrik> well, right -- it's probably always useful to read whole clusters
+ at a time, especially if they are the same size as pages... that doesn't
+ mean it always reads preceding pages; only if the read is in the middle
+ of the cluster AIUI
+ <LarstiQ> antrik: basically what braunr just pasted
+ <antrik> and in most cases, we will want to read some *following* clusters
+ as well, but probably not preceding ones
+ * LarstiQ nods
+ <braunr> antrik: the default policy is usually rather sequential
+ <braunr> here are the numbers for netbsd
+ <braunr> 166 static struct uvm_advice uvmadvice[] = {
+ <braunr> 167 { MADV_NORMAL, 3, 4 },
+ <braunr> 168 { MADV_RANDOM, 0, 0 },
+ <braunr> 169 { MADV_SEQUENTIAL, 8, 7},
+ <braunr> 170 };
+ <braunr> struct uvm_advice {
+ <braunr> int advice;
+ <braunr> int nback;
+ <braunr> int nforw;
+ <braunr> };
+ <braunr> surprising isn't it ?
+ <braunr> they may suggest sequential may be backwards too
+ <braunr> makes sense
+ <antrik> braunr: what are these numbers? pages?
+ <braunr> yes
+ <antrik> braunr: I suspect the idea behind SEQUENTIAL is that with typical
+ sequential access patterns, you will start at one end of the file, and
+ then go towards the other end -- so the extra clusters in the "wrong"
+ direction do not actually come into play
+ <antrik> only situation where some extra clusters are actually read is when
+ you start in the middle of a file, and thus do not know yet in which
+ direction the sequential read will go...
+ <braunr> yes, there are similar comments in the linux code
+    <braunr> mcsim1: so having before and after numbers seems both
+      straightforward and on par with other implementations
+ <antrik> I'm still surprised about the almost symmetrical policy for NORMAL
+ though
+    <antrik> BTW, is it common to use heuristics for automatically
+      recognizing random and sequential patterns in the absence of
+      explicit madvise?
+ <braunr> i don't know
+ <braunr> netbsd doesn't use any, linux seems to have different behaviours
+ for anonymous and file memory
+ <antrik> when KAM was working on this stuff, someone suggested that...
+ <braunr> there is a file_ra_state struct in linux, for per file read-ahead
+ policy
+ <braunr> now the structure is of course per file system, since they all use
+ the same address
+ <braunr> (which is why i wanted it to be per pager in the first place)
+ <antrik> mcsim1: as I said before, it might be useful for the pager to
+ supply cluster size, if it's different than page size. but right now I
+ don't think this is something worth bothering with...
+ <antrik> I seriously doubt it would be useful for the pager to supply any
+ other kind of policy
+ <antrik> braunr: I don't understand your remark about using the same
+ address...
+ <antrik> braunr: pre-mapping seems the obvious way to implement readahead
+ policy
+ <antrik> err... per-mapping
+ <braunr> the ra_state (read ahead state) isn't the policy
+ <braunr> the policy is per mapping, parts of the implementation of the
+ policy is per file system
+    <mcsim1> braunr: How do you feel about the following implementation
+      of the NORMAL policy: We have the fault page that is current. Then
+      we have a maximal size of the readahead block. First we find the
+      first absent pages before and after the current one. Then we try to
+      fit the block that will be read ahead into this range. There could
+      be the following situations: if in the range RBS/2 (RBS -- size of
+      the readahead block) there is no page at all, readahead will be
+      symmetric; if the current page is the first absent page then all
+    <mcsim1> the RBS block will consist of pages that are after the
+      current one; on the contrary, if the current page is the last
+      absent one, readahead will go backwards.
+    <mcsim1> Additionally, if the current page is approximately in the
+      middle of the range, we can decrease the RBS, supposing that the
+      access is random.
+ <braunr> mcsim1: i think your gsoc project is about readahead, we're in
+ july, and you need to get the job done
+ <braunr> mcsim1: grab one policy that works, pages before and after are
+ good enough
+ <braunr> use sane default values, let the pagers decide if they want
+ something else
+ <braunr> and concentrate on the real work now
+ <antrik> braunr: I still don't see why pagers should mess with that... only
+ complicates matters IMHO
+ <braunr> antrik: probably, since they almost all use the default
+ implementation
+ <braunr> mcsim1: just use sane values inside the kernel :p
+ <braunr> this simplifies things by only adding the new vm_advise call and
+ not change the existing external pager interface
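+
+As a closing sketch of the simple scheme converged on here, the kernel
+could keep fixed per-policy windows (the defaults below are the NetBSD
+numbers quoted earlier) and clamp them to the faulted map entry; all
+names are hypothetical:
+
+    struct vm_advice_params { int nback, nforw; };
+
+    static const struct vm_advice_params vm_advice_defaults[] = {
+        { 3, 4 },   /* normal */
+        { 0, 0 },   /* random */
+        { 8, 7 },   /* sequential */
+    };
+
+    /* Compute the page range [*first, *last] to page in around
+       FAULT_PAGE, staying within [entry_first, entry_last].  */
+    static void
+    readahead_window (int advice, unsigned long fault_page,
+                      unsigned long entry_first, unsigned long entry_last,
+                      unsigned long *first, unsigned long *last)
+    {
+        const struct vm_advice_params *p = &vm_advice_defaults[advice];
+
+        *first = (fault_page - entry_first < (unsigned long) p->nback)
+                 ? entry_first : fault_page - p->nback;
+        *last = (entry_last - fault_page < (unsigned long) p->nforw)
+                ? entry_last : fault_page + p->nforw;
+    }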
diff --git a/open_issues/pfinet_vs_system_time_changes.mdwn b/open_issues/pfinet_vs_system_time_changes.mdwn
index 513cbc73..46705047 100644
--- a/open_issues/pfinet_vs_system_time_changes.mdwn
+++ b/open_issues/pfinet_vs_system_time_changes.mdwn
@@ -1,4 +1,5 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation,
+Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -58,3 +59,24 @@ IRC, freenode, #hurd, 2011-10-27:
<antrik> it's really fascinating that only the pfinet on the Hurd instance
where I set the date is affected, and not the pfinet in the other
instance
+
+IRC, freenode, #hurd, 2012-06-28:
+
+ <bddebian> great, now setting the date/time fucked my machine
+ <pinotree> yes, we lack a monotonic clock
+ <pinotree> there are select() loops that use gettimeofday to determine how
+ much time to wait
+ <pinotree> thus if the time changes (eg goes back), the calculation goes
+ crazy
+ <antrik> pinotree: didn't you implement a monotonic clock?...
+ <pinotree> started to
+ <antrik> bddebian: did it really fuck the machine? normally it only resets
+ TCP connections...
+    <pinotree> yeah, i remember such gettimeofday-based select-loops at
+      least in pfinet
+ <antrik> I don't think it's a loop. it just drops the connections,
+ believing they have timed out
+ <bddebian> antrik: Well in this case I don't know because I am at work but
+ it fucked me because I now cannot get to it.. :)
+ <antrik> bddebian: that's odd... you should be able to just log in again
+ IIRC
diff --git a/open_issues/qemu_writeback.mdwn b/open_issues/qemu_writeback.mdwn
new file mode 100644
index 00000000..ab881705
--- /dev/null
+++ b/open_issues/qemu_writeback.mdwn
@@ -0,0 +1,18 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_documentation]]
+
+
+# IRC, freenode, #hurdfr, 2012-07-01
+
+    <braunr> replace "-hda file.img" with "-drive
+      cache=writeback,index=0,media=disk,file=file.img"
+    <braunr> you'll feel the difference right away
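+
+That is, assuming file.img is the disk image:
+
+    # instead of:
+    qemu -hda file.img
+    # use:
+    qemu -drive cache=writeback,index=0,media=disk,file=file.img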
diff --git a/open_issues/strict_aliasing.mdwn b/open_issues/strict_aliasing.mdwn
new file mode 100644
index 00000000..01019372
--- /dev/null
+++ b/open_issues/strict_aliasing.mdwn
@@ -0,0 +1,21 @@
+[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_glibc open_issue_gnumach open_issue_hurd open_issue_mig]]
+
+
+# IRC, freenode, #hurd, 2012-07-04
+
+ <braunr> we should perhaps build the hurd with -fno-strict-aliasing,
+ considering the number of warnings i can see during the build :/
+ <pinotree> braunr: wouldn't be better to "just" fix the mig-generated stubs
+ instead?
+ <braunr> pinotree: if we can rely on gcc for the warnings, yes
+ <braunr> but i suspect there might be other silent issues in very old code
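+
+(A classic illustration of the kind of code these warnings point at; this
+example is hypothetical and not an excerpt from the MIG-generated stubs:)
+
+    #include <stdio.h>
+    #include <string.h>
+
+    int
+    main (void)
+    {
+        float f = 1.0f;
+
+        /* Undefined behaviour: the float object is accessed through an
+           incompatible int lvalue; gcc warns with -Wstrict-aliasing and
+           may reorder or discard such accesses when optimizing.
+           -fno-strict-aliasing makes gcc tolerate this pattern.  */
+        int i = *(int *) &f;
+
+        /* Well-defined alternative: copy the representation instead.  */
+        int j;
+        memcpy (&j, &f, sizeof j);
+
+        printf ("%d %d\n", i, j);
+        return 0;
+    }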