author    Thomas Schwinge <tschwinge@gnu.org>    2011-11-30 21:21:45 +0100
committer Thomas Schwinge <tschwinge@gnu.org>    2011-11-30 21:21:45 +0100
commit    be4193108513f02439a211a92fd80e0651f6721b (patch)
tree      a8fa187c9a6d4ba806a1b7799fa82f712f667c4e /open_issues
parent    be49aa7ddec52e121d562e14d4d93fd301b05fbb (diff)
IRC.
Diffstat (limited to 'open_issues')
-rw-r--r--  open_issues/anatomy_of_a_hurd_system.mdwn          |  28
-rw-r--r--  open_issues/ext2fs_page_cache_swapping_leak.mdwn   |  88
-rw-r--r--  open_issues/gnumach_memory_management.mdwn         | 202
-rw-r--r--  open_issues/libmachuser_libhurduser_rpc_stubs.mdwn |  50
-rw-r--r--  open_issues/mig_portable_rpc_declarations.mdwn     |  58
-rw-r--r--  open_issues/mission_statement.mdwn                 |  12
-rw-r--r--  open_issues/page_cache.mdwn                        |  73
-rw-r--r--  open_issues/perl.mdwn                              |  50
-rw-r--r--  open_issues/robustness.mdwn                        |  64
-rw-r--r--  open_issues/syslog.mdwn                            |  27
-rw-r--r--  open_issues/translator_stdout_stderr.mdwn          |  32
11 files changed, 680 insertions, 4 deletions
diff --git a/open_issues/anatomy_of_a_hurd_system.mdwn b/open_issues/anatomy_of_a_hurd_system.mdwn
index 46526641..13599e19 100644
--- a/open_issues/anatomy_of_a_hurd_system.mdwn
+++ b/open_issues/anatomy_of_a_hurd_system.mdwn
@@ -87,7 +87,7 @@ RPC stubs.
More stuff like [[hurd/IO_path]].
---
+---
IRC, freenode, #hurd, 2011-10-18:
@@ -96,3 +96,29 @@ IRC, freenode, #hurd, 2011-10-18:
<antrik> short version: grub loads mach, ext2, and ld.so/exec; mach starts
ext2; ext2 starts exec; ext2 execs a few other servers; ext2 execs
init. from there on, it's just standard UNIX stuff
+
+---
+
+IRC, OFTC, #debian-hurd, 2011-11-02:
+
+ <sekon_> is __dir_lookup a RPC ??
+ <sekon_> where can i find the source of __dir_lookup ??
+ <sekon_> grepping most gives out rvalue assignments
+ <sekon_> -assignments
+    <sekon_> but in hurd/fs.h it is used as a function ??
+ <pinotree> it should be the mig-generated function for that rpc
+ <sekon_> how do i know how its implemented ??
+    <sekon_> is there any way to delve deeper into mig-generated functions
+ <tschwinge> sekon_: The MIG-generated stuff will either be found in the
+ package's build directory (if it's building it for themselves), or in the
+ glibc build directory (libhurduser, libmachuser; which are all the
+ available user RPC stubs).
+ <tschwinge> sekon_: The implementation can be found in the various Hurd
+ servers/libraries.
+ <tschwinge> sekon_: For example, [hurd]/libdiskfs/dir-lookup.c.
+ <tschwinge> sekon_: What MIG does is provide a function call interface for
+ these ``functions'', and the Mach microkernel then dispatches the
+ invocation to the corresponding server, for example a /hurd/ext2fs file
+ system (via libdiskfs).
+ <tschwinge> sekon_: This may help a bit:
+ http://www.gnu.org/software/hurd/hurd/hurd_hacking_guide.html
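The split tschwinge describes above (a MIG-generated stub on the client side, the Mach message dispatched to the implementation inside a server such as /hurd/ext2fs via libdiskfs) can be sketched as a toy model. Everything below is invented for illustration: the message layout, the `toy_*` names, and the direct demux call that stands in for `mach_msg` and the kernel's actual message delivery.

```c
/* Toy model of MIG-style RPC dispatch.  This is NOT the real Mach
 * message format or the real __dir_lookup stub; it only shows how a
 * client stub, a demuxer, and a server implementation relate. */
#include <string.h>

typedef struct {
    int  msg_id;     /* selects the remote operation */
    char path[64];   /* marshalled argument */
} toy_message_t;

enum { TOY_DIR_LOOKUP = 2001 };  /* hypothetical RPC id */

/* "Server" side: the actual implementation, which for the real RPC
 * lives in a Hurd server/library, e.g. [hurd]/libdiskfs/dir-lookup.c. */
static int toy_server_dir_lookup(const char *path)
{
    return strcmp(path, "/etc/motd") == 0 ? 0 : -1;  /* 0 = found */
}

/* The demuxer routes an incoming message to the implementation, the
 * way the server-side MIG stubs do for messages Mach delivers. */
static int toy_demux(const toy_message_t *msg)
{
    switch (msg->msg_id) {
    case TOY_DIR_LOOKUP:
        return toy_server_dir_lookup(msg->path);
    default:
        return -2;  /* unknown RPC */
    }
}

/* "Client" side: roughly what a MIG-generated stub does: marshal the
 * arguments into a message and send it (here: call the demuxer
 * directly instead of going through the microkernel). */
int toy_dir_lookup(const char *path)
{
    toy_message_t msg;
    msg.msg_id = TOY_DIR_LOOKUP;
    strncpy(msg.path, path, sizeof(msg.path) - 1);
    msg.path[sizeof(msg.path) - 1] = '\0';
    return toy_demux(&msg);
}
```

In the real system the client never sees the demuxer: the stub sends the message to a port, and Mach delivers it to whichever server holds the receive right.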
diff --git a/open_issues/ext2fs_page_cache_swapping_leak.mdwn b/open_issues/ext2fs_page_cache_swapping_leak.mdwn
index c0d0867b..075533e7 100644
--- a/open_issues/ext2fs_page_cache_swapping_leak.mdwn
+++ b/open_issues/ext2fs_page_cache_swapping_leak.mdwn
@@ -12,7 +12,10 @@ License|/fdl]]."]]"""]]
There is a [[!FF_project 272]][[!tag bounty]] on this task.
-IRC, OFTC, #debian-hurd, 2011-03-24
+[[!toc]]
+
+
+# IRC, OFTC, #debian-hurd, 2011-03-24
<youpi> I still believe we have an ext2fs page cache swapping leak, however
<youpi> as the 1.8GiB swap was full, yet the ld process was only 1.5GiB big
@@ -24,7 +27,7 @@ IRC, OFTC, #debian-hurd, 2011-03-24
<youpi> yes
<youpi> the disk content, basically :)
-IRC, freenode, #hurd, 2011-04-18
+# IRC, freenode, #hurd, 2011-04-18
<antrik> damn, a cp -a simply gobbles down swap space...
<braunr> really ?
@@ -173,3 +176,84 @@ IRC, freenode, #hurd, 2011-04-18
backing store of memory objects created from its pager
<braunr> so you can view swap as the file system for everything that isn't
an external memory object
+
+
+# IRC, freenode, #hurd, 2011-11-15
+
+ <braunr> hm, now my system got unstable
+ <braunr> swap is increasing, without any apparent reason
+ <antrik> you mean without any load?
+ <braunr> with load, yes
+ <braunr> :)
+ <antrik> well, with load is "normal"...
+ <antrik> at least for some loads
+ <braunr> i can't create memory pressure to stress reclaiming without any
+ load
+ <antrik> what load are you using?
+    <braunr> ftp mirroring
+ <antrik> hm... never tried that; but I guess it's similar to apt-get
+ <antrik> so yes, that's "normal". I talked about it several times, and also
+ wrote to the ML
+ <braunr> antrik: ok
+ <antrik> if you find out how to fix this, you are my hero ;-)
+ <braunr> arg :)
+ <antrik> I suspect it's the infamous double swapping problem; but that's
+ just a guess
+ <braunr> looks like this
+ <antrik> BTW, if you give me the exact command, I could check if I see it
+ too
+ <braunr> i use lftp (mirror -Re) from a linux git repository
+ <braunr> through sftp
+ <braunr> (lots of small files, big content)
+ <antrik> can't you just give me the exact command? I don't feel like
+ figuring it out myself
+ <braunr> antrik: cd linux-stable; lftp sftp://hurd_addr/
+ <braunr> inside lftp: mkdir linux-stable; cd linux-stable; mirror -Re
+ <braunr> hm, half of physical memory just got freed
+ <braunr> our page cache is really weird :/
+ <braunr> (i didn't delete any file when that happened)
+ <antrik> hurd_addr?
+ <braunr> ssh server ip address
+ <braunr> or name
+ <braunr> of your hurd :)
+ <antrik> I'm confused. you are mirroring *from* the Hurd box?
+ <braunr> no, to it
+ <antrik> ah, so you login via sftp and then push to it?
+ <braunr> yes
+ <braunr> fragmentation looks very fine
+ <braunr> even for the huge pv_entry cache and its 60k+ entries
+ <braunr> (and i'm running a kernel with the cpu layer enabled)
+ <braunr> git reset/status/diff/log/grep all work correctly
+ <braunr> anyway, mcsim's branch looks quite stable to me
+ <antrik> braunr: I can't reproduce the swap leak with ftp. free memory
+ idles around 6.5 k (seems to be the threshold where paging starts), and
+ swap use is constant
+ <antrik> might be because everything swappable is already present in swap
+ from previous load I guess...
+ <antrik> err... scratch that. was connected to the wrong host, silly me
+ <antrik> indeed swap gets eaten away, as expected
+ <antrik> but only if free memory actually falls below the
+ threshold. otherwise it just oscillates around a constant value, and
+ never touches swap
+ <antrik> so this seems to confirm the double swapping theory
+ <youpi> antrik: is that "double swap" theory written somewhere?
+ <youpi> (no, a quick google didn't tell me)
+
+
+## IRC, freenode, #hurd, 2011-11-16
+
+ <antrik> youpi:
+ http://lists.gnu.org/archive/html/l4-hurd/2002-06/msg00001.html talks
+ about "double paging". probably it's also the term others used for it;
+ however, the term is generally used in a completely different meaning, so
+ I guess it's not really suitable for googling either ;-)
+ <antrik> IIRC slpz (or perhaps someone else?) proposed a solution to this,
+ but I don't remember any details
+ <youpi> ok so it's the same thing I was thinking about with swap getting
+ filled
+ <youpi> my question was: is there something to release the double swap,
+ once the ext2fs pager managed to recover?
+ <antrik> apparently not
+ <antrik> the only way to free the memory seems to be terminating the FS
+ server
+ <youpi> uh :/
diff --git a/open_issues/gnumach_memory_management.mdwn b/open_issues/gnumach_memory_management.mdwn
index 9a4418c1..c9c3e64f 100644
--- a/open_issues/gnumach_memory_management.mdwn
+++ b/open_issues/gnumach_memory_management.mdwn
@@ -1810,3 +1810,205 @@ There is a [[!FF_project 266]][[!tag bounty]] on this task.
<braunr> etenil: but mcsim's work is, for one, useful because the allocator
code is much clearer, adds some debugging support, and is smp-ready
+
+
+# IRC, freenode, #hurd, 2011-11-14
+
+ <braunr> i've just realized that replacing the zone allocator removes most
+ (if not all) static limit on allocated objects
+ <braunr> as we have nothing similar to rlimits, this means kernel resources
+ are actually exhaustible
+ <braunr> and i'm not sure every allocation is cleanly handled in case of
+ memory shortage
+ <braunr> youpi: antrik: tschwinge: is this acceptable anyway ?
+ <braunr> (although IMO, it's also a good thing to get rid of those limits
+ that made the kernel panic for no valid reason)
+ <youpi> there are actually not many static limits on allocated objects
+ <youpi> only a few have one
+ <braunr> those defined in kern/mach_param.h
+ <youpi> most of them are not actually enforced
+ <braunr> ah ?
+ <braunr> they are used at zinit() time
+ <braunr> i thought they were
+ <youpi> yes, but most zones are actually fine with overcoming the max
+ <braunr> ok
+ <youpi> see zone->max_size += (zone->max_size >> 1);
+ <youpi> you need both !EXHAUSTIBLE and FIXED
+ <braunr> ok
+ <pinotree> making having rlimits enforced would be nice...
+ <pinotree> s/making//
+ <braunr> pinotree: the kernel wouldn't handle many standard rlimits anyway
+
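The growth behaviour youpi quotes (`zone->max_size += (zone->max_size >> 1);`, with both !EXHAUSTIBLE and FIXED needed to actually enforce a maximum) can be sketched as a simplified model. The flag names echo gnumach's, but the struct and logic below are a toy, not the kernel's real zalloc code.

```c
/* Simplified model of the zone limit check discussed above: only a
 * zone that is EXHAUSTIBLE (allocation fails at the limit) or FIXED
 * (limit may not grow; the real kernel would panic) stops at its
 * max; any other zone just overcomes the limit by growing it by
 * half, as in zone->max_size += (zone->max_size >> 1). */

#define TOY_ZONE_EXHAUSTIBLE 0x1
#define TOY_ZONE_FIXED       0x2

struct toy_zone {
    unsigned flags;
    unsigned long cur_size;  /* bytes currently handed out */
    unsigned long max_size;  /* current limit */
};

/* Charge `size` bytes to the zone; returns 0 on success, -1 if the
 * allocation must fail. */
int toy_zone_charge(struct toy_zone *z, unsigned long size)
{
    while (z->cur_size + size > z->max_size) {
        if (z->flags & (TOY_ZONE_EXHAUSTIBLE | TOY_ZONE_FIXED))
            return -1;
        z->max_size += z->max_size >> 1;  /* grow the limit by half */
    }
    z->cur_size += size;
    return 0;
}
```

This is why most of the static limits in kern/mach_param.h are not really enforced: they only seed the initial `max_size` of zones that are free to grow.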
+ <braunr> i've just committed my final patch on mcsim's branch, which will
+ serve as the starting point for integration
+ <braunr> which means code in this branch won't change (or only last minute
+ changes)
+ <braunr> you're invited to test it
+ <braunr> there shouldn't be any noticeable difference with the master
+ branch
+ <braunr> a bit less fragmentation
+ <braunr> more memory can be reclaimed by the VM system
+ <braunr> there are debugging features
+ <braunr> it's SMP ready
+ <braunr> and overall cleaner than the zone allocator
+ <braunr> although a bit slower on the free path (because of what's
+ performed to reduce fragmentation)
+ <braunr> but even "slower" here is completely negligible
+
+
+# IRC, freenode, #hurd, 2011-11-15
+
+ <mcsim> I enabled cpu_pool layer and kentry cache exhausted at "apt-get
+ source gnumach && (cd gnumach-* && dpkg-buildpackage)"
+ <mcsim> I mean kernel with your last commit
+ <mcsim> braunr: I'll make patch how I've done it in a few minutes, ok? It
+ will be more specific.
+ <braunr> mcsim: did you just remove the #if NCPUS > 1 directives ?
+ <mcsim> no. I replaced macro NCPUS > 1 with SLAB_LAYER, which equals NCPUS
+      > 1, then I redefined macro SLAB_LAYER
+ <braunr> ah, you want to make the layer optional, even on UP machines
+ <braunr> mcsim: can you give me the commands you used to trigger the
+ problem ?
+ <mcsim> apt-get source gnumach && (cd gnumach-* && dpkg-buildpackage)
+ <braunr> mcsim: how much ram & swap ?
+ <braunr> let's see if it can handle a quite large aptitude upgrade
+ <mcsim> how can I check swap size?
+ <braunr> free
+ <braunr> cat /proc/meminfo
+ <braunr> top
+ <braunr> whatever
+    <mcsim>              total       used       free     shared    buffers     cached
+    <mcsim> Mem:        786368     332296     454072          0          0          0
+    <mcsim> -/+ buffers/cache:     332296     454072
+    <mcsim> Swap:      1533948          0    1533948
+ <braunr> ok, i got the problem too
+ <mcsim> braunr: do you run hurd in qemu?
+ <braunr> yes
+ <braunr> i guess the cpu layer increases fragmentation a bit
+ <braunr> which means more map entries are needed
+ <braunr> hm, something's not right
+ <braunr> there are only 26 kernel map entries when i get the panic
+ <braunr> i wonder why the cache gets that stressed
+ <braunr> hm, reproducing the kentry exhaustion problem takes quite some
+ time
+ <mcsim> braunr: what do you mean?
+ <braunr> sometimes, dpkg-buildpackage finishes without triggering the
+ problem
+ <mcsim> the problem is in apt-get source gnumach
+ <braunr> i guess the problem happens because of drains/fills, which
+      allocate/free many more objects than are actually preallocated at boot time
+ <braunr> ah ?
+ <braunr> ok
+ <braunr> i've never had it at that point, only later
+ <braunr> i'm unable to trigger it currently, eh
+ <mcsim> do you use *-dbg kernel?
+ <braunr> yes
+ <braunr> well, i use the compiled kernel, with the slab allocator, built
+ with the in kernel debugger
+ <mcsim> when you run apt-get source gnumach, you run it in clean directory?
+ Or there are already present downloaded archives?
+ <braunr> completely empty
+ <braunr> ah just got it
+ <braunr> ok the limit is reached, as expected
+ <braunr> i'll just bump it
+ <braunr> the cpu layer drains/fills allocate several objects at once (64 if
+ the size is small enough)
+ <braunr> the limit of 256 (actually 252 since the slab descriptor is
+ embedded in its slab) is then easily reached
+ <antrik> mcsim: most direct way to check swap usage is vmstat
+ <braunr> damn, i can't live without slabtop and the amount of
+ active/inactive cache memory any more
+ <braunr> hm, weird, we have active/inactive memory in procfs, but not
+ buffers/cached memory
+ <braunr> we could set buffers to 0 and everything as cached memory, since
+ we're currently unable to communicate the purpose of cached memory
+ (whether it's used by disk servers or file system servers)
+ <braunr> mcsim: looks like there are about 240 kernel map entries (i forgot
+ about the ones used in kernel submaps)
+    <braunr> so yes, adding the cpu layer is what makes the kernel reach the
+ limit more easily
+ <mcsim> braunr: so just increasing limit will solve the problem?
+ <braunr> mcsim: yes
+ <braunr> slab reclaiming looks very stable
+ <braunr> and unfrequent
+ <braunr> (which is surprising)
+ <pinotree> braunr: "unfrequent"?
+ <braunr> pinotree: there isn't much memory pressure
+ <braunr> slab_collect() gets called once a minute on my hurd
+ <braunr> or is it infrequent ?
+ <braunr> :)
+ <pinotree> i have no idea :)
+ <braunr> infrequent, yes
+
+
+# IRC, freenode, #hurd, 2011-11-16
+
+ <braunr> for those who want to play with the slab branch of gnumach, the
+ slabinfo tool is available at http://git.sceen.net/rbraun/slabinfo.git/
+ <braunr> for those merely interested in numbers, here is the output of
+ slabinfo, for a hurd running in kvm with 512 MiB of RAM, an unused swap,
+ and a short usage history (gnumach debian packages built, aptitude
+ upgrade for a dozen of packages, a few git commands)
+ <braunr> http://www.sceen.net/~rbraun/slabinfo.out
+ <antrik> braunr: numbers for a long usage history would be much more
+ interesting :-)
+
+
+## IRC, freenode, #hurd, 2011-11-17
+
+ <braunr> antrik: they'll come :)
+ <etenil> is something going on on darnassus? it's mighty slow
+ <braunr> yes
+ <braunr> i've rebooted it to run a modified kernel (with the slab
+ allocator) and i'm building stuff on it to stress it
+ <braunr> (i don't have any other available machine with that amount of
+ available physical memory)
+ <etenil> ok
+ <antrik> braunr: probably would be actually more interesting to test under
+ memory pressure...
+ <antrik> guess that doesn't make much of a difference for the kernel object
+ allocator though
+ <braunr> antrik: if ram is larger, there can be more objects stored in
+ kernel space, then, by building something large such as eglibc, memory
+ pressure is created, causing caches to be reaped
+ <braunr> our page cache is useless because of vm_object_cached_max
+ <braunr> it's a stupid arbitrary limit masking the inability of the vm to
+ handle pressure correctly
+ <braunr> if removing it, the kernel freezes soon after ram is filled
+ <braunr> antrik: it may help trigger the "double swap" issue you mentioned
+ <antrik> what may help trigger it?
+ <braunr> not checking this limit
+ <antrik> hm... indeed I wonder whether the freezes I see might have the
+ same cause
+
+
+## IRC, freenode, #hurd, 2011-11-19
+
+ <braunr> http://www.sceen.net/~rbraun/slabinfo.out <= state of the slab
+ allocator after building the debian libc packages and removing all files
+ once done
+ <braunr> it's mostly the same as on any other machine, because of the
+ various arbitrary limits in mach (most importantly, the max number of
+ objects in the page cache)
+ <braunr> fragmentation is still quite low
+ <antrik> braunr: actually fragmentation seems to be lower than on the other
+ run...
+ <braunr> antrik: what makes you think that ?
+ <antrik> the numbers of currently unused objects seem to be in a similar
+ range IIRC, but more of them are reclaimable I think
+ <antrik> maybe I'm misremembering the other numbers
+ <braunr> there had been more reclaims on the other run
+
+
+# IRC, freenode, #hurd, 2011-11-25
+
+ <braunr> mcsim: i've just updated the slab branch, please review my last
+ commit when you have time
+ <mcsim> braunr: Do you mean compilation/tests?
+ <braunr> no, just a quick glance at the code, see if it matches what you
+ intended with your original patch
+ <mcsim> braunr: everything is ok
+ <braunr> good
+ <braunr> i think the branch is ready for integration
diff --git a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn
index 93055b77..80fc9fcd 100644
--- a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn
+++ b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn
@@ -54,3 +54,53 @@ License|/fdl]]."]]"""]]
compatibility with eventual 3rd party users is not broken
<pinotree> but those using them, other than hurd itself, won't compile
anymore, so you fix them progressively
+
+
+# IRC, freenode, #hurd, 2011-11-16
+
+ <braunr> is the mach_debug interface packaged in debian ?
+ <antrik> what mach_debug interface?
+ <braunr> include/include/mach_debug/mach_debug.defs in gnumach
+ <braunr> include/mach_debug/mach_debug.defs in gnumach
+ <antrik> what exactly is supposed to be packaged there?
+ <braunr> i'm talking about the host_*_info client code
+ <antrik> braunr: you mean MIG-generated stubs?
+ <braunr> antrik: yes
+ <braunr> i wrote a tiny slabinfo tool, and rpctrace doesn't show the
+ host_slab_info call
+ <braunr> do you happen to know why ?
+ <antrik> braunr: doesn't show it at all, or just doesn't translate?
+ <braunr> antrik: doesn't at all, the msgids file contains what's needed to
+ translate
+ <braunr> btw, i was able to build the libc0.3 packages with a kernel using
+ the slab allocator today, while monitoring it with the aforementioned
+ slabinfo tool, everything went smoothly
+ <antrik> great :-)
+ <braunr> i'll probably add a /proc/slabinfo entry some day
+ <braunr> and considering the current state of our beloved kernel, i'm
+ wondering why host_*_info rpcs are considered debugging calls
+ <braunr> imo, they should always be included by default
+ <braunr> and part of the standard mach interface
+ <braunr> (if the rpc is missing, an error is simply returned)
+ <antrik> I guess that's been inherited from original Mach
+ <antrik> so you think the stubs should be provided by libmachuser?
+ <braunr> i'm not sure
+ <antrik> actually, it's a bit arguable. if interfaces are not needed by
+ libc itself, it isn't really necessary to build them as part of the libc
+ build...
+ <braunr> i don't know the complete list of potential places for such calls
+ <antrik> OTOH, as any updates will happen in sync with other Mach updates,
+ it makes sense to keep them in one place, to reduce transition pain
+ <braunr> and i didn't want to imply they should be part of libc
+ <braunr> on the contrary, libmachuser seems right
+ <antrik> libmachuser is part of libc
+ <braunr> ah
+ <braunr> :)
+ <braunr> why so ?
+ <antrik> well, for one, libc needs the Mach (and Hurd) stubs itself
+ <antrik> also, it's traditionally the role of libc to provide the call
+ wrappers for syscalls... so it makes some sense
+ <braunr> sure, but why doesn't it depend on an external libmachuser instead
+ of embedding it ?
+ <braunr> right
+ <antrik> now that's a good question... no idea TBH :-)
diff --git a/open_issues/mig_portable_rpc_declarations.mdwn b/open_issues/mig_portable_rpc_declarations.mdwn
new file mode 100644
index 00000000..084d7454
--- /dev/null
+++ b/open_issues/mig_portable_rpc_declarations.mdwn
@@ -0,0 +1,58 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_mig]]
+
+
+# IRC, freenode, #hurd, 2011-11-14
+
+ <braunr> also, what's the best way to deal with types such as
+ <braunr> type cache_info_t = struct[23] of integer_t;
+ <braunr> whereas cache_info_t contains longs, which are obviously not
+ integer-wide on 64-bits processors
+ <braunr> ?
+ <youpi> you mean, to port mach to 64bit?
+ <braunr> no, to make the RPC declaration portable
+ <braunr> just in case :)
+ <youpi> refine integer_t into something more precise
+ <youpi> such as size_t, off_t, etc.
+ <braunr> i can't use a single line then
+ <braunr> struct cache_info contains ints, vm_size_t, longs
+ <braunr> should i just use the maximum size it can get ?
+ <braunr> or declare two sizes depending on the word size ?
+ <youpi> well, I'd say three
+ <braunr> youpi: three ?
+ <youpi> the ints, the vm_size_ts, and the longs
+ <braunr> youpi: i don't get it
+ <braunr> youpi: how would i write it in mig language ?
+ <youpi> I don't know the mig language
+ <braunr> me neither :)
+ <youpi> but I'd say don't lie
+ <braunr> i just see struct[23] of smething
+ <braunr> the original zone_info struct includes both integer_t and
+ vm_size_t, and declares it as
+ <braunr> type zone_info_t = struct[9] of integer_t;
+ <braunr> in its mig defs file
+ <braunr> i don't have a good example to reuse
+ <youpi> which is lying
+ <braunr> yes
+ <braunr> which is why i was wondering if mach architects themselves
+ actually solved that problem :)
+    <braunr> "There is no way to specify the fields of a C structure to
+      MIG. The size and type-desc are just used to give the size of the
+      structure."
+ <braunr> well, this sucks :/
+ <braunr> well, i'll do what the rest of the code seems to do, and let it
+ rot until a viable solution is available
+ <antrik> braunr: we discussed the problem of expressing structs with MIG in
+ the libburn thread
+ <antrik> (which I still need to follow up on... [sigh])
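For reference, this is the shape of the declaration being discussed: MIG only records an overall size in words, never the field layout. The sketch below reproduces the `zone_info_t` example quoted in the log; the annotation is ours, not from the gnumach sources.

```
/* MIG cannot describe C struct fields, only a total size, so the
 * gnumach defs "lie" by declaring everything as integer_t words: */
type zone_info_t = struct[9] of integer_t;
    /* actually a C struct mixing integer_t and vm_size_t members; on
       a 64-bit port the two differ in width, so "struct[9] of
       integer_t" no longer matches the C layout */
```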
diff --git a/open_issues/mission_statement.mdwn b/open_issues/mission_statement.mdwn
index 212d65e7..d136e3a8 100644
--- a/open_issues/mission_statement.mdwn
+++ b/open_issues/mission_statement.mdwn
@@ -10,7 +10,10 @@ License|/fdl]]."]]"""]]
[[!tag open_issue_documentation]]
-IRC, freenode, #hurd, 2011-10-12:
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2011-10-12
<ArneBab> we have a mission statement: http://hurd.gnu.org
<Gorodish> yes
@@ -37,3 +40,10 @@ IRC, freenode, #hurd, 2011-10-12:
ceases to amaze me
<Gorodish> I agree that the informational, factual, results oriented
documentation is the primary objective of documenting
+
+
+# IRC, freenode, #hurd, 2011-11-25
+
+ <antrik> heh, nice: http://telepathy.freedesktop.org/wiki/Rationale
+ <antrik> most of this could be read as a rationale for the Hurd just as
+ well ;-)
diff --git a/open_issues/page_cache.mdwn b/open_issues/page_cache.mdwn
new file mode 100644
index 00000000..062fb8d6
--- /dev/null
+++ b/open_issues/page_cache.mdwn
@@ -0,0 +1,73 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+IRC, freenode, #hurd, 2011-11-28:
+
+ <braunr> youpi: would you find it reasonable to completely disable the page
+ cache in gnumach ?
+ <braunr> i'm wondering if it wouldn't help make the system more stable
+ under memory pressure
+ <youpi> assuming cache=writeback in gnumach?
+ <youpi> because disabling the page cache will horribly hit performance
+ <braunr> no, it doesn't have anything to do with the host
+ <braunr> i'm not so sure
+ <braunr> while observing the slab allocator, i noticed our page cache is
+ not used that often
+ <youpi> eeh?
+ <youpi> apart from the damn 4000 limitation, I've seen it used
+    <youpi> (and I don't see why it wouldn't be used)
+ <youpi> (e.g. for all parts of libc)
+ <youpi> ah, no, libc would be kept open by ext2fs
+    <braunr> that's precisely because of the 4k limit
+ <youpi> but e.g. .o file emitted during make
+ <braunr> well, no
+ <youpi> well, see the summary I had posted some time ago, the 4k limit
+ makes it completely randomized
+ <youpi> and thus you lose locality
+ <braunr> yes
+ <youpi> but dropping the limit would just fix it
+ <braunr> that's my point
+ <youpi> which I had tried to do, and there were issues, you mentioned why
+    <youpi> and (as usual), I haven't had any time to have a look at the issue
+ again
+    <braunr> i'm just trying to figure out the pros and cons for having the
+ current page cache implementation
+ <braunr> but are you saying you tried with a strict limit of 0 ?
+    <youpi> no, I'm saying I tried with no limit
+ <youpi> but then memory fills up
+ <braunr> yes
+ <youpi> so trying to garbage collect
+ <braunr> i tried that too, the system became unstable very quickly
+    <youpi> but refs don't fall down to 0, you said
+ <braunr> did i ?
+ <youpi> or maybe somebody else
+ <youpi> see the list archives
+ <braunr> that's possible
+ <braunr> i'd imagine someone like sergio lopez
+ <youpi> possibly
+ <youpi> somebody that knows memory stuff way better than me in any case
+    <braunr> youpi: i'm just wondering how much we'd lose by disabling the
+ page cache, and if we actually gain more stability (and ofc, if it's
+ worth it)
+ <youpi> no idea, measures will tell
+ <youpi> fixing the page cache shouldn't be too hard I believe, however
+ <youpi> you just need to know what you are doing, which I don't
+ <youpi> I do believe the cache is still at least a bit useful
+ <youpi> even if dumb because of randomness
+ <youpi> e.g. running make lib in the glibc tree gets faster on second time
+ <youpi> because the cache wouldbe filled at least randomly with glibc tree
+ stuff
+ <braunr> yes, i agree on that
+ <youpi> braunr: btw, the current stability is fine for the buildds
+ <youpi> restarting them every few days is ok
+ <youpi> so I'd rather keep the performance :)
+ <braunr> ok
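The hard cap discussed above (the "damn 4000 limitation", i.e. `vm_object_cached_max`) can be caricatured as follows. The point is only that eviction is triggered by the count alone, regardless of how useful a cached object still is, which is what destroys locality. The real cache holds VM objects, not integers; this toy is purely illustrative.

```c
/* Toy sketch of a hard-capped object cache in the spirit of
 * vm_object_cached_max: as soon as the count exceeds the cap, an
 * entry is dropped, with no regard for how recently it was used. */
#define TOY_CACHED_MAX 4  /* stand-in for vm_object_cached_max (4000) */

struct toy_cache {
    int objs[TOY_CACHED_MAX + 1];  /* FIFO of cached object ids */
    int count;
    int evictions;                 /* objects dropped due to the cap */
};

void toy_cache_insert(struct toy_cache *c, int obj)
{
    int i;
    c->objs[c->count++] = obj;
    if (c->count > TOY_CACHED_MAX) {    /* over the cap: trim */
        for (i = 1; i < c->count; i++)  /* drop the oldest entry */
            c->objs[i - 1] = c->objs[i];
        c->count--;
        c->evictions++;
    }
}
```

With a working set even slightly larger than the cap, every insertion past the limit evicts something, so a rebuild of the same tree mostly misses: the "randomly filled" cache youpi describes.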
diff --git a/open_issues/perl.mdwn b/open_issues/perl.mdwn
index 45680328..48343e3e 100644
--- a/open_issues/perl.mdwn
+++ b/open_issues/perl.mdwn
@@ -36,6 +36,56 @@ First, make the language functional, have its test suite pass without errors.
[[!inline pages=community/gsoc/project_ideas/perl_python feeds=no]]
+
+## IRC, OFTC, #debian-hurd, 2011-11-08
+
+ <Dom> pinotree: so, with your three fixes applied to 5.14.2, there are
+ still 9 tests failing. They don't seem to be regressions in perl, since
+ they also fail when I build 5.14.0 (even though the buildd managed it).
+ <Dom> What do you suggest as the way forward?
+ <Dom> (incidentally I'm trying on strauss's sid chroot to see how that
+ compares)
+ <pinotree> Dom: samuel makes buildds build perl with nocheck (otherwise
+ we'd have no perl at all)
+ <pinotree> which tests still fail?
+ <Dom> ../cpan/Sys-Syslog/t/syslog.t ../cpan/Time-HiRes/t/HiRes.t
+ ../cpan/autodie/t/recv.t ../dist/IO/t/io_pipe.t ../dist/threads/t/libc.t
+ ../dist/threads/t/stack.t ../ext/Socket/t/socketpair.t io/pipe.t
+ op/sigdispatch.t
+ <Dom> buildds> I see
+ <pinotree> ah ok, those that are failing for me even with my patches
+ <Dom> I hadn't spotted that the builds were done with nocheck.
+ <Dom> (but only sometimes...)
+ <Dom> Explains a lot
+ <pinotree> syslog is kind of non-working on hurd, and syslog.t succeeds in
+      buildds (as opposed to crashing the machine...) because there's no /var/log
+ in chroots
+ <pinotree> libc.t appears to succeed too in buildds
+ * Dom notices how little memory strauss has, and cancels the build, now
+      that he *knows* that running out of memory caused the crashes
+    <pinotree> iirc HiRes.t, io_pipe.t, pipe.t and sigdispatch.t fail because
+      of troubles we have with posix signals
+ <pinotree> socketpair.t is kind of curious, it seems to block on
+ socketpair()...
+ * Dom wonders if a wiki page tracking this would be worthwhile
+ <pinotree> stack.t fails because we cannot set a different size for pthread
+ stacks, yet (similar failing test cases are also in the python test
+ suite)
+ <Dom> if there are problems which aren't going to get resolved any time
+ soon, it may be worth a few SKIPs conditional on architecture, depending
+ on how serious the issue is
+ <Dom> then we'd get better visibility of other/new issues
+ <Dom> (problems which aren't bugs in perl, that is)
+ <pinotree> understandable, yes
+    <pinotree> i think nobody dug much deeper into the failing ones yet, to
+ actually classify them as due to glibc/hurd/mach
+ <pinotree> (eg unlike the pipe behaviour in sysconf.t i reported)
+
+### 2011-11-26
+
+ <pinotree> cool, my recvfrom() fix also makes the perl test recv.t pass
+
+
---
diff --git a/open_issues/robustness.mdwn b/open_issues/robustness.mdwn
new file mode 100644
index 00000000..d32bd509
--- /dev/null
+++ b/open_issues/robustness.mdwn
@@ -0,0 +1,64 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_documentation open_issue_hurd]]
+
+
+# IRC, freenode, #hurd, 2011-11-18
+
+ <nocturnal> I'm learning about GNU Hurd and was speculating with a friend
+ who is also a computer enthusiast. I would like to know if Hurds
+ microkernel can recover services should they crash? and if it can, does
+ that recovery code exist in multiple services or just one core kernel
+ service?
+ <braunr> nocturnal: you should read about passive translators
+ <braunr> basically, there is no dedicated service to restore crashed
+ servers
+ <etenil> Hi everyone!
+ <braunr> services can crash and be restarted, but persistence support is
+      limited, and rather per service
+ <braunr> actually persistence is more a side effect than a designed thing
+ <braunr> etenil: hello
+ <etenil> braunr: translators can also be spawned on an ad-hoc basis, for
+ instance when accessing a particular file, no?
+ <braunr> that's what being passive, for a translator, means
+ <etenil> ah yeah I thought so :)
+
+
+# IRC, freenode, #hurd, 2011-11-19
+
+ <chromaticwt> will hurd ever have the equivalent of a rs server?, is that
+ even possible with hurd?
+ <youpi> chromaticwt: what is an rs server ?
+ <chromaticwt> a reincarnation server
+ <youpi> ah, like minix. Well, the main ground issue is restoring existing
+ information, such as pids of processes, etc.
+ <youpi> I don't know how minix manages it
+ <antrik> chromaticwt: I have a vision of a session manager that could also
+ take care of reincarnation... but then, knowing myself, I'll probably
+      never implement it
+ <youpi> we do get proc crashes from times to times
+ <youpi> it'd be cool to see the system heal itself :)
+ <braunr> i need a better description of reincarnation
+ <braunr> i didn't think it would make core servers like proc able to get
+ resurrected in a safe way
+ <antrik> depends on how it is implemented
+ <antrik> I don't know much about Minix, but I suspect they can recover most
+ core servers
+ <antrik> essentially, the condition is to make all precious state be
+ constantly serialised, and held by some third party, so the reincarnated
+ server could restore it
+ <braunr> should it work across reboots ?
+ <antrik> I haven't thought about the details of implementing it for each
+ core server; but proc should be doable I guess... it's not necessary for
+ the system to operate, just for various UNIX mechanisms
+ <antrik> well, I'm not aware of the Minix implementation working across
+ reboots. the one I have in mind based on a generic session management
+ infrastructure should though :-)
diff --git a/open_issues/syslog.mdwn b/open_issues/syslog.mdwn
index 5fec38b1..2e902698 100644
--- a/open_issues/syslog.mdwn
+++ b/open_issues/syslog.mdwn
@@ -43,3 +43,30 @@ IRC, freenode, #hurd, 2011-08-08
< youpi> shm should work with the latest libc
< youpi> what won't is sysv sem
< youpi> (i.e. semget)
+
+
+IRC, OFTC, #debian-hurd, 2011-11-02:
+
+ * pinotree sighs at #645790 :/
+ <tschwinge> pinotree: W.r.t. 645790 -- yeah, ``someone'' should finally
+ figure out what's going on with syslog.
+ http://lists.gnu.org/archive/html/bug-hurd/2008-07/msg00152.html
+ <tschwinge> pinotree: And this...
+ http://lists.gnu.org/archive/html/bug-hurd/2007-02/msg00042.html
+ <pinotree> tschwinge: i did that 20-invocation test recently, and
+ basically none of them has been logged
+ <pinotree> tschwinge: when i started playing with logger more, as a
+ result i had some server that started taking all the cpu, followed by
+ other servers, and in the end my ssh connection was dropped and there
+ was nothing i could do (not even login from console)
+ <tschwinge> pinotree: Sounds like ``fun''. Hopefully we can manage to
+ understand (and fix the underlying issue) why a simple syslog()
+ invocation can make the whole system unstable.
+ <pinotree> tschwinge: to be honest, i got havoc in the system when i told
+ syslog to manually look for /dev/log (-u /dev/log), possibly also when
+ telling to use a datagram socket (-d)
+ <pinotree> but even if a normal syslog() invocation does not cause havoc,
+ there's still the "lost messages" issue
+ <tschwinge> Yep. What I've been doing ever since, is deinstall all
+ *syslog* packages.
+ <tschwinge> This ``fixed'' all syslog() hangs.
diff --git a/open_issues/translator_stdout_stderr.mdwn b/open_issues/translator_stdout_stderr.mdwn
index 11793582..14ea1c6d 100644
--- a/open_issues/translator_stdout_stderr.mdwn
+++ b/open_issues/translator_stdout_stderr.mdwn
@@ -11,11 +11,43 @@ License|/fdl]]."]]"""]]
[[!tag open_issue_hurd]]
+There have already been several discussions and proposals about adding a
+suitable logging mechanism to translators.
+
+
Decide / implement / fix that (all?) running (passive?) translators' output
should show up on the (Mach / Hurd) console / syslog.
+
[[!taglink open_issue_documentation]]: [[!message-id
"87oepj1wql.fsf@becket.becket.net"]]
+
[[!taglink open_issue_documentation]]: Neal once had written an email on this
topic.
+
+
+IRC, freenode, #hurd, 2011-11-06
+
+ <youpi> about CLI_DEBUG, you can use #define CLI_DEBUG(fmt, ...) {
+ fprintf(stderr, fmt, ## __VA_ARGS__); fflush(stderr); }
+ <tschwinge> Isn't stderr in auto-flush mode by default?
+ <tschwinge> man setbuf: The standard error stream stderr is always
+ unbuffered by default.
+ <youpi> tschwinge: "by default" is the important thing here
+ <youpi> in the case of translators iirc stderr is buffered
+ <tschwinge> youpi: Oh!
+ <tschwinge> That sounds wrong.
+
+
+IRC, freenode, #hurd, 2011-11-23
+
+ <braunr> we'd need a special logging task for hurd servers
+ <pinotree> if syslog would work, that could be used optionally
+ <braunr> no, it relies on services provided by the hurd
+ <braunr> i'm thinking of something using merely the mach interface
+ <braunr> e.g. using mach_msg to send log messages to a special port used to
+ reference the logging service
+ <braunr> which would then store the messages in a circular buffer in ram
+ <braunr> maybe sending to syslog if the service is available
+ <braunr> the hurd system buffer if you want