From be4193108513f02439a211a92fd80e0651f6721b Mon Sep 17 00:00:00 2001 From: Thomas Schwinge Date: Wed, 30 Nov 2011 21:21:45 +0100 Subject: IRC. --- hurd/debugging/rpctrace.mdwn | 37 ++++ hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn | 37 ++++ hurd/virtual_file_system/discussion.mdwn | 39 ++++ microkernel/mach/gnumach/memory_management.mdwn | 22 +++ open_issues/anatomy_of_a_hurd_system.mdwn | 28 ++- open_issues/ext2fs_page_cache_swapping_leak.mdwn | 88 ++++++++- open_issues/gnumach_memory_management.mdwn | 202 +++++++++++++++++++++ open_issues/libmachuser_libhurduser_rpc_stubs.mdwn | 50 +++++ open_issues/mig_portable_rpc_declarations.mdwn | 58 ++++++ open_issues/mission_statement.mdwn | 12 +- open_issues/page_cache.mdwn | 73 ++++++++ open_issues/perl.mdwn | 50 +++++ open_issues/robustness.mdwn | 64 +++++++ open_issues/syslog.mdwn | 27 +++ open_issues/translator_stdout_stderr.mdwn | 32 ++++ 15 files changed, 815 insertions(+), 4 deletions(-) create mode 100644 hurd/virtual_file_system/discussion.mdwn create mode 100644 open_issues/mig_portable_rpc_declarations.mdwn create mode 100644 open_issues/page_cache.mdwn create mode 100644 open_issues/robustness.mdwn diff --git a/hurd/debugging/rpctrace.mdwn b/hurd/debugging/rpctrace.mdwn index f7136056..fd24f081 100644 --- a/hurd/debugging/rpctrace.mdwn +++ b/hurd/debugging/rpctrace.mdwn @@ -52,6 +52,43 @@ See `rpctrace --help` about how to use it. note that there is a number of known bugs in rpctrace, for which zhengda has sent patches... though I haven't reviewed all of them I think there are some nasty Mach operations that are really hard to proxy -- but I don't think the auth mechanism needs any of these... +* IRC, freenode, #hurd, 2011-11-04 + + [[!taglink open_issue_documentation]] + + hello. Are there any documentation about understanding output + of rpctrace? + no + you should read the source code, best doc available + if you have too many numbers and almost no symbolc names, + you're lacking rpc definition lists + check that the gnumach-common package is installed, as it + provides the gnumach definitions + (the glibc ones are almost always available) + with those two, you should be fine for the beginning + gnumach-common is installed. And what is the name for glibc + package for gnumach definitions. + Also I'm using libraries specified by LD_LIBRARY_PATH. Does it + make influence on absence of symbolic names? + no + rpctrace --help + see the --rpc-list=FILE option + the default lists are in /usr/share/msgids/, with the .msgids + extension + $ dpkg -S msgids + gnumach-common: /usr/share/msgids/gnumach.msgids + hurd: /usr/share/msgids/hurd.msgids + ok, glibc has none, it's the hurd + for more details about the output, read the source code + it shouldn't be that hard to grasp + -I /usr/share/msgids helped + thank you + it shouldn't have, it's the default path + but symbolic names appeared + well, that's weird :) + braunr: the output of rpctrace --help should tell the + default dir for msgids + # See Also diff --git a/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn b/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn index a9317c21..5228515f 100644 --- a/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn +++ b/hurd/translator/tmpfs/tmpfs_vs_defpager.mdwn @@ -233,3 +233,40 @@ See also: [[open_issues/resource_management_problems/pagers]]. I have never looked at it. [[open_issues/mach-defpager_vs_defpager]]. + + +# IRC, freenode, #hurd, 2011-11-08 + + who else uses defpager besides tmpfs and kernel? 
+ normally, nothing directly + than why tmpfs should use defpager? + it's its backend + backign store rather + the backing store of most file systems are partitions + tmpfs has none, it uses the swap space + if we allocate memory for tmpfs using vm_allocate, will it be able + to use swap partition? + it should + vm_allocate just maps anonymous memory + anonymous memory uses swap space as its backing store too + but be aware that this part of the vm system is known to have + deficiencies + which is why all mach based implementations have rewritten their + default pager + what kind of deficiencies? + bugs + and design issues, making anonymous memory fragmentation horrible + mcsim: vm_allocate doesn't return a memory object; so it can't be + passed to clients for mmap() + antrik: I use vm_allocate in pager_read_page + mcsim: well, that means that you have to actually implement a + pager yourself + also, when the kernel asks the pager to write back some pages, it + expects the memory to become free. if you are "paging" to ordinary + anonymous memory, this doesn't happen; so I expect it to have a very bad + effect on system performance + both can be avoided by just passing a real anonymous memory + object, i.e. one provided by the defpager + only problem is that the current defpager implementation can't + really handle that... + at least that's my understanding of the situation diff --git a/hurd/virtual_file_system/discussion.mdwn b/hurd/virtual_file_system/discussion.mdwn new file mode 100644 index 00000000..9e12d01e --- /dev/null +++ b/hurd/virtual_file_system/discussion.mdwn @@ -0,0 +1,39 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_documentation]] + +IRC, freenode, #hurd, 2011-11-12: + + So hurd implements a 'transparent translator' somewhere which + just passes all IO calls to the posix IO I'm used to? (i.e. read, write, + open, close, etc.?) + it's the normal way of operation + glibc's read() doesn't do a system call, it always does an RPC to + the underlying translator + be it ext2fs for /, or your foobarfs for your node + Ok that makes sense. How does one program know which translator + it should refer to though? + the read() call magically knows which process to invoke? + the / translator is always known + and then you ask /'s translator about /home, then /home/you, then + /home/you/foobar + it tells you which other translator tyou have to contact + that's on open + It's a tree! Ok. + the notion of fd is then simply knowing the translator + Right. 'file descriptor' is now 'translator address descriptor' + maybe. + it's glibc which knows about FDs, nothing else knows + yes + actually an RPC port, simply + I want to try out the new RPC mechanism that mach implements + err, which "new" RPC ? 
+ mach's RPCs are very old actually :) diff --git a/microkernel/mach/gnumach/memory_management.mdwn b/microkernel/mach/gnumach/memory_management.mdwn index 43b99d83..ca2f42c4 100644 --- a/microkernel/mach/gnumach/memory_management.mdwn +++ b/microkernel/mach/gnumach/memory_management.mdwn @@ -80,3 +80,25 @@ IRC, freenode, #hurd, 2011-06-09 wow i remember the linux support for 4G/4G split when there was enough RAM to fill the kernel space with struct page entries + + +IRC, freenode, #hurd, 2011-11-12 + + well, the Hurd doesn't "artificially" limits itself to 1.5GiB + memory + i386 has only 4GiB addressing space + we currently chose 2GiB for the kernel and 2GiB for the userspace + since kernel needs some mappings, that leaves only 1.5GiB usable + physical memory + Hm? 2GiB for kernel, 2GiB for userspace, 500MiB are used for + what? + for mappings + such as device iomap + contiguous buffer allocation + and such things + Ah, ok. You map things in kernel space into user space then. + linux does the same without the "bigmem" support + no, just in kernel space + kernel space is what determines how much physical memory you can + address + unless using the linux-said-awful "bigmem" support diff --git a/open_issues/anatomy_of_a_hurd_system.mdwn b/open_issues/anatomy_of_a_hurd_system.mdwn index 46526641..13599e19 100644 --- a/open_issues/anatomy_of_a_hurd_system.mdwn +++ b/open_issues/anatomy_of_a_hurd_system.mdwn @@ -87,7 +87,7 @@ RPC stubs. More stuff like [[hurd/IO_path]]. --- +--- IRC, freenode, #hurd, 2011-10-18: @@ -96,3 +96,29 @@ IRC, freenode, #hurd, 2011-10-18: short version: grub loads mach, ext2, and ld.so/exec; mach starts ext2; ext2 starts exec; ext2 execs a few other servers; ext2 execs init. from there on, it's just standard UNIX stuff + +--- + +IRC, OFTC, #debian-hurd, 2011-11-02: + + is __dir_lookup a RPC ?? + where can i find the source of __dir_lookup ?? + grepping most gives out rvalue assignments + -assignments + but in hurs/fs.h it is used as a function ?? + it should be the mig-generated function for that rpc + how do i know how its implemented ?? + is there any way to delve deeprer into mig-generated functions + sekon_: The MIG-generated stuff will either be found in the + package's build directory (if it's building it for themselves), or in the + glibc build directory (libhurduser, libmachuser; which are all the + available user RPC stubs). + sekon_: The implementation can be found in the various Hurd + servers/libraries. + sekon_: For example, [hurd]/libdiskfs/dir-lookup.c. + sekon_: What MIG does is provide a function call interface for + these ``functions'', and the Mach microkernel then dispatches the + invocation to the corresponding server, for example a /hurd/ext2fs file + system (via libdiskfs). + sekon_: This may help a bit: + http://www.gnu.org/software/hurd/hurd/hurd_hacking_guide.html diff --git a/open_issues/ext2fs_page_cache_swapping_leak.mdwn b/open_issues/ext2fs_page_cache_swapping_leak.mdwn index c0d0867b..075533e7 100644 --- a/open_issues/ext2fs_page_cache_swapping_leak.mdwn +++ b/open_issues/ext2fs_page_cache_swapping_leak.mdwn @@ -12,7 +12,10 @@ License|/fdl]]."]]"""]] There is a [[!FF_project 272]][[!tag bounty]] on this task. 
-IRC, OFTC, #debian-hurd, 2011-03-24 +[[!toc]] + + +# IRC, OFTC, #debian-hurd, 2011-03-24 I still believe we have an ext2fs page cache swapping leak, however as the 1.8GiB swap was full, yet the ld process was only 1.5GiB big @@ -24,7 +27,7 @@ IRC, OFTC, #debian-hurd, 2011-03-24 yes the disk content, basicallyt :) -IRC, freenode, #hurd, 2011-04-18 +# IRC, freenode, #hurd, 2011-04-18 damn, a cp -a simply gobbles down swap space... really ? @@ -173,3 +176,84 @@ IRC, freenode, #hurd, 2011-04-18 backing store of memory objects created from its pager so you can view swap as the file system for everything that isn't an external memory object + + +# IRC, freenode, #hurd, 2011-11-15 + + hm, now my system got unstable + swap is increasing, without any apparent reason + you mean without any load? + with load, yes + :) + well, with load is "normal"... + at least for some loads + i can't create memory pressure to stress reclaiming without any + load + what load are you using? + ftp mirrorring + hm... never tried that; but I guess it's similar to apt-get + so yes, that's "normal". I talked about it several times, and also + wrote to the ML + antrik: ok + if you find out how to fix this, you are my hero ;-) + arg :) + I suspect it's the infamous double swapping problem; but that's + just a guess + looks like this + BTW, if you give me the exact command, I could check if I see it + too + i use lftp (mirror -Re) from a linux git repository + through sftp + (lots of small files, big content) + can't you just give me the exact command? I don't feel like + figuring it out myself + antrik: cd linux-stable; lftp sftp://hurd_addr/ + inside lftp: mkdir linux-stable; cd linux-stable; mirror -Re + hm, half of physical memory just got freed + our page cache is really weird :/ + (i didn't delete any file when that happened) + hurd_addr? + ssh server ip address + or name + of your hurd :) + I'm confused. you are mirroring *from* the Hurd box? + no, to it + ah, so you login via sftp and then push to it? + yes + fragmentation looks very fine + even for the huge pv_entry cache and its 60k+ entries + (and i'm running a kernel with the cpu layer enabled) + git reset/status/diff/log/grep all work correctly + anyway, mcsim's branch looks quite stable to me + braunr: I can't reproduce the swap leak with ftp. free memory + idles around 6.5 k (seems to be the threshold where paging starts), and + swap use is constant + might be because everything swappable is already present in swap + from previous load I guess... + err... scratch that. was connected to the wrong host, silly me + indeed swap gets eaten away, as expected + but only if free memory actually falls below the + threshold. otherwise it just oscillates around a constant value, and + never touches swap + so this seems to confirm the double swapping theory + antrik: is that "double swap" theory written somewhere? + (no, a quick google didn't tell me) + + +## IRC, freenode, #hurd, 2011-11-16 + + youpi: + http://lists.gnu.org/archive/html/l4-hurd/2002-06/msg00001.html talks + about "double paging". probably it's also the term others used for it; + however, the term is generally used in a completely different meaning, so + I guess it's not really suitable for googling either ;-) + IIRC slpz (or perhaps someone else?) proposed a solution to this, + but I don't remember any details + ok so it's the same thing I was thinking about with swap getting + filled + my question was: is there something to release the double swap, + once the ext2fs pager managed to recover? 
+ apparently not + the only way to free the memory seems to be terminating the FS + server + uh :/ diff --git a/open_issues/gnumach_memory_management.mdwn b/open_issues/gnumach_memory_management.mdwn index 9a4418c1..c9c3e64f 100644 --- a/open_issues/gnumach_memory_management.mdwn +++ b/open_issues/gnumach_memory_management.mdwn @@ -1810,3 +1810,205 @@ There is a [[!FF_project 266]][[!tag bounty]] on this task. etenil: but mcsim's work is, for one, useful because the allocator code is much clearer, adds some debugging support, and is smp-ready + + +# IRC, freenode, #hurd, 2011-11-14 + + i've just realized that replacing the zone allocator removes most + (if not all) static limit on allocated objects + as we have nothing similar to rlimits, this means kernel resources + are actually exhaustible + and i'm not sure every allocation is cleanly handled in case of + memory shortage + youpi: antrik: tschwinge: is this acceptable anyway ? + (although IMO, it's also a good thing to get rid of those limits + that made the kernel panic for no valid reason) + there are actually not many static limits on allocated objects + only a few have one + those defined in kern/mach_param.h + most of them are not actually enforced + ah ? + they are used at zinit() time + i thought they were + yes, but most zones are actually fine with overcoming the max + ok + see zone->max_size += (zone->max_size >> 1); + you need both !EXHAUSTIBLE and FIXED + ok + making having rlimits enforced would be nice... + s/making// + pinotree: the kernel wouldn't handle many standard rlimits anyway + + i've just committed my final patch on mcsim's branch, which will + serve as the starting point for integration + which means code in this branch won't change (or only last minute + changes) + you're invited to test it + there shouldn't be any noticeable difference with the master + branch + a bit less fragmentation + more memory can be reclaimed by the VM system + there are debugging features + it's SMP ready + and overall cleaner than the zone allocator + although a bit slower on the free path (because of what's + performed to reduce fragmentation) + but even "slower" here is completely negligible + + +# IRC, freenode, #hurd, 2011-11-15 + + I enabled cpu_pool layer and kentry cache exhausted at "apt-get + source gnumach && (cd gnumach-* && dpkg-buildpackage)" + I mean kernel with your last commit + braunr: I'll make patch how I've done it in a few minutes, ok? It + will be more specific. + mcsim: did you just remove the #if NCPUS > 1 directives ? + no. I replaced macro NCPUS > 1 with SLAB_LAYER, which equals NCPUS + > 1, than I redefined macro SLAB_LAYER + ah, you want to make the layer optional, even on UP machines + mcsim: can you give me the commands you used to trigger the + problem ? + apt-get source gnumach && (cd gnumach-* && dpkg-buildpackage) + mcsim: how much ram & swap ? + let's see if it can handle a quite large aptitude upgrade + how can I check swap size? + free + cat /proc/meminfo + top + whatever + total used free shared buffers + cached + Mem: 786368 332296 454072 0 0 + 0 + -/+ buffers/cache: 332296 454072 + Swap: 1533948 0 1533948 + ok, i got the problem too + braunr: do you run hurd in qemu? + yes + i guess the cpu layer increases fragmentation a bit + which means more map entries are needed + hm, something's not right + there are only 26 kernel map entries when i get the panic + i wonder why the cache gets that stressed + hm, reproducing the kentry exhaustion problem takes quite some + time + braunr: what do you mean? 
+ sometimes, dpkg-buildpackage finishes without triggering the + problem + the problem is in apt-get source gnumach + i guess the problem happens because of drains/fills, which + allocate/free much more object than actually preallocated at boot time + ah ? + ok + i've never had it at that point, only later + i'm unable to trigger it currently, eh + do you use *-dbg kernel? + yes + well, i use the compiled kernel, with the slab allocator, built + with the in kernel debugger + when you run apt-get source gnumach, you run it in clean directory? + Or there are already present downloaded archives? + completely empty + ah just got it + ok the limit is reached, as expected + i'll just bump it + the cpu layer drains/fills allocate several objects at once (64 if + the size is small enough) + the limit of 256 (actually 252 since the slab descriptor is + embedded in its slab) is then easily reached + mcsim: most direct way to check swap usage is vmstat + damn, i can't live without slabtop and the amount of + active/inactive cache memory any more + hm, weird, we have active/inactive memory in procfs, but not + buffers/cached memory + we could set buffers to 0 and everything as cached memory, since + we're currently unable to communicate the purpose of cached memory + (whether it's used by disk servers or file system servers) + mcsim: looks like there are about 240 kernel map entries (i forgot + about the ones used in kernel submaps) + so yes, addin the cpu layer is what makes the kernel reach the + limit more easily + braunr: so just increasing limit will solve the problem? + mcsim: yes + slab reclaiming looks very stable + and unfrequent + (which is surprising) + braunr: "unfrequent"? + pinotree: there isn't much memory pressure + slab_collect() gets called once a minute on my hurd + or is it infrequent ? + :) + i have no idea :) + infrequent, yes + + +# IRC, freenode, #hurd, 2011-11-16 + + for those who want to play with the slab branch of gnumach, the + slabinfo tool is available at http://git.sceen.net/rbraun/slabinfo.git/ + for those merely interested in numbers, here is the output of + slabinfo, for a hurd running in kvm with 512 MiB of RAM, an unused swap, + and a short usage history (gnumach debian packages built, aptitude + upgrade for a dozen of packages, a few git commands) + http://www.sceen.net/~rbraun/slabinfo.out + braunr: numbers for a long usage history would be much more + interesting :-) + + +## IRC, freenode, #hurd, 2011-11-17 + + antrik: they'll come :) + is something going on on darnassus? it's mighty slow + yes + i've rebooted it to run a modified kernel (with the slab + allocator) and i'm building stuff on it to stress it + (i don't have any other available machine with that amount of + available physical memory) + ok + braunr: probably would be actually more interesting to test under + memory pressure... + guess that doesn't make much of a difference for the kernel object + allocator though + antrik: if ram is larger, there can be more objects stored in + kernel space, then, by building something large such as eglibc, memory + pressure is created, causing caches to be reaped + our page cache is useless because of vm_object_cached_max + it's a stupid arbitrary limit masking the inability of the vm to + handle pressure correctly + if removing it, the kernel freezes soon after ram is filled + antrik: it may help trigger the "double swap" issue you mentioned + what may help trigger it? + not checking this limit + hm... 
indeed I wonder whether the freezes I see might have the + same cause + + +## IRC, freenode, #hurd, 2011-11-19 + + http://www.sceen.net/~rbraun/slabinfo.out <= state of the slab + allocator after building the debian libc packages and removing all files + once done + it's mostly the same as on any other machine, because of the + various arbitrary limits in mach (most importantly, the max number of + objects in the page cache) + fragmentation is still quite low + braunr: actually fragmentation seems to be lower than on the other + run... + antrik: what makes you think that ? + the numbers of currently unused objects seem to be in a similar + range IIRC, but more of them are reclaimable I think + maybe I'm misremembering the other numbers + there had been more reclaims on the other run + + +# IRC, freenode, #hurd, 2011-11-25 + + mcsim: i've just updated the slab branch, please review my last + commit when you have time + braunr: Do you mean compilation/tests? + no, just a quick glance at the code, see if it matches what you + intended with your original patch + braunr: everything is ok + good + i think the branch is ready for integration diff --git a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn index 93055b77..80fc9fcd 100644 --- a/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn +++ b/open_issues/libmachuser_libhurduser_rpc_stubs.mdwn @@ -54,3 +54,53 @@ License|/fdl]]."]]"""]] compatibility with eventual 3rd party users is not broken but those using them, other than hurd itself, won't compile anymore, so you fix them progressively + + +# IRC, freenode, #hurd, 2011-11-16 + + is the mach_debug interface packaged in debian ? + what mach_debug interface? + include/include/mach_debug/mach_debug.defs in gnumach + include/mach_debug/mach_debug.defs in gnumach + what exactly is supposed to be packaged there? + i'm talking about the host_*_info client code + braunr: you mean MIG-generated stubs? + antrik: yes + i wrote a tiny slabinfo tool, and rpctrace doesn't show the + host_slab_info call + do you happen to know why ? + braunr: doesn't show it at all, or just doesn't translate? + antrik: doesn't at all, the msgids file contains what's needed to + translate + btw, i was able to build the libc0.3 packages with a kernel using + the slab allocator today, while monitoring it with the aforementioned + slabinfo tool, everything went smoothly + great :-) + i'll probably add a /proc/slabinfo entry some day + and considering the current state of our beloved kernel, i'm + wondering why host_*_info rpcs are considered debugging calls + imo, they should always be included by default + and part of the standard mach interface + (if the rpc is missing, an error is simply returned) + I guess that's been inherited from original Mach + so you think the stubs should be provided by libmachuser? + i'm not sure + actually, it's a bit arguable. if interfaces are not needed by + libc itself, it isn't really necessary to build them as part of the libc + build... + i don't know the complete list of potential places for such calls + OTOH, as any updates will happen in sync with other Mach updates, + it makes sense to keep them in one place, to reduce transition pain + and i didn't want to imply they should be part of libc + on the contrary, libmachuser seems right + libmachuser is part of libc + ah + :) + why so ? + well, for one, libc needs the Mach (and Hurd) stubs itself + also, it's traditionally the role of libc to provide the call + wrappers for syscalls... 
so it makes some sense + sure, but why doesn't it depend on an external libmachuser instead + of embedding it ? + right + now that's a good question... no idea TBH :-) diff --git a/open_issues/mig_portable_rpc_declarations.mdwn b/open_issues/mig_portable_rpc_declarations.mdwn new file mode 100644 index 00000000..084d7454 --- /dev/null +++ b/open_issues/mig_portable_rpc_declarations.mdwn @@ -0,0 +1,58 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_mig]] + + +# IRC, freenode, #hurd, 2011-11-14 + + also, what's the best way to deal with types such as + type cache_info_t = struct[23] of integer_t; + whereas cache_info_t contains longs, which are obviously not + integer-wide on 64-bits processors + ? + you mean, to port mach to 64bit? + no, to make the RPC declaration portable + just in case :) + refine integer_t into something more precise + such as size_t, off_t, etc. + i can't use a single line then + struct cache_info contains ints, vm_size_t, longs + should i just use the maximum size it can get ? + or declare two sizes depending on the word size ? + well, I'd say three + youpi: three ? + the ints, the vm_size_ts, and the longs + youpi: i don't get it + youpi: how would i write it in mig language ? + I don't know the mig language + me neither :) + but I'd say don't lie + i just see struct[23] of smething + the original zone_info struct includes both integer_t and + vm_size_t, and declares it as + type zone_info_t = struct[9] of integer_t; + in its mig defs file + i don't have a good example to reuse + which is lying + yes + which is why i was wondering if mach architects themselves + actually solved that problem :) + "There is no way to specify the fields of a + C structure to MIG. The size and type-desc are just used to + give the size of + the structure. + " + well, this sucks :/ + well, i'll do what the rest of the code seems to do, and let it + rot until a viable solution is available + braunr: we discussed the problem of expressing structs with MIG in + the libburn thread + (which I still need to follow up on... 
[sigh]) diff --git a/open_issues/mission_statement.mdwn b/open_issues/mission_statement.mdwn index 212d65e7..d136e3a8 100644 --- a/open_issues/mission_statement.mdwn +++ b/open_issues/mission_statement.mdwn @@ -10,7 +10,10 @@ License|/fdl]]."]]"""]] [[!tag open_issue_documentation]] -IRC, freenode, #hurd, 2011-10-12: +[[!toc]] + + +# IRC, freenode, #hurd, 2011-10-12 we have a mission statement: http://hurd.gnu.org yes @@ -37,3 +40,10 @@ IRC, freenode, #hurd, 2011-10-12: ceases to amaze me I agree that the informational, factual, results oriented documentation is the primary objective of documenting + + +# IRC, freenode, #hurd, 2011-11-25 + + heh, nice: http://telepathy.freedesktop.org/wiki/Rationale + most of this could be read as a rationale for the Hurd just as + well ;-) diff --git a/open_issues/page_cache.mdwn b/open_issues/page_cache.mdwn new file mode 100644 index 00000000..062fb8d6 --- /dev/null +++ b/open_issues/page_cache.mdwn @@ -0,0 +1,73 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_gnumach]] + +IRC, freenode, #hurd, 2011-11-28: + + youpi: would you find it reasonable to completely disable the page + cache in gnumach ? + i'm wondering if it wouldn't help make the system more stable + under memory pressure + assuming cache=writeback in gnumach? + because disabling the page cache will horribly hit performance + no, it doesn't have anything to do with the host + i'm not so sure + while observing the slab allocator, i noticed our page cache is + not used that often + eeh? + apart from the damn 4000 limitation, I've seen it used + (and I don't why it wouldn't be used) + (e.g. for all parts of libc) + ah, no, libc would be kept open by ext2fs + taht's precisely because of the 4k limit + but e.g. .o file emitted during make + well, no + well, see the summary I had posted some time ago, the 4k limit + makes it completely randomized + and thus you lose locality + yes + but dropping the limit would just fix it + that's my point + which I had tried to do, and there were issues, you mentioned why + and (as usual), I haven't had anyu time to have a look at the issue + again + i'm just trying to figure out the pros and cons for having teh + current page cache implementation + but are you saying you tried with a strict limit of 0 ? + non, I'm saying I tried with no limit + but then memory fills up + yes + so trying to garbage collect + i tried that too, the system became unstable very quickly + but refs don't falldown to 0, you said + did i ? 
+ or maybe somebody else + see the list archives + that's possible + i'd imagine someone like sergio lopez + possibly + somebody that knows memory stuff way better than me in any case + youpi: i'm just wondering how much we'd loose by disabling the + page cache, and if we actually gain more stability (and ofc, if it's + worth it) + no idea, measures will tell + fixing the page cache shouldn't be too hard I believe, however + you just need to know what you are doing, which I don't + I do believe the cache is still at least a bit useful + even if dumb because of randomness + e.g. running make lib in the glibc tree gets faster on second time + because the cache wouldbe filled at least randomly with glibc tree + stuff + yes, i agree on that + braunr: btw, the current stability is fine for the buildds + restarting them every few days is ok + so I'd rather keep the performance :) + ok diff --git a/open_issues/perl.mdwn b/open_issues/perl.mdwn index 45680328..48343e3e 100644 --- a/open_issues/perl.mdwn +++ b/open_issues/perl.mdwn @@ -36,6 +36,56 @@ First, make the language functional, have its test suite pass without errors. [[!inline pages=community/gsoc/project_ideas/perl_python feeds=no]] + +## IRC, OFTC, #debian-hurd, 2011-11-08 + + pinotree: so, with your three fixes applied to 5.14.2, there are + still 9 tests failing. They don't seem to be regressions in perl, since + they also fail when I build 5.14.0 (even though the buildd managed it). + What do you suggest as the way forward? + (incidentally I'm trying on strauss's sid chroot to see how that + compares) + Dom: samuel makes buildds build perl with nocheck (otherwise + we'd have no perl at all) + which tests still fail? + ../cpan/Sys-Syslog/t/syslog.t ../cpan/Time-HiRes/t/HiRes.t + ../cpan/autodie/t/recv.t ../dist/IO/t/io_pipe.t ../dist/threads/t/libc.t + ../dist/threads/t/stack.t ../ext/Socket/t/socketpair.t io/pipe.t + op/sigdispatch.t + buildds> I see + ah ok, those that are failing for me even with my patches + I hadn't spotted that the builds were done with nocheck. + (but only sometimes...) + Explains a lot + syslog is kind of non-working on hurd, and syslog.t succeeds in + buildds (as opposted to crash the machine...) because there's no /var/log + in chroots + libc.t appears to succeed too in buildds + * Dom notices how little memory strauss has, and cancels the build, now + that he *knows* that running out of memory caused the crahses + iirc HiRes.t, io_pipe.t , pipe.t and sigdispatch.t fails because + of trobules we have with posix signals + socketpair.t is kind of curious, it seems to block on + socketpair()... 
+ * Dom wonders if a wiki page tracking this would be worthwhile + stack.t fails because we cannot set a different size for pthread + stacks, yet (similar failing test cases are also in the python test + suite) + if there are problems which aren't going to get resolved any time + soon, it may be worth a few SKIPs conditional on architecture, depending + on how serious the issue is + then we'd get better visibility of other/new issues + (problems which aren't bugs in perl, that is) + understandable, yes + i think nobody digged much deeper in the failing ones yet, to + actually classify them as due to glibc/hurd/mach + (eg unlike the pipe behaviour in sysconf.t i reported) + +### 2011-11-26 + + cool, my recvfrom() fix also makes the perl test recv.t pass + + --- diff --git a/open_issues/robustness.mdwn b/open_issues/robustness.mdwn new file mode 100644 index 00000000..d32bd509 --- /dev/null +++ b/open_issues/robustness.mdwn @@ -0,0 +1,64 @@ +[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] + +[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable +id="license" text="Permission is granted to copy, distribute and/or modify this +document under the terms of the GNU Free Documentation License, Version 1.2 or +any later version published by the Free Software Foundation; with no Invariant +Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license +is included in the section entitled [[GNU Free Documentation +License|/fdl]]."]]"""]] + +[[!tag open_issue_documentation open_issue_hurd]] + + +# IRC, freenode, #hurd, 2011-11-18 + + I'm learning about GNU Hurd and was speculating with a friend + who is also a computer enthusiast. I would like to know if Hurds + microkernel can recover services should they crash? and if it can, does + that recovery code exist in multiple services or just one core kernel + service? + nocturnal: you should read about passive translators + basically, there is no dedicated service to restore crashed + servers + Hi everyone! + services can crash and be restarted, but persistence support is + limited, and rather per serivce + actually persistence is more a side effect than a designed thing + etenil: hello + braunr: translators can also be spawned on an ad-hoc basis, for + instance when accessing a particular file, no? + that's what being passive, for a translator, means + ah yeah I thought so :) + + +# IRC, freenode, #hurd, 2011-11-19 + + will hurd ever have the equivalent of a rs server?, is that + even possible with hurd? + chromaticwt: what is an rs server ? + a reincarnation server + ah, like minix. Well, the main ground issue is restoring existing + information, such as pids of processes, etc. + I don't know how minix manages it + chromaticwt: I have a vision of a session manager that could also + take care of reincarnation... but then, knowing myself, I'll probably + never imlement it + we do get proc crashes from times to times + it'd be cool to see the system heal itself :) + i need a better description of reincarnation + i didn't think it would make core servers like proc able to get + resurrected in a safe way + depends on how it is implemented + I don't know much about Minix, but I suspect they can recover most + core servers + essentially, the condition is to make all precious state be + constantly serialised, and held by some third party, so the reincarnated + server could restore it + should it work across reboots ? 
+ I haven't thought about the details of implementing it for each + core server; but proc should be doable I guess... it's not necessary for + the system to operate, just for various UNIX mechanisms + well, I'm not aware of the Minix implementation working across + reboots. the one I have in mind based on a generic session management + infrastructure should though :-) diff --git a/open_issues/syslog.mdwn b/open_issues/syslog.mdwn index 5fec38b1..2e902698 100644 --- a/open_issues/syslog.mdwn +++ b/open_issues/syslog.mdwn @@ -43,3 +43,30 @@ IRC, freenode, #hurd, 2011-08-08 < youpi> shm should work with the latest libc < youpi> what won't is sysv sem < youpi> (i.e. semget) + + +IRC, OFTC, #debian-hurd, 2011-11-02: + + * pinotree sighs at #645790 :/ + pinotree: W.r.t. 645790 -- yeah, ``someone'' should finally + figure out what's going on with syslog. + http://lists.gnu.org/archive/html/bug-hurd/2008-07/msg00152.html + pinotree: And this... + http://lists.gnu.org/archive/html/bug-hurd/2007-02/msg00042.html + tschwinge: i did that 20 invocations tests recently, and + basically none of them has been logged + tschwinge: when i started playing with logger more, as result i + had some server that started taking all the cpu, followed by other + servers and in the end my ssh connection were dropped and i had nothing + to do (not even login from console) + pinotree: Sounds like ``fun''. Hopefully we can manage to + understand (and fix the underlying issue) why a simple syslog() + invocation can make the whole system instable. + tschwinge: to be honest, i got havoc in the system when i told + syslog to manually look for /dev/log (-u /dev/log), possibly alao when + telling to use a datagram socket (-d) + but even if a normal syslog() invocation does not cause havoc, + there's still the "lost messages" issue + Yep. What I've been doing ever since, is deinstall all + *syslog* packages. + This ``fixed'' all syslog() hangs. diff --git a/open_issues/translator_stdout_stderr.mdwn b/open_issues/translator_stdout_stderr.mdwn index 11793582..14ea1c6d 100644 --- a/open_issues/translator_stdout_stderr.mdwn +++ b/open_issues/translator_stdout_stderr.mdwn @@ -11,11 +11,43 @@ License|/fdl]]."]]"""]] [[!tag open_issue_hurd]] +There have been several discussions and proposals already, about adding a +suitable logging mechanism to translators, for example. + + Decide / implement / fix that (all?) running (passive?) translators' output should show up on the (Mach / Hurd) console / syslog. + [[!taglink open_issue_documentation]]: [[!message-id "87oepj1wql.fsf@becket.becket.net"]] + [[!taglink open_issue_documentation]]: Neal once had written an email on this topic. + + +IRC, freenode, #hurd, 2011-11-06 + + about CLI_DEBUG, you can use #define CLI_DEBUG(fmt, ...) { + fprintf(stderr, fmt, ## __VA_ARGS__); fflush(stderr); } + Isn't stderr in auto-flush mode by default? + man setbuf: The standard error stream stderr is always + unbuffered by default. + tschwinge: "by default" is the important thing here + in the case of translators iirc stderr is buffered + youpi: Oh! + That sounds wrong. + + +IRC, freenode, #hurd, 2011-11-23 + + we'd need a special logging task for hurd servers + if syslog would work, that could be used optionally + no, it relies on services provided by the hurd + i'm thinking of something using merely the mach interface + e.g. 
using mach_msg to send log messages to a special port used to + reference the logging service + which would then store the messages in a circular buffer in ram + maybe sending to syslog if the service is available + the hurd system buffer if you want -- cgit v1.2.3
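
The CLI_DEBUG macro quoted in the translator_stdout_stderr log above illustrates the usual workaround for translators whose stderr ends up buffered: print and flush in one step. Below is a minimal, generic C sketch of that idiom; the LOG name and the setvbuf alternative are illustrative only and are not taken from the Hurd sources.

    #include <stdio.h>

    /* Variant of the CLI_DEBUG idiom quoted above: print to stderr and
       flush immediately, so messages are not lost in a buffered stream
       when the process is not attached to a terminal.  */
    #define LOG(fmt, ...)                              \
        do {                                           \
            fprintf (stderr, fmt, ## __VA_ARGS__);     \
            fflush (stderr);                           \
        } while (0)

    int
    main (void)
    {
      /* Alternative: disable buffering on stderr once at startup and
         then use plain fprintf.  */
      setvbuf (stderr, NULL, _IONBF, 0);

      LOG ("starting up\n");
      return 0;
    }

Either approach makes each message reach its destination immediately instead of sitting in a stdio buffer.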