Diffstat (limited to 'open_issues')
30 files changed, 1384 insertions, 34 deletions
diff --git a/open_issues/64-bit_port.mdwn b/open_issues/64-bit_port.mdwn index b0c95612..edb2dccd 100644 --- a/open_issues/64-bit_port.mdwn +++ b/open_issues/64-bit_port.mdwn @@ -155,3 +155,10 @@ In context of [[mondriaan_memory_protection]]. <braunr> the problem is the interfaces themselves <braunr> type widths <braunr> as passed between userspace and kernel + + +# IRC, OFTC, #debian-hurd, 2013-10-05 + + <dharc> and what about 64 bit support, almost done? + <youpi> kernel part is done + <youpi> MIG 32/64 trnaslation missing diff --git a/open_issues/anatomy_of_a_hurd_system.mdwn b/open_issues/anatomy_of_a_hurd_system.mdwn index ba72b00f..a3c55063 100644 --- a/open_issues/anatomy_of_a_hurd_system.mdwn +++ b/open_issues/anatomy_of_a_hurd_system.mdwn @@ -803,3 +803,11 @@ Actually, the Hurd has never used an M:N model. Both libthreads (cthreads) and l <braunr> and hoping it didn't corrupt something important like file system caches before being flushed <giuscri> kilobug, braunr : mhn, ook + + +# IRC, freenode, #hurd, 2013-10-13 + + <ahungry> ahh, ^c isn't working to cancel a ping - is there alternative? + <braunr> ahungry: ctrl-c does work, you just missed something somewhere and + are running a shell directly on a console, without a terminal to handle + signals diff --git a/open_issues/boehm_gc.mdwn b/open_issues/boehm_gc.mdwn index 623dcb83..0a476d71 100644 --- a/open_issues/boehm_gc.mdwn +++ b/open_issues/boehm_gc.mdwn @@ -523,3 +523,22 @@ restults of GNU/Linux and GNU/Hurd look very similar. <congzhang> hi, I am dotgnu work on hurd, and even winforms app <congzhang> s/am/make <congzhang> and maybe c# hello world translate another day :) + + +## Leak Detection + +### IRC, freenode, #hurd, 2013-10-17 + + <teythoon> I spent the last two days integrating libgc - the boehm + conservative garbage collector - into hurd + <teythoon> it can be used in leak detection mode + <azeem> whoa, cool + <teythoon> and it actually kind of works, finds malloc leaks in translators + <braunr> i think there were problems with signal handling in libgc + <braunr> i'm not sure we support nested signal handling well + <teythoon> yes, I read about them + <teythoon> libgc uses SIGUSR1/2, so any program installing handlers on them + will break + <azeem> (which is not a problem on Linux, cause there some RT-signals or so + are used) + <teythoon> yes diff --git a/open_issues/code_analysis/discussion.mdwn b/open_issues/code_analysis/discussion.mdwn index 7ac3beb1..4cb03293 100644 --- a/open_issues/code_analysis/discussion.mdwn +++ b/open_issues/code_analysis/discussion.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2011, 2012, 2013 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -42,6 +43,8 @@ License|/fdl]]."]]"""]] <braunr> i tried duma, and it crashes, probably because of cthreads :) +# Static Analysis + ## IRC, freenode, #hurd, 2012-09-08 <mcsim> hello. What static analyzer would you suggest (probably you have @@ -49,3 +52,54 @@ License|/fdl]]."]]"""]] <braunr> mcsim: if you find some good free static analyzer, let me know :) <pinotree> a simple one is cppcheck <mcsim> braunr: I'm choosing now between splint and adlint + + +## IRC, freenode, #hurd, 2013-10-17 + + <teythoon> whoa, llvm kinda works, enough to make scan-build work :) + <braunr> teythoon: what is scan-build ? 
+ <teythoon> braunr: clangs static analyzer + <braunr> ok + <teythoon> I'm doing a full build of the hurd using it, I will post the + report once it is finished + <teythoon> this will help spot many problems + <teythoon> well, here are the scan-build reports I got so far: + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/ + <teythoon> I noticed it finds problems in mig generated code, so there are + probably lot's of duplictaes for those kind of problems + <pinotree> what's a... better one to look at? + <teythoon> it's also good at spotting error handling errors, and can spot + leaks sometimes + <teythoon> hm + <teythoon> + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/report-yVBHO1.html + <braunr> that's minor, the device always exist + <braunr> but that's still ugly + <teythoon> + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/report-MtgWSa.html + <teythoon> + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/report-QdsZIm.html + <teythoon> this could be important: + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/report-PDMEbk.html + <teythoon> this is the issue it finds in mig generated server stubs: + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build/report-iU3soc.html + <braunr> this one is #if TypeCheck1 + <braunr> the libports one looks weird indeed + <teythoon> but TypeCheck is 1 (the tooltip shows macro expansion) + <teythoon> it is defined in line 23 + <braunr> oh + <teythoon> hmmm... clang does not support nested functions, that will limit + its usefulness for us :/ + <braunr> yes + <braunr> one more reason not to use them + + +### IRC, freenode, #hurd, 2013-10-18 + + <teythoon> more complete, now with index: + https://teythoon.cryptobitch.de/qa/2013-10-17/scan-build-2/ + + +# Leak Detection + +See *Leak Detection* on [[boehm_gc]]. diff --git a/open_issues/dbus.mdwn b/open_issues/dbus.mdwn index a41515a1..4473fba0 100644 --- a/open_issues/dbus.mdwn +++ b/open_issues/dbus.mdwn @@ -253,3 +253,115 @@ See [[glibc]], *Missing interfaces, amongst many more*, *`SOCK_CLOEXEC`*. to know how to find this sendmsg.c file? <pinotree> (it's in glibc, but otherwise the remark is valid) <pinotree> s/otherwise/anyway/ + + +# Emails + +# IRC, freenode, #hurd, 2013-10-16 + + <braunr> gnu_srs: how could you fail to understand credentials need to be + checked ? + <gnu_srs> braunr: If data is sent via sendmsg, no problem, right? + <braunr> gnu_srs: that's irrelevant + <gnu_srs> It's just to move the check to the receive side. + <braunr> and that is the whole problem + <braunr> it's not "just" doing it + <braunr> first, do you know what the receive side is ? + <braunr> do you know what it can be ? + <braunr> do you know where the corresponding source code is to be found ? + <gnu_srs> please, describe a scenario where receiving faulty ancillary data + could be a problem instead + <braunr> dbus + <braunr> a user starting privileged stuff although he's not part of a + privileged group of users for example + <braunr> gnome, kde and others use dbus to pass user ids around + <braunr> if you can't rely on these ids being correct, you can compromise + the whole system + <braunr> because dbus runs as root and can give root privileges + <braunr> or maybe not root, i don't remember but a system user probably + <pinotree> "messagebus" + <gnu_srs> k! 
+ <braunr> see http://www.gnu.org/software/hurd/open_issues/dbus.html + <braunr> IRC, freenode, #hurd, 2013-07-17 + <braunr> <teythoon> and the proper fix is to patch pflocal to query the + auth server and add the credentials? + <braunr> <pinotree> possibly + <braunr> <teythoon> that doesn't sound to bad, did you give it a try? + + +# IRC, freenode, #hurd, 2013-10-22 + + <gnu_srs> I think I have a solution on the receive side for SCM_CREDS :) + + <gnu_srs> A question related to SCM_CREDS: dbus use a zero data byte to get + credentials sent. + <gnu_srs> however, kfreebsd does not care which data (and credentials) is + sent, they report the credentials anyway + <gnu_srs> should the hurd implementation do the same as kfreebsd? + <youpi> gnu_srs: I'm not sure to understand: what happens on linux then? + <youpi> does it see zero data byte as being bogus, and refuse to send the + creds? + <gnu_srs> linux is also transparent, it sends the credentials independent + of the data (but data has to be non-null) + <youpi> ok + <youpi> anyway, what the sending application writes does not matter indeed + <youpi> so we can just ignore that + <youpi> and have creds sent anyway + <braunr> i think the interface normally requires at least a byte of data + for ancilliary data + <youpi> possibly, yes + <braunr> To pass file descriptors or credentials over a SOCK_STREAM, + you need to send or + <braunr> receive at least one byte of non-ancillary data in + the same sendmsg(2) or + <braunr> recvmsg(2) call. + <braunr> but that may simply be linux specific + <braunr> gnu_srs: how do you plan on implementing right checking ? + <gnu_srs> Yes, data has to be sent, at least one byte, I was asking about + e.g. sending an integer + <braunr> just send a zero + <braunr> well + <braunr> dbus already does that + <braunr> just don't change anything + <braunr> let applications pass the data they want + <braunr> the socket interface already deals with port rights correctly + <braunr> what you need to do is make sure the rights received match the + credentials + <gnu_srs> The question is to special case on a zero byte, and forbid + anything else, or allow any data. + <braunr> why would you forbid + <braunr> ? + <gnu_srs> linux and kfreebsd does not special case on a received zero byte + <braunr> same question, why would you want to do that ? + <gnu_srs> linux sends credentials data even if no SCM_CREDENTIALS structure + is created, kfreebsd don't + <braunr> i doubt that + <gnu_srs> To be specific:msgh.msg_control = NULL; msgh.msg_controllen = 0; + <braunr> bbl + <gnu_srs> see the test code: + http://lists.debian.org/debian-hurd/2013/08/msg00091.html + <braunr> back + <braunr> why would the hurd include groups when sending a zero byte, but + only uid when not ? + <gnu_srs> ? + <braunr> 1) Sent credentials are correct: + <braunr> no flags: Hurd: OK, only sent ids + <braunr> -z Hurd: OK, sent IDs + groups + <braunr> and how can it send more than one uid and gid ? + <braunr> "sent credentials are not honoured, received ones are created" + <gnu_srs> Sorry, the implementation is changed by now. And I don't special + case on a zero byte. + <braunr> what does this mean ? + <braunr> then why give me that link ? + <gnu_srs> The code still applies for Linux and kFreeBSD. + <gnu_srs> It means that whatever you send, the kernel emits does not read + that data: see + <gnu_srs> socket.h: before struct cmsgcred: the sender's structure is + ignored ... 
+ <braunr> do you mean receiving on a socket can succeed with gaining + credentials, although the sender sent wrong ones ? + <gnu_srs> Looks like it. I don't have a kfreebsd image available right now. + <gnu_srs> linux returns EPERM + <braunr> anyway + <braunr> how do you plan to implement credential checking ? + <gnu_srs> I'll mail patches RSN diff --git a/open_issues/debugging_gnumach_startup_qemu_gdb.mdwn b/open_issues/debugging_gnumach_startup_qemu_gdb.mdwn index e3a6b648..3faa56fc 100644 --- a/open_issues/debugging_gnumach_startup_qemu_gdb.mdwn +++ b/open_issues/debugging_gnumach_startup_qemu_gdb.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2013 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -12,8 +13,22 @@ License|/fdl]]."]]"""]] [[!tag open_issue_gdb open_issue_gnumach]] +[[!toc]] -# IRC, freenode, #hurd, 2011-07-14 + +# Memory Map + +## IRC, freenode, #hurd, 2010-06 (?) + + <jkoenig> is there a way to get gdb to map addresses as required when + debugging mach with qemu ? + <jkoenig> I can examine the data if I manually map the addresses th + 0xc0000000 but maybe there's an easier way... + <youpi> jkoenig: I haven't found a way + <youpi> I'm mostly using the internal kdb + + +## IRC, freenode, #hurd, 2011-07-14 <mcsim> Hello. I have problem with debugging gnumach. I set 2 brakepoints in file i386/i386at/model_dep.c on functions gdt_init and idt_init. Then @@ -114,3 +129,18 @@ License|/fdl]]."]]"""]] <antrik> oh, right, without GDB... <antrik> though if that's what he meant, his statement was very misleading at least + + +# Multiboot + +See also discussion about *multiboot* on [[arm_port]]. + + +## IRC, freenode, #hurd, 2013-10-09 + + <matlea01> I was just wondering - once gnumach is compiled and I have the + gnumach elf, is that bootable? I.e. can I use something like + "qemu-system-i386 -kernel gnumach"? + <kilobug> matlea01: you need something with multiboot support (like grub) + to provide the various bootstrap modules to the kernel + <matlea01> Ah, I see diff --git a/open_issues/emacs.mdwn b/open_issues/emacs.mdwn index cdd1b10d..749649be 100644 --- a/open_issues/emacs.mdwn +++ b/open_issues/emacs.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2009 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2009, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -1525,3 +1525,18 @@ perhaps prepared (I did not yet have a look), and re-tries again and again? Why doesn't Mach page out some pages to make memory available? This is stock GNU Mach from Git, no patches, configured for Xen domU usage. + + +# IRC, freenode, #hurd, 2013-10-04 + + <pinotree> given you are an emacs user: could you please pick the build + patch from deb#725099, recompile emacs24 and test it with your daily + work? + + +## IRC, freenode, #hurd, 2013-10-07 + + <gnu_srs> Wow! emacs24 runs in X:-D + <gnu_srs> pinotree: I've now built and installed emacs 24.3. 
So far so good + ^ + <pinotree> good, keep testing and stressing diff --git a/open_issues/exec_memory_leaks.mdwn b/open_issues/exec_memory_leaks.mdwn index 67281bdc..1fc5a928 100644 --- a/open_issues/exec_memory_leaks.mdwn +++ b/open_issues/exec_memory_leaks.mdwn @@ -94,3 +94,28 @@ After running the libtool testsuite for some time: 8 39.5 0:15.60 28:48.57 9 0.0 0:04.49 10:24.12 10 12.8 0:08.84 19:34.45 + + +# IRC, freenode, #hurd, 2013-10-08 + + * braunr hunting the exec leak + <braunr> and i think i found it + <braunr> yes :> + <braunr> testing a bit more and committing the fix later tonight + <braunr> pinotree: i've been building glibc for 40 mins and exec is still + consuming around 1m memory + <pinotree> wow nice + <pinotree> i've been noticing exec leaking quite some time ago, then forgot + to pay more attention to that + <braunr> it's been more annoying since darnassus provides web access to + cgis + <braunr> automated tools make requests every seconds + <braunr> the leak occurred when starting a shell script or using system() + <braunr> youpi: not sure you saw it, i fixed the exec leak + + +## IRC, freenode, #hurd, 2013-10-10 + + <gg0> braunr: http://postimg.org/image/jd764wfpp/ + <braunr> exec 797M + <braunr> this should be fixed with the release of the next hurd packages diff --git a/open_issues/ext2fs_libports_reference_counting_assertion.mdwn b/open_issues/ext2fs_libports_reference_counting_assertion.mdwn index ff1c4c38..9ff43afa 100644 --- a/open_issues/ext2fs_libports_reference_counting_assertion.mdwn +++ b/open_issues/ext2fs_libports_reference_counting_assertion.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2012, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -91,3 +91,14 @@ With that patch in place, the assertion failure is seen more often. sure we can get that easily lol [[automatic_backtraces_when_assertions_hit]]. + + +# IRC, freenode, #hurd, 2013-10-09 + + <braunr> mhmm, i may have an explanation for the weird assertions we + sometimes see in ext2fs + <braunr> glibc uses alloca to reserve memory for one reply port per thread + in abort_all_rpcs + <braunr> if this erases the thread-specific area, we can expect all kinds + of wreckage + <braunr> i'm not sure how to fix this though diff --git a/open_issues/gdb_qemu_debugging_gnumach.mdwn b/open_issues/gdb_qemu_debugging_gnumach.mdwn deleted file mode 100644 index d3105f50..00000000 --- a/open_issues/gdb_qemu_debugging_gnumach.mdwn +++ /dev/null @@ -1,19 +0,0 @@ -[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]] - -[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable -id="license" text="Permission is granted to copy, distribute and/or modify this -document under the terms of the GNU Free Documentation License, Version 1.2 or -any later version published by the Free Software Foundation; with no Invariant -Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license -is included in the section entitled [[GNU Free Documentation -License|/fdl]]."]]"""]] - -[[!tag open_issue_gdb open_issue_gnumach]] - -\#hurd, freenode, June (?) 2010 - - <jkoenig> is there a way to get gdb to map addresses as required when debugging mach with qemu ? - <jkoenig> I can examine the data if I manually map the addresses th 0xc0000000 but maybe there's an easier way... 
- <youpi> jkoenig: I haven't found a way - <youpi> I'm mostly using the internal kdb - diff --git a/open_issues/gdb_signal_handler.mdwn b/open_issues/gdb_signal_handler.mdwn index 3084f7e3..5e27a099 100644 --- a/open_issues/gdb_signal_handler.mdwn +++ b/open_issues/gdb_signal_handler.mdwn @@ -401,3 +401,74 @@ License|/fdl]]."]]"""]] <zyg> braunr: are you sure? there is minimal user-code run before the signal is going into the handler. <braunr> you "step out of the handler" + + +# IRC, freenode, #hurd, 2013-10-24 + + <gnu_srs> how come some executables are not debuggable with gdb, e.g Cannot + access memory at address xxx. -fPIC flag? + <braunr> no + <braunr> i'm not sure but it's certainly not -fPIC + <gnu_srs> Another example is localedef: ./debian/tmp-libc/usr/bin/localedef + -i en_GB -c -f UTF-8 -A /usr/share/locale/locale.alias en_GB.UTF-8 + segfailts + <gnu_srs> and in gdb hangs after creating a thread., after C-c no useful + info: stack ends with: Cannot access memory at address 0x8382c385 + <braunr> if it's on the stack, it's probably a stack corruption + <nalaginrut> gnu_srs: are u using 'x' command or 'print' in GDB? IIRC + print may throw such message, but x may not + <gnu_srs> bt + <braunr> x may too + <braunr> what you're showing looks like an utf-8 string + <braunr> c385 is Å + <braunr> 83 is a special f + <braunr> 82 is a comma + <gnu_srs> so the stack is corrupted:-( + <braunr> probably + <braunr> well, certainly + <braunr> but gdb should show you where the program counter is + <gnu_srs> is that: ECX: the count register + <braunr> no + <braunr> eip + <braunr> program counter == instruction pointer + <gnu_srs> k!, the program counter is at first entry in bt: #0 0x01082612 + in _hurd_intr_rpc_msg_in_trap () at intr-msg.c:133 + <braunr> this is the hurd interruptible version of mach_msg + <braunr> so it probably means the corruption was made by a signal handler + <braunr> which is one of the reasons why gdb can't handle Ctrl-c + <gnu_srs> what to do in such a case, follow the source code + single-stepping? + <braunr> single stepping also uses signals + <braunr> and using printf will probably create an infinite recursion + <braunr> in those cases, i use mach_print + <braunr> as a first step, you could make sure a signal is actually received + <braunr> and which one + <braunr> hmm + <braunr> also, before rushing into conclusions, make sure you're looking at + the right thread + <braunr> i don't expect localedef to be multithreaded + <braunr> but gdb sometimes just doesn't get the thread where the segfault + actually occurred + <gnu_srs> two threads: 1095.4 and 1095.5 (created when starting localedef + in gdb) + <braunr> no, at the time of the crash + <braunr> the second thread is always the signal thread + <gnu_srs> OK,in gdb the program hangs, interrupted by C-c, outside it + segfaults + <braunr> when you use bt to get the corrupted stack, you can also use info + threads and thread apply all bt + <gnu_srs> I did: http://paste.debian.net/61170/ + <braunr> ok so it confirms there is only one real application thread, the + main one + <braunr> and that the corruption probably occurs during signal handling + <gnu_srs> rpctrace (edited out non-printable characters): + http://paste.debian.net/61178/ + <gnu_srs> Ah, have to do it again as root;-) + <braunr> yes .. 
:p + <gnu_srs> new last part: http://paste.debian.net/61181/ + <braunr> so, there is a seek, then a stat, then a close perhaps (port + deallocation) and then a signal received (probably sigsegv) + <braunr> gnu_srs: when you try running it in gdb, do you get a sigkill ? + <braunr> damn, gdb on darnassus is bugged :-( + <gnu_srs> It hangs, interrupted with C-c. + <braunr> ok diff --git a/open_issues/git-core-2.mdwn b/open_issues/git-core-2.mdwn index cbf47bd2..a92b3ebb 100644 --- a/open_issues/git-core-2.mdwn +++ b/open_issues/git-core-2.mdwn @@ -61,6 +61,113 @@ Fixing this situation is easy enough: Still seen. +## IRC, freenode, #hurd, 2013-10-10 + + <sea`> Huh? I've cloned the 'hurd' repository and I'm attempting to compile + it, but the 'rtnetlink.h' header in + 'hurd/pfinet/linux-src/include/linux/' is just blank. (Which leads to an + error later down when a macro that's supposed to be defined in there is + first used) + <sea`> So I'm just wondering, is that file really blank? Or is this some + unexpected error of decompression? + <braunr> clone again and see + <braunr> the file is definitely not empty + <sea`> I cloned it twice, both have that file blank. BUT, I want to point + out that both clones do have some decompression errors. (Some files are + missing chunks in /both/ cloned repositories). + <braunr> where did you clone it from ? + <sea`> git.sv.gnu.org/hurd/hurd.git + <braunr> hum decompression errors ? + <braunr> can you paste them please ? + <sea`> Hmm, I can clone again and show you an example if I find one + <sea`> This was on the hurd. When I run: git clone $repo;, it seems to fail + almost randomly with "incorrect header check", but when it does succeed, + occasionally some files are missing chunks + <sea`> and apparently entire files can be blank + <braunr> http or git ? + <sea`> git. + <braunr> that's really weird + <braunr> actually i don't even have problems with http any more nowadays .. + <sea`> This is using the hurd image from sthibault + <sea`> So once I get it recompiled and shuffle in the new binaries, the + problem should probably go away + <braunr> no + <braunr> well maybe but + <braunr> don't recompile + <braunr> upgrade packages instead + <sea`> Alright, I'll do an upgrade instead. Why that path specifically? + <braunr> rebuilding is long + <braunr> i wonder if the image you got is corrupted + <braunr> compute the checksum + <braunr> we've had weird reports in the past about the images he provides + <braunr> well not the images themselves, but differences after dowloading + .. + <braunr> downloading* + <sea`> The MD5SUMS file on his site isn't including the values for the most + recent images. + <sea`> It stops at 2012-12-28 + <braunr> hummm + <sea`> Anyway, let's see. git clone failed again: + <sea`> Receiving objects: 100% (50955/50955), 15.48 MiB | 42 KiB/s, done. + <sea`> error: inflate: date stream error (incorrect header check) <- This + is the interesting part + <sea`> fatal: serious inflate inconsistency + <sea`> fatal: index-pack failed + <braunr> not intereseting enough unfortunately + <braunr> but it might come from savannah too + <braunr> try the mirrors at + http://darnassus.sceen.net/gitweb/?a=project_list;pf=savannah_mirror + <sea`> Let's see..if I try: 'git clone + git://darnassus.sceen.net/gitweb/savannah_mirror/hurd.git', I get: + 'fatal: remote error: access denied or repository not exported: + /gitweb/savannah_mirror/hurd.git' + <braunr> my bad + <braunr> that's weird, it should work .. 
+ <braunr> oh, stupid translation error + <sea`> translation? From one human language to another? + <braunr> not translation actually + <braunr> typo :) + <braunr> it's either + <braunr> git://darnassus.sceen.net/savannah_mirror/hurd.git + <braunr> or + <braunr> http://darnassus.sceen.net/gitweb/savannah_mirror/hurd.git + <braunr> copy paste the url exactly please + <braunr> /gitweb/ is only present in the http url + <sea`> Ah, right. Okay, I'll paste it exactly + <sea`> Ehm. The whole thing locked up badly. I'll reboot it and try again. + <braunr> are you sure it locked oO ? + <braunr> the hurd can easily become unresponsive when performing io + operations + <braunr> but you need more than such a git repository to reach that state + <sea`> Yeah, that happens occasionally. It's not related to git, but rather + it happens when I cancel some command. + <braunr> your image must be corrupted + <braunr> have you enabled host io caching btw ? + <sea`> By now it's corrupted for sure..everytime it crashes the filesystem + gets into a weird state. + <sea`> I'll unpack a fresh image, then update the packages, and then try + cloning this git repository. + <braunr> i'll get the image too so we can compare sums + <sea`> 957bb0768c9558564f0c3e0adb9b317e ./debian-hurd.img.tar.gz + <sea`> Which unpacks to: debian-hurd-20130504.img + <azeem_> the NSA might backdoor the Hurd, in anticipation of our scheduled + world-dominance + <braunr> for now they're doing it passively : + <braunr> :p + <braunr> sea`: same thing here + <braunr> sea`: if you still have problems, the image itself might be wrong + <braunr> in which case you should try with the debian network installer + <sea`> Ah, so if problems persist, try with the network installer. Okay + <sea`> Is there some recipe for constructing a hurd/mach minimal + environment? + <sea`> A system with only just enough tools and libraries to compile and + poke at things. + <braunr> not currently + <braunr> we all work in debian environments + <braunr> the reason being that a lot of patches are queued for integration + upstream + + # 2010-11-17 A very similar issue. The working tree had a lot of diff --git a/open_issues/glibc.mdwn b/open_issues/glibc.mdwn index b453b44f..292c6256 100644 --- a/open_issues/glibc.mdwn +++ b/open_issues/glibc.mdwn @@ -330,6 +330,33 @@ Last reviewed up to the [[Git mirror's 0323d08657f111267efa47bd448fbf6cd76befe8 clearly not a priority <nalaginrut> ok + IRC, freenode, #hurd, 2013-09-26: + + <nalaginrut> if I want to have epoll/kqueue like things, where + should it dwell? kernel or some libs? + <braunr> libs + <pinotree> userland + <braunr> that would be a good project to work on, something i + intended to do (so i can help) but it requires a lot of work + <braunr> you basically need to add a way to explicitely install and + remove polling requests (instead of the currently way that + implicitely remove polling requests when select/poll returns) + <braunr> while keeping the existing way working for some time + <braunr> glibc implements select + <braunr> the hurd io interface shows the select interface + <braunr> servers such as pfinet/pflocal implement it + <braunr> glibc implements the client-side of the call + <nalaginrut> where's poll? 
since epoll just added edge-trigger in + poll + <braunr> both select and poll are implemented on top of the hurd io + select call (which isn't exactly select) + <braunr> + http://darnassus.sceen.net/gitweb/savannah_mirror/hurd.git/blob/HEAD:/hurd/io.defs + <braunr> this is the io interface + <braunr> + http://darnassus.sceen.net/gitweb/savannah_mirror/glibc.git/blob/refs/heads/tschwinge/Roger_Whittaker:/hurd/hurdselect.c + <braunr> this is the client side implementation + * `sys/eventfd.h` * `sys/inotify.h` @@ -854,6 +881,298 @@ Last reviewed up to the [[Git mirror's 0323d08657f111267efa47bd448fbf6cd76befe8 <braunr> to check where those locks are held and determine the right order + IRC, OFTC, #debian-hurd, 2013-09-28: + + <gg0_> now we'd just need tls + <gg0_> http://bugs.ruby-lang.org/issues/8937 + <gg0_> well, it would pass makecheck at least. makecheckall would + keep hanging on threads/pipes tests i guess, unless tls/thread + destruction patches fix them + + IRC, OFTC, #debian-hurd, 2013-10-05: + + <youpi> so what is missing for ruby2.0, only disabling use of + context for now, no? + <pinotree> i'm not tracking it closely, gg0_ is + <gg0_> maybe terceiro would accept a patch which only disables + *context, "maybe" because he rightly said changes must go + upstream + <gg0_> anyway with or without *context, many many tests in + makecheckall fail by making it hang, first with and without + assertion you removed, now they all simply hang + <gg0_> youpi: what do we want to do? if you're about finishing tls + migration (as i thought a couple of weeks ago), i won't propose + anything upstream. otherwise i could but that will have to be + reverted upstream once you finish + <gg0_> about tests, current ruby2.0 doesn't run makecheckall, only + makecheck which succeeds on hurd (w/o context) + <gg0_> if anyone wants to give it a try: + http://paste.debian.net/plain/51089 + <gg0_> first hunk makes makecheck (not makecheckall) succeed and + has been upstreamed, not packaged yet + <pinotree> what about makecheckall for ruby2.0? + <gg0_> 16:58 < gg0_> anyway with or without *context, many many + tests in makecheckall fail by making it hang, first with and + without assertion you removed, now they all simply hang + <pinotree> i for a moment thought it as for 1.9.1, ok + <pinotree> these hangs should be debugged, yes + <gg0_> nope, tests behavior doesn't change between 1.9 and 2.0. i + started suppressing tests onebyone on 2.0 as well and as happened + on 1.9, i gave up cause there were too many + <gg0_> yep a smart mind could start debugging them, starting from + patch above pasted by a lazy one owner + <gg0_> one problem is that one can't reproduce them by isolate + them, they don't fail. start makecheckall then wait for one fail + <gg0_> now after my stupid report, someone like pinotree could take + it over, play with it for half an hour/an hour (which equals to + half a gg0's year/a gg0's year + <gg0_> ) + <gg0_> and fix them all + + <gg0_> 17:05 < gg0_> youpi: what do we want to do? if you're about + finishing tls migration (as i thought a couple of weeks ago), i + won't propose anything upstream. otherwise i could but that will + have to be reverted upstream once you finish + <youpi> gg0_: I don't really know what to answer + <youpi> that's why I didn't answer :) + <gg0_> youpi: well then we could upstream context disable and keep + it disabled even if you fix tls. ruby won't be as fast as it + would be with context but i don't think anyone will complain + about that. 
then once packaged, if terceiro doesn't enable + makecheckall, we will have ruby2.0 in main + <youpi> that can be a plan yes + <gg0_> btw reverting it upstream should not be a problem eventually + <youpi> sure, the thing is remembering to do it + <gg0_> filed http://bugs.ruby-lang.org/issues/8990 + <gg0_> please don't fix tls too soon :) + <gg0_> s/makecheck/maketest/g + + IRC, OFTC, #debian-hurd, 2013-10-08: + + <gg0_> ok. *context disabled http://bugs.ruby-lang.org/issues/8990 + + <gg0> bt full of an attached stuck ruby test + http://paste.debian.net/plain/53788/ + <gg0> anything useful? + <youpi> uh, is that really all? + <youpi> there's not much interesting unfortunately + <youpi> did you run thread apply all bt full ? + <youpi> (not just bt full) + <gg0> no just bt full + <gg0> http://paste.debian.net/plain/53790/ + <gg0> wait, there's a child + <gg0> damn ctrl-c'ing while it was loading symbols made it crash :/ + <gg0> restarted testsuite + <gg0> isn't it interesting that failed tests fail only if testsuite + runs from beginning, whereas if run singularly, they succeed? + <gg0> as it got out of whatever resources + <gg0> youpi: http://paste.debian.net/plain/53798/ + <youpi> the interesting part is actually right at the top + <youpi> it's indeed stuck in the critical section spinlock + <youpi> question being what is keeping it + <youpi> iirc I had already checked in the whole glibc code that all + paths which lock critical_section_lock actually release it in + all cases, but maybe I have missed some + <youpi> (I did find some missing paths, which I fixed) + <gg0> i guess the same check you and braunr talk about in + discussion just before this anchor + http://darnassus.sceen.net/~hurd-web/open_issues/glibc/#recvmmsg + <youpi> yes, but the issue we were discussing there is not what + happens here + <youpi> we would see another thread stuck in the other way roudn, + otherwise + <gg0> no way to get what is locking? + <youpi> no, that's not recorded + <gg0> and what about writing it somewhere right after getting the + lock? + <youpi> one will have to do that in all spots taking that lock + <youpi> but yes, that's the usual approach + <gg0> i would give it try but eglibc rebuild takes too much time, + that conflicts with my laziness + <gg0> i read even making locks timed would help + + IRC, OFTC, #debian-hurd, 2013-10-09: + + <gg0> so correct order would be: + <gg0> __spin_lock (&ss->lock); // locks sigstate + <gg0> __spin_lock (&ss->critical_section_lock); + <gg0> [do critical stuff] + <gg0> __spin_unlock (&ss->critical_section_lock); + <gg0> __spin_unlock (&ss->lock); // unlocks sigstate + <gg0> ? + + <gg0> 21:44 < gg0> terceiro: backported to 2.0 (backport to 1.9 is + waiting) https://bugs.ruby-lang.org/issues/9000 + <gg0> 21:46 < gg0> that means that if you take a 2.0 snapshot, + it'll build fine on hurd (unless you introduce maketestall as in + 1.9, that would make it get stuck like 1.9) + <gg0> 21:48 < terceiro> gg0: nice + <gg0> 21:48 < terceiro> I will try to upload a snapshot as soon as + I can + <gg0> 21:52 < gg0> no problem. you might break my "conditional + satisfaction" by adding maketestall. better if you do that on + next+1 upload so we'll have at least one 2.0 built :) + + <gg0> would it be a problem granting me access to a porter box to + rebuild eglibc+ruby2.0? 
+ <gg0> i'm already doing it on another vm but host often loses power + <pinotree> you cannot install random stuff on a porterbox though + <gg0> i know i'd just need build-deps of eglibc+ruby2.0 i guess + <gg0> (already accessed to porter machines in the past, account + lele, mips iirc) + <gg0> ldap should remember that + <gg0> don't want to disturb anyone else work btw. if it's not a + problem, nice. otherwise no problem + <pinotree> please send a request to admin@exodar.debian.net so it + is not forgotten + <gg0> following this one would be too "official"? + http://dsa.debian.org/doc/guest-account/ + <pinotree> hurd is not a release architecture, so hurd machines are + not managed by DSA + <gg0> ok + <pinotree> the general procedure outlines is ok though, just need + to be sent to the address above + <gg0> sent + <gg0> (1st signed mail with mutt, in the worst case i've attached + passphrase :)) + <youpi> gg0: could you send me an ssh key? + <pinotree> no alioth account? + <youpi> yes, but EPERM + <gg0> youpi: sent to youpi@ + <youpi> youpi@ ? + <gg0> (... which doesn't exist :/) + <gg0> sthibault@ + <youpi> please test gg0-guest@exodar.debian.net ? + <youpi> (I'd rather not adduser the ldap name, who knows what might + happen when you get your DD account) + <gg0> i'm in. thanks + <youpi> you're welcome + <gg0> ldap users need to be adduser'ed? + <youpi> I'm not getting your ldap user account from ud-replicate, + at least + <gg0> (btw i never planned to apply nm, i'd be honoured but i + simply think not to deserve it) + <youpi> never say never ;) + <gg0> bah i like failing. that would be a success. i can't :) + <gg0> gg0-guest@exodar:~$ dchroot + <gg0> E: Access not authorised + <gg0> I: You do not have permission to access the schroot service. + <gg0> I: This failure will be reported. + <youpi> ah, right, iirc I need to add you somewhere + <youpi> gg0: please retry? + <gg0> works + <youpi> good + <gg0> are there already eglibc+ruby2.0 build-deps? + <youpi> yes + <gg0> oh that means i should do something myself now :) + <youpi> yep, that had to happen at some point :) + <gg0> my laziness thanks: "at some point" is better than "now" :) + + IRC, freenode, #hurd, 2013-10-10: + + <gg0> ok just reproduced the + former. ../sysdeps/mach/hurd/jmp-unwind.c:53 waits + <braunr> 20:37 < braunr> gg0: does ruby create and destroy threads + ? + <gg0> no idea + <gg0> braunr: days ago you and youpi talked about locking order + (just before this anchor + http://darnassus.sceen.net/~hurd-web/open_issues/glibc/#recvmmsg) + <braunr> oh right + <gg0> <youpi> could you submit the fix for jmp-unwind.c to + upstream? + <braunr> it didn't made it in the todo list + <gg0> so correct order is in hurd_thread_cancel, right? + <braunr> sorry about that + <braunr> we need to make a pass to make sure it is + <gg0> that means locking first ss->critical_section_lock _then_ + ss->lock + <gg0> correct? + <braunr> but considering how critical hurd_thread_cancel is, i + expect so + + <gg0> i get the same deadlock by swapping locks + <gg0> braunr: youpi: fyi ^ + <gg0> 20:51 < braunr> 20:37 < braunr> gg0: does ruby create and + destroy threads ? + <gg0> how could i check it? + <braunr> gg0: ps -eflw + <youpi> gg0: that's not surprising, since in the b acktrace you + posted there isn't another thread locked in the other order + <youpi> so it's really that somehow the thread is already in + critical sesction + <braunr> youpi: you mean there is ? 
+ <braunr> ah, it's not the same bug + <youpi> no, in what he posted, no other thread is stuck + <youpi> so it's not a locking order + <youpi> just that the critical section is actually busy + <gg0> youpi: ack + <gg0> braunr: what's the other bug? ext2fs one? + <braunr> gg0: idk + <gg0> braunr: thanks. doesn't show threads (found -T for that) but + at least doesn't limit columns number if piped (thanks to -w) + <braunr> it does + <braunr> there is a TH column + <gg0> ok thread count. -T gives more info + + IRC, freenode, #hurd, 2013-10-24: + + <youpi> ruby2.0 builds fine with the to-be-uploaded libc btw + <gg0> youpi: without d-ports patches? surprise me :) + <youpi> gg0: plain main archive source + <gg0> you did it. surprised + <gg0> ah ok you just pushed your tls. great! + <braunr> tls will fix a lot of things + + * `sigaltstack` + + IRC, freenode, #hurd, 2013-10-09: + + <gnu_srs1> Hi, is sigaltstack() really supported, even if it is + defined as well as SA_ONSTACK? + <braunr> probably not + <braunr> well, + <braunr> i don't know actually, mistaking with something else + <braunr> it may be supported + <pinotree> iirc no + <gnu_srs1> pinotree: are you sure? + <pinotree> this is what i remember + <pinotree> if you want to be sure that $foo works, just do the + usual way: test it yourself + <gnu_srs1> found it: hurd/TODO: *** does sigaltstack/sigstack + really work? -- NO + <pinotree> well TODO is old and there were signal-related patches + by jk in the meanwhile, although i don't think they would have + changed this lack + <pinotree> in any case, test it + <gnu_srs1> anybody fluent in assembly? Looks like this code + destroys the stack: http://paste.debian.net/54331/ + <braunr> gnu_srs1: why would it ? + <braunr> it does something special with the stack pointer but it + just looks like it aligns it to 16 bytes, maybe because of sse2 + restrictions (recent gcc align the stack already anyway) + <gnu_srs1> Well, in that case it is the called function: + http://paste.debian.net/54341/ + <braunr> how do you know there is a problem with the stack in the + first place ? + <gnu_srs1> tracing up to here, everything is OK. then esp and ebp + are destroyed. + <gnu_srs1> and single stepping goes backward until it segfaults + <braunr> "destroyed" ? + <gnu_srs1> zero if I remember correctly now. the x86 version built + for is i586, should that be changed to i486? + <braunr> this shouldn't change anything + <braunr> and they shouldn't get to 0 + <braunr> use gdb to determine exactly which instruction resets the + stack pointer + <gnu_srs1> how to step into the assembly part? using 's' steps + through the function since no line information: + <gnu_srs1> Single stepping until exit from function + wine_call_on_stack, + <gnu_srs1> which has no line number information. + <braunr> gnu_srs1: use break on the address + <gnu_srs1> how do i get the address of where the assembly starts? + * `recvmmsg`/`sendmmsg` (`t/sendmmsg`) From [[!message-id "20120625233206.C000A2C06F@topped-with-meat.com"]], diff --git a/open_issues/glibc/t/tls-threadvar.mdwn b/open_issues/glibc/t/tls-threadvar.mdwn index 7ce36f41..40d1463e 100644 --- a/open_issues/glibc/t/tls-threadvar.mdwn +++ b/open_issues/glibc/t/tls-threadvar.mdwn @@ -116,3 +116,40 @@ dropped altogether, and `__thread` directly be used in glibc. ## IRC, OFTC, #debian-hurd, 2013-09-23 <youpi> yay, errno threadvar conversion success + + +## IRC, OFTC, #debian-hurd, 2013-10-05 + + <gg0_> youpi: any ETA for tls? 
+ <youpi> gg0_: one can't have an ETA for bugfixing + <gg0_> i don't call them bugs if there's something missing to implement btw + <youpi> no, here it's bugs + <youpi> the implementation is already in the glibc branches in our + repository + <youpi> it just makes some important regressions + + +## IRC, OFTC, #debian-hurd, 2013-10-07 + + <youpi> about tls, I've made some "progress": now I'm wondering how raise() + has ever been working before :) + + +## IRC, OFTC, #debian-hurd, 2013-10-15 + + <youpi> good, reply_port tls is now ok + <youpi> last but not least, sigstate + + +## IRC, OFTC, #debian-hurd, 2013-10-21 + + <youpi> started testsuite with threadvars dropped completely + <youpi> so far so good + + +## IRC, OFTC, #debian-hurd, 2013-10-24 + + <youpi> ok, hurd boots with full-tls libc, no threadvars at all any more + <gg0> \o/ + <gg0> good bye threadvars bugs, welcome tls ones ;) + <youpi> now I need to check that threads can really use another stack :) diff --git a/open_issues/gnumach_page_cache_policy.mdwn b/open_issues/gnumach_page_cache_policy.mdwn index 5e93887e..77e52ddb 100644 --- a/open_issues/gnumach_page_cache_policy.mdwn +++ b/open_issues/gnumach_page_cache_policy.mdwn @@ -811,3 +811,63 @@ License|/fdl]]."]]"""]] <braunr> have* <braunr> and even if laggy, it doesn't feel much more than the usual lag of a network (ssh) based session + + +# IRC, freenode, #hurd, 2013-10-08 + + <braunr> hmm i have to change what gnumach reports as being cached memory + + +## IRC, freenode, #hurd, 2013-10-09 + + <braunr> mhmm, i'm able to copy files as big as 256M while building debian + packages, using a gnumach kernel patched for maximum memory usage in the + page cache + <braunr> just because i used --sync=30 in ext2fs + <braunr> a bit of swapping (around 40M), no deadlock yet + <braunr> gitweb is a bit slow but that's about it + <braunr> that's quite impressive + <braunr> i suspect thread storms might not even be the cataclysmic event + that we thought it was + <braunr> the true problem might simply be parallel fs synces + + +## IRC, freenode, #hurd, 2013-10-10 + + <braunr> even with the page cache patch, memory filled, swap used, and lots + of cached objects (over 200k), darnassus is impressively resilient + <braunr> i really wonder whether we fixed ext2fs deadlock + + <braunr> youpi: fyi, darnassus is currently running a patched gnumach with + the vm cache changes, in hope of reproducing the assertion errors we had + in the past + <braunr> i increased the sync interval of ext2fs to 30s like we discussed a + few months back + <braunr> and for now, it has been very resilient, failing only because of + the lack of kernel map entries after several heavy package builds + <gg0> wait the latter wasn't a deadlock it resumed after 1363.06 s + <braunr> gg0: thread storms can sometimes (rarely) fade and let the system + resume "normally" + <braunr> which is why i increased the sync interval to 30s, this leaves + time between two intervals for normal operations + <braunr> otherwise writebacks are queued one after the other, and never + processed fast enough for that queue to become empty again (except + rarely) + <braunr> youpi: i think we should consider applying at least the sync + interval to exodar, since many DDs are just unaware of the potential + problems with large IOs + <youpi> sure + + <braunr> 222k cached objects (1G of cached memory) and darnassus is still + kicking :) + <braunr> youpi: those lock fixing patches your colleague sent last year + must have helped somewhere + <youpi> :) + + 
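The `--sync=30` option discussed above is how the ext2fs write-back interval was raised to 30 seconds. As a minimal sketch (the mount point and device name below are only examples, and it is assumed here that the running translator also accepts this option change at run time through `fsysopts`), the interval could be set either when the translator is started or on an already running filesystem:

    # set a 30 second sync interval when starting the translator (example device)
    settrans -a /mnt /hurd/ext2fs --sync=30 /dev/hd0s2
    # change it on an already running filesystem, e.g. the root one
    fsysopts / --sync=30
    # show the options currently in effect
    fsysopts /
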
+## IRC, freenode, #hurd, 2013-10-13 + + <youpi> braunr: how are your tests going with the object cache? + <braunr> youpi: not so good + <braunr> youpi: it failed after 2 days of straight building without a + single error output :/ diff --git a/open_issues/hurd_101.mdwn b/open_issues/hurd_101.mdwn index 574a03ec..25822512 100644 --- a/open_issues/hurd_101.mdwn +++ b/open_issues/hurd_101.mdwn @@ -60,3 +60,41 @@ Not the first time that something like this is proposed... <neal> how ipc works <neal> and understand exactly what state is stored where <zacts> ok + + +# IRC, freenode, #hurd, 2013-10-12 + + <ahungry> Hi all, can anyone expand on + https://www.gnu.org/software/hurd/contributing.html - if I proceed with + the quick start and have the system running in a virtual image, how do I + go from there to being able to start tweaking the source (and recompiling + ) in a meaningful way? + <ahungry> Would I modify the source, compile within the VM and then what + would be the next step to actually test my new changes? + <braunr> ahungry: we use debian + <braunr> i suggest formatting your changes into patches, importing them + into debian packages, rebuilding those packages, and installing them over + the upstream ones + <ahungry> what about modifications to mach itself? or say I wanted to try + to work on the wifi drives - I would build the translator or module or + whatever and just add to the running instance of hurd? + <ahungry> s/drives/drivers + <braunr> same thing + <braunr> although + <braunr> during development, it's obviously a bit too expensive to rebuild + complete packages each time + <braunr> you can use the hurd on top of a gnumach kernel built completely + from upstream sources + <braunr> you need a few debian patches for the hurd itself + <braunr> a lot of them for glibc + <braunr> i usually create a temporary local branch with the debian patches + i need to make my code run + <braunr> and then create the true development branch itself from that one + <braunr> drivers are a a dark corner of the hurd + <braunr> i wouldn't recommend starting there + <braunr> but if you did, yes, you'd write a server to run drivers, and + start it + <braunr> you'd probably write a translator (which is a special kind of + server), yes + <ahungry> braunr: thanks for all the info, hittin the sack now but ill have + to set up a box and try to contribute diff --git a/open_issues/hurd_init.mdwn b/open_issues/hurd_init.mdwn index b0b58a70..cc06935c 100644 --- a/open_issues/hurd_init.mdwn +++ b/open_issues/hurd_init.mdwn @@ -214,3 +214,11 @@ License|/fdl]]."]]"""]] <teythoon> I've been hacking on init/startup, I've looked into cleaning it up + + +## IRC, freenode, #hurd, 2013-10-07 + + <teythoon> braunr: btw, what do you think of my /hurd/startup proposal? + <braunr> i haven't read it in detail yet + <braunr> it's about separating init right ? + <teythoon> yes diff --git a/open_issues/libpthread/t/fix_have_kernel_resources.mdwn b/open_issues/libpthread/t/fix_have_kernel_resources.mdwn index 6f09ea0d..feea7c0d 100644 --- a/open_issues/libpthread/t/fix_have_kernel_resources.mdwn +++ b/open_issues/libpthread/t/fix_have_kernel_resources.mdwn @@ -413,3 +413,67 @@ Address problem mentioned in [[/libpthread]], *Threads' Death*. 
<braunr> oh, git is multithreaded <braunr> great <braunr> so i've actually tested my libpthread patch quite a lot + + +## IRC, freenode, #hurd, 2013-09-25 + + <braunr> on a side note, i was able to build gnumach/libc/hurd packages + with thread destruction + <teythoon> nice :) + <braunr> they boot and work mostly fine, although they add their own issues + <braunr> e.g. the comm field of the root ext2fs is empty + <braunr> ps crashes when trying to display threads + <braunr> but thread destruction actually works, i.e. servers (those that + are configured that away at least) go away after some time, and even + heavily used servers such as ext2fs dynamically scale over time :) + + +## IRC, freenode, #hurd, 2013-10-10 + + <braunr> concerning threads, i think i figured out the last bugs i had with + thread destruction + <braunr> it should be well on its way to be merged by the end of the year + + +## IRC, freenode, #hurd, 2013-10-11 + + <gg0> braunr: is your thread destruction patch ready for testing? + <braunr> gg0: there are packages at my repository, yes + <braunr> but i still have hurd fixes to do before i polish it + <braunr> in particular, posix says returning from main() stops the entire + process and all other threads + <braunr> i didn't check that during the switch to pthreads, and ext2fs (and + maybe others) actually return from main but expect other threads to live + on + <braunr> this creates problems when the main thread is actually destroyed, + but not the process + <teythoon> braunr: tmpfs does something like that, but calls pthread_exit + at the end of main + <braunr> same effect + <braunr> this was fine with cthreads, but must be changed with pthreads + <braunr> and libpthread must be fixed to enforce it + <braunr> (or libc) + + <braunr> diskfs_startup_diskfs should probably be changed to reuse the main + thread instead of returning + + +## IRC, freenode, #hurd, 2013-10-19 + + <zacts> I know what threads are, but what is 'thread destruction'? + <braunr> the hurd currently never destroys individual threads + <braunr> they're destroyed when tasks are destroyed + <braunr> if the number of threads in a task peaks at a high number, say + thousands of them, they'll remain until the task is terminated + <braunr> such tasks are usually file systems, normally never restarted (and + in the case of the root file system, not restartable) + <braunr> this results in a form of leak + <braunr> another effect of this leak is that servers which should go away + because of inactivity still remain + <braunr> since thread destruction doesn't actually work, the debian package + uses a patch to prevent worker threads from timeouting + <braunr> and to finish with, since thread destruction actually doesn't + work, normal (unpatched) applications that destroy threads are certainly + failing bad + <braunr> i just need to polish a few things, wait for youpi to finish his + work on TLS to resolve conflicts, and that will be all diff --git a/open_issues/lsof.mdwn b/open_issues/lsof.mdwn index 2cbf2302..2651932d 100644 --- a/open_issues/lsof.mdwn +++ b/open_issues/lsof.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -11,3 +11,41 @@ License|/fdl]]."]]"""]] We don't have a `lsof` tool. 
Perhaps we could cook something with having a look at which ports are open at the moment (as [[`portinfo`|hurd/portinfo]] does, for example)? + + +# IRC, freenode, #hurd, 2013-10-16 + + <teythoon> braunr: there's something I've been working on, it's not yet + finished but usable + <teythoon> http://paste.debian.net/58266/ + <teythoon> it graphs port usage + <teythoon> it's a bit heavy on the dependency-side though... + <braunr> but + <braunr> is it able to link rights from different ipc spaces ? + <teythoon> no + <teythoon> what do you mean exactly? + <braunr> know that send right 123 in task 1 refers to receive right 321 in + task 2 + <braunr> basically, lsof + <braunr> i'm not sure it's possible right now, and that's what we'd really + need + <teythoon> does the kernel hand out this information? + <braunr> ^ + <teythoon> right, I'm not sure it's possible either + <braunr> but a graph maker in less than 300 is cute :) + <braunr> 300 lines* + <teythoon> well, it leverages pymatplotlib or something, it needs half of + the pythonverse ;) + <braunr> lsof and pmap and two tools we really lack on the hurd + <teythoon> what does portinfo --translate=PID do? + <braunr> i guess it asks proc so that ports that refer to task actually + give useful info + <braunr> hml + <braunr> no + <braunr> doesn't make sense to give a pid in this case + <braunr> teythoon: looks like it does what we talked about + <teythoon> :) + <braunr> teythoon: the output looks a bit weird anyway, i think we need to + look at the code to be sure + <teythoon> braunr: this is what aptitude update looks like: + https://teythoon.cryptobitch.de/portmonitor/aptitude_portmonitor.svg diff --git a/open_issues/mach-defpager_swap.mdwn b/open_issues/mach-defpager_swap.mdwn index 7d3b001c..6e4dc088 100644 --- a/open_issues/mach-defpager_swap.mdwn +++ b/open_issues/mach-defpager_swap.mdwn @@ -18,3 +18,24 @@ License|/fdl]]."]]"""]] <lifeng> I allocated a 5GB partition as swap, but hurd only found 1GB <youpi> use 2GiB swaps only, >2Gib are not supported <youpi> (and apparently it just truncates the size, to be investigated) + +## IRC, freenode, #hurd, 2013-10-25 + + <C-Keen> mkswap truncated the swap partiton to 2GB + <teythoon> :/ + <teythoon> have you checked with 'free' ? + <teythoon> I have a 4gb swap partition on one of my boxes + <C-Keen> how did you create it? + <C-Keen> 2gig swap alright + <C-Keen> according to free + + +# Swap Files + +## IRC, freenode, #hurd, 2013-10-25 + + <braunr> C-Keen: swapfiles are not to work very badly on the hurd + <braunr> swapfiles cause recursion and reservation problems on every system + but on the hurd, we just never took the time to fix the swap code + +Same issues as we generally would have with `hurd-defpager`? diff --git a/open_issues/multiprocessing.mdwn b/open_issues/multiprocessing.mdwn index 0ac7f195..eaaa2289 100644 --- a/open_issues/multiprocessing.mdwn +++ b/open_issues/multiprocessing.mdwn @@ -17,7 +17,7 @@ for applying multiprocessing. That is, however, only true from a first and inexperienced point of view: there are many difficulties. 
-IRC, freenode, #hurd, August / September 2010 +# IRC, freenode, #hurd, August / September 2010 <marcusb> silver_hook: because multi-server systems depend on inter-process communication, and inter-process communication is many times more @@ -32,7 +32,7 @@ IRC, freenode, #hurd, August / September 2010 serious research challenges -IRC, freenode, #hurd, 2011-07-26 +# IRC, freenode, #hurd, 2011-07-26 < braunr> 12:03 < CTKArcher> and does the hurd take more advantages in a multicore architecture than linux ? @@ -57,7 +57,7 @@ IRC, freenode, #hurd, 2011-07-26 < braunr> (here, thread migration means being dispatched on another cpu) -debian-hurd list +# debian-hurd list On Thu, Jan 02, 2003 at 05:40:00PM -0800, Thomas Bushnell, BSG wrote: > Georg Lehner writes: diff --git a/open_issues/performance.mdwn b/open_issues/performance.mdwn index ae05e128..772fd865 100644 --- a/open_issues/performance.mdwn +++ b/open_issues/performance.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2010, 2011, 2012 Free Software Foundation, +[[!meta copyright="Copyright © 2010, 2011, 2012, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable @@ -44,6 +44,8 @@ call|/glibc/fork]]'s case. * [[metadata_caching]] + * [[community/gsoc/project_ideas/object_lookups]] + --- diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn index cd39328f..05a58f2e 100644 --- a/open_issues/performance/io_system/read-ahead.mdwn +++ b/open_issues/performance/io_system/read-ahead.mdwn @@ -3031,3 +3031,13 @@ License|/fdl]]."]]"""]] <mcsim> so, add? <braunr> if that's what you want to do, ok <braunr> i'll think about your initial question tomorrow + + +## IRC, freenode, #hurd, 2013-09-30 + + <antrik> talking about which... did the clustered I/O work ever get + concluded? + <braunr> antrik: yes, mcsim was able to finish clustered pageins, and it's + still on my TODO list + <braunr> it will get merged eventually, now that the large store patch has + also been applied diff --git a/open_issues/performance/microkernel_multi-server.mdwn b/open_issues/performance/microkernel_multi-server.mdwn index 111d2b88..0382c835 100644 --- a/open_issues/performance/microkernel_multi-server.mdwn +++ b/open_issues/performance/microkernel_multi-server.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2011, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -12,7 +12,8 @@ License|/fdl]]."]]"""]] Performance issues due to the microkernel/multi-server system architecture? -IRC, freenode, #hurd, 2011-07-26 + +# IRC, freenode, #hurd, 2011-07-26 < CTKArcher> I read that, because of its microkernel+servers design, the hurd was slower than a monolithic kernel, is that confirmed ? @@ -45,3 +46,181 @@ IRC, freenode, #hurd, 2011-07-26 < braunr> but in 95, processors weren't that fast compared to other components as they are now < youpi> while disk/mem haven't evovled so fast + + +# IRC, freenode, #hurd, 2013-09-30 + + <snadge> ok.. 
i noticed when installing debian packages in X, the mouse + lagged a little bit + <snadge> that takes me back to classic linux days + <snadge> it could be a side effect of running under virtualisation who + knows + <braunr> no + <braunr> it's because of the difference of priorities between server and + client tasks + <snadge> is it simple enough to increase the priority of the X server? + <snadge> it does remind me of the early linux days.. people were more + interested in making things work, and making things not crash.. than + improving the desktop interactivity or responsiveness + <snadge> very low priority :P + <braunr> snadge: actually it's not the difference in priority, it's the + fact that some asynchronous processing is done at server side + <braunr> the priority difference just gives more time overall to servers + for that processing + <braunr> snadge: when i talk about servers, i mean system (hurd) servers, + no x + <snadge> yeah.. linux is the same.. in the sense that, that was its + priority and focus + <braunr> snadge: ? + <snadge> servers + <braunr> what are you talking about ? + <snadge> going back 10 years or so.. linux had very poor desktop + performance + <braunr> i'm not talking about priorities for developers + <snadge> it has obviously improved significantly + <braunr> i'm talking about things like nice values + <snadge> right.. and some of the modifications that have been done to + improve interactivity of an X desktop, are not relevant to servers + <braunr> not relevant at all since it's a hurd problem, not an x problem + <snadge> yeah.. that was more of a linux problem too, some time ago was the + only real point i was making.. a redundant one :p + <snadge> where i was going with that.. was desktop interactivity is not a + focus for hurd at this time + <braunr> it's not "desktop interactivity" + <braunr> it's just correct scheduling + <snadge> is it "correct" though.. the scheduler in linux is configurable, + and selectable + <snadge> depending on the type of workload you expect to be doing + <braunr> not really + <snadge> it can be interactive, for desktop loads.. or more batched, for + server type loads.. is my basic understanding + <braunr> no + <braunr> that's the scheduling policy + <braunr> the scheduler is cfs currently + <braunr> and that's the main difference + <braunr> cfs means completely fair + <braunr> whereas back in 2.4 and before, it was a multilevel feedback + scheduler + <braunr> i.e. a scheduler with a lot of heuristics + <braunr> the gnumach scheduler is similar, since it was the standard + practice from unix v6 at the time + <braunr> (gnumach code base comes from bsd) + <braunr> so 1/ we would need a completely fair scheduler too + <braunr> and 2/ we need to remove asynchronous processing by using mostly + synchronous rpc + <snadge> im just trying to appreciate the difference between async and sync + event processing + <braunr> on unix, the only thing asynchronous is signals + <braunr> on the hurd, simply cancelling select() can cause many + asynchronous notifications at the server to remove now unneeded resources + <braunr> when i say cancelling select, i mean one or more fds now have + pending events, and the others must be cleaned + <snadge> yep.. thats a pretty fundamental change though isnt it? .. if im + following you, you're talking about every X event.. so mouse move, + keyboard press etc etc etc + <snadge> instead of being handled async.. you're polling for them at some + sort of timing interval? + <snadge> never mind.. 
i just read about async and sync with regards to rpc, + and feel like a bit of a noob + <snadge> async provides a callback, sync waits for the result.. got it :p + <snadge> async is resource intensive on hurd for the above mentioned + reasons.. makes sense now + <snadge> how about optimising the situation where a select is cancelled, + and deferring the signal to the server to clean up resources until a + later time? + <snadge> so like java.. dont clean up, just make a mess + <snadge> then spend lots of time later trying to clean it up.. sounds like + my life ;) + <snadge> reuse stale objects instead of destroying and recreating them, and + all the problems associated with that + <snadge> but if you're going to all these lengths to avoid sending messages + between processes + <snadge> then you may as well just use linux? :P + <snadge> im still trying to wrap my head around how converting X to use + synchronous rpc calls will improve responsiveness + <pinotree> what has X to do with it? + <snadge> nothing wrong with X.. braunr just mentioned that hurd doesnt + really handle the async calls so well + <snadge> there is more overhead.. that it would be more efficient on hurd, + if it uses sync rpc instead + <snadge> and perhaps a different task scheduler would help also + <snadge> ala cfs + <snadge> but i dont think anyone is terribly motivated in turning hurd into + a desktop operating system just yet.. but i could be wrong ;) + <braunr> i didn't say that + <snadge> i misinterpreted what you said then .. im not surprised, im a + linux sysadmin by trade.. and have basic university OS understanding (ie + crap all) at a hobbyist level + <braunr> i said there is asynchronous processing (i.e. servers still have + work to do even when there is no client) + <braunr> that processing mostly comes from select requests cancelling what + they installed + <braunr> i.e. you select fd 1 2 3, event on 2, you cancel on 1 and 3 + <braunr> those cancellations aren't synchronous + <braunr> the client deletes ports, and the server asynchronously receives + dead name notifications + <braunr> since servers have a greater priority, these notifications are + processed before the client can continue + <braunr> which is what makes you feel lag + <braunr> X is actually a client here + <braunr> when i say server, i mean hurd servers + <braunr> the stuff implementing sockets and files + <braunr> also, you don't need to turn the hurd into a desktop os + <braunr> any correct way to do fair scheduling will do + <snadge> can the X client be made to have a higher priority than the hurd + servers? + <snadge> or perhaps something can be added to hurd to interface with X + <azeem_> well, the future is wayland + <snadge> ufs .. unfair scheduling.. give priority to X over everything else + <snadge> hurd almost seems ideal for that idea.. since the majority of the + system is separated from the kernel + <snadge> im likely very wrong though :p + <braunr> snadge: the reason we elevated the priority of servers is to avoid + delaying the processing of notifications + <braunr> because each notification can spawn a server thread + <braunr> and this led to cases where processing notifications was so slow + that spawning threads would occur more frequently, leading to the server + exhausting its address space because of thread stacks + <snadge> cant it wait for X though? ..
or does it lead to that situation + you just described + <braunr> we should never need such special cases + <braunr> we should remove async notifications + <snadge> my logic is this.. if you're not running X then it doesnt + matter.. if you are, then it might.. its sort of up to you whether you + want priority over your desktop interface or whether it can wait for more + important things, which creates perceptible lag + <braunr> snadge: no it doesn't + <braunr> X is clearly not the only process involved + <braunr> the whole chain should act synchronously + <braunr> from the client through the server through the drivers, including + the file system and sockets, and everything that is required + <braunr> it's a general problem, not specific to X + <snadge> right.. from googling around, it looks like people get very + excited about asynchronous + <snadge> there was a move to that for some reason.. it sounds great in + theory + <snadge> continue processing something else whilst you wait for a + potentially time consuming process.. and continue processing that when + you get the result + <snadge> its also the only way to improve performance with parallelism? + <snadge> which is of no concern to hurd at this time + <braunr> snadge: please don't make such statements when you don't know what + you're talking about + <braunr> it is a concern + <braunr> and yes, async processing is a way to improve performance + <braunr> but don't mistake async rpc and async processing + <braunr> async rpc simply means you can send and receive at any time + <braunr> sync means you need to recv right after send, blocking until a + reply arrives + <braunr> the key word here is *blocking* + <snadge> okay sure.. that makes sense + <snadge> what is the disadvantage to doing it that way? + <snadge> you potentially have more processes that are blocking? + <braunr> a system implementing posix such as the hurd needs signals + <braunr> and some event handling facility like select + <braunr> implementing them synchronously means a thread ready to service + these events + <braunr> the hurd currently has such a message thread + <braunr> but it's complicated and also a scalability concern + <braunr> e.g. you have at least two threads per process + <braunr> bbl diff --git a/open_issues/pthread_atfork.mdwn b/open_issues/pthread_atfork.mdwn index 1b656f05..06b9d6c6 100644 --- a/open_issues/pthread_atfork.mdwn +++ b/open_issues/pthread_atfork.mdwn @@ -18,3 +18,89 @@ can probably be borrowed from `nptl/sysdeps/unix/sysv/linux/register-atfork.c`. <pinotree> SRCDIR/opal/mca/memory/linux/arena.c:387: warning: warning: pthread_atfork is not implemented and will always fail + + +# Samuel's implementation + +TODO. + + +## IRC, OFTC, #debian-hurd, 2013-10-08 + + <pinotree> youpi: if you need/want to test your pthread_atfork + implementation, you can check libposix-atfork-perl and its test suite + (whose test 004 hangs now, with eglibc -93) + <youpi> while it failed previously indeed + <youpi> we might simply need to rebuild perl against it + <youpi> (I see ifdef pthread_atfork in perl) + + +## IRC, freenode, #hurd, 2013-10-16 + + <teythoon> tschwinge: I'd love to try your cross-gnu tool, the wiki page + suggests that the list of required source packages is outdated. can you + give me some hints?
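As background for the pthread_atfork section above: the interface being implemented registers three handlers that glibc runs around `fork`, which is roughly what the libposix-atfork-perl tests exercise. The sketch below is an editorial illustration, not part of the logs; the handler bodies are invented for the example. With the old stub, which "is not implemented and will always fail", the registration call itself returns an error.

    /* Sketch of pthread_atfork usage: prepare runs in the parent just before
       fork, parent and child run just after fork in the parent and the child
       respectively.  The classic use is keeping a mutex consistent across
       fork in a multi-threaded program.  */
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void prepare (void) { pthread_mutex_lock (&lock); }
    static void parent (void)  { pthread_mutex_unlock (&lock); }
    static void child (void)   { pthread_mutex_unlock (&lock); }

    int
    main (void)
    {
      int err = pthread_atfork (prepare, parent, child);
      if (err != 0)
        {
          /* This is where an unimplemented stub shows up.  */
          fprintf (stderr, "pthread_atfork: %s\n", strerror (err));
          return 1;
        }

      pid_t pid = fork ();
      if (pid == 0)
        _exit (0);
      waitpid (pid, NULL, 0);
      puts ("fork with atfork handlers done");
      return 0;
    }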
+ <teythoon> tschwinge: I got this error running cross-gnu: + http://paste.debian.net/58303/ + make[4]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc/setjmp' + make subdir=string -C ../string ..=../ objdir=/home/teythoon/repos/hurd/cross/obj/glibc -f Makefile -f ../elf/rtld-Rules rtld-all rtld-modules='rtld-strchr.os rtld-strcmp.os rtld-strcpy.os rtld-strlen.os rtld-strnlen.os rtld-memchr.os rtld-memcmp.os rtld-memmove.os rtld-memset.os rtld-mempcpy.os rtld-stpcpy.os rtld-memcpy.os rtld-rawmemchr.os rtld-argz-count.os rtld-argz-extract.os rtld-stpncpy.os' + make[4]: Entering directory `/home/teythoon/repos/hurd/cross/src/glibc/string' + make[4]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc/string' + make[4]: Entering directory `/home/teythoon/repos/hurd/cross/src/glibc/string' + make[4]: Nothing to be done for `rtld-all'. + make[4]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc/string' + make[3]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc/elf' + i686-pc-gnu-gcc -shared -static-libgcc -Wl,-O1 -Wl,-z,defs -Wl,-dynamic-linker=/lib/ld.so.1 -B/home/teythoon/repos/hurd/cross/obj/glibc/csu/ -Wl,--version-script=/home/teythoon/repos/hurd/cross/obj/glibc/libc.map -Wl,-soname=libc.so.0.3 -Wl,-z,combreloc -Wl,-z,relro -Wl,--hash-style=both -nostdlib -nostartfiles -e __libc_main -L/home/teythoon/repos/hurd/cross/obj/glibc -L/home/teythoon/repos/hurd/cross/obj/glibc/math -L/home/teythoon/repos/hurd/cross/obj/glibc/elf -L/home/teythoon/repos/hurd/cross/obj/glibc/dlfcn -L/home/teythoon/repos/hurd/cross/obj/glibc/nss -L/home/teythoon/repos/hurd/cross/obj/glibc/nis -L/home/teythoon/repos/hurd/cross/obj/glibc/rt -L/home/teythoon/repos/hurd/cross/obj/glibc/resolv -L/home/teythoon/repos/hurd/cross/obj/glibc/crypt -L/home/teythoon/repos/hurd/cross/obj/glibc/mach -L/home/teythoon/repos/hurd/cross/obj/glibc/hurd -Wl,-rpath-link=/home/teythoon/repos/hurd/cross/obj/glibc:/home/teythoon/repos/hurd/cross/obj/glibc/math:/home/teythoon/repos/hurd/cross/obj/glibc/elf:/home/teythoon/repos/hurd/cross/obj/glibc/dlfcn:/home/teythoon/repos/hurd/cross/obj/glibc/nss:/home/teythoon/repos/hurd/cross/obj/glibc/nis:/home/teythoon/repos/hurd/cross/obj/glibc/rt:/home/teythoon/repos/hurd/cross/obj/glibc/resolv:/home/teythoon/repos/hurd/cross/obj/glibc/crypt:/home/teythoon/repos/hurd/cross/obj/glibc/mach:/home/teythoon/repos/hurd/cross/obj/glibc/hurd -o /home/teythoon/repos/hurd/cross/obj/glibc/libc.so -T /home/teythoon/repos/hurd/cross/obj/glibc/shlib.lds /home/teythoon/repos/hurd/cross/obj/glibc/csu/abi-note.o /home/teythoon/repos/hurd/cross/obj/glibc/elf/soinit.os /home/teythoon/repos/hurd/cross/obj/glibc/libc_pic.os /home/teythoon/repos/hurd/cross/obj/glibc/elf/sofini.os /home/teythoon/repos/hurd/cross/obj/glibc/elf/interp.os /home/teythoon/repos/hurd/cross/obj/glibc/elf/ld.so /home/teythoon/repos/hurd/cross/obj/glibc/mach/libmachuser-link.so /home/teythoon/repos/hurd/cross/obj/glibc/hurd/libhurduser-link.so -lgcc + /home/teythoon/repos/hurd/cross/obj/glibc/libc_pic.os: In function `__fork': + /home/teythoon/repos/hurd/cross/src/glibc/posix/../sysdeps/mach/hurd/fork.c:70: undefined reference to `__start__hurd_atfork_prepare_hook' + /home/teythoon/repos/hurd/cross/lib/gcc/i686-pc-gnu/4.8.1/../../../../i686-pc-gnu/bin/ld: /home/teythoon/repos/hurd/cross/obj/glibc/libc_pic.os: relocation R_386_GOTOFF against undefined hidden symbol `__start__hurd_atfork_prepare_hook' can not be used when making a shared object + 
/home/teythoon/repos/hurd/cross/lib/gcc/i686-pc-gnu/4.8.1/../../../../i686-pc-gnu/bin/ld: final link failed: Bad value + collect2: error: ld returned 1 exit status + make[2]: *** [/home/teythoon/repos/hurd/cross/obj/glibc/libc.so] Error 1 + make[2]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc/elf' + make[1]: *** [elf/subdir_lib] Error 2 + make[1]: Leaving directory `/home/teythoon/repos/hurd/cross/src/glibc' + make: *** [all] Error 2 + + rm -f /home/teythoon/repos/hurd/cross/sys_root/lib/ld.so + + exit 100 + + binutils-2.23.2, + gcc-4.8.1, + everything else is from git as specified in the wiki. + + +## IRC, freenode, #hurd, 2013-10-24 + + <AliciaC> in recent glibc commits (tschwinge/Roger_Whittaker branch) there + are references to _hurd_atfork_* symbols in sysdeps/mach/hurd/fork.c, and + some _hurd_fork_* symbols, some of the _hurd_fork_* symbols seem to be + defined in Hurd's boot/frankemul.ld (mostly guessing by their names being + mentioned, I don't know linker script syntax), but those _hurd_atfork_* + symbols don't seem to be defined there, are they supposed to be defined + elsewhere or is th + <AliciaC> does anyone know where the _hurd_atfork_* group of symbols + referenced in glibc are defined (if anywhere)? + <youpi> AliciaC: it's the DEFINE_HOOK (_hurd_atfork_prepare_hook, (void)); + in glibc/sysdeps/mach/hurd/fork.c + <AliciaC> hm, is that not just a declaration? + <youpi> no, it's a definition, as its name suggests : + <AliciaC> (despite the macro name) + <youpi> :) + <AliciaC> ok + <AliciaC> I should look into it more, I could have sworn I was getting + undefined references, but maybe the symbol names used are different from + those defined, but that'd be odd as well, in the same file and all + <AliciaC> I mean, I do get undefined references, but question is if it's to + things that should have been defined or not + <youpi> what undefined references do you gaT? + <youpi> s/gaT/get + <AliciaC> I'll get back to you once I have that system up again + <AliciaC> youpi: sysdeps/mach/hurd/fork.c:70: undefined reference to + `__start__hurd_atfork_prepare_hook' + <AliciaC> fork.c:70: 'RUN_HOOK (_hurd_atfork_prepare_hook, ());' + <AliciaC> DEFINE_HOOK (_hurd_atfork_prepare_hook, (void)); is higher up in + the file + <AliciaC> though there is also this message: build/libc_pic.os: relocation + R_386_GOTOFF against undefined hidden symbol + `__start__hurd_atfork_prepare_hook' can not be used when making a shared + object diff --git a/open_issues/smp.mdwn b/open_issues/smp.mdwn index a45a1e22..89474d25 100644 --- a/open_issues/smp.mdwn +++ b/open_issues/smp.mdwn @@ -37,3 +37,11 @@ See also the [[FAQ entry|faq/smp]]. ## Richard, 2013-03-20 This task actually looks too big for a GSoC project. 
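Some context for the `__start__hurd_atfork_prepare_hook` symbol that both the cross-gnu link failure and the 2013-10-24 log above revolve around: glibc's `DEFINE_HOOK` places hook elements in an ELF section named after the hook, and `RUN_HOOK` walks that section between `__start_`/`__stop_` bounds that GNU ld creates automatically for any section whose name is a valid C identifier. The stand-alone sketch below shows the underlying linker feature only; it is a simplification for illustration, not the actual glibc macro bodies.

    /* Symbol-set sketch: collect function pointers in a named section and
       iterate over them using the linker-provided section bounds.  */
    #include <stdio.h>

    typedef void hook_fn (void);

    /* One element of the "my_hook" set, placed into its own section.  */
    static void say_hello (void) { puts ("hello from a hook"); }
    static hook_fn *const my_hook_element
      __attribute__ ((used, section ("my_hook"))) = &say_hello;

    /* Defined by GNU ld, not by any C file: this is where an "undefined
       reference to `__start_...'" comes from if the section turns out to be
       absent at link time.  */
    extern hook_fn *const __start_my_hook[], __stop_my_hook[];

    int
    main (void)
    {
      for (hook_fn *const *h = __start_my_hook; h < __stop_my_hook; h++)
        (*h) ();
      return 0;
    }

In the failed glibc link above, `fork.c` references the bounds via `RUN_HOOK`, which suggests checking whether anything was actually emitted into the `_hurd_atfork_prepare_hook` section in that particular build.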
+ + +## IRC, freenode, #hurd, 2013-09-30 + + <braunr> also, while the problem with hurd is about I/O, it's actually a + lot more about caching, and even with more data cached in, the true + problem is contention, in which case having several processors would + actually slow things down even more diff --git a/open_issues/strict_aliasing.mdwn b/open_issues/strict_aliasing.mdwn index b7d39805..0e59f796 100644 --- a/open_issues/strict_aliasing.mdwn +++ b/open_issues/strict_aliasing.mdwn @@ -1,4 +1,4 @@ -[[!meta copyright="Copyright © 2012 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2012, 2013 Free Software Foundation, Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -29,3 +29,16 @@ License|/fdl]]."]]"""]] issues (if gcc catches them all) <tschwinge> The strict aliasing things should be fixed, yes. Some might be from MIG. + + +# IRC, freenode, #hurd, 2013-10-17 + + <braunr> we should build gnumach and the hurd with -fno-strict-aliasing + <pinotree> aren't the mig-generated stubs the only issues related to that? + <braunr> no + <teythoon> b/c we often have pointers of different type pointing to the + same address? for example code using libports? + <braunr> the old linux code, including pfinet, and even the hurd libraries, + use techniques that assume aliasing + <braunr> exactly + <teythoon> right, I agree diff --git a/open_issues/thread-cancel_c_55_hurd_thread_cancel_assertion___spin_lock_locked_ss_critical_section_lock.mdwn b/open_issues/thread-cancel_c_55_hurd_thread_cancel_assertion___spin_lock_locked_ss_critical_section_lock.mdwn index 7159551d..f40e0455 100644 --- a/open_issues/thread-cancel_c_55_hurd_thread_cancel_assertion___spin_lock_locked_ss_critical_section_lock.mdwn +++ b/open_issues/thread-cancel_c_55_hurd_thread_cancel_assertion___spin_lock_locked_ss_critical_section_lock.mdwn @@ -50,3 +50,5 @@ IRC, unknown channel, unknown date: result in others trying to take it... <youpi> nope: look at the code :) <youpi> or maybe the cancel_hook, but I really doubt it + +See discussion about *`critical_section_lock`* on [[glibc]]. diff --git a/open_issues/time.mdwn b/open_issues/time.mdwn index 367db872..d9f1fa1d 100644 --- a/open_issues/time.mdwn +++ b/open_issues/time.mdwn @@ -837,3 +837,17 @@ not get a define for `HZ`, which is then defined with a fallback value of 60. <nalaginrut> braunr: Guile2 works smoothly now, let me try something cool with it <braunr> nalaginrut: nice + + +### IRC, OFTC, #debian-hurd, 2013-09-29 + + <pinotree> youpi: is the latest glibc carrying the changes related to + timing? what about gb guile-2.0 with it? + <youpi> it does + <youpi> so that was the only issue with guile? + <youpi> well at least we'll see + <pinotree> iirc yes + <pinotree> according to nalaginrut and the latest build log, it'd seem so + <youpi> started + <youpi> yay, guile-2.0 :) + <pinotree> yay diff --git a/open_issues/wine.mdwn b/open_issues/wine.mdwn index 65e6c584..f8bb469b 100644 --- a/open_issues/wine.mdwn +++ b/open_issues/wine.mdwn @@ -1,4 +1,5 @@ -[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]] +[[!meta copyright="Copyright © 2010, 2011, 2013 Free Software Foundation, +Inc."]] [[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this @@ -21,7 +22,7 @@ requirements Wine has: only libc / POSIX / etc., or if there are allocation. 
There is kernel support for this,* however. -IRC, freenode, #hurd, 2011-08-11 +# IRC, freenode, #hurd, 2011-08-11 < arethusa> I've been trying to make Wine work inside a Debian GNU/Hurd VM, and to that end, I've successfully compiled the latest sources from Git @@ -67,3 +68,13 @@ IRC, freenode, #hurd, 2011-08-11 < youpi> yes < pinotree> (but that patch is lame) + + +# IRC, freenode, #hurd, 2013-10-02 + + <gnu_srs> youpi: I've come a little further with wine, see debian bug + #724681 (same problem). + <gnu_srs> Now the problem is probably due to the specific address space + and stack issues to be + <gnu_srs> fixed for wine to run as braunr pointed out some months ago + (IRC?) when we discussed wine. |
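On the address-space point raised here: assuming the problem is, as the log suggests, about Wine needing mappings (and its stack) at particular addresses, a throwaway probe along the following lines can show whether fixed mappings at given addresses succeed on a given system. This is a hypothetical editorial example; the addresses are purely illustrative, not Wine's actual reservations, and since `MAP_FIXED` replaces whatever is already mapped there, it should only be run as a disposable test program.

    /* Hypothetical probe: try to place anonymous mappings at fixed addresses
       and report whether the kernel accepts them.  */
    #include <stdio.h>
    #include <sys/mman.h>

    static void
    probe (void *addr, size_t len)
    {
      void *p = mmap (addr, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
      if (p == MAP_FAILED)
        {
          printf ("%p: ", addr);
          fflush (stdout);
          perror ("mmap");
        }
      else
        {
          printf ("%p: mapped %zu bytes\n", p, len);
          munmap (p, len);
        }
    }

    int
    main (void)
    {
      probe ((void *) 0x00010000, 0x10000);  /* a low range, above page zero */
      probe ((void *) 0x70000000, 0x10000);  /* a high range in a 32-bit space */
      return 0;
    }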