-rw-r--r--  hurd/subhurd.mdwn              48
-rw-r--r--  open_issues/perlmagick.mdwn    45
-rw-r--r--  open_issues/unit_testing.mdwn  83
3 files changed, 174 insertions, 2 deletions
diff --git a/hurd/subhurd.mdwn b/hurd/subhurd.mdwn
index cb4a40a8..b8e595d3 100644
--- a/hurd/subhurd.mdwn
+++ b/hurd/subhurd.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2007, 2008, 2010 Free Software Foundation,
+[[!meta copyright="Copyright © 2007, 2008, 2010, 2011 Free Software Foundation,
Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
@@ -21,6 +21,8 @@ possible to use [[debugging/gdb/noninvasive_debugging]], but this is less
flexible.) Vice versa, it is also possible to use a subhurd to debug the
*main* Hurd system, for example, the latter's root file system.
+[[!toc levels=2]]
+
# Howto
@@ -139,3 +141,47 @@ translator|translator/ext2fs]]'s [[PID]], but this is no problem, as currently
[[PID]]s are visible across subhurd boundaries. (It is a [[!taglink
open_issue_hurd]] whether this is the right thing to do in
[[open_issues/virtualization]] contexts, but that's how it currently is.)
+
+
+<a name="unit_testing"></a>
+## Unit Testing
+
+From [[open_issues/unit_testing]].
+
+freenode, #hurd channel, 2011-03-06:
+
+ <youpi> it could be interesting to have scripts that automatically start a
+ sub-hurd to do the tests
+ <youpi> though that'd catch sub-hurd issues :)
    <foocraft> so a sub-hurd is a hurd that I can run on something that I know
      works (like Linux)?
+ <foocraft> Virtual machine I would think
+ <foocraft> and over a network connection it would submit results back to
+ the host :p
+ * foocraft brain damage
+ <youpi> sub-hurd is a bit like chroot
+ <youpi> except that it's more complete
+ <foocraft> oh okay
+ <youpi> i.e. almost everything gets replaced with what you want, except the
+ micro-kernel
+ <youpi> that way you can even test the exec server for instance, without
+ risks of damaging the host OS
+ <foocraft> and we know the micro-kernel works correctly, right youpi?
+ <youpi> well, at least it's small enough that most bugs are not there
+ <foocraft> 1) all tests run in subhurd 2) output results in a place in the
+ subhurd 3) tester in the host checks the result and pretty-prints it 4)
+ rinse & repeat
+ <youpi> the output can actually be redirected iirc
+ <youpi> since you give the sub-hurd a "console"
+ <foocraft> youpi, yup yeah, so now it's more like chroot if that's the case
+ <youpi> it really looks like chroot, yes
+ <foocraft> but again, there's this subset of tests that we need to have
+ that ensures that even the tester running on the subhurd is valid, and it
+ didn't break because of a bug in the subhurd
+ <tschwinge> As long as you do in-system testing, you'll always (have to)
+ rely on some functionality provided by the host system.
+ <foocraft> the worst thing that could happen with unit testing is false
+ results that lead someone to try to fix something that isn't broken :p
+ <tschwinge> Yes.
+ <youpi> usually one tries to repeat the test by hand in a normal
+ environment
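+
+The following is only an illustrative sketch of steps 1) to 4) above, not an
+existing tool: a tiny C test runner meant to be started inside the subhurd,
+printing TAP-style results on its console (which, as noted, can be
+redirected) for the host side to collect and pretty-print.  The two checks
+are mere placeholders.
+
    /* Illustrative only: a minimal test runner to be run inside a subhurd.
       It prints TAP-style "ok"/"not ok" lines on stdout, i.e. on the
       subhurd's console, which the host can redirect and evaluate.  */
    #include <stdio.h>
    #include <sys/stat.h>

    static int failures;

    static void
    check (const char *name, int ok)
    {
      printf ("%s %s\n", ok ? "ok" : "not ok", name);
      if (!ok)
        failures++;
    }

    int
    main (void)
    {
      struct stat st;
      int ok = stat ("/", &st) == 0;

      /* Placeholder checks; real tests would exercise the Hurd servers.  */
      check ("stat /", ok);
      check ("root is a directory", ok && S_ISDIR (st.st_mode));

      printf ("# %d failure(s)\n", failures);
      return failures ? 1 : 0;
    }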
diff --git a/open_issues/perlmagick.mdwn b/open_issues/perlmagick.mdwn
index 1daac62b..8a57a8fd 100644
--- a/open_issues/perlmagick.mdwn
+++ b/open_issues/perlmagick.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -57,6 +57,49 @@ Etc.
+/usr/lib/gcc/i486-gnu/4.4.2/include/omp.h:
+# State as of 2011-03-06
+
+freenode, #hurd channel, 2011-03-06:
+
+ <pinotree> tschwinge: (speaking on working perl, how did it en with that
+ "(glibc) double free" crash with perl?)
+ <pinotree> *end
+ <tschwinge> I think I remember I suspected it's a libgomp (!) issue in the
+ end. I have not yet continued working on that.
+ <pinotree> libogmp? looks like you know more than me, then :)
+ <youpi> tschwinge: oh, I'm interested
+ <youpi> I know a bit about libgomp :)
+ <tschwinge> I bisected this down to where Imagemagick added -fgomp (or
+ whatever it is). And then the perl library (Imagemagick.pm?) which loads
+ the imagemagick.so segfaulted.
+ <tschwinge> ImageMagick did this change in the middle of a x.x.x.something
+ release..
+ <tschwinge> My next step would have been to test whether libgomp works at
+ all for us.
+ <youpi> ./usr/sbin/debootstrap:DEBOOTSTRAP_CHECKSUM_FIELD="SHA$SHA_SIZE"
+ <youpi> erf
+ <youpi> so they switched to another checksum
+ <youpi> but we don't have that one on all of our packages :)
+ <youpi> tschwinge:
+ <youpi> buildd@bach:~$ OMP_NUM_THREADS=2 ./test
+ <youpi> I'm 0x1
+ <youpi> I'm 0x3
+ <youpi> libgomp works at least a bit
+ <tschwinge> OK.
+ <pinotree> i guess we should hope the working bits don't stop at that point
+ ;)
+ <tschwinge> If open_issues/perlmagick is to be believed a diff of 6.4.1-1
+ and 6.4.1-2 should tell what exactly was changed.
+ <tschwinge> Oh!
+ <tschwinge> I even have it on the page already! ;-)
+ <tschwinge> -fopenmp
+ <youpi> I've tried the pragmas that imagemagick uses
+ <youpi> they work
+ <tschwinge> Might be the issue fixed itself?
+ <youpi> I don't know, it's the latest libc here
+ <youpi> (and latest hurd, to be uploaded)
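+
+The "test" program youpi ran is not shown in the log; an assumed minimal
+OpenMP smoke test along the following lines, built with gcc -fopenmp,
+exercises the same libgomp code path (a parallel region) that ImageMagick's
+-fopenmp switch pulls in:
+
    /* Assumed stand-in for the "test" program mentioned above: a minimal
       libgomp smoke test.  Build with "gcc -fopenmp omp-test.c -o test",
       run as "OMP_NUM_THREADS=2 ./test"; more than one line of output
       means the parallel region really spawned threads.  */
    #include <omp.h>
    #include <stdio.h>

    int
    main (void)
    {
    #pragma omp parallel
      printf ("I'm %d of %d\n", omp_get_thread_num (), omp_get_num_threads ());
      return 0;
    }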
+
+
# Other
[[!debbug 551017]]
diff --git a/open_issues/unit_testing.mdwn b/open_issues/unit_testing.mdwn
index 0ff7ecca..a5ffe19d 100644
--- a/open_issues/unit_testing.mdwn
+++ b/open_issues/unit_testing.mdwn
@@ -58,6 +58,8 @@ abandoned).
Developers*](http://lwn.net/Articles/412302/) by Steven Rostedt,
2010-10-28. [v2](http://lwn.net/Articles/414064/), 2010-11-08.
+ * <http://autotest.kernel.org/wiki/WhitePaper>
+
# Related
@@ -237,3 +239,84 @@ EPERM](http://lists.gnu.org/archive/html/bug-hurd/2011-03/msg00007.html)
<youpi> wasn't it already in the gsoc proposal?
<youpi> bummer
<antrik> nope
+
+freenode, #hurd channel, 2011-03-06:
+
+ <nixness> how's the hurd coding workflow, typically?
+
+*nixness* -> *foocraft*.
+
+ <foocraft> we're discussing how TDD can work with Hurd (or general kernel
+ development) on #osdev
+ <foocraft> so what I wanted to know, since I haven't dealt much with GNU
+ Hurd, is how do you guys go about coding, in this case
+ <tschwinge> Our current workflow scheme is... well... is...
+ <tschwinge> Someone wants to work on something, or spots a bug, then works
+ on it, submits a patch, and 0 to 10 years later it is applied.
+ <tschwinge> Roughly.
+ <foocraft> hmm so there's no indicator of whether things broke with that
+ patch
+ <foocraft> and how low do you think we can get with tests? A friend of mine
+ was telling me that with kernel dev, you really don't know whether, for
+ instance, the stack even exists, and a lot of things that I, as a
+ programmer, can assume while writing code break when it comes to writing
+ kernel code
+ <foocraft> Autotest seems promising
+
+See autotest link given above.
+
+ <foocraft> but in any case, coming up with the testing framework that
+ doesn't break when the OS itself breaks is hard, it seems
+ <foocraft> not sure if autotest isolates the mistakes in the os from
+ finding their way in the validity of the tests themselves
+ <youpi> it could be interesting to have scripts that automatically start a
+ sub-hurd to do the tests
+
+[[hurd/subhurd#unit_testing]].
+
+ <tschwinge> foocraft: To answer one of your earlier questions: you can do
+ really low-level testing. Like testing Mach's message passing. A
      million times.  The question is whether that makes sense.  And / or if
+ it makes sense to do that as part of a unit testing framework. Or rather
+ do such things manually once you suspect an error somewhere.
+ <tschwinge> The reason for the latter may be that Mach IPC is already
+ heavily tested during normal system operation.
+ <tschwinge> And yet, there still may be (there are, surely) bugs.
+ <tschwinge> But I guess that you have to stop at some (arbitrary?) level.
+ <foocraft> so we'd assume it works, and test from there
+ <tschwinge> Otherwise you'd be implementing the exact counter-part of what
+ you're testing.
+ <tschwinge> Which may be what you want, or may be not. Or it may just not
+ be feasible.
+ <foocraft> maybe the testing framework should have dependencies
+ <foocraft> which we can automate using make, and phony targets that run
+ tests
+ <foocraft> so everyone writes a test suite and says that it depends on A
+ and B working correctly
+ <foocraft> then it'd go try to run the tests for A etc.
    <tschwinge> Hmm, isn't that -- on a high level -- what you have by
+ different packages? For example, the perl testsuite depends (inherently)
+ on glibc working properly. A perl program's testsuite depends on perl
+ working properly.
+ <foocraft> yeah, but afaik, the ordering is done by hand
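+
+As an illustration of the kind of low-level exercising mentioned above
+(testing Mach's message passing "a million times"), a sketch like the
+following could be used.  It is not part of any existing test suite; it just
+loops a minimal send/receive pair over one port using the plain mach_msg
+interface:
+
    /* Sketch only: allocate a receive right, then repeatedly send a
       minimal message to it and receive it back.  */
    #include <mach.h>
    #include <stdio.h>
    #include <string.h>

    int
    main (void)
    {
      mach_port_t port;
      kern_return_t kr;

      kr = mach_port_allocate (mach_task_self (), MACH_PORT_RIGHT_RECEIVE,
                               &port);
      if (kr != KERN_SUCCESS)
        {
          fprintf (stderr, "mach_port_allocate: %d\n", kr);
          return 1;
        }

      /* Also get a send right, so we can send to our own receive right.  */
      kr = mach_port_insert_right (mach_task_self (), port, port,
                                   MACH_MSG_TYPE_MAKE_SEND);
      if (kr != KERN_SUCCESS)
        {
          fprintf (stderr, "mach_port_insert_right: %d\n", kr);
          return 1;
        }

      for (int i = 0; i < 1000000; i++)
        {
          struct { mach_msg_header_t head; } snd;
          struct { mach_msg_header_t head; char slack[64]; } rcv;

          memset (&snd, 0, sizeof snd);
          snd.head.msgh_bits = MACH_MSGH_BITS (MACH_MSG_TYPE_COPY_SEND, 0);
          snd.head.msgh_size = sizeof snd;
          snd.head.msgh_remote_port = port;
          snd.head.msgh_local_port = MACH_PORT_NULL;
          snd.head.msgh_id = 4711;

          kr = mach_msg (&snd.head, MACH_SEND_MSG, sizeof snd, 0,
                         MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE,
                         MACH_PORT_NULL);
          if (kr != MACH_MSG_SUCCESS)
            {
              fprintf (stderr, "send %d: %d\n", i, kr);
              return 1;
            }

          kr = mach_msg (&rcv.head, MACH_RCV_MSG, 0, sizeof rcv, port,
                         MACH_MSG_TIMEOUT_NONE, MACH_PORT_NULL);
          if (kr != MACH_MSG_SUCCESS)
            {
              fprintf (stderr, "receive %d: %d\n", i, kr);
              return 1;
            }
        }

      puts ("1000000 round trips done");
      return 0;
    }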
+
+freenode, #hurd channel, 2011-03-07:
+
+ <antrik> actually, I think for most tests it would be better not to use a
+ subhurd... that leads precisely to the problem that if something is
+ broken, you might have a hard time running the tests at all :-)
+ <antrik> foocraft: most of the Hurd code isn't really low-level. you can
+ use normal debugging and testing methods
+ <antrik> gnumach of course *does* have some low-level stuff, so if you add
+ unit tests to gnumach too, you might run into issues :-)
+ <antrik> tschwinge: I think testing IPC is a good thing. as I already said,
+ automated testing is *not* to discover existing but unknown bugs, but to
+ prevent new ones creeping in, and tracking progress on known bugs
+ <antrik> tschwinge: I think you are confusing unit testing and regression
+ testing. http://www.bddebian.com/~hurd-web/open_issues/unit_testing/
+ talks about unit testing, but a lot (most?) of it is actually about
+ regression tests...
+ <tschwinge> antrik: That may certainly be -- I'm not at all an expert in
      this, and just generally thought that some sort of automated testing is
+ needed, and thus started collecting ideas.
+ <tschwinge> antrik: You're of course invited to fix that.