-rw-r--r--  microkernel/mach/memory_object/discussion.mdwn            43
-rw-r--r--  open_issues/clock_gettime.mdwn                             30
-rw-r--r--  open_issues/default_pager.mdwn                             28
-rw-r--r--  open_issues/gnumach_memory_management.mdwn                 92
-rw-r--r--  open_issues/mach_migrating_threads.mdwn                    15
-rw-r--r--  open_issues/performance.mdwn                                8
-rw-r--r--  open_issues/performance/degradation.mdwn                   14
-rw-r--r--  open_issues/performance/ipc_virtual_copy.mdwn             358
-rw-r--r--  open_issues/time.mdwn                                      16
-rw-r--r--  open_issues/translators_set_up_by_untrusted_users.mdwn     43
10 files changed, 644 insertions, 3 deletions
diff --git a/microkernel/mach/memory_object/discussion.mdwn b/microkernel/mach/memory_object/discussion.mdwn
index a006429b..c874b255 100644
--- a/microkernel/mach/memory_object/discussion.mdwn
+++ b/microkernel/mach/memory_object/discussion.mdwn
@@ -10,7 +10,7 @@ License|/fdl]]."]]"""]]
[[!tag open_issue_documentation open_issue_gnumach]]
-IRC, freenode, #hurd, 2011-08-05
+IRC, freenode, #hurd, 2011-08-05:
< neal> braunr: For instance, memory objects are great as they allow you to
specify the mapping policy in user space.
@@ -22,3 +22,44 @@ IRC, freenode, #hurd, 2011-08-05
< neal> I'm not sure what you mean by page cache lru approximation
< braunr> the kernel eviction policy :)
< neal> that's an implementation detail
+
+IRC, freenode, #hurd, 2011-09-05:
+
+ <braunr> mach isn't a true modern microkernel, it handles a lot of
+ resources, such as high level virtual memory and cpu time
+ <braunr> for example, the page replacement mechanism can't be implemented
+ outside the kernel
+ <braunr> yet, it provides nothing to userspace servers to easily allocate
+ resources on behalf of clients
+ <braunr> so, when a thread calls an RPC, the cpu time used to run that RPC
+ is accounted on the server task
+ <braunr> the hurd uses lots of external memory managers
+
+[[external_pager_mechanism]].
+
+ <braunr> but they can't decide how to interact with the page cache
+ <braunr> the kernel handles the page cache, and initiates the requests to
+ the pagers
+ <cjuner> braunr, why can't they decide that?
+ <braunr> because it's implemented in the kernel
+ <braunr> and there is nothing provided by mach to do that some other way
+ <slpz_> braunr: you probably already know this, but the problem with client
+ requests being accounted on behalf the server, is fixed in Mach with
+ Migrating Threads
+
+[[open_issues/mach_migrating_threads]].
+
+ <braunr> slpz_: migrating threads only fix the issue for the resources
+ managed by mach, not the external servers
+ <braunr> slpz_: but it's a (imo necessary) step to completely solve the
+ issue
+ <braunr> in addition to being a great feature for performance (lighter
+ context switches, less state to track)
+ <braunr> it also helps priority inversion problems
+ <slpz_> braunr: I was referring just to cpu-time, but I agree with you an
+ interface change is needed for external pagers
+ <braunr> slpz_: servers in general, not necessarily pagers
+ <slpz_> as a way to mitigate the effect of Mach paging out to external
+ pagers, the folks at OSF implemented an "advisory pageout", so servers
+ are "warned" that they should start paging out, and can decide which
+ pages are going to be flushed by themselves
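
The accounting problem braunr describes (CPU time spent servicing an RPC being
billed to the server task) can be observed from user space. Below is a minimal
sketch, assuming a Mach-based system where `task_info` with `TASK_BASIC_INFO`
reports the calling task's own user and system time: the client hammers a
filesystem server with `stat` RPCs, yet its own task times barely grow, because
most of the work happens in the server's threads and is charged there.

    /* Sketch only: show that RPC service time lands in the server's
       statistics, not the client's.  Compile on GNU/Hurd (or another
       Mach-based system) where <mach.h> provides task_info().  */

    #include <mach.h>
    #include <mach/task_info.h>
    #include <stdio.h>
    #include <sys/stat.h>

    static void
    print_self_times (const char *label)
    {
      struct task_basic_info info;
      mach_msg_type_number_t count = TASK_BASIC_INFO_COUNT;

      if (task_info (mach_task_self (), TASK_BASIC_INFO,
                     (task_info_t) &info, &count) == KERN_SUCCESS)
        printf ("%s: user %d.%06d s, system %d.%06d s\n", label,
                info.user_time.seconds, info.user_time.microseconds,
                info.system_time.seconds, info.system_time.microseconds);
    }

    int
    main (void)
    {
      struct stat st;

      print_self_times ("before");
      for (int i = 0; i < 100000; i++)
        stat ("/", &st);        /* each call is an RPC to the root filesystem */
      print_self_times ("after");
      return 0;
    }

With migrating threads, as discussed above, the time spent in the server on the
client's behalf would become attributable to the client.
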
diff --git a/open_issues/clock_gettime.mdwn b/open_issues/clock_gettime.mdwn
index c06edc9b..5345ed6b 100644
--- a/open_issues/clock_gettime.mdwn
+++ b/open_issues/clock_gettime.mdwn
@@ -39,3 +39,33 @@ IRC, freenode, #hurd, 2011-08-26:
< youpi> yes, it should work
< braunr> sure
< youpi> and that's the way I was considering implementing it
+
+IRC, freenode, #hurd, 2011-09-06:
+
+ <pinotree> yeah, i had a draft of improved idea for also handling
+ nanoseconds
+ <tschwinge> pinotree: Ah, nice, I thought about nanoseconds as well.
+ <tschwinge> pinotree, youpi: This memory page is all-zero by default,
+ right?
+ <tschwinge> Can't we then say that its last int is a version code, and if
+ it is 0 (as it is now), we only have the normal mapped time field, if it
+ is 1, we also have the monotonic clock and ns precision on address 8 and
+ 16 (or whatever)?
+ <tschwinge> In case that isn't your plan anyway.
+ <youpi> it's all-zero, yes
+ <tschwinge> Or, we say if a field is != 0 it is valid.
+ <youpi> making the last int a version code limits the size to one page
+ <youpi> I was thinking a field != 0 being valid is simpler
+ <youpi> but it's probably a problem too
+ <youpi> in that glibc usually caches whether interfaces are supported
+ <tschwinge> Wrap-around?
+ <youpi> for some clocks, it may be valid that the value is 0
+ <youpi> wrap-around is another issue too
+ <tschwinge> Well, then we can do the version-field thing, but put it right
+ after the current time field (address 8, I think)?
+ <youpi> yes
+ <youpi> it's a bit ugly, but it's hidden behind the structure
+ <tschwinge> It's not too bad, I think.
+ <youpi> yes
+ <tschwinge> And it will forever be a witness of the evolution of this
+ map_time interface. :-)
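
To make the layout discussion easier to follow, here is one possible shape of
the extended mapped-time page. This is a sketch only, with invented field
names; the offsets follow the IRC discussion (a version code right after the
existing seconds/microseconds pair), not any committed interface.

    /* Hypothetical layout for an extended mapped-time page.  Old kernels
       leave the page zero-filled, so a version code of 0 means "legacy
       layout, seconds/microseconds only".  */

    #include <stdint.h>

    struct mapped_time_v1
    {
      int32_t seconds;          /* offset 0: existing time value */
      int32_t microseconds;     /* offset 4 */

      int32_t version;          /* offset 8: 0 = legacy, >= 1 = fields below valid */
      int32_t unused;

      /* Only meaningful when version >= 1 (illustrative).  */
      int64_t monotonic_ns;     /* CLOCK_MONOTONIC, in nanoseconds */
      int64_t realtime_ns;      /* CLOCK_REALTIME, in nanoseconds  */
    };

    /* A client gates its use of the new fields on the version code and
       falls back to microsecond resolution otherwise.  */
    static int
    page_has_nanoseconds (const volatile struct mapped_time_v1 *mt)
    {
      return mt->version >= 1;
    }

Whether a version code or a "non-zero field means valid" convention is chosen,
the reader-side check stays about this cheap; the version code just avoids the
ambiguity youpi points out for clocks whose value may legitimately be 0.
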
diff --git a/open_issues/default_pager.mdwn b/open_issues/default_pager.mdwn
new file mode 100644
index 00000000..189179c6
--- /dev/null
+++ b/open_issues/default_pager.mdwn
@@ -0,0 +1,28 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+IRC, freenode, #hurd, 2011-08-31:
+
+ <antrik> braunr: do you have any idea what could cause the paging errors
+ long before swap is exhausted?
+ <braunr> antrik: not really, but i know every project based on the mach vm
+ has rewritten its swap pager
+ <antrik> (and also I/O performance steadily dropping before that point is
+ reached?)
+ <antrik> hm
+ <braunr> there could be too many things
+ <antrik> perhaps we could "borrow" from one of them? :-)
+ <braunr> map entry fragmentation for example
+ <braunr> the freebsd one is the only possible candidate
+ <braunr> uvm is too different
+ <braunr> dragonflybsd maybe, but it's very close to freebsd
+ <braunr> i didn't look at darwin/xnu
diff --git a/open_issues/gnumach_memory_management.mdwn b/open_issues/gnumach_memory_management.mdwn
index a728fc9d..1fe2f9be 100644
--- a/open_issues/gnumach_memory_management.mdwn
+++ b/open_issues/gnumach_memory_management.mdwn
@@ -1320,3 +1320,95 @@ There is a [[!FF_project 266]][[!tag bounty]] on this task.
< braunr> i hope it helped you learn about memory allocation, virtual
memory, gnu mach and the hurd in general :)
< antrik> indeed :-)
+
+
+# IRC, freenode, #hurd, 2011-09-06
+
+ [some performance testing]
+ <braunr> i'm not sure such long tests are relevant but let's assume balloc
+ is slower
+ <braunr> some tuning is needed here
+ <braunr> first, we can see that slab allocation occurs more often in balloc
+ than page allocation does in zalloc
+ <braunr> so yes, as slab allocation is slower (have you measured which part
+ actually is slow ? i guess it's the kmem_alloc call)
+ <braunr> the whole process gets a bit slower too
+ <mcsim> I used alloc_size = 4096 for zalloc
+ <braunr> i don't know what that is exactly
+ <braunr> but you can't hold 500 16-byte buffers in a page so zalloc must
+ have had free pages around for that
+ <mcsim> I use kmem_alloc_wired
+ <braunr> if you have time, measure it, so that we know how much it accounts
+ for
+ <braunr> where are the results for dealloc ?
+ <mcsim> I can't give you results right now because my internet connection
+ is very bad. But for the first DEALLOC the results are the same, except
+ some cases where balloc takes more than 1000 ticks
+ <braunr> must be the transfer from the cpu layer to the slab layer
+ <mcsim> as to kmem_alloc_wired. I think zalloc uses this function too for
+ allocating objects in the zone I test.
+ <braunr> mcsim: yes, but less frequently, which is why it's faster
+ <braunr> mcsim: another very important aspect that should be measured is
+ memory consumption, have you looked into that ?
+ <mcsim> I think that I made too few iterations in the SMALL test
+ <mcsim> If I increase the SMALL_TESTS constant, will it be good enough?
+ <braunr> mcsim: i don't know, try both :)
+ <braunr> if you increase the number of iterations, balloc average time will
+ be lower than zalloc, but this doesn't remove the first long
+ initialization step on the allocated slab
+ <mcsim> SMALL_TESTS to 500, I mean
+ <braunr> i wonder if maintaining the slabs sorted through insertion sort is
+ what makes it slow
+ <mcsim> braunr: where do you sort slabs? I don't see this.
+ <braunr> mcsim: mem_cache_alloc_from_slab and its free counterpart
+ <braunr> mcsim: the mem_source stuff is useless in gnumach, you can remove
+ it and directly call the kmem_alloc/free functions
+ <mcsim> But I have to make special allocator for kernel map entries.
+ <braunr> ah right
+ <mcsim> btw. It turned out that 256 entries are not enough.
+ <braunr> that's weird
+ <braunr> i'll make a patch so that the mem_source code looks more like what
+ i have in x15 then
+ <braunr> about the results, i don't think the slab layer is that slow
+ <braunr> it's the cpu_pool_fill/drain functions that take time
+ <braunr> they preallocate many objects (64 for your object size if i'm
+ right) at once
+ <braunr> mcsim: look at the first result page: some times, a number around
+ 8000 is printed
+ <braunr> the common time (ticks, whatever) for a single object is 120
+ <braunr> 8132/120 is 67, close enough to the 64 value
+ <mcsim> I forgot about SMALL tests here are they:
+ http://paste.debian.net/128533/ (balloc) http://paste.debian.net/128534/
+ (zalloc)
+ <mcsim> braunr: why do you divide 8132 by 120?
+ <braunr> mcsim: to see if it matches my assumption that the ~8000 number
+ matches the cpu_pool_fill call
+ <mcsim> braunr: I've got it
+ <braunr> mcsim: i'd be much interested in the dealloc results if you can
+ paste them too
+ <mcsim> dealloc: http://paste.debian.net/128589/
+ http://paste.debian.net/128590/
+ <braunr> mcsim: thanks
+ <mcsim> second dealloc: http://paste.debian.net/128591/
+ http://paste.debian.net/128592/
+ <braunr> mcsim: so the main conclusion i retain from your tests is that the
+ transfers from the cpu and the slab layers are what makes the new
+ allocator a bit slower
+ <mcsim> OPERATION_SMALL dealloc: http://paste.debian.net/128593/
+ http://paste.debian.net/128594/
+ <braunr> mcsim: what needs to be measured now is global memory usage
+ <mcsim> braunr: data from /proc/vmstat after kernel compilation will be
+ enough?
+ <braunr> mcsim: let me check
+ <braunr> mcsim: no it won't do, you need to measure kernel memory usage
+ <braunr> the best moment to measure it is right after zone_gc is called
+ <mcsim> Are there any facilities in gnumach for memory measurement?
+ <braunr> it's specific to the allocators
+ <braunr> just count the number of used pages
+ <braunr> after garbage collection, there should be no free page, so this
+ should be rather simple
+ <mcsim> ok
+ <mcsim> braunr: When I measure memory usage in balloc, what formula is
+ better cache->nr_slabs * cache->bufs_per_slab * cache->buf_size or
+ cache->nr_slabs * cache->slab_size?
+ <braunr> the latter
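
As a worked version of the formula settled on at the end
(`cache->nr_slabs * cache->slab_size`), here is a small sketch. The struct is a
stand-in using the field names from the discussion, not the real balloc
definition.

    /* Memory-usage accounting for a slab cache.  nr_slabs * slab_size is
       the kernel memory actually consumed; nr_slabs * bufs_per_slab *
       buf_size only counts the usable buffer area and misses per-slab
       overhead and padding.  */

    #include <stddef.h>
    #include <stdio.h>

    struct mem_cache_stats
    {
      size_t nr_slabs;          /* slabs currently allocated to the cache */
      size_t slab_size;         /* bytes of kernel memory per slab */
      size_t bufs_per_slab;     /* objects that fit in one slab */
      size_t buf_size;          /* object size, including alignment */
    };

    static size_t
    mem_cache_mem_usage (const struct mem_cache_stats *cache)
    {
      return cache->nr_slabs * cache->slab_size;
    }

    static size_t
    mem_cache_mem_overhead (const struct mem_cache_stats *cache)
    {
      return mem_cache_mem_usage (cache)
             - cache->nr_slabs * cache->bufs_per_slab * cache->buf_size;
    }

    int
    main (void)
    {
      struct mem_cache_stats c = { .nr_slabs = 10, .slab_size = 4096,
                                   .bufs_per_slab = 250, .buf_size = 16 };

      printf ("usage: %zu bytes, overhead: %zu bytes\n",
              mem_cache_mem_usage (&c), mem_cache_mem_overhead (&c));
      return 0;
    }

The difference between the two formulas is exactly the per-slab bookkeeping and
padding, which a memory-consumption comparison against zalloc should not
ignore.
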
diff --git a/open_issues/mach_migrating_threads.mdwn b/open_issues/mach_migrating_threads.mdwn
new file mode 100644
index 00000000..5a70aac5
--- /dev/null
+++ b/open_issues/mach_migrating_threads.mdwn
@@ -0,0 +1,15 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach]]
+
+<http://www.brynosaurus.com/pub/os/thread-migrate.pdf>
+
+ * [[microkernel/mach/memory_object/discussion]]
diff --git a/open_issues/performance.mdwn b/open_issues/performance.mdwn
index 54f3ce39..2fd34621 100644
--- a/open_issues/performance.mdwn
+++ b/open_issues/performance.mdwn
@@ -30,3 +30,11 @@ call|/glibc/fork]]'s case.
---
* [[Degradation]]
+
+ * [[fork]]
+
+ * [[IPC_virtual_copy]]
+
+ * [[microbenchmarks]]
+
+ * [[microkernel_multi-server]]
diff --git a/open_issues/performance/degradation.mdwn b/open_issues/performance/degradation.mdwn
index 5db82e31..db759308 100644
--- a/open_issues/performance/degradation.mdwn
+++ b/open_issues/performance/degradation.mdwn
@@ -18,7 +18,7 @@ Thomas Schwinge)
> tree, reboot, build it again (1st): back to 11 h. Remove build tree, build
> it again (2nd): 12 h 40 min. Remove build tree, build it again (3rd): 15 h.
-IRC, freenode, #hurd, 2011-07-23
+IRC, freenode, #hurd, 2011-07-23:
< antrik> tschwinge: yes, the system definitely gets slower with
time. after running for a couple of weeks, it needs at least twice as
@@ -26,3 +26,15 @@ IRC, freenode, #hurd, 2011-07-23
< antrik> I don't know whether this is only related to swap usage, or there
are some serious fragmentation issues
< braunr> antrik: both could be induced by fragmentation
+
+---
+
+During [[IPC_virtual_copy]] testing:
+
+IRC, freenode, #hurd, 2011-09-02:
+
+ <manuel> interestingly, running it several times has made the performance
+ drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
+ 800 fifteen minutes ago)
+ <braunr> manuel: i observed the same behaviour
+ [...]
diff --git a/open_issues/performance/ipc_virtual_copy.mdwn b/open_issues/performance/ipc_virtual_copy.mdwn
new file mode 100644
index 00000000..00fa7180
--- /dev/null
+++ b/open_issues/performance/ipc_virtual_copy.mdwn
@@ -0,0 +1,358 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+IRC, freenode, #hurd, 2011-09-02:
+
+ <slpz> what's the usual throughput for I/O operations (like "dd
+ if=/dev/zero of=/dev/null") in one of those Xen based Hurd machines
+ (*bber)?
+ <braunr> good question
+ <braunr> slpz: but don't use /dev/zero and /dev/null, as they don't have
+ anything to do with true I/O operations
+ <slpz> braunr: in fact, I want to test the performance of IPC's virtual
+ copy operations
+ <braunr> ok
+ <slpz> braunr: sorry, the "I/O" was misleading
+ <braunr> use bs=4096 then i guess
+ <slpz> bs > 2k
+ <braunr> ?
+ <slpz> braunr: everything about 2k is copied by vm_map_copyin/copyout
+ <slpz> s/about/above/
+ <slpz> braunr: MiG's stubs check for that value and generate complex (with
+ out_of_line memory) messages if datalen is above 2k, IIRC
+ <braunr> ok
+ <braunr> slpz: found it, thanks
+ <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$!
+ && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 13469
+ <tschwinge> 17091+0 records in
+ <tschwinge> 17090+0 records out
+ <tschwinge> 70000640 bytes (70 MB) copied, 17.1436 s, 4.1 MB/s
+ <tschwinge> Note, however 10 s vs. 17 s!
+ <tschwinge> And this is slow compared to real hardware:
+ <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$! &&
+ sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 28290
+ <tschwinge> 93611+0 records in
+ <tschwinge> 93610+0 records out
+ <tschwinge> 383426560 bytes (383 MB) copied, 9.99 s, 38.4 MB/s
+ <braunr> tschwinge: is the first result on xen vm ?
+ <tschwinge> I think so.
+ <braunr> :/
+ <slpz> tschwinge: Thanks! Could you please try with a higher block size,
+ something like 128k or 256k?
+ <tschwinge> strauss is on a machine that also hosts a buildd, I think.
+ <braunr> oh ok
+ <pinotree> yes, aside either rossini or mozart
+ <tschwinge> And I can confirm that with dd if=/dev/zero of=/dev/null bs=4k
+ running, a parallel sleep 10 takes about 20 s (on strauss).
+
+[[open_issues/time]]
+
+ <braunr> slpz: i'll set up xen hosts soon and can try those tests while
+ nothing else runs to have more accurate results
+ <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=256k &
+ p=$! && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 13482
+ <tschwinge> 4566+0 records in
+ <tschwinge> 4565+0 records out
+ <tschwinge> 1196687360 bytes (1.2 GB) copied, 13.6751 s, 87.5 MB/s
+ <braunr> slpz: gains are logarithmic beyond the page size
+ <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=256k & p=$!
+ && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
+ <tschwinge> [1] 28295
+ <tschwinge> 6335+0 records in
+ <tschwinge> 6334+0 records out
+ <tschwinge> 1660420096 bytes (1.7 GB) copied, 9.99 s, 166 MB/s
+ <tschwinge> This time the sleep 10 decided to take 13.6 s.
+ ``Interesting.''
+ <slpz> tschwinge: Thanks again. The results for the Xen machine are not bad
+ though. I can't obtain a throughput over 50MB/s with KVM.
+ <tschwinge> slpz: Want more data (bs)? Just tell.
+ <braunr> slpz: i easily get more than that
+ <braunr> slpz: what buffer size do you use ?
+ <slpz> tschwinge: no, I just wanted to see if Xen has an upper limit beyond
+ KVM's. Thank you.
+ <slpz> braunr: I try with different sizes until I find the maximum
+ throughput for a certain amount of requests (count)
+ <slpz> braunr: are you working with KVM?
+ <braunr> yes
+ <braunr> slpz: my processor is a model name : Intel(R) Core(TM)2 Duo
+ CPU E7500 @ 2.93GHz
+ <braunr> Linux silvermoon 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC
+ 2011 x86_64 GNU/Linux
+ <braunr> (standard amd64 squeeze kernel)
+ <slpz> braunr: and KVM's version?
+ <braunr> squeeze (0.12.5)
+ <braunr> bbl
+ <gnu_srs> 212467712 bytes (212 MB) copied, 9.95 s, 21.4 MB/s on kvm for me!
+ <slpz> gnu_srs: which block size?
+ <gnu_srs> 4k, and 61.7 MB/s with 256k
+ <slpz> gnu_srs: could you try with 512k and 1M?
+ <gnu_srs> 512k: 56.0 MB/s, 1024k: 40.2 MB/s Looks like the peak is around a
+ few 100k
+ <slpz> gnu_srs: thanks!
+ <slpz> I've just obtained 1.3GB/s with bs=512k on other (newer) machine
+ <braunr> on which hw/vm ?
+ <slpz> I knew this is a cpu-bound test, but I couldn't imagine faster
+ processors could make this difference
+ <slpz> braunr: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
+ <slpz> braunr: KVM
+ <braunr> ok
+ <braunr> how much time did you wait before reading the result ?
+ <slpz> that was 20 times better than the same test on my Intel(R)
+ Core(TM)2 Duo CPU T7500 @ 2.20GHz
+ <slpz> braunr: I've repeated the test with a fixed "count"
+ <gnu_srs> My box is: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz: Max
+ is 67 MB/s around 140k block size
+ <braunr> yes but how much time did dd run ?
+ <gnu_srs> 10 s plus/minus a few fractions of a second,
+ <braunr> try waiting 30s
+ <slpz> braunr: didn't check, let me try again
+ <braunr> my kvm peaks at 130 MiB/s with bs 512k / 1M
+ <gnu_srs> 2029690880 bytes (2.0 GB) copied, 30.02 s, 67.6 MB/s, bs=140k
+ <braunr> gnu_srs: i'm very surprised with slpz's result of 1.3 GiB/s
+ <slpz> braunr: over 60 s running, same performance
+ <braunr> nice
+ <braunr> i wonder what makes it so fast
+ <braunr> how much cache ?
+ <gnu_srs> Me too, I cannot get better values than around 67 MB/s
+ <braunr> gnu_srs: same questions
+ <slpz> braunr: 4096KB, same as my laptop
+ <braunr> slpz: l2 ? l3 ?
+ <gnu_srs> kvm: cache=writeback, CPU: 4096 KB
+ <braunr> gnu_srs: this has nothing to do with the qemu option, it's about
+ the cpu
+ <slpz> braunr: no idea, it's the first time I've touched this machine. I'm going
+ to see if I find the model in processorfinder
+ <braunr> under my host linux system, i get a similar plot, that is,
+ performance drops beyond bs=1M
+ <gnu_srs> braunr: OK, but I gave you the cache size too, same as slpz.
+ <braunr> i wonder what dd actually does
+ <braunr> read() and writes i guess
+ <slpz> braunr: read/write repeatedly, nothing fancy
+ <braunr> slpz: i don't think it's a good test for virtual copy
+ <braunr> io_read_request, vm_deallocate, io_write_request, right
+ <braunr> slpz: i really wonder what it is about i5 that improves speed so
+ much
+ <slpz> braunr: me too
+ <slpz> braunr: L2: 2x256KB, L3: 4MB
+ <slpz> and something calling "SmartCache"
+ <gnu_srs> slpz: where did you find these values?
+ <slpz> gnu_srs: ark.intel.com and wikipedia
+ <gnu_srs> aha, cpuinfo just gives cache size.
+ <slpz> that "SmartCache" thing seems to be just L2 cache sharing between
+ cores. Shouldn't make a difference since we're using only one core, and I
+ don't see KVM hopping between them.
+ <manuel> with bs=256k: 7004487680 bytes (7.0 GB) copied, 10 s, 700 MB/s
+ <manuel> (qemu/kvm, 3 * Intel(R) Xeon(R) E5504 2GHz, cache size 4096 KB)
+ <slpz> manuel: did you try with 512k/1M?
+ <manuel> bs=512k: 7730626560 bytes (7.7 GB) copied, 10 s, 773 MB/s
+ <manuel> bs=1M: 7896825856 bytes (7.9 GB) copied, 10 s, 790 MB/s
+ <slpz> manuel: those are pretty good numbers too
+ <braunr> xeon processor
+ <gnu_srs> lshw gave me: L1 Cache 256KiB, L2 cache 4MiB
+ <slpz> honestly, I've never seen Hurd running this fast. Just checked
+ "uname -a" to make sure I didn't take the wrong image :-)
+ <manuel> for bs=256k, 60s: 40582250496 bytes (41 GB) copied, 60 s, 676 MB/s
+ <braunr> slpz: i think you can assume processor differences alter raw
+ copies too much to get any valuable results about virtual copy operations
+ <braunr> you need a specialized test program
+ <manuel> and bs=512k, 60s, 753 MB/s
+ <slpz> braunr: I'm using the mach_perf suite from OSFMach to do the
+ "serious" testing. I just wanted a non-synthetic test to confirm the
+ readings.
+
+[[!taglink open_issue_gnumach]] -- have a look at *mach_perf*.
+
+ <braunr> manuel: how much cache ? 2M ?
+ <braunr> slpz: ok
+ <braunr> manuel: hmno, more i guess
+ <manuel> braunr: /proc/cpuinfo says cache size : 4096 KB
+ <braunr> ok
+ <braunr> manuel: performance should drop beyond bs=2M
+ <braunr> but that's not relevant anyway
+ <gnu_srs> Linux: bs=1M, 10.8 GB/s
+ <slpz> I think this difference is too big to be only due to a bigger amount
+ of CPU cycles...
+ <braunr> slpz: clearly
+ <slpz> gnu_srs: your host system has 64 or 32 bits?
+ <slpz> braunr: I'm going to investigate a bit
+ <slpz> but this accidental discovery just made my day. We're able to run
+ Hurd at decent speeds on newer hardware!
+ <braunr> slpz: what result do you get with the same test on your host
+ system ?
+ <manuel> interestingly, running it several times has made the performance
+ drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
+ 800 fifteen minutes ago)
+
+[[Degradation]].
+
+ <slpz> braunr: probably an almost infinite throughput, but I don't consider
+ that a valid test, since in Linux, the write operation to "/dev/null"
+ doesn't involve memory copying/moving
+ <braunr> manuel: i observed the same behaviour
+ <gnu_srs> slpz: Host system is 64 bit
+ <braunr> slpz: it doesn't on the hurd either
+ <braunr> slpz: (under 2k, that is)
+ <braunr> over*
+ <slpz> braunr: humm, you're right, as the null translator doesn't "touch"
+ the memory, CoW rules apply
+ <braunr> slpz: the only thing which actually copies things around is dd
+ <braunr> probably by simply calling read()
+ <braunr> which gets its result from a VM copy operation, but copies the
+ content to the caller provided buffer
+ <braunr> then vm_deallocate() the data from the storeio (zero) translator
+ <braunr> if storeio isn't too dumb, it doesn't even touch the transferred
+ buffer (as anonymous vm_map()ped memory is already cleared)
+
+[[!taglink open_issue_documentation]]
+
+ <braunr> so this is a good test for measuring (profiling?) our ipc overhead
+ <braunr> and possibly the vm mapping operations (which could partly explain
+ why the results get worse over time)
+ <braunr> manuel: can you run vminfo | wc -l on your gnumach process ?
+ <slpz> braunr: Yes, unless some special situation apply, like the source
+ address/offset being unaligned, or if the translator decides to return
+ the result in a different buffer (which I assume is not the case for
+ storeio/zero)
+ <manuel> braunr: 35
+ <braunr> slpz: they can't be unaligned, the vm code asserts that
+ <braunr> manuel: ok, this is normal
+ <slpz> braunr: address/offset from read()
+ <braunr> slpz: the caller provided buffer you mean ?
+ <slpz> braunr: yes, and the offset of the memory_object, if it's a pager
+ based translator
+ <braunr> slpz: highly unlikely, the compiler chooses appropriate alignments
+ for such buffers
+ <slpz> braunr: in those cases, memcpy is used over vm_copy
+ <braunr> slpz: and the glibc memcpy() optimized versions can usually deal
+ with that
+ <braunr> slpz: i don't get your point about memory objects
+ <braunr> slpz: requests on memory objects always have aligned values too
+ <slpz> braunr: sure, but it can't deal with the user requesting non
+ page-aligned sizes
+ <braunr> slpz: we're considering our dd tests, for which we made sure sizes
+ were page aligned
+ <slpz> braunr: oh, I was talking in a general sense, not just about these dd
+ tests, sorry
+ <slpz> by the way, dd on the host tops at 12 GB/s with bs=2M
+ <braunr> that's consistent with our other results
+ <braunr> slpz: you mean, even on your i5 processor with 1.3 GiB/s on your
+ hurd kvm ?
+ <slpz> braunr: yes, on the GNU/Linux which is running as host
+ <braunr> slpz: well that's not consistent
+ <slpz> braunr: consistent with what?
+ <braunr> slpz: i get roughly the same result on my host, but ten times less
+ on my hurd kvm
+ <braunr> slpz: what's your kernel/kvm versions ?
+ <slpz> 2.6.32-5-amd64 (debian's build) 0.12.5
+ <braunr> same here
+ <braunr> i'm a bit clueless
+ <braunr> why do i only get 130 MiB/s where you get 1.3 .. ? :)
+ <slpz> well, on my laptop, where Hurd on KVM tops on 50 MB/s, Linux gets a
+ bit more than 10 GB/s
+ <braunr> see
+ <braunr> slpz: reduce bs to 256k and test again if you have time please
+ <slpz> braunr: on which system?
+ <braunr> slpz: the fast one
+ <braunr> (linux host)
+ <slpz> braunr: Hurd?
+ <slpz> ok
+ <slpz> 12 GB/s
+ <braunr> i get 13.3
+ <slpz> same for 128k, only at 64k starts dropping
+ <slpz> maybe, on linux we're being limited by memory speed, while on Hurd
+ this test is (much) more CPU-bound?
+ <braunr> slpz: maybe
+ <braunr> too bad processor stalls aren't easy to measure
+ <slpz> braunr: that's very true. It's funny when you read a paper which
+ measures performance by cycles on an old RISC processor. That's almost
+ impossible to do (with reliability) nowadays :-/
+ <slpz> I wonder what throughput Hurd could achieve running bare-metal on
+ this machine...
+ <antrik> both the Xeon and the i5 use cores based on the Nehalem
+ architecture
+ <antrik> apparently Nehalem is where Intel first introduced nested page
+ tables
+ <antrik> which pretty much explains the considerably lower overhead of VM
+ magic
+ <cjuner> antrik, what are nested page tables? (sounds like the 4-level page
+ tables we already have on amd64, or 2-level or 3-level on x86 pae)
+ <antrik> page tables were always 2-level on x86
+ <antrik> that's unrelated
+ <antrik> nested page tables means there is another layer of address
+ translation, so the VMM can do its own translation and doesn't care what
+ the guest system does => no longer has to intercept all page table
+ manipulations
+ <braunr> antrik: do you imply it only applies to virtualized systems ?
+ <antrik> braunr: yes
+ <slpz> antrik: Good guess. Looks like Intel's EPT are doing the trick by
+ allowing the guest OS to deal with its own page faults
+ <slpz> antrik: next monday, I'll try disabling EPT support in KVM on that
+ machine (the fast one). That should confirm your theory empirically.
+ <slpz> this also means that there're too many page faults, as we should be
+ doing virtual copies of memory that is not being accessed
+ <slpz> and looking at how the value of "page faults" in "vmstat" increases,
+ shows that page faults are directly proportional to the number of pages
+ we are asking from the translator
+ <slpz> I've also tried doing a long read() directly, to be sure that "dd"
+ is not doing something weird, and it shows the same behaviour.
+ <braunr> slpz: dd does copy buffers
+ <braunr> slpz: i told you, it's not a good test case for pure virtual copy
+ evaluation
+ <braunr> antrik: do you know if xen benefits from nested page tables ?
+ <antrik> no idea
+
+[[!taglink open_issue_xen]]
+
+ <slpz> braunr: but my small program doesn't, and still provokes a lot of
+ page faults
+ <braunr> slpz: are you certain it doesn't ?
+ <slpz> braunr: looking at google, it looks like recent Xen > 3.4 supports
+ EPT
+ <braunr> ok
+ <braunr> i'm ordering my new server right now, core i5 :)
+ <slpz> braunr: at least not explicitly. I need to look at MiG stubs again,
+ I don't remember if they do something weird.
+ <antrik> braunr: sandybridge or nehalem? :-)
+ <braunr> antrik: no idea
+ <antrik> does it tell a model number?
+ <braunr> not yet
+ <braunr> but i don't have a choice for that, so i'll order it first, check
+ after
+ <antrik> hehe
+ <antrik> I'm not sure it makes all that much difference anyways for a
+ server... unless you are running it at 100% load ;-)
+ <braunr> antrik: i'm planning on running xen guests such as new buildds
+ <antrik> hm... note though that some of the nehalem-generation i5s were
+ dual-core, while all the new ones are quad
+ <braunr> it's a quad
+ <antrik> the newer generation has better performance per GHz and per
+ Watt... but considering that we are rather I/O-limited in most cases, it
+ probably won't make much difference
+ <antrik> not sure whether there are further virtualisation improvements
+ that could be relevant...
+ <braunr> buildds spend much time running gcc, so even such improvements
+ should help
+ <braunr> there, server ordered :)
+ <braunr> antrik: model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
+
+IRC, freenode, #hurd, 2011-09-06:
+
+ <slpz> youpi: what machines are being used for buildd? Do you know if they
+ have EPT/RVI?
+ <youpi> we use PV Xen there
+ <slpz> I think Xen could also take advantage of those technologies. Not
+ sure if only in HVM or with PV too.
+ <youpi> only in HVM
+ <youpi> in PV it does not make sense: the guest already provides the
+ translated page table
+ <youpi> which is just faster than anything else
diff --git a/open_issues/time.mdwn b/open_issues/time.mdwn
index eda5b635..ab239aef 100644
--- a/open_issues/time.mdwn
+++ b/open_issues/time.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2009 Free Software Foundation, Inc."]]
+[[!meta copyright="Copyright © 2009, 2011 Free Software Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
@@ -53,3 +53,17 @@ GNU time's *elapsed* value is off by some factor.
As above; also here all the running time should be attributed to *user* time.
This is probably a [[!taglink open_issue_gnumach]].
+
+
+# 2011-09-02
+
+Might want to revisit this, and take Xen [[!tag open_issue_xen]] into account
+-- I believe flubber had already been Xenified by that time.
+
+
+## IRC, freenode, #hurd, 2011-09-02
+
+While testing some [[performance/IPC_virtual_copy]] performance issues:
+
+ <tschwinge> And I can confirm that with dd if=/dev/zero of=/dev/null bs=4k
+ running, a parallel sleep 10 takes about 20 s (on strauss).
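
A self-contained reproducer for the observation above could look like the
following sketch: the child busy-loops to stand in for the dd run, and the
parent compares a nominal sleep(10) against gettimeofday. On an affected system
the reported figure comes out well above ten seconds.

    /* Sketch: measure how long sleep(10) really takes under CPU load.  */

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int
    main (void)
    {
      pid_t child = fork ();
      struct timeval start, end;
      double elapsed;

      if (child < 0)
        {
          perror ("fork");
          return 1;
        }
      if (child == 0)
        for (;;)                /* CPU load standing in for the dd run */
          ;

      gettimeofday (&start, NULL);
      sleep (10);               /* nominally ten seconds */
      gettimeofday (&end, NULL);

      elapsed = (end.tv_sec - start.tv_sec)
                + (end.tv_usec - start.tv_usec) / 1e6;
      printf ("sleep(10) took %.2f s of wall-clock time\n", elapsed);

      kill (child, SIGTERM);
      waitpid (child, NULL, 0);
      return 0;
    }
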
diff --git a/open_issues/translators_set_up_by_untrusted_users.mdwn b/open_issues/translators_set_up_by_untrusted_users.mdwn
index cee7a2bc..36fe5438 100644
--- a/open_issues/translators_set_up_by_untrusted_users.mdwn
+++ b/open_issues/translators_set_up_by_untrusted_users.mdwn
@@ -281,3 +281,46 @@ Protection](https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#Symlink
and [Hardlink
Protection](https://wiki.ubuntu.com/SecurityTeam/Roadmap/KernelHardening#Hardlink_Protection)
do bear some similarity with the issue we're discussing here.
+
+
+# IRC, freenode, #hurd, 2011-08-31
+
+ <antrik> I don't see any problems with following only translators of
+ trusted users
+ <youpi> where to store the list of trusted users?
+ <youpi> is there a way to access the underlying node, which for /dev
+ entries belongs to root?
+ <ArneBab> youpi: why a list of trusted users? Does it not suffice to
+ require /hurd/trust set by root or ourselves?
+ <youpi> ArneBab: just because that's what antrik suggests, so I ask him for
+ more details
+ <ArneBab> ah, ok
+ <antrik> youpi: probably make them members of a group
+ <antrik> of course that doesn't allow normal users to add their own trusted
+ users... but that's not the only limitation of the user-based
+ authentication mechanism, so I wouldn't consider that an extra problem
+ <antrik> ArneBab: we can't set a translator on top of another user's
+ translator in general
+ <antrik> root could, but that's not very flexible...
+ <antrik> the group-based solution seems more useful to me
+ <ArneBab> antrik: why can’t we?
+ <antrik> also note that you can't set passive translators on top of other
+ translators
+ <antrik> ArneBab: because we can only set translators on our own nodes
+ <ArneBab> active ones, too?
+ <antrik> yes
+ <ArneBab> antrik: I always thought I could…
+ <ArneBab> but did not test it
+ <ArneBab> antrik: so I need a subhurd to change nodes which do not belong
+ to me?
+ * ArneBab in that case finally understands why you like subhurds so much:
+ That should be my normal right
+ <antrik> it should be your normal right to change stuff not belonging to
+ you? that's an odd world view :-)
+ <antrik> subhurds don't really have anything to do with it
+ <ArneBab> change it in a way that only I see the changes
+ <antrik> you need local namespaces to allow making local modifications to
+ global resources
+ <youpi> it should be one's normal right to change the view one has of it
+ <antrik> we discussed that once actually I believe...
+ <antrik> err... private namespaces I mean
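
One way to read antrik's group-based suggestion in code: before following a
translated node, a client (or a library acting on its behalf) could stat the
underlying node with O_NOTRANS, the GNU/Hurd open flag that stops translator
traversal, and follow the translator only if the owner is root, the user
itself, or a member of some designated group. Everything below is a sketch; the
group name "trusted-translators" and the policy itself are hypothetical.

    /* Sketch of an owner-based trust check for translated nodes.  */

    #include <fcntl.h>
    #include <grp.h>
    #include <pwd.h>
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* True if UID names a listed member of GROUPNAME (primary-group
       membership is ignored to keep the sketch short).  */
    static bool
    uid_in_group (uid_t uid, const char *groupname)
    {
      struct group *gr = getgrnam (groupname);
      struct passwd *pw = getpwuid (uid);

      if (!gr || !pw)
        return false;
      for (char **m = gr->gr_mem; *m; m++)
        if (strcmp (*m, pw->pw_name) == 0)
          return true;
      return false;
    }

    /* Decide whether to follow the translator on PATH, based on who owns
       the underlying node; O_NOTRANS keeps us from entering it.  */
    static bool
    translator_owner_trusted (const char *path)
    {
      struct stat st;
      int fd = open (path, O_RDONLY | O_NOTRANS);

      if (fd < 0 || fstat (fd, &st) < 0)
        {
          if (fd >= 0)
            close (fd);
          return false;
        }
      close (fd);
      return st.st_uid == 0                /* root's translators, e.g. /dev */
             || st.st_uid == getuid ()     /* our own */
             || uid_in_group (st.st_uid, "trusted-translators");
    }

    int
    main (int argc, char **argv)
    {
      const char *path = argc > 1 ? argv[1] : "/dev/null";

      printf ("%s: translator owner is %strusted\n", path,
              translator_owner_trusted (path) ? "" : "not ");
      return 0;
    }

Whether such a check belongs in glibc's lookup code, in the parent filesystem,
or behind a per-user setting is exactly the open question of this page.
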