author     Thomas Schwinge <tschwinge@gnu.org>    2011-10-03 20:49:54 +0200
committer  Thomas Schwinge <tschwinge@gnu.org>    2011-10-03 20:49:54 +0200
commit     219988e74ba30498a1c5d71cf557913a70ccca91 (patch)
tree       56b85456808cd06e020ef8455ea123c58f624176 /open_issues/performance
parent     278f76de415c83bd06146b2f25a002cf0411d025 (diff)
IRC.
Diffstat (limited to 'open_issues/performance')
-rw-r--r--  open_issues/performance/degradation.mdwn                        16
-rw-r--r--  open_issues/performance/io_system/clustered_page_faults.mdwn    23
-rw-r--r--  open_issues/performance/ipc_virtual_copy.mdwn                    37
3 files changed, 72 insertions, 4 deletions
diff --git a/open_issues/performance/degradation.mdwn b/open_issues/performance/degradation.mdwn
index db759308..8c9a087c 100644
--- a/open_issues/performance/degradation.mdwn
+++ b/open_issues/performance/degradation.mdwn
@@ -10,8 +10,12 @@ License|/fdl]]."]]"""]]
[[!meta title="Degradation of GNU/Hurd ``system performance''"]]
-Email, *id:"87mxg2ahh8.fsf@kepler.schwinge.homeip.net"* (bug-hurd, 2011-07-25,
-Thomas Schwinge)
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+[[!toc]]
+
+
+# Email, `id:"87mxg2ahh8.fsf@kepler.schwinge.homeip.net"` (bug-hurd, 2011-07-25, Thomas Schwinge)
> Building a certain GCC configuration on a freshly booted system: 11 h.
> Remove build tree, build it again (2nd): 12 h 50 min. Huh. Remove build
@@ -27,9 +31,8 @@ IRC, freenode, #hurd, 2011-07-23:
are some serious fragmentation issues
< braunr> antrik: both could be induced by fragmentation
----
-During [[IPC_virtual_copy]] testing:
+# During [[IPC_virtual_copy]] testing
IRC, freenode, #hurd, 2011-09-02:
@@ -38,3 +41,8 @@ IRC, freenode, #hurd, 2011-09-02:
800 fifteen minutes ago)
<braunr> manuel: i observed the same behaviour
[...]
+
+
+# IRC, freenode, #hurd, 2011-09-22
+
+See [[/open_issues/pagers]], IRC, freenode, #hurd, 2011-09-22.
diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn
index 9e20f8e1..a3baf30d 100644
--- a/open_issues/performance/io_system/clustered_page_faults.mdwn
+++ b/open_issues/performance/io_system/clustered_page_faults.mdwn
@@ -137,3 +137,26 @@ License|/fdl]]."]]"""]]
where the pager interface needs to be modified, not the Mach one?...
<braunr> antrik: would be nice wouldn't it ? :)
<braunr> antrik: more probably the page fault handler
+
+
+# IRC, freenode, #hurd, 2011-09-28
+
+ <slpz> antrik: I've just recovered part of my old multipage I/O work
+ <slpz> antrik: I intend to clean and submit it after finishing the changes
+ to the pageout system.
+ <antrik> slpz: oh, great!
+ <antrik> didn't know you worked on multipage I/O
+ <antrik> slpz: BTW, have you checked whether any of the work done for GSoC
+ last year is any good?...
+ <antrik> (apart from missing copyright assignments, which would be a
+ serious problem for the Hurd parts...)
+ <slpz> antrik: It was seven years ago, but I did:
+ http://www.mail-archive.com/bug-hurd@gnu.org/msg10285.html :-)
+ <slpz> antrik: Sincerely, I don't think the quality of that code is good
+ enough to be considered... but I think it was my fault as his mentor for
+ not correcting him soon enough...
+ <antrik> slpz: I see
+ <antrik> TBH, I feel guilty myself, for not asking about the situation
+ immediately when he stopped attending meetings...
+ <antrik> slpz: oh, you even already looked into vm_pageout_scan() back then
+ :-)
diff --git a/open_issues/performance/ipc_virtual_copy.mdwn b/open_issues/performance/ipc_virtual_copy.mdwn
index 00fa7180..9708ab96 100644
--- a/open_issues/performance/ipc_virtual_copy.mdwn
+++ b/open_issues/performance/ipc_virtual_copy.mdwn
@@ -356,3 +356,40 @@ IRC, freenode, #hurd, 2011-09-06:
<youpi> in PV it does not make sense: the guest already provides the
translated page table
<youpi> which is just faster than anything else
+
+IRC, freenode, #hurd, 2011-09-09:
+
+ <antrik> oh BTW, for another data point: dd zero->null gets around 225 MB/s
+ on my lowly 1 GHz Pentium3, with a blocksize of 32k
+ <antrik> (but only half of that with 256k blocksize, and even less with 1M)
+ <antrik> the system has been up for a while... don't know whether it's
+ faster on a freshly booted one
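+
+For reference, a minimal sketch (not from the discussion) of the copy loop
+`dd` performs here; the 32 KiB block size and the 1 GiB volume are arbitrary
+choices.  On the Hurd, each `read`/`write` pair goes through the translators
+behind `/dev/zero` and `/dev/null`, so the MB/s figure mostly reflects IPC
+(virtual copy) cost rather than any disk access:
+
+    /* Rough equivalent of `dd if=/dev/zero of=/dev/null bs=32k'. */
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <time.h>
+    #include <unistd.h>
+
+    #define BLOCK_SIZE (32 * 1024)        /* try 32k, 256k, 1M as in the log */
+    #define TOTAL_BYTES (1ULL << 30)      /* copy 1 GiB, then report MB/s */
+
+    int
+    main (void)
+    {
+      char *buf = malloc (BLOCK_SIZE);
+      int in = open ("/dev/zero", O_RDONLY);
+      int out = open ("/dev/null", O_WRONLY);
+      unsigned long long done = 0;
+      struct timespec t0, t1;
+
+      if (buf == NULL || in < 0 || out < 0)
+        return 1;
+
+      clock_gettime (CLOCK_MONOTONIC, &t0);
+      while (done < TOTAL_BYTES)
+        {
+          ssize_t n = read (in, buf, BLOCK_SIZE);
+          if (n <= 0 || write (out, buf, n) != n)
+            return 1;
+          done += n;
+        }
+      clock_gettime (CLOCK_MONOTONIC, &t1);
+
+      printf ("%.1f MB/s\n",
+              done / ((t1.tv_sec - t0.tv_sec)
+                      + (t1.tv_nsec - t0.tv_nsec) / 1e9) / 1e6);
+      return 0;
+    }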
+
+IRC, freenode, #hurd, 2011-09-15:
+
+ <sudoman>
+ http://www.reddit.com/r/gnu/comments/k68mb/how_intelamd_inadvertently_fixed_gnu_hurd/
+ <sudoman> so is the dd command pointed to by that article a measure of io
+ performance?
+ <antrik> sudoman: no, not really
+ <antrik> it's basically the baseline of what is possible -- but the actual
+ slowness we experience is more due to very unoptimal disk access patterns
+ <antrik> though using KVM with writeback caching does actually help with
+ that...
+ <antrik> also note that the title of this post really makes no
+ sense... nested page tables should provide similar improvements for *any*
+ guest system doing VM manipulation -- it's not Hurd-specific at all
+ <sudoman> ok, that makes sense. thanks :)
+
+IRC, freenode, #hurd, 2011-09-16:
+
+ <slpz> antrik: I wrote that article (the one about How AMD/Intel fixed...)
+ <slpz> antrik: It's obviously a bit of an exaggeration, but it's true that
+ nested pages bring a great improvement to the performance of the Hurd
+ running on virtual machines
+ <slpz> antrik: and it's Hurd specific, as this system is more affected by
+ the cost of page faults
+ <slpz> antrik: and as the impact of virtualization on performance is
+ much higher than for (almost) any other OS.
+ <slpz> antrik: also, dd from /dev/zero to /dev/null is a measure of how
+ fast OOL IPC is.
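+
+slpz's point about page-fault cost can be illustrated with a small,
+hypothetical micro-benchmark (not from the log): first touching freshly
+mapped anonymous memory takes one soft fault per page, and with shadow page
+tables each of those faults also traps into the hypervisor, which is the
+overhead nested page tables remove:
+
+    /* Hypothetical micro-benchmark: time the soft page faults taken when
+       first touching freshly mapped anonymous memory.  */
+    #include <stdio.h>
+    #include <sys/mman.h>
+    #include <time.h>
+    #include <unistd.h>
+
+    #define PAGES 65536               /* 256 MiB of 4 KiB pages; arbitrary */
+
+    int
+    main (void)
+    {
+      long psize = sysconf (_SC_PAGESIZE);
+      size_t len = (size_t) PAGES * psize;
+      struct timespec t0, t1;
+      char *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
+                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
+
+      if (p == MAP_FAILED)
+        return 1;
+
+      clock_gettime (CLOCK_MONOTONIC, &t0);
+      for (size_t i = 0; i < len; i += psize)
+        p[i] = 1;                     /* first touch -> one fault per page */
+      clock_gettime (CLOCK_MONOTONIC, &t1);
+
+      printf ("%.2f us per fault\n",
+              ((t1.tv_sec - t0.tv_sec)
+               + (t1.tv_nsec - t0.tv_nsec) / 1e9) / PAGES * 1e6);
+      return 0;
+    }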