Diffstat (limited to 'open_issues/performance/io_system')
-rw-r--r--  open_issues/performance/io_system/binutils_ld_64ksec.mdwn    | 15
-rw-r--r--  open_issues/performance/io_system/clustered_page_faults.mdwn | 63
2 files changed, 75 insertions(+), 3 deletions(-)
diff --git a/open_issues/performance/io_system/binutils_ld_64ksec.mdwn b/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
index 79c2300f..359d5fee 100644
--- a/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
+++ b/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
@@ -33,3 +33,18 @@ the testee shows that (primarily) an ever-repeating series of `io_seek` and
`io_read` is being processed. Running the testee on GNU/Linux with strace
shows the equivalent thing (`_llseek`, `read`) -- but Linux' I/O system isn't
as slow as the Hurd's.
+
+---
+
+IRC, freenode, #hurd, 2011-09-01:
+
+ <youpi> hum, f951 does myriads of 71->io_seek_request (32768 0) = 0 32768
+ <youpi> no wonder it's slow
+ <youpi> unfortunately that's also what it does on linux, the system call is
+ just less costly
+ <youpi> apparently gfortran calls io_seek for, like, every token of the
+ source file
+ <youpi> (fgetpos actually, but that's the same)
+ <youpi> and it is indeed about 10 times slower under Xen for some reason
+
+[[!tag open_issue_xen]]
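
The pattern youpi describes above can be reproduced with a trivial test program: query the stream position once per token read, then compare `rpctrace` output on the Hurd with `strace` output on GNU/Linux. The sketch below is only an illustration of that access pattern, not gfortran's actual scanner, and the per-character granularity is deliberately exaggerated; per the conversation above, each such query ends up as an `io_seek` RPC on the Hurd and an `_llseek`/`lseek` system call on GNU/Linux, which is merely cheaper.

    /* Hypothetical test program illustrating the access pattern described
       above: one stream-position query per "token" (here, per character).
       Run it under rpctrace on the Hurd and strace on GNU/Linux to compare
       the resulting io_seek RPCs and _llseek/lseek system calls.  */
    #include <stdio.h>

    int
    main (int argc, char **argv)
    {
      FILE *f;
      fpos_t pos;
      long queries = 0;
      int c;

      if (argc < 2 || (f = fopen (argv[1], "r")) == NULL)
        return 1;

      while ((c = fgetc (f)) != EOF)
        {
          /* One position query per token read.  */
          if (fgetpos (f, &pos) != 0)
            break;
          queries++;
        }

      printf ("%ld position queries\n", queries);
      fclose (f);
      return 0;
    }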
diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn
index 37433e06..a3baf30d 100644
--- a/open_issues/performance/io_system/clustered_page_faults.mdwn
+++ b/open_issues/performance/io_system/clustered_page_faults.mdwn
@@ -12,7 +12,10 @@ License|/fdl]]."]]"""]]
[[community/gsoc/project_ideas/disk_io_performance]].
-IRC, freenode, #hurd, 2011-02-16
+[[!toc]]
+
+
+# IRC, freenode, #hurd, 2011-02-16
<braunr> except for the kernel, everything in an address space is
represented with a VM object
@@ -88,9 +91,8 @@ IRC, freenode, #hurd, 2011-02-16
<braunr> recommend*
<etenil> ok
----
-IRC, freenode, #hurd, 2011-02-16
+# IRC, freenode, #hurd, 2011-02-16
<antrik> etenil: OSF Mach does have clustered paging BTW; so that's one
place to start looking...
@@ -103,3 +105,58 @@ IRC, freenode, #hurd, 2011-02-16
can serve as a starting point
<http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html>
+
+
+# IRC, freenode, #hurd, 2011-07-22
+
+ <braunr> but concerning clustered pageins/outs, i'm not sure it's a mach
+ interface limitation
+ <braunr> the external memory pager interface does allow multiple pages to
+ be transferred
+ <braunr> isn't it an internal Mach VM problem ?
+ <braunr> isn't it simply the page fault handler ?
+ <antrik> braunr: are you sure? I was under the impression that changing the
+ pager interface was among the requirements...
+ <antrik> hm... I wonder whether for pageins, it could actually be handled
+ in the pages instead of Mach... though this wouldn't work for pageouts,
+ so probably not very helpful
+ <antrik> err... in the pagers
+ <braunr> antrik: i'm almost sure
+ <braunr> but i've been proven wrong many times, so ..
+ <braunr> there are two main facts that lead me to think this
+ <braunr> 1/
+ http://www.gnu.org/software/hurd/gnumach-doc/Memory-Objects-and-Data.html#Memory-Objects-and-Data
+ says lengths are provided and doesn't mention the limitation
+ <braunr> 2/ when reading about UVM, one of the major improvements (between
+ 10 and 30% of global performance depending on the benchmarks) was
+ implementing the madvise semantics
+ <braunr> and this didn't involve a new pager interface, but rather a new
+ page fault handler
+ <antrik> braunr: hm... the interface indeed looks like it can handle
+ multiple pages in both directions... perhaps it was at the Hurd level
+ where the pager interface needs to be modified, not the Mach one?...
+ <braunr> antrik: would be nice wouldn't it ? :)
+ <braunr> antrik: more probably the page fault handler
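
For reference, braunr's point 1/ is about the external pager interface documented at the gnumach URL quoted above: the kernel's read request and the pager's reply both carry byte lengths, not single pages. The sketch below is not taken from the Hurd's libpager; the prototypes are paraphrased from that manual node and should be checked against it, and `read_from_backing_store` is a made-up placeholder.

    /* Rough sketch of a pager's read-fault path, assuming the interface
       described in the Memory-Objects-and-Data node linked above.  The
       point of interest is that `length' and the supplied byte count are
       not restricted to a single page, so nothing in this interface forces
       one-page-at-a-time transfers.  read_from_backing_store () is a
       hypothetical placeholder.  */
    #include <mach.h>

    extern void read_from_backing_store (vm_offset_t offset,
                                         void *buffer, vm_size_t length);

    kern_return_t
    memory_object_data_request (memory_object_t pager,
                                memory_object_control_t control,
                                vm_offset_t offset,
                                vm_size_t length,      /* may span many pages */
                                vm_prot_t desired_access)
    {
      vm_address_t buf = 0;
      kern_return_t kr;

      /* Page-aligned buffer to hand over to the kernel.  */
      kr = vm_allocate (mach_task_self (), &buf, length, TRUE);
      if (kr != KERN_SUCCESS)
        return kr;

      /* Fill the whole run from whatever backs this object.  */
      read_from_backing_store (offset, (void *) buf, length);

      /* Supply all `length' bytes in a single reply.  */
      return memory_object_data_supply (control, offset,
                                        (vm_offset_t) buf, length,
                                        VM_PROT_NONE,     /* lock_value */
                                        FALSE,            /* precious */
                                        MACH_PORT_NULL);  /* reply port */
    }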
+
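
And for braunr's point 2/: "madvise semantics" refers to the user-space hint interface found on BSD and Linux systems, which UVM's improved page fault handler actually honours. Below is a small sketch of what such hints look like from the application side; it is illustration only, not Hurd or GNU Mach code, since taking advantage of these hints is precisely the gap discussed above.

    /* Sketch of POSIX/BSD madvise hints as seen from user space.  A VM
       that honours them can cluster page-ins for sequential access and
       prefetch the WILLNEED range before the faults arrive.  */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int
    main (int argc, char **argv)
    {
      struct stat st;
      int fd;
      char *map;
      long sum = 0;
      off_t i;

      if (argc < 2 || (fd = open (argv[1], O_RDONLY)) < 0
          || fstat (fd, &st) < 0)
        return 1;

      map = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
      if (map == MAP_FAILED)
        return 1;

      /* Hints: the mapping will be read front to back, and soon.  */
      madvise (map, st.st_size, MADV_SEQUENTIAL);
      madvise (map, st.st_size, MADV_WILLNEED);

      for (i = 0; i < st.st_size; i++)
        sum += map[i];

      printf ("checksum: %ld\n", sum);
      munmap (map, st.st_size);
      close (fd);
      return 0;
    }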
+
+# IRC, freenode, #hurd, 2011-09-28
+
+ <slpz> antrik: I've just recovered part of my old multipage I/O work
+ <slpz> antrik: I intend to clean and submit it after finishing the changes
+ to the pageout system.
+ <antrik> slpz: oh, great!
+ <antrik> didn't know you worked on multipage I/O
+ <antrik> slpz: BTW, have you checked whether any of the work done for GSoC
+ last year is any good?...
+ <antrik> (apart from missing copyright assignments, which would be a
+ serious problem for the Hurd parts...)
+ <slpz> antrik: It was seven years ago, but I did:
+ http://www.mail-archive.com/bug-hurd@gnu.org/msg10285.html :-)
+ <slpz> antrik: Sincerely, I don't think the quality of that code is good
+ enough to be considered... but I think it was my fault as his mentor for
+ not correcting him soon enough...
+ <antrik> slpz: I see
+ <antrik> TBH, I feel guilty myself, for not asking about the situation
+ immediately when he stopped attending meetings...
+ <antrik> slpz: oh, you even already looked into vm_pageout_scan() back then
+ :-)