author    Thomas Schwinge <thomas@schwinge.name>    2011-02-17 14:15:11 +0100
committer Thomas Schwinge <thomas@schwinge.name>    2011-02-17 14:15:11 +0100
commit    bdd896e0b81cfb40c8d24a78f9022f6cd1ae5e8c (patch)
tree      b910b9f770f08a7f0bc5951025db358f03dfffd7 /open_issues/performance/io_system
parent    5aef0778f741625959e4d474cac3e6c783c78175 (diff)
open_issues/performance/io_system/clustered_page_faults: New. And some more IRC discussions.
Diffstat (limited to 'open_issues/performance/io_system')
-rw-r--r--  open_issues/performance/io_system/clustered_page_faults.mdwn | 103
-rw-r--r--  open_issues/performance/io_system/read-ahead.mdwn             |  19
2 files changed, 122 insertions(+), 0 deletions(-)
diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn
new file mode 100644
index 00000000..3a187523
--- /dev/null
+++ b/open_issues/performance/io_system/clustered_page_faults.mdwn
@@ -0,0 +1,103 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!tag open_issue_gnumach open_issue_hurd]]
+
+IRC, freenode, #hurd, 2011-02-16
+
+    <braunr> except for the kernel, everything in an address space is
+ represented with a VM object
+ <braunr> those objects can represent anonymous memory (from malloc() or
+ because of a copy-on-write)
+ <braunr> or files
+ <braunr> on classic Unix systems, these are files
+ <braunr> on the Hurd, these are memory objects, backed by external pagers
+ (like ext2fs)
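+
+A rough sketch of how such a mapping is set up through Mach's `vm_map` call
+(illustrative only: error handling is elided, and the memory object port
+would come from the pager, e.g. ext2fs):
+
+    #include <mach.h>
+
+    /* Map SIZE bytes of a memory object into the calling task.  No
+       data is read here; pages are only filled in later, on fault.  */
+    vm_address_t
+    map_file_region (memory_object_t mobj, vm_offset_t offset, vm_size_t size)
+    {
+      vm_address_t addr = 0;
+      kern_return_t kr = vm_map (mach_task_self (), &addr, size,
+                                 0 /* mask */, TRUE /* anywhere */,
+                                 mobj, offset, FALSE /* don't copy */,
+                                 VM_PROT_READ, VM_PROT_READ | VM_PROT_WRITE,
+                                 VM_INHERIT_NONE);
+      return kr == KERN_SUCCESS ? addr : 0;
+    }
+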
+ <braunr> so when you read a file
+ <braunr> the kernel maps it from ext2fs in your address space
+ <braunr> and when you access the memory, a fault occurs
+ <braunr> the kernel determines it's a region backed by ext2fs
+ <braunr> so it asks ext2fs to provide the data
+ <braunr> when the fault is resolved, your process goes on
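+
+On the pager side, such a fault shows up as a `memory_object_data_request`
+message, following the external memory management interface.  A minimal
+sketch of a handler, with `read_from_backing_store` as a hypothetical
+helper:
+
+    /* The kernel asks the pager for the data a faulting task needs;
+       the pager reads it and supplies the page(s).  */
+    kern_return_t
+    memory_object_data_request (memory_object_t obj,
+                                memory_object_control_t control,
+                                vm_offset_t offset, vm_size_t length,
+                                vm_prot_t desired_access)
+    {
+      /* Hypothetical helper: fetch LENGTH bytes at OFFSET from disk.  */
+      vm_offset_t data = read_from_backing_store (obj, offset, length);
+
+      /* Hand the pages over; with dealloc == TRUE they are moved into
+         the kernel, not copied, so no extra buffering takes place.  */
+      return memory_object_data_supply (control, offset, data, length,
+                                        TRUE /* dealloc */, VM_PROT_NONE,
+                                        FALSE /* precious */, MACH_PORT_NULL);
+    }
+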
+    <etenil> does the fault occur because Mach doesn't know how to access the
+ memory?
+    <braunr> it occurs because Mach intentionally didn't back the region with
+ physical memory
+ <braunr> the MMU is programmed not to know what is present in the memory
+ region
+ <braunr> or because it's read only
+ <braunr> (which is the case for COW faults)
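+
+Schematically, the decision the fault handler makes (all names below are
+hypothetical, for illustration; this is not gnumach's actual vm_fault code):
+
+    void
+    handle_fault (struct vm_map_entry *entry, vm_offset_t offset,
+                  boolean_t write_access)
+    {
+      if (!page_resident (entry, offset))
+        /* Backed by a memory object, but not in physical memory:
+           ask the pager for the data.  */
+        request_from_pager (entry, offset);
+      else if (write_access && entry_is_copy_on_write (entry))
+        /* Write to a read-only COW page: copy it, then map the
+           copy writable.  */
+        copy_and_map_writable (entry, offset);
+      else
+        /* Resident and permitted: just program the MMU.  */
+        enter_mapping (entry, offset, write_access);
+    }
+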
+ <etenil> so that means this bit of memory is a buffer that ext2fs loads the
+ file into and then it is remapped to the application that asked for it
+ <braunr> more or less, yes
+ <braunr> ideally, it's directly written into the right pages
+ <braunr> there is no intermediate buffer
+ <etenil> I see
+ <etenil> and as you told me before, currently the page faults are handled
+ one at a time
+ <etenil> which wastes a lot of time
+ <braunr> a certain amount of time
+ <etenil> enough to bother the user :)
+ <etenil> I've seen pages have a fixed size
+ <braunr> yes
+ <braunr> use the PAGE_SIZE macro
+ <etenil> and when allocating memory, the size that's asked for is rounded
+ up to the page size
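+
+The rounding is simple mask arithmetic; the real definitions live in Mach's
+`<mach/vm_param.h>`, this macro just shows the usual shape:
+
+    #include <mach/vm_param.h>   /* PAGE_SIZE */
+
+    /* Round a byte count up to a whole number of pages.  */
+    #define round_alloc(x) \
+      (((vm_size_t) (x) + PAGE_SIZE - 1) & ~((vm_size_t) PAGE_SIZE - 1))
+
+    /* With 4 KiB pages, round_alloc (5000) == 8192: a 5000-byte
+       allocation occupies two pages, wasting 3192 bytes; larger page
+       sizes make this internal fragmentation worse.  */
+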
+ <etenil> so if I have this correctly, it means that a file ext2fs provides
+ could be split into a lot of pages
+ <braunr> yes
+ <braunr> once in memory, it is managed by the page cache
+ <braunr> so that pages more actively used are kept longer than others
+ <braunr> in order to minimize I/O
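+
+Schematically, this aging works like a second-chance scheme over active and
+inactive page queues (a sketch only; the helper names are hypothetical):
+
+    /* Pick a page to evict: referenced inactive pages get a second
+       chance and move back to the active queue.  */
+    struct vm_page *
+    choose_victim (struct queue *active, struct queue *inactive)
+    {
+      struct vm_page *page;
+
+      while ((page = queue_last (inactive)) != NULL && page->referenced)
+        {
+          page->referenced = FALSE;
+          queue_move_to_head (active, page);
+        }
+      return page;   /* coldest unreferenced page, or NULL */
+    }
+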
+ <etenil> ok
+ <braunr> so a better page cache code would also improve overall performance
+    <braunr> and more RAM would help a lot, since we are strongly constrained by
+ the 768 MiB limit
+ <braunr> which reduces the page cache size a lot
+    <etenil> but the problem is that reading a whole file in means triggering
+ many page faults just for one file
+ <braunr> if you want to stick to the page clustering thing, yes
+    <braunr> you want fewer page faults, so that there is less IPC between the
+ kernel and the pager
+ <etenil> so either I make pages bigger
+    <etenil> or I modify Mach so it can check a range of pages for faults
+      before actually processing them
+ <braunr> you *don't* change the page size
+ <etenil> ah
+ <etenil> that's hardware isn't it?
+ <braunr> in Mach, yes
+ <etenil> ok
+ <braunr> and usually, you want the page size to be the CPU page size
+ <etenil> I see
+    <braunr> current CPUs can support multiple page sizes, but it becomes quite
+ hard to correctly handle
+ <braunr> and bigger page sizes mean more fragmentation, so it only suits
+ machines with large amounts of RAM, which isn't the case for us
+ <etenil> ok
+ <etenil> so I'll try the second approach then
+    <braunr> that's what i'd recommend
+ <etenil> ok
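+
+A sketch of that second approach (illustrative, not existing gnumach code):
+on a fault, resolve a whole cluster of pages around the faulting address
+with a single request to the pager:
+
+    #define CLUSTER_PAGES 8   /* illustrative cluster size */
+
+    void
+    fault_request_cluster (memory_object_t pager,
+                           memory_object_control_t control,
+                           vm_offset_t offset, vm_size_t object_size)
+    {
+      vm_offset_t start = trunc_page (offset);
+      vm_size_t length = CLUSTER_PAGES * PAGE_SIZE;
+
+      if (start + length > object_size)   /* clip to the object's end */
+        length = object_size - start;
+
+      /* One request covering LENGTH bytes replaces up to CLUSTER_PAGES
+         single-page requests, i.e. one IPC instead of many.  */
+      memory_object_data_request (pager, control, start, length,
+                                  VM_PROT_READ);
+    }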
+
+---
+
+IRC, freenode, #hurd, 2011-02-16
+
+ <antrik> etenil: OSF Mach does have clustered paging BTW; so that's one
+ place to start looking...
+ <antrik> (KAM ported the OSF code to gnumach IIRC)
+ <antrik> there is also an existing patch for clustered paging in libpager,
+ which needs some adaptation
+ <antrik> the biggest part of the task is probably modifying the Hurd
+ servers to use the new interface
+ <antrik> but as I said, KAM's code should be available through google, and
+ can serve as a starting point
+
+<http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html>
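+
+Roughly, the pager-side adaptation amounts to this (a sketch;
+`backing_store_read` is a hypothetical helper): a request spanning several
+pages is answered with one read and one supply:
+
+    kern_return_t
+    pager_read_cluster (memory_object_control_t control,
+                        vm_offset_t offset, vm_size_t length)
+    {
+      /* One backing-store read for the whole run of pages.  */
+      vm_offset_t data = backing_store_read (offset, length);
+
+      /* A single reply supplies every page in the cluster.  */
+      return memory_object_data_supply (control, offset, data, length,
+                                        TRUE /* dealloc */, VM_PROT_NONE,
+                                        FALSE /* precious */,
+                                        MACH_PORT_NULL);
+    }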
diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
index 241cda41..3ee30b5d 100644
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -278,3 +278,22 @@ IRC, freenode, #hurd, 2011-02-15
<antrik> BTW, I remembered now that KAM's GSoC application should have a
pretty good description of the necessary changes... unfortunately, these
are not publicly visible IIRC :-(
+
+---
+
+IRC, freenode, #hurd, 2011-02-16
+
+ <etenil> braunr: I've looked in the kernel to see where prefetching would
+ fit best. We talked of the VM yesterday, but I'm not sure about it. It
+ seems to me that the device part of the kernel makes more sense since
+ it's logically what manages devices, am I wrong?
+ <braunr> etenil: you are
+ <braunr> etenil: well
+ <braunr> etenil: drivers should already support clustered sector
+ read/writes
+ <etenil> ah
+ <braunr> but yes, there must be support in the drivers too
+ <braunr> what would really benefit the Hurd mostly concerns page faults, so
+ the right place is the VM subsystem
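+
+A sketch of what a VM-level read-ahead policy could look like (illustrative
+only): detect sequential faults on a mapping and grow a per-object prefetch
+window:
+
+    struct readahead {
+      vm_offset_t next;    /* offset a sequential reader faults on next */
+      vm_size_t window;    /* current prefetch size; starts at PAGE_SIZE */
+    };
+
+    /* Called on each fault; returns how many bytes to request from
+       the pager, starting at the faulting page.  */
+    vm_size_t
+    readahead_advise (struct readahead *ra, vm_offset_t fault_off)
+    {
+      vm_offset_t page = trunc_page (fault_off);
+
+      if (page == ra->next)
+        {
+          if (ra->window < 32 * PAGE_SIZE)   /* sequential: grow, capped */
+            ra->window *= 2;
+        }
+      else
+        ra->window = PAGE_SIZE;   /* random access: be conservative */
+
+      ra->next = page + ra->window;
+      return ra->window;
+    }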
+
+[[clustered_page_faults]]