author    Thomas Schwinge <thomas@schwinge.name>    2011-02-16 10:12:26 +0100
committer Thomas Schwinge <thomas@schwinge.name>    2011-02-16 10:12:26 +0100
commit    c01c02cd0e656e7f569560ce38ddad96ff9bbb43 (patch)
tree      185372ddb89d3b6fc0a3e4240b70bf0bec609c46 /open_issues/performance/io_system/read-ahead.mdwn
parent    b91f1d19f234c6da2ad4795db625030be855c992 (diff)
open_issues/performance/io_system/read-ahead: Some more IRC.
Diffstat (limited to 'open_issues/performance/io_system/read-ahead.mdwn')
-rw-r--r--  open_issues/performance/io_system/read-ahead.mdwn | 230
1 file changed, 230 insertions(+), 0 deletions(-)
diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
index c3a0c1bb..241cda41 100644
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -26,6 +26,8 @@ IRC, #hurd, freenode, 2011-02-13:
portability
<youpi> it's not in posix indeed
+---
+
IRC, #hurd, freenode, 2011-02-14:
<etenil> youpi: I've investigated prefetching (readahead) techniques. One
@@ -47,4 +49,232 @@ IRC, #hurd, freenode, 2011-02-14:
<braunr> oh, madvise() stuff
<braunr> i could help him with that
+---
+
[[Etenil]] is now working in this area.
+
+---
+
+IRC, #hurd, freenode, 2011-02-15:
+
+ <etenil> oh, I'm looking into prefetching/readahead to improve I/O
+ performance
+ <braunr> etenil: ok
+ <braunr> etenil: that's actually a VM improvement, like samuel told you
+ <etenil> yes
+ <braunr> a true I/O improvement would be I/O scheduling
+ <braunr> and how to implement it in a hurdish way
+ <braunr> (or if it makes sense to have it in the kernel)
+ <etenil> that's what I've been wondering too lately
+ <braunr> concerning the VM, you should look at madvise()
+ <etenil> my understanding is that Mach considers devices without really
+ knowing what they are
+ <braunr> that's roughly the interface used both at the syscall() and the
+ kernel levels in BSD, which made it in many other unix systems
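+
+For reference, that interface looks roughly like this (a sketch; check the
+actual headers for the authoritative declaration):
+
+    #include <sys/mman.h>
+
+    /* The classic call: apply ADVICE to the range [ADDR, ADDR+LENGTH).  */
+    int madvise (void *addr, size_t length, int advice);
+
+    /* Common BSD-derived advice values:
+       MADV_NORMAL, MADV_RANDOM, MADV_SEQUENTIAL,
+       MADV_WILLNEED, MADV_DONTNEED  */
+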
+ <etenil> whereas I/O optimisations are often specific to hard disk drives
+ <braunr> that's true for almost any kernel
+ <braunr> the device knowledge is at the driver level
+ <etenil> yes
+ <braunr> (here, I separate kernels from their drivers ofc)
+ <etenil> but Mach also contains some drivers, so I'm going through the code
+ to find the appropriate place for these improvements
+ <braunr> you shouldn't touch the drivers at all
+ <etenil> true, but I need to understand how it works before fiddling around
+ <braunr> hm
+ <braunr> not at all
+ <braunr> the VM improvement is about pagein clustering
+ <braunr> you don't need to know how pages are fetched
+ <braunr> well, not at the device level
+ <braunr> you need to know about the protocol between the kernel and
+ external pagers
+ <etenil> ok
+ <braunr> you could also implement pageout clustering
+ <etenil> if I understand you well, you say that what I'd need to do is a
+ queuing system for the paging in the VM?
+ <braunr> no
+ <braunr> i'm saying that, when a page fault occurs, the kernel should
+ (depending on what was configured through madvise()) transfer pages in
+ multiple blocks rather than one at a time
+ <braunr> communication with external pagers is already async, made through
+ regular ports
+ <braunr> which already implement message queuing
+ <braunr> you would just need to make the mapped regions larger
+ <braunr> and maybe change the interface so that this size is passed
+ <etenil> mmh
+ <braunr> (also don't forget that page clustering can include pages *before*
+ the page which caused the fault, so you may have to pass the start of
+ that region too)
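+
+A rough sketch of what that could look like on the external-pager side; the
+signatures follow the classic Mach external memory management interface as
+best recalled here, and read_backing_store() is an invented helper:
+
+    /* Pager-side handler for a kernel data request.  With clustering,
+       OFFSET may point before the faulting page and LENGTH may cover
+       several pages instead of one.  */
+    kern_return_t
+    memory_object_data_request (memory_object_t pager,
+                                memory_object_control_t control,
+                                vm_offset_t offset,
+                                vm_size_t length,
+                                vm_prot_t desired_access)
+    {
+      /* Invented helper: read LENGTH bytes at OFFSET from the backing
+         store into a page-aligned buffer.  */
+      vm_offset_t data = read_backing_store (offset, length);
+
+      /* Hand the whole cluster back to the kernel in one message.  */
+      return memory_object_data_supply (control, offset, data, length,
+                                        VM_PROT_NONE,  /* no lock */
+                                        FALSE,         /* not precious */
+                                        MACH_PORT_NULL);
+    }
+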
+ <etenil> I'm not sure I understand the page fault thing
+ <etenil> is it like a segmentation error?
+ <etenil> I can't find a clear definition in Mach's manual
+ <braunr> ah
+ <braunr> it's a fundamental operating system concept
+ <braunr> http://en.wikipedia.org/wiki/Page_fault
+ <etenil> ah ok
+ <etenil> I understand now
+ <etenil> so what's currently happening is that when a page fault occurs,
+ Mach is transferring pages one at a time and wastes time
+ <braunr> sometimes, transferring just one page is what you want
+ <braunr> it depends on the application, which is why there is madvise()
+ <braunr> our rootfs, on the other hand, would benefit much from such an
+ improvement
+ <braunr> in UVM, this optimization accounts for around a 10% global
+ performance improvement
+ <etenil> not bad
+ <braunr> well, with an improved page cache, I'm sure I/O would matter less
+ on systems with more RAM
+ <braunr> (and another improvement would make mach support more RAM in the
+ first place !)
+ <braunr> an I/O scheduler outside the kernel would be a very good project
+ IMO
+ <braunr> in e.g. libstore/storeio
+ <etenil> yes
+ <braunr> but as i stated in my thesis, a resource scheduler should be as
+ close to its resource as it can
+ <braunr> and since mach can host several operating systems, I/O schedulers
+ should reside near device drivers
+ <braunr> and since current drivers are in the kernel, it makes sense to have
+ it in the kernel too
+ <braunr> so there must be some discussion about this
+ <etenil> doesn't this mean that we'll have to get some optimizations in
+ Mach and have the same outside of Mach for translators that access the
+ hardware directly?
+ <braunr> etenil: why ?
+ <etenil> well as you said Mach contains some drivers, but in principle, it
+ shouldn't, translators should do disk access etc, yes?
+ <braunr> etenil: ok
+ <braunr> etenil: so ?
+ <etenil> well, let's say if one were to introduce SATA support in Hurd,
+ nothing would stop him/her from doing so with a translator rather than in Mach
+ <braunr> you should avoid the term translator here
+ <braunr> it's really hurd specific
+ <braunr> let's just say a user space task would be responsible for that
+ job, maybe multiple instances of it, yes
+ <etenil> ok, so in this case, let's say we have some I/O optimization
+ techniques like readahead and I/O scheduling within Mach, would these
+ also apply to the user-space task, or would they need to be
+ reimplemented?
+ <braunr> if you have user space drivers, there is no point having I/O
+ scheduling in the kernel
+ <etenil> but we also have drivers within the kernel
+ <braunr> what you call readahead, and I call pagein/out clustering, is
+ really tied to the VM, so it must be in Mach in any case
+ <braunr> well
+ <braunr> you either have one or the other
+ <braunr> currently we have them in the kernel
+ <braunr> if we switch to DDE, we should have all of them outside
+ <braunr> that's why such things must be discussed
+ <etenil> ok so if I follow you, then future I/O device drivers will need to
+ be implemented for Mach
+ <braunr> currently, yes
+ <braunr> but preferably, someone should continue the work that has been
+ done on DDE so that drivers are outside the kernel
+ <etenil> so for the time being, I will try and improve I/O in Mach, and if
+ drivers ever get out, then some of the I/O optimizations will need to be
+ moved out of Mach
+ <braunr> let me remind you one of the things i said
+ <braunr> i said I/O scheduling should be close to its resource, because
+ we can host several operating systems
+ <braunr> now, the Hurd is the only system running on top of Mach
+ <braunr> so we could just have I/O scheduling outside too
+ <braunr> then you should consider neighbor hurds
+ <braunr> which can use different partitions, but on the same device
+ <braunr> currently, partitions are managed in the kernel, so file systems
+ (and storeio) can't make good scheduling decisions if it remains that way
+ <braunr> but that can change too
+ <braunr> a single storeio representing a whole disk could be shared by
+ several hurd instances, just as if it were a high level driver
+ <braunr> then you could implement I/O scheduling in storeio, which would be
+ an improvement for the current implementation, and reusable for future
+ work
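+
+For illustration, a storeio-level scheduler could start as a simple elevator
+over pending requests; every name below is invented for the sketch, nothing
+like it exists in the current libstore/storeio code:
+
+    #include <stddef.h>
+    #include <sys/types.h>
+
+    /* Toy C-LOOK elevator over pending block requests.  */
+    struct io_request
+    {
+      off_t block;               /* first block of the transfer */
+      size_t count;              /* number of blocks */
+      struct io_request *next;   /* list kept sorted by BLOCK */
+    };
+
+    static struct io_request *pending;  /* sorted queue */
+    static off_t head_pos;              /* last block serviced */
+
+    /* Serve the nearest request at or beyond the current head
+       position; wrap to the lowest block once the end is reached.  */
+    static struct io_request *
+    next_request (void)
+    {
+      struct io_request *r;
+
+      for (r = pending; r != NULL; r = r->next)
+        if (r->block >= head_pos)
+          return r;
+      return pending;            /* wrap around */
+    }
+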
+ <etenil> yes, that was my first instinct
+ <braunr> and you would be mostly free of the kernel internals that make it
+ a nightmare
+ <etenil> but youpi said that it would be better to modify Mach instead
+ <braunr> he mentioned the page clustering thing
+ <braunr> not I/O scheduling
+ <braunr> these are really two different things
+ <etenil> ok
+ <braunr> you *can't* implement page clustering outside Mach because Mach
+ implements virtual memory
+ <braunr> both policies and mechanisms
+ <etenil> well, I'd rather think of one thing at a time if that's alright
+ <etenil> so what I'm busy with right now is setting up clustered page-in
+ <etenil> which need to be done within Mach
+ <braunr> keep clustered page-outs in mind too
+ <braunr> although there are more constraints on those
+ <etenil> yes
+ <etenil> I've looked up madvise(). There's a lot of documentation about it
+ in Linux but I couldn't find references to it in Mach (nor Hurd), does it
+ exist?
+ <braunr> well, if it did, you wouldn't be caring about clustered page
+ transfers, would you ?
+ <braunr> be careful about linux specific stuff
+ <etenil> I suppose not
+ <braunr> you should implement at least posix options, and if there are
+ more, consider the bsd variants
+ <braunr> (the Mach VM is the ancestor of all modern BSD VMs)
+ <etenil> madvise() seems to be posix
+ <braunr> there are system specific extensions
+ <braunr> be careful
+ <braunr> CONFORMING TO POSIX.1b. POSIX.1-2001 describes posix_madvise(3)
+ with constants POSIX_MADV_NORMAL, etc., with a behavior close to that
+ described here. There is a similar posix_fadvise(2) for file access.
+ <braunr> MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK, MADV_HWPOISON,
+ MADV_MERGEABLE, and MADV_UNMERGEABLE are Linux-specific.
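+
+The application-facing side is just the portable call; a minimal,
+hypothetical example (the file path is made up):
+
+    #include <fcntl.h>
+    #include <sys/mman.h>
+    #include <sys/stat.h>
+    #include <unistd.h>
+
+    int
+    main (void)
+    {
+      int fd = open ("/tmp/example.dat", O_RDONLY);  /* made-up path */
+      struct stat st;
+      fstat (fd, &st);
+
+      void *p = mmap (NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
+
+      /* Advise the VM that the mapping will be read sequentially, so
+         page-ins can be clustered aggressively.  */
+      posix_madvise (p, st.st_size, POSIX_MADV_SEQUENTIAL);
+
+      /* ... scan the mapping ... */
+
+      munmap (p, st.st_size);
+      close (fd);
+      return 0;
+    }
+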
+ <etenil> I was about to post these
+ <etenil> ok, so basically madvise() allows tasks etc. to specify a usage
+ type for a chunk of memory, then I could apply the relevant I/O
+ optimization based on this
+ <braunr> that's it
+ <etenil> cool, then I don't need to worry about knowing what the I/O is
+ operating on, I just need to apply the optimizations as advised
+ <etenil> that's convenient
+ <etenil> ok I'll start working on this tonight
+ <etenil> making a basic readahead shouldn't be too hard
+ <braunr> readahead is a misleading name
+ <etenil> is pagein better?
+ <braunr> applies to too many things, doesn't include the case where
+ previous elements could be prefetched
+ <braunr> clustered page transfers is what i would use
+ <braunr> page prefetching maybe
+ <etenil> ok
+ <braunr> you should stick to something that's already used in the
+ literature since you're not inventing something new
+ <etenil> yes I've read a paper about prefetching
+ <etenil> ok
+ <etenil> thanks for your help braunr
+ <braunr> sure
+ <braunr> you're welcome
+ <antrik> braunr: madvise() is really the least important part of the
+ picture...
+ <antrik> very few applications actually use it. but pretty much all
+ applications will profit from clustered paging
+ <antrik> I would consider madvise() an optional goody, not an integral part
+ of the implementation
+ <antrik> etenil: you can find some stuff about KAM's work on
+ http://www.gnu.org/software/hurd/user/kam.html
+ <antrik> not much specific though
+ <etenil> thanks
+ <antrik> I don't remember exactly, but I guess there is also some
+ information on the mailing list. check the archives for last summer
+ <antrik> look for Karim Allah Ahmed
+ <etenil> antrik: I disagree, madvise gives me a good starting point, even
+ if eventually the optimisations should run even without it
+ <antrik> the code he wrote should be available from Google's summer of code
+ page somewhere...
+ <braunr> antrik: right, i was mentioning madvise() because the kernel (VM)
+ interface is pretty similar to the syscall
+ <braunr> but even a default policy would be nice
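+
+Such a default could be as small as computing an aligned window around the
+faulting page; a sketch, with an invented window size and no interaction
+with advice yet:
+
+    #include <mach.h>   /* vm_offset_t, vm_size_t, vm_page_size */
+
+    #define CLUSTER_PAGES 8   /* invented default, must be a power of 2 */
+
+    /* Clamp an aligned CLUSTER_PAGES-sized window around FAULT_OFF to
+       the size of the object.  */
+    static void
+    cluster_bounds (vm_offset_t fault_off, vm_size_t object_size,
+                    vm_offset_t *start, vm_size_t *length)
+    {
+      vm_size_t win = CLUSTER_PAGES * vm_page_size;
+      vm_offset_t base = fault_off & ~(win - 1);
+
+      if (base + win > object_size)
+        win = object_size - base;
+
+      *start = base;
+      *length = win;
+    }
+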
+ <antrik> etenil: I fear that many bits were discussed only on IRC... so
+ you'd better look through the IRC logs from last April onwards...
+ <etenil> ok
+
+ <etenil> at the beginning I thought I could put that into libstore
+ <etenil> which would have been fine
+
+ <antrik> BTW, I remembered now that KAM's GSoC application should have a
+ pretty good description of the necessary changes... unfortunately, these
+ are not publicly visible IIRC :-(