author     Thomas Schwinge <thomas@codesourcery.com>  2012-05-24 23:08:09 +0200
committer  Thomas Schwinge <thomas@codesourcery.com>  2012-05-24 23:08:09 +0200
commit     2910b7c5b1d55bc304344b584a25ea571a9075fb (patch)
tree       bfbfbc98d4c0e205d2726fa44170a16e8421855e /open_issues/performance
parent     35b719f54c96778f571984065579625bc9f15bf5 (diff)
Prepare toolchain/logs/master branch.
Diffstat (limited to 'open_issues/performance')
-rw-r--r--  open_issues/performance/degradation.mdwn                          52
-rw-r--r--  open_issues/performance/fork.mdwn                                 37
-rw-r--r--  open_issues/performance/io_system/binutils_ld_64ksec.mdwn         39
-rw-r--r--  open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz  bin 378092 -> 0 bytes
-rw-r--r--  open_issues/performance/io_system/clustered_page_faults.mdwn     162
-rw-r--r--  open_issues/performance/io_system/read-ahead.mdwn                 391
-rw-r--r--  open_issues/performance/ipc_virtual_copy.mdwn                     395
-rw-r--r--  open_issues/performance/microbenchmarks.mdwn                       13
-rw-r--r--  open_issues/performance/microkernel_multi-server.mdwn              47
9 files changed, 0 insertions(+), 1136 deletions(-)
diff --git a/open_issues/performance/degradation.mdwn b/open_issues/performance/degradation.mdwn
deleted file mode 100644
index 1aaae4d2..00000000
--- a/open_issues/performance/degradation.mdwn
+++ /dev/null
@@ -1,52 +0,0 @@
-[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!meta title="Degradation of GNU/Hurd ``system performance''"]]
-
-[[!tag open_issue_gnumach open_issue_hurd]]
-
-[[!toc]]
-
-
-# Email, [[!message-id "87mxg2ahh8.fsf@kepler.schwinge.homeip.net"]] (bug-hurd, 2011-07-25, Thomas Schwinge)
-
-> Building a certain GCC configuration on a freshly booted system: 11 h.
-> Remove build tree, build it again (2nd): 12 h 50 min. Huh. Remove build
-> tree, reboot, build it again (1st): back to 11 h. Remove build tree, build
-> it again (2nd): 12 h 40 min. Remove build tree, build it again (3rd): 15 h.
-
-IRC, freenode, #hurd, 2011-07-23:
-
- < antrik> tschwinge: yes, the system definitely gets slower with
- time. after running for a couple of weeks, it needs at least twice as
- long to open a new shell for example
- < antrik> I don't know whether this is only related to swap usage, or there
- are some serious fragmentation issues
- < braunr> antrik: both could be induced by fragmentation
-
-
-# During [[IPC_virtual_copy]] testing
-
-IRC, freenode, #hurd, 2011-09-02:
-
- <manuel> interestingly, running it several times has made the performance
- drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
- 800 fifteen minutes ago)
- <braunr> manuel: i observed the same behaviour
- [...]
-
-
-# IRC, freenode, #hurd, 2011-09-22
-
-See [[/open_issues/resource_management_problems/pagers]], IRC, freenode, #hurd,
-2011-09-22.
-
-
-# [[ext2fs_page_cache_swapping_leak]]
diff --git a/open_issues/performance/fork.mdwn b/open_issues/performance/fork.mdwn
deleted file mode 100644
index 5ceb6455..00000000
--- a/open_issues/performance/fork.mdwn
+++ /dev/null
@@ -1,37 +0,0 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!tag open_issue_glibc open_issue_hurd]]
-
-Our [[`fork` implementation|glibc/fork]] is nontrivial.
-
-To do: hard numbers.
-[[Microbenchmarks]]?
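-
-A minimal sketch of such a microbenchmark (plain POSIX C; the iteration
-count and the bare `fork`+`waitpid` loop are illustrative choices, not an
-established harness):
-
-    #include <stdio.h>
-    #include <stdlib.h>
-    #include <sys/time.h>
-    #include <sys/types.h>
-    #include <sys/wait.h>
-    #include <unistd.h>
-
-    int main(void)
-    {
-        enum { N = 1000 };
-        struct timeval t0, t1;
-
-        gettimeofday(&t0, NULL);
-        for (int i = 0; i < N; i++) {
-            pid_t pid = fork();
-            if (pid < 0) { perror("fork"); exit(1); }
-            if (pid == 0)
-                _exit(0);               /* child exits immediately */
-            waitpid(pid, NULL, 0);      /* parent reaps before the next round */
-        }
-        gettimeofday(&t1, NULL);
-
-        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
-        printf("%d fork+exit+wait cycles: %.0f us per cycle\n", N, us / N);
-        return 0;
-    }
-
-Comparing the per-cycle figure on GNU/Hurd against GNU/Linux on the same
-machine would give a first hard number.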
-
-
-# Windows / Cygwin
-
- * <http://www.google.com/search?q=cygwin+fork>
-
- * <http://www.redhat.com/support/wpapers/cygnus/cygnus_cygwin/architecture.html>
-
- In particular, *5.6. Process Creation*.
-
- * <http://archive.gamedev.net/community/forums/topic.asp?topic_id=360290>
-
- * <http://cygwin.com/cgi-bin/cvsweb.cgi/src/winsup/cygwin/how-cygheap-works.txt?cvsroot=src>
-
- > Cygwin has recently adopted something called the "cygwin heap". This is
- > an internal heap that is inherited by forked/execed children. It
- > consists of process specific information that should be inherited. So
- > things like the file descriptor table, the current working directory, and
- > the chroot value live there.
-
- * <http://www.perlmonks.org/?node_id=588994>
diff --git a/open_issues/performance/io_system/binutils_ld_64ksec.mdwn b/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
deleted file mode 100644
index 931fd0ee..00000000
--- a/open_issues/performance/io_system/binutils_ld_64ksec.mdwn
+++ /dev/null
@@ -1,39 +0,0 @@
-[[!meta copyright="Copyright © 2010, 2011 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!tag open_issue_hurd]]
-
-This one may be considered a testcase for [[I/O system
-optimization|community/gsoc/project_ideas/disk_io_performance]].
-
-It is taken from the [[binutils testsuite|binutils]],
-`ld/ld-elf/sec64k.exp`, where this
-test may occasionally [[trigger a timeout|binutils#64ksec]]. It is
-extracted from cdf7c161ebd4a934c9e705d33f5247fd52975612 sources, 2010-10-24.
-
- $ wget -O - http://www.gnu.org/software/hurd/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz | xz -d | tar -x
- $ cd test/
- $ \time ./ld-new.stripped -o dump dump?.o dump??.o
- 0.00user 0.00system 2:46.11elapsed 0%CPU (0avgtext+0avgdata 0maxresident)k
- 0inputs+0outputs (0major+0minor)pagefaults 0swaps
-
-On the idle grubber, this one repeatedly takes a few minutes of wall time to
-complete successfully, compared to a few seconds on a GNU/Linux system.
-
-While processing the object files, there is heavy interaction with the relevant
-[[hurd/translator/ext2fs]] process. Running [[hurd/debugging/rpctrace]] on
-the testee shows that (primarily) an ever-repeating series of `io_seek` and
-`io_read` is being processed. Running the testee on GNU/Linux with strace
-shows the equivalent thing (`_llseek`, `read`) -- but Linux' I/O system isn't
-as slow as the Hurd's.
-
-As Samuel figured out later, this slowness may in fact be due to a Xen-specific
-issue, see [[Xen_lseek]]. After the latter has been addressed, we can
-re-evaluate this issue here.
diff --git a/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz b/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz
deleted file mode 100644
index 6d7c606c..00000000
--- a/open_issues/performance/io_system/binutils_ld_64ksec/test.tar.xz
+++ /dev/null
Binary files differ
diff --git a/open_issues/performance/io_system/clustered_page_faults.mdwn b/open_issues/performance/io_system/clustered_page_faults.mdwn
deleted file mode 100644
index a3baf30d..00000000
--- a/open_issues/performance/io_system/clustered_page_faults.mdwn
+++ /dev/null
@@ -1,162 +0,0 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!tag open_issue_gnumach open_issue_hurd]]
-
-[[community/gsoc/project_ideas/disk_io_performance]].
-
-[[!toc]]
-
-
-# IRC, freenode, #hurd, 2011-02-16
-
-    <braunr> except for the kernel, everything in an address space is
- represented with a VM object
- <braunr> those objects can represent anonymous memory (from malloc() or
- because of a copy-on-write)
- <braunr> or files
- <braunr> on classic Unix systems, these are files
- <braunr> on the Hurd, these are memory objects, backed by external pagers
- (like ext2fs)
- <braunr> so when you read a file
- <braunr> the kernel maps it from ext2fs in your address space
- <braunr> and when you access the memory, a fault occurs
- <braunr> the kernel determines it's a region backed by ext2fs
- <braunr> so it asks ext2fs to provide the data
- <braunr> when the fault is resolved, your process goes on
-    <etenil> does the fault occur because Mach doesn't know how to access the
- memory?
-    <braunr> it occurs because Mach intentionally didn't back the region with
- physical memory
- <braunr> the MMU is programmed not to know what is present in the memory
- region
- <braunr> or because it's read only
- <braunr> (which is the case for COW faults)
- <etenil> so that means this bit of memory is a buffer that ext2fs loads the
- file into and then it is remapped to the application that asked for it
- <braunr> more or less, yes
- <braunr> ideally, it's directly written into the right pages
- <braunr> there is no intermediate buffer
- <etenil> I see
- <etenil> and as you told me before, currently the page faults are handled
- one at a time
- <etenil> which wastes a lot of time
- <braunr> a certain amount of time
- <etenil> enough to bother the user :)
- <etenil> I've seen pages have a fixed size
- <braunr> yes
- <braunr> use the PAGE_SIZE macro
- <etenil> and when allocating memory, the size that's asked for is rounded
- up to the page size
- <etenil> so if I have this correctly, it means that a file ext2fs provides
- could be split into a lot of pages
- <braunr> yes
- <braunr> once in memory, it is managed by the page cache
- <braunr> so that pages more actively used are kept longer than others
- <braunr> in order to minimize I/O
- <etenil> ok
- <braunr> so a better page cache code would also improve overall performance
- <braunr> and more RAM would help a lot, since we are strongly limited by
- the 768 MiB limit
- <braunr> which reduces the page cache size a lot
-    <etenil> but the problem is that reading a whole file in means triggering
- many page faults just for one file
- <braunr> if you want to stick to the page clustering thing, yes
- <braunr> you want less page faults, so that there are less IPC between the
- kernel and the pager
- <etenil> so either I make pages bigger
- <etenil> or I modify Mach so it can check up on a range of pages for faults
- before actually processing
- <braunr> you *don't* change the page size
- <etenil> ah
- <etenil> that's hardware isn't it?
- <braunr> in Mach, yes
- <etenil> ok
- <braunr> and usually, you want the page size to be the CPU page size
- <etenil> I see
- <braunr> current CPU can support multiple page sizes, but it becomes quite
- hard to correctly handle
- <braunr> and bigger page sizes mean more fragmentation, so it only suits
- machines with large amounts of RAM, which isn't the case for us
- <etenil> ok
- <etenil> so I'll try the second approach then
- <braunr> that's what i'd recommand
- <braunr> recommend*
- <etenil> ok
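-
-The one-fault-per-page behaviour discussed above can be observed from user
-space.  A small sketch (plain POSIX C; counting faults via `getrusage` is
-an illustrative choice, not how Mach itself accounts for them):
-
-    #include <fcntl.h>
-    #include <stdio.h>
-    #include <sys/mman.h>
-    #include <sys/resource.h>
-    #include <sys/stat.h>
-    #include <unistd.h>
-
-    int main(int argc, char **argv)
-    {
-        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }
-
-        int fd = open(argv[1], O_RDONLY);
-        struct stat st;
-        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }
-
-        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
-        if (p == MAP_FAILED) { perror("mmap"); return 1; }
-
-        long page = sysconf(_SC_PAGESIZE);
-        struct rusage before, after;
-        getrusage(RUSAGE_SELF, &before);
-
-        volatile char sum = 0;
-        for (off_t off = 0; off < st.st_size; off += page)
-            sum += p[off];              /* touch one byte per page */
-
-        getrusage(RUSAGE_SELF, &after);
-        printf("%ld pages touched: %ld minor + %ld major faults\n",
-               (long) ((st.st_size + page - 1) / page),
-               after.ru_minflt - before.ru_minflt,
-               after.ru_majflt - before.ru_majflt);
-        return 0;
-    }
-
-Without clustered pageins, the fault count tracks the page count; a
-clustering implementation would be expected to divide it by the cluster
-size.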
-
-
-# IRC, freenode, #hurd, 2011-02-16
-
- <antrik> etenil: OSF Mach does have clustered paging BTW; so that's one
- place to start looking...
- <antrik> (KAM ported the OSF code to gnumach IIRC)
- <antrik> there is also an existing patch for clustered paging in libpager,
- which needs some adaptation
- <antrik> the biggest part of the task is probably modifying the Hurd
- servers to use the new interface
- <antrik> but as I said, KAM's code should be available through google, and
- can serve as a starting point
-
-<http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html>
-
-
-# IRC, freenode, #hurd, 2011-07-22
-
-    <braunr> but concerning clustered pageins/outs, i'm not sure it's a mach
- interface limitation
- <braunr> the external memory pager interface does allow multiple pages to
-      be transferred
- <braunr> isn't it an internal Mach VM problem ?
- <braunr> isn't it simply the page fault handler ?
- <antrik> braunr: are you sure? I was under the impression that changing the
- pager interface was among the requirements...
- <antrik> hm... I wonder whether for pageins, it could actually be handled
- in the pages instead of Mach... though this wouldn't work for pageouts,
- so probably not very helpful
- <antrik> err... in the pagers
- <braunr> antrik: i'm almost sure
-    <braunr> but i've been proven wrong many times, so ..
- <braunr> there are two main facts that lead me to think this
- <braunr> 1/
- http://www.gnu.org/software/hurd/gnumach-doc/Memory-Objects-and-Data.html#Memory-Objects-and-Data
- says lengths are provided and doesn't mention the limitation
- <braunr> 2/ when reading about UVM, one of the major improvements (between
- 10 and 30% of global performance depending on the benchmarks) was
- implementing the madvise semantics
- <braunr> and this didn't involve a new pager interface, but rather a new
- page fault handler
- <antrik> braunr: hm... the interface indeed looks like it can handle
- multiple pages in both directions... perhaps it was at the Hurd level
- where the pager interface needs to be modified, not the Mach one?...
- <braunr> antrik: would be nice wouldn't it ? :)
- <braunr> antrik: more probably the page fault handler
-
-
-# IRC, freenode, #hurd, 2011-09-28
-
- <slpz> antrik: I've just recovered part of my old multipage I/O work
- <slpz> antrik: I intend to clean and submit it after finishing the changes
- to the pageout system.
- <antrik> slpz: oh, great!
- <antrik> didn't know you worked on multipage I/O
- <antrik> slpz: BTW, have you checked whether any of the work done for GSoC
- last year is any good?...
- <antrik> (apart from missing copyright assignments, which would be a
- serious problem for the Hurd parts...)
- <slpz> antrik: It was seven years ago, but I did:
- http://www.mail-archive.com/bug-hurd@gnu.org/msg10285.html :-)
- <slpz> antrik: Sincerely, I don't think the quality of that code is good
- enough to be considered... but I think it was my fault as his mentor for
- not correcting him soon enough...
- <antrik> slpz: I see
- <antrik> TBH, I feel guilty myself, for not asking about the situation
- immediately when he stopped attending meetings...
- <antrik> slpz: oh, you even already looked into vm_pageout_scan() back then
- :-)
diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
deleted file mode 100644
index d6a98070..00000000
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ /dev/null
@@ -1,391 +0,0 @@
-[[!meta copyright="Copyright © 2011, 2012 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!tag open_issue_gnumach open_issue_hurd]]
-
-[[!toc]]
-
-
-# [[community/gsoc/project_ideas/disk_io_performance]]
-
-
-# 2011-02
-
-[[Etenil]] has been working in this area.
-
-
-## IRC, freenode, #hurd, 2011-02-13
-
- <etenil> youpi: Would libdiskfs/diskfs.h be in the right place to make
- readahead functions?
- <youpi> etenil: no, it'd rather be at the memory management layer,
- i.e. mach, unfortunately
- <youpi> because that's where you see the page faults
- <etenil> youpi: Linux also provides a readahead() function for higher level
- applications. I'll probably have to add the same thing in a place that's
- higher level than mach
- <youpi> well, that should just be hooked to the same common implementation
- <etenil> the man page for readahead() also states that portable
-      applications should avoid it, but it could be beneficial to have it for
- portability
- <youpi> it's not in posix indeed
-
-
-## IRC, freenode, #hurd, 2011-02-14
-
- <etenil> youpi: I've investigated prefetching (readahead) techniques. One
- called DiskSeen seems really efficient. I can't tell yet if it's patented
- etc. but I'll keep you informed
- <youpi> don't bother with complicated techniques, even the most simple ones
- will be plenty :)
- <etenil> it's not complicated really
- <youpi> the matter is more about how to plug it into mach
- <etenil> ok
-    <youpi> then don't bother with potential patents
- <antrik> etenil: please take a look at the work KAM did for last year's
- GSoC
- <youpi> just use a trivial technique :)
- <etenil> ok, i'll just go the easy way then
-
- <braunr> antrik: what was etenil referring to when talking about
- prefetching ?
- <braunr> oh, madvise() stuff
- <braunr> i could help him with that
-
-
-## IRC, freenode, #hurd, 2011-02-15
-
- <etenil> oh, I'm looking into prefetching/readahead to improve I/O
- performance
- <braunr> etenil: ok
- <braunr> etenil: that's actually a VM improvement, like samuel told you
- <etenil> yes
- <braunr> a true I/O improvement would be I/O scheduling
- <braunr> and how to implement it in a hurdish way
- <braunr> (or if it makes sense to have it in the kernel)
- <etenil> that's what I've been wondering too lately
- <braunr> concerning the VM, you should look at madvise()
- <etenil> my understanding is that Mach considers devices without really
- knowing what they are
- <braunr> that's roughly the interface used both at the syscall() and the
- kernel levels in BSD, which made it in many other unix systems
- <etenil> whereas I/O optimisations are often hard disk drives specific
- <braunr> that's true for almost any kernel
- <braunr> the device knowledge is at the driver level
- <etenil> yes
- <braunr> (here, I separate kernels from their drivers ofc)
- <etenil> but Mach also contains some drivers, so I'm going through the code
-      to find the appropriate place for these improvements
- <braunr> you shouldn't tough the drivers at all
- <braunr> touch
- <etenil> true, but I need to understand how it works before fiddling around
- <braunr> hm
- <braunr> not at all
- <braunr> the VM improvement is about pagein clustering
- <braunr> you don't need to know how pages are fetched
- <braunr> well, not at the device level
- <braunr> you need to know about the protocol between the kernel and
- external pagers
- <etenil> ok
- <braunr> you could also implement pageout clustering
- <etenil> if I understand you well, you say that what I'd need to do is a
- queuing system for the paging in the VM?
- <braunr> no
- <braunr> i'm saying that, when a page fault occurs, the kernel should
- (depending on what was configured through madvise()) transfer pages in
- multiple blocks rather than one at a time
- <braunr> communication with external pagers is already async, made through
- regular ports
- <braunr> which already implement message queuing
- <braunr> you would just need to make the mapped regions larger
- <braunr> and maybe change the interface so that this size is passed
- <etenil> mmh
- <braunr> (also don't forget that page clustering can include pages *before*
- the page which caused the fault, so you may have to pass the start of
- that region too)
- <etenil> I'm not sure I understand the page fault thing
- <etenil> is it like a segmentation error?
- <etenil> I can't find a clear definition in Mach's manual
- <braunr> ah
- <braunr> it's a fundamental operating system concept
- <braunr> http://en.wikipedia.org/wiki/Page_fault
- <etenil> ah ok
- <etenil> I understand now
- <etenil> so what's currently happening is that when a page fault occurs,
-      Mach is transferring pages one at a time and wastes time
- <braunr> sometimes, transferring just one page is what you want
- <braunr> it depends on the application, which is why there is madvise()
- <braunr> our rootfs, on the other hand, would benefit much from such an
- improvement
- <braunr> in UVM, this optimization is account for around 10% global
- performance improvement
- <braunr> accounted*
- <etenil> not bad
- <braunr> well, with an improved page cache, I'm sure I/O would matter less
- on systems with more RAM
- <braunr> (and another improvement would make mach support more RAM in the
- first place !)
- <braunr> an I/O scheduler outside the kernel would be a very good project
- IMO
- <braunr> in e.g. libstore/storeio
- <etenil> yes
- <braunr> but as i stated in my thesis, a resource scheduler should be as
- close to its resource as it can
- <braunr> and since mach can host several operating systems, I/O schedulers
- should reside near device drivers
-    <braunr> and since current drivers are in the kernel, it makes sense to have
- it in the kernel too
- <braunr> so there must be some discussion about this
- <etenil> doesn't this mean that we'll have to get some optimizations in
- Mach and have the same outside of Mach for translators that access the
- hardware directly?
- <braunr> etenil: why ?
- <etenil> well as you said Mach contains some drivers, but in principle, it
- shouldn't, translators should do disk access etc, yes?
- <braunr> etenil: ok
- <braunr> etenil: so ?
- <etenil> well, let's say if one were to introduce SATA support in Hurd,
- nothing would stop him/her to do so with a translator rather than in Mach
- <braunr> you should avoid the term translator here
- <braunr> it's really hurd specific
- <braunr> let's just say a user space task would be responsible for that
- job, maybe multiple instances of it, yes
- <etenil> ok, so in this case, let's say we have some I/O optimization
- techniques like readahead and I/O scheduling within Mach, would these
- also apply to the user-space task, or would they need to be
- reimplemented?
- <braunr> if you have user space drivers, there is no point having I/O
- scheduling in the kernel
- <etenil> but we also have drivers within the kernel
- <braunr> what you call readahead, and I call pagein/out clustering, is
- really tied to the VM, so it must be in Mach in any case
- <braunr> well
- <braunr> you either have one or the other
- <braunr> currently we have them in the kernel
- <braunr> if we switch to DDE, we should have all of them outside
- <braunr> that's why such things must be discussed
- <etenil> ok so if I follow you, then future I/O device drivers will need to
- be implemented for Mach
- <braunr> currently, yes
-    <braunr> but preferably, someone should continue the work that has been
-      done on DDE so that drivers are outside the kernel
- <etenil> so for the time being, I will try and improve I/O in Mach, and if
- drivers ever get out, then some of the I/O optimizations will need to be
- moved out of Mach
- <braunr> let me remind you one of the things i said
- <braunr> i said I/O scheduling should be close to their resource, because
- we can host several operating systems
- <braunr> now, the Hurd is the only system running on top of Mach
- <braunr> so we could just have I/O scheduling outside too
- <braunr> then you should consider neighbor hurds
- <braunr> which can use different partitions, but on the same device
- <braunr> currently, partitions are managed in the kernel, so file systems
- (and storeio) can't make good scheduling decisions if it remains that way
- <braunr> but that can change too
- <braunr> a single storeio representing a whole disk could be shared by
- several hurd instances, just as if it were a high level driver
- <braunr> then you could implement I/O scheduling in storeio, which would be
- an improvement for the current implementation, and reusable for future
- work
- <etenil> yes, that was my first instinct
- <braunr> and you would be mostly free of the kernel internals that make it
- a nightmare
- <etenil> but youpi said that it would be better to modify Mach instead
- <braunr> he mentioned the page clustering thing
- <braunr> not I/O scheduling
-    <braunr> these are really two different things
- <etenil> ok
- <braunr> you *can't* implement page clustering outside Mach because Mach
- implements virtual memory
- <braunr> both policies and mechanisms
- <etenil> well, I'd rather think of one thing at a time if that's alright
- <etenil> so what I'm busy with right now is setting up clustered page-in
-      which needs to be done within Mach
- <braunr> keep clustered page-outs in mind too
- <braunr> although there are more constraints on those
- <etenil> yes
- <etenil> I've looked up madvise(). There's a lot of documentation about it
- in Linux but I couldn't find references to it in Mach (nor Hurd), does it
- exist?
- <braunr> well, if it did, you wouldn't be caring about clustered page
- transfers, would you ?
- <braunr> be careful about linux specific stuff
- <etenil> I suppose not
- <braunr> you should implement at least posix options, and if there are
- more, consider the bsd variants
- <braunr> (the Mach VM is the ancestor of all modern BSD VMs)
- <etenil> madvise() seems to be posix
- <braunr> there are system specific extensions
- <braunr> be careful
-    <braunr> CONFORMING TO POSIX.1b.  POSIX.1-2001 describes posix_madvise(3)
-      with constants POSIX_MADV_NORMAL, etc., with a behavior close to that
-      described here.  There is a similar posix_fadvise(2) for file access.
-    <braunr> MADV_REMOVE, MADV_DONTFORK, MADV_DOFORK, MADV_HWPOISON,
-      MADV_MERGEABLE, and MADV_UNMERGEABLE are Linux-specific.
- <etenil> I was about to post these
- <etenil> ok, so basically madvise() allows tasks etc. to specify a usage
- type for a chunk of memory, then I could apply the relevant I/O
- optimization based on this
- <braunr> that's it
- <etenil> cool, then I don't need to worry about knowing what the I/O is
- operating on, I just need to apply the optimizations as advised
- <etenil> that's convenient
- <etenil> ok I'll start working on this tonight
- <etenil> making a basic readahead shouldn't be too hard
- <braunr> readahead is a misleading name
- <etenil> is pagein better?
- <braunr> applies to too many things, doesn't include the case where
- previous elements could be prefetched
- <braunr> clustered page transfers is what i would use
- <braunr> page prefetching maybe
- <etenil> ok
- <braunr> you should stick to something that's already used in the
- literature since you're not inventing something new
- <etenil> yes I've read a paper about prefetching
- <etenil> ok
- <etenil> thanks for your help braunr
- <braunr> sure
- <braunr> you're welcome
- <antrik> braunr: madvise() is really the least important part of the
- picture...
- <antrik> very few applications actually use it. but pretty much all
- applications will profit from clustered paging
- <antrik> I would consider madvise() an optional goody, not an integral part
- of the implementation
- <antrik> etenil: you can find some stuff about KAM's work on
- http://www.gnu.org/software/hurd/user/kam.html
- <antrik> not much specific though
- <etenil> thanks
- <antrik> I don't remember exactly, but I guess there is also some
- information on the mailing list. check the archives for last summer
- <antrik> look for Karim Allah Ahmed
- <etenil> antrik: I disagree, madvise gives me a good starting point, even
- if eventually the optimisations should run even without it
- <antrik> the code he wrote should be available from Google's summer of code
- page somewhere...
- <braunr> antrik: right, i was mentioning madvise() because the kernel (VM)
- interface is pretty similar to the syscall
- <braunr> but even a default policy would be nice
- <antrik> etenil: I fear that many bits were discussed only on IRC... so
- you'd better look through the IRC logs from last April onwards...
- <etenil> ok
-
- <etenil> at the beginning I thought I could put that into libstore
- <etenil> which would have been fine
-
- <antrik> BTW, I remembered now that KAM's GSoC application should have a
- pretty good description of the necessary changes... unfortunately, these
- are not publicly visible IIRC :-(
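-
-For reference, the POSIX-level interface discussed above, in a small
-runnable sketch (the advice is a hint only; where it is not implemented it
-is simply a no-op):
-
-    #include <fcntl.h>
-    #include <stdio.h>
-    #include <sys/mman.h>
-    #include <sys/stat.h>
-    #include <unistd.h>
-
-    int main(int argc, char **argv)
-    {
-        if (argc != 2) { fprintf(stderr, "usage: %s FILE\n", argv[0]); return 1; }
-
-        int fd = open(argv[1], O_RDONLY);
-        struct stat st;
-        if (fd < 0 || fstat(fd, &st) < 0) { perror(argv[1]); return 1; }
-
-        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
-        if (p == MAP_FAILED) { perror("mmap"); return 1; }
-
-        /* Announce sequential access, so the kernel may read ahead.  */
-        posix_madvise(p, st.st_size, POSIX_MADV_SEQUENTIAL);
-
-        long sum = 0;
-        for (off_t i = 0; i < st.st_size; i++)
-            sum += p[i];
-        printf("%lld bytes read, checksum %ld\n", (long long) st.st_size, sum);
-        return 0;
-    }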
-
-
-## IRC, freenode, #hurd, 2011-02-16
-
- <etenil> braunr: I've looked in the kernel to see where prefetching would
- fit best. We talked of the VM yesterday, but I'm not sure about it. It
- seems to me that the device part of the kernel makes more sense since
- it's logically what manages devices, am I wrong?
- <braunr> etenil: you are
- <braunr> etenil: well
- <braunr> etenil: drivers should already support clustered sector
- read/writes
- <etenil> ah
- <braunr> but yes, there must be support in the drivers too
- <braunr> what would really benefit the Hurd mostly concerns page faults, so
- the right place is the VM subsystem
-
-[[clustered_page_faults]]
-
-
-# 2012-03
-
-
-## IRC, freenode, #hurd, 2012-03-21
-
- <mcsim> I thought that readahead should have some heuristics, like
- accounting size of object and last access time, but i didn't find any in
-      kam's patch. Are heuristics needed, or would they be overhead for a
-      microkernel?
- <youpi> size of object and last access time are not necessarily useful to
- take into account
- <youpi> what would usually typically be kept is the amount of contiguous
- data that has been read lately
- <youpi> to know whether it's random or sequential, and how much is read
- <youpi> (the whole size of the object does not necessarily give any
- indication of how much of it will be read)
- <mcsim> if big object is accessed often, performance could be increased if
- frame that will be read ahead will be increased too.
- <youpi> yes, but the size of the object really does not matter
- <youpi> you can just observe how much data is read and realize that it's
- read a lot
- <youpi> all the more so with userland fs translators
- <youpi> it's not because you mount a CD image that you need to read it all
-    <mcsim> youpi: indeed. this will be better. But on the other hand there
-      is the principle about policy and mechanism. The kernel should
-      implement mechanism, but heuristics seem to be policy. Or in this case,
-      would moving readahead policy to user level be overhead?
- <antrik> mcsim: paging policy is all in kernel anyways; so it makes perfect
- sense to put the readahead policy there as well
- <antrik> (of course it can be argued -- probably rightly -- that all of
- this should go into userspace instead...)
- <mcsim> antrik: probably defpager partly could do that. AFAIR, it is
- possible for defpager to return more memory than was asked.
- <mcsim> antrik: I want to outline what should be done during gsoc. First,
- kernel should support simple readahead for specified number of pages
- (regarding direction of access) + simple heuristic for changing frame
- size. Also default pager could make some analysis, for instance if it has
-      much data located consecutively it could return more data than was
- asked. For other pagers I won't do anything. Is it suitable?
- <antrik> mcsim: I think we actually had the same discussion already with
- KAM ;-)
- <antrik> for clustered pageout, the kernel *has* to make the decision. I'm
- really not convinced it makes sense to leave the decision for clustered
- pagein to the individual pagers
- <antrik> especially as this will actually complicate matters because a) it
- will require work in *every* pager, and b) it will probably make handling
- of MADVISE & friends more complex
- <antrik> implementing readahead only for the default pager would actually
- be rather unrewarding. I'm pretty sure it's the one giving the *least*
- benefit
- <antrik> it's much, much more important for ext2
- <youpi> mcsim: maybe try to dig in the irc logs, we discussed about it with
- neal. the current natural place would be the kernel, because it's the
- piece that gets the traps and thus knows what happens with each
- projection, while the backend just provides the pages without knowing
- which projection wants it. Moving to userland would not only be overhead,
- but quite difficult
- <mcsim> antrik: OK, but I'm not sure that I could do it for ext2.
- <mcsim> OK, I'll dig.
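-
-A sketch of the heuristic youpi describes -- tracking how much contiguous
-data has been read lately rather than object size (the names and the
-grow/reset policy are illustrative, not gnumach code):
-
-    #include <stddef.h>
-    #include <sys/types.h>
-
-    #define RA_MIN (4 * 4096)       /* initial window */
-    #define RA_MAX (128 * 1024)     /* cap */
-
-    struct ra_state {
-        off_t  next_expected;       /* where the last run of accesses ended */
-        size_t window;              /* current read-ahead size in bytes */
-    };
-
-    static size_t
-    ra_update(struct ra_state *ra, off_t offset, size_t len)
-    {
-        if (offset == ra->next_expected) {
-            /* Sequential: grow the window, up to a cap.  */
-            ra->window = ra->window ? 2 * ra->window : RA_MIN;
-            if (ra->window > RA_MAX)
-                ra->window = RA_MAX;
-        } else {
-            /* Random: stop reading ahead.  */
-            ra->window = 0;
-        }
-        ra->next_expected = offset + len;
-        return ra->window;
-    }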
-
-
-## IRC, freenode, #hurd, 2012-04-01
-
-    <mcsim> as part of implementing the readahead project I have to add an
-      interface for setting the appropriate behaviour for a memory range.
-      This interface should then be compatible with the madvise call, which
-      has a lot of possible advice values, but most of them are specific to
-      Linux (according to the man page). Should mach also support these
-      Linux-specific values?
- <mcsim> p.s. these Linux-specific values shouldn't affect readahead
- algorithm.
- <youpi> the interface shouldn't prevent from adding them some day
- <youpi> so that we don't have to add them yet
-    <mcsim> ok. And what should behaviour with value MADV_NORMAL look like?
-      Seems that it should be a synonym for MADV_SEQUENTIAL, shouldn't it?
- <youpi> no, it just means "no idea what it is"
- <youpi> in the linux implementation, that means some given readahead value
- <youpi> while SEQUENTIAL means twice as much
- <youpi> and RANDOM means zero
- <mcsim> youpi: thank you.
-    <mcsim> youpi: Then, it seems better that the kernel interface for
-      setting behaviour accept a readahead value directly, without hiding it
-      behind constants like VM_BEHAVIOR_DEFAULT (like it was in kam's
-      patch). And then the implementation of madvise will call
-      vm_behaviour_set with the appropriate frame size. Is that right?
- <youpi> question of taste, better ask on the list
- <mcsim> ok
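-
-The Linux behaviour youpi describes maps onto the advice constants roughly
-like this (a sketch; `base` stands for the default readahead value, and
-the function is illustrative, not kernel code):
-
-    #include <stddef.h>
-    #include <sys/mman.h>   /* MADV_NORMAL, MADV_SEQUENTIAL, MADV_RANDOM */
-
-    static size_t
-    advice_to_readahead(int advice, size_t base)
-    {
-        switch (advice) {
-        case MADV_SEQUENTIAL: return 2 * base;  /* twice the default */
-        case MADV_RANDOM:     return 0;         /* no read-ahead */
-        case MADV_NORMAL:
-        default:              return base;      /* "no idea what it is" */
-        }
-    }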
diff --git a/open_issues/performance/ipc_virtual_copy.mdwn b/open_issues/performance/ipc_virtual_copy.mdwn
deleted file mode 100644
index 9708ab96..00000000
--- a/open_issues/performance/ipc_virtual_copy.mdwn
+++ /dev/null
@@ -1,395 +0,0 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-IRC, freenode, #hurd, 2011-09-02:
-
- <slpz> what's the usual throughput for I/O operations (like "dd
- if=/dev/zero of=/dev/null") in one of those Xen based Hurd machines
- (*bber)?
- <braunr> good question
- <braunr> slpz: but don't use /dev/zero and /dev/null, as they don't have
- anything to do with true I/O operations
- <slpz> braunr: in fact, I want to test the performance of IPC's virtual
- copy operations
- <braunr> ok
- <slpz> braunr: sorry, the "I/O" was misleading
- <braunr> use bs=4096 then i guess
- <slpz> bs > 2k
- <braunr> ?
- <slpz> braunr: everything about 2k is copied by vm_map_copyin/copyout
- <slpz> s/about/above/
- <slpz> braunr: MiG's stubs check for that value and generate complex (with
- out_of_line memory) messages if datalen is above 2k, IIRC
- <braunr> ok
- <braunr> slpz: found it, thanks
- <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$!
- && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
- <tschwinge> [1] 13469
- <tschwinge> 17091+0 records in
- <tschwinge> 17090+0 records out
- <tschwinge> 70000640 bytes (70 MB) copied, 17.1436 s, 4.1 MB/s
- <tschwinge> Note, however 10 s vs. 17 s!
-    <tschwinge> And this is slow compared to real hardware:
- <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=4k & p=$! &&
- sleep 10 && kill -s INFO $p && sleep 1 && kill $p
- <tschwinge> [1] 28290
- <tschwinge> 93611+0 records in
- <tschwinge> 93610+0 records out
- <tschwinge> 383426560 bytes (383 MB) copied, 9.99 s, 38.4 MB/s
- <braunr> tschwinge: is the first result on xen vm ?
- <tschwinge> I think so.
- <braunr> :/
- <slpz> tschwinge: Thanks! Could you please try with a higher block size,
- something like 128k or 256k?
- <tschwinge> strauss is on a machine that also hosts a buildd, I think.
- <braunr> oh ok
- <pinotree> yes, aside either rossini or mozart
- <tschwinge> And I can confirm that with dd if=/dev/zero of=/dev/null bs=4k
- running, a parallel sleep 10 takes about 20 s (on strauss).
-
-[[open_issues/time]]
-
- <braunr> slpz: i'll set up xen hosts soon and can try those tests while
- nothing else runs to have more accurate results
- <tschwinge> tschwinge@strauss:~ $ dd if=/dev/zero of=/dev/null bs=256k &
- p=$! && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
- <tschwinge> [1] 13482
- <tschwinge> 4566+0 records in
- <tschwinge> 4565+0 records out
- <tschwinge> 1196687360 bytes (1.2 GB) copied, 13.6751 s, 87.5 MB/s
- <braunr> slpz: gains are logarithmic beyond the page size
- <tschwinge> thomas@coulomb:~ $ dd if=/dev/zero of=/dev/null bs=256k & p=$!
- && sleep 10 && kill -s INFO $p && sleep 1 && kill $p
- <tschwinge> [1] 28295
- <tschwinge> 6335+0 records in
- <tschwinge> 6334+0 records out
- <tschwinge> 1660420096 bytes (1.7 GB) copied, 9.99 s, 166 MB/s
-    <tschwinge> This time the sleep 10 decided to take 13.6 s.
- ``Interesting.''
- <slpz> tschwinge: Thanks again. The results for the Xen machine are not bad
- though. I can't obtain a throughput over 50MB/s with KVM.
- <tschwinge> slpz: Want more data (bs)? Just tell.
- <braunr> slpz: i easily get more than that
- <braunr> slpz: what buffer size do you use ?
- <slpz> tschwinge: no, I just wanted to see if Xen has an upper limit beyond
- KVM's. Thank you.
- <slpz> braunr: I try with different sizes until I find the maximum
- throughput for a certain amount of requests (count)
- <slpz> braunr: are you working with KVM?
- <braunr> yes
- <braunr> slpz: my processor is a model name : Intel(R) Core(TM)2 Duo
- CPU E7500 @ 2.93GHz
- <braunr> Linux silvermoon 2.6.32-5-amd64 #1 SMP Tue Jun 14 09:42:28 UTC
- 2011 x86_64 GNU/Linux
- <braunr> (standard amd64 squeeze kernel)
- <slpz> braunr: and KVM's version?
- <braunr> squeeze (0.12.5)
- <braunr> bbl
- <gnu_srs> 212467712 bytes (212 MB) copied, 9.95 s, 21.4 MB/s on kvm for me!
- <slpz> gnu_srs: which block size?
- <gnu_srs> 4k, and 61.7 MB/s with 256k
- <slpz> gnu_srs: could you try with 512k and 1M?
- <gnu_srs> 512k: 56.0 MB/s, 1024k: 40.2 MB/s Looks like the peak is around a
- few 100k
- <slpz> gnu_srs: thanks!
- <slpz> I've just obtained 1.3GB/s with bs=512k on other (newer) machine
- <braunr> on which hw/vm ?
- <slpz> I knew this is a cpu-bound test, but I couldn't imagine faster
- processors could make this difference
- <slpz> braunr: Intel(R) Core(TM) i5 CPU 650 @ 3.20GHz
- <slpz> braunr: KVM
- <braunr> ok
- <braunr> how much time did you wait before reading the result ?
- <slpz> that was 20x times better than the same test on my Intel(R)
- Core(TM)2 Duo CPU T7500 @ 2.20GHz
- <slpz> braunr: I've repeated the test with a fixed "count"
- <gnu_srs> My box is: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz: Max
- is 67 MB/s around 140k block size
- <braunr> yes but how much time did dd run ?
- <gnu_srs> 10 s plus/minus a few fractions of a second,
- <braunr> try waiting 30s
- <slpz> braunr: didn't check, let me try again
- <braunr> my kvm peaks at 130 MiB/s with bs 512k / 1M
- <gnu_srs> 2029690880 bytes (2.0 GB) copied, 30.02 s, 67.6 MB/s, bs=140k
- <braunr> gnu_srs: i'm very surprised with slpz's result of 1.3 GiB/s
- <slpz> braunr: over 60 s running, same performance
- <braunr> nice
- <braunr> i wonder what makes it so fast
- <braunr> how much cache ?
- <gnu_srs> Me too, I cannot get better values than around 67 MB/s
- <braunr> gnu_srs: same questions
- <slpz> braunr: 4096KB, same as my laptop
- <braunr> slpz: l2 ? l3 ?
- <gnu_srs> kvm: cache=writeback, CPU: 4096 KB
- <braunr> gnu_srs: this has nothing to do with the qemu option, it's about
- the cpu
-    <slpz> braunr: no idea, it's the first time I touch this machine. I'm going
- to see if I find the model in processorfinder
- <braunr> under my host linux system, i get a similar plot, that is,
- performance drops beyond bs=1M
-    <gnu_srs> braunr: OK, but I gave you the cache size too, same as slpz.
- <braunr> i wonder what dd actually does
- <braunr> read() and writes i guess
- <slpz> braunr: read/write repeatedly, nothing fancy
- <braunr> slpz: i don't think it's a good test for virtual copy
- <braunr> io_read_request, vm_deallocate, io_write_request, right
- <braunr> slpz: i really wonder what it is about i5 that improves speed so
- much
- <slpz> braunr: me too
- <slpz> braunr: L2: 2x256KB, L3: 4MB
- <slpz> and something calling "SmartCache"
- <gnu_srs> slpz: where did you find these values?
- <slpz> gnu_srs: ark.intel.com and wikipedia
- <gnu_srs> aha, cpuinfo just gives cache size.
- <slpz> that "SmartCache" thing seems to be just L2 cache sharing between
- cores. Shouldn't make a different since we're using only one core, and I
-      cores. Shouldn't make a difference since we're using only one core, and I
- <manuel> with bs=256k: 7004487680 bytes (7.0 GB) copied, 10 s, 700 MB/s
- <manuel> (qemu/kvm, 3 * Intel(R) Xeon(R) E5504 2GHz, cache size 4096 KB)
- <slpz> manuel: did you try with 512k/1M?
- <manuel> bs=512k: 7730626560 bytes (7.7 GB) copied, 10 s, 773 MB/s
- <manuel> bs=1M: 7896825856 bytes (7.9 GB) copied, 10 s, 790 MB/s
- <slpz> manuel: those are pretty good numbers too
- <braunr> xeon processor
- <gnu_srs> lshw gave me: L1 Cache 256KiB, L2 cache 4MiB
- <slpz> sincerely, I've never seen Hurd running this fast. Just checked
- "uname -a" to make sure I didn't take the wrong image :-)
- <manuel> for bs=256k, 60s: 40582250496 bytes (41 GB) copied, 60 s, 676 MB/s
- <braunr> slpz: i think you can assume processor differences alter raw
- copies too much to get any valuable results about virtual copy operations
- <braunr> you need a specialized test program
- <manuel> and bs=512k, 60s, 753 MB/s
- <slpz> braunr: I'm using the mach_perf suite from OSFMach to do the
- "serious" testing. I just wanted a non-synthetic test to confirm the
- readings.
-
-[[!taglink open_issue_gnumach]] -- have a look at *mach_perf*.
-
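-
-The same measurement in plain C -- a sketch of the kind of read()/write()
-loop slpz describes ("read/write repeatedly, nothing fancy"), keeping dd's
-own buffer handling out of the picture (the 2 GiB total and the default
-block size are arbitrary):
-
-    #include <fcntl.h>
-    #include <stdio.h>
-    #include <stdlib.h>
-    #include <sys/time.h>
-    #include <unistd.h>
-
-    int main(int argc, char **argv)
-    {
-        size_t bs = argc > 1 ? (size_t) atol(argv[1]) : 256 * 1024;
-        long long total = 2LL << 30;    /* 2 GiB in all */
-        char *buf = malloc(bs);
-        if (buf == NULL) { perror("malloc"); return 1; }
-
-        int in = open("/dev/zero", O_RDONLY);
-        int out = open("/dev/null", O_WRONLY);
-        if (in < 0 || out < 0) { perror("open"); return 1; }
-
-        struct timeval t0, t1;
-        gettimeofday(&t0, NULL);
-        for (long long done = 0; done < total; done += bs) {
-            if (read(in, buf, bs) != (ssize_t) bs) { perror("read"); return 1; }
-            if (write(out, buf, bs) != (ssize_t) bs) { perror("write"); return 1; }
-        }
-        gettimeofday(&t1, NULL);
-
-        double s = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
-        printf("bs=%zu: %.1f MB/s\n", bs, total / s / 1e6);
-        return 0;
-    }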
- <braunr> manuel: how much cache ? 2M ?
- <braunr> slpz: ok
- <braunr> manuel: hmno, more i guess
- <manuel> braunr: /proc/cpuinfo says cache size : 4096 KB
- <braunr> ok
- <braunr> manuel: performance should drop beyond bs=2M
- <braunr> but that's not relevant anyway
- <gnu_srs> Linux: bs=1M, 10.8 GB/s
- <slpz> I think this difference is too big to be only due to a bigger amount
- of CPU cycles...
- <braunr> slpz: clearly
- <slpz> gnu_srs: your host system has 64 or 32 bits?
- <slpz> braunr: I'm going to investigate a bit
- <slpz> but this accidental discovery just made my day. We're able to run
- Hurd at decent speeds on newer hardware!
- <braunr> slpz: what result do you get with the same test on your host
- system ?
- <manuel> interestingly, running it several times has made the performance
- drop quite much (i'm getting 400-500MB/s with 1M now, compared to nearly
- 800 fifteen minutes ago)
-
-[[Degradation]].
-
- <slpz> braunr: probably an almost infinite throughput, but I don't consider
- that a valid test, since in Linux, the write operation to "/dev/null"
- doesn't involve memory copying/moving
- <braunr> manuel: i observed the same behaviour
- <gnu_srs> slpz: Host system is 64 bit
- <braunr> slpz: it doesn't on the hurd either
- <braunr> slpz: (under 2k, that is)
- <braunr> over*
- <slpz> braunr: humm, you're right, as the null translator doesn't "touch"
- the memory, CoW rules apply
- <braunr> slpz: the only thing which actually copies things around is dd
- <braunr> probably by simply calling read()
- <braunr> which gets its result from a VM copy operation, but copies the
- content to the caller provided buffer
- <braunr> then vm_deallocate() the data from the storeio (zero) translator
-    <braunr> if storeio isn't too dumb, it doesn't even touch the transferred
- buffer (as anonymous vm_map()ped memory is already cleared)
-
-[[!taglink open_issue_documentation]]
-
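-A minimal demonstration of a virtual copy from user space (Hurd/Mach
-specific; error handling is reduced to printing the return code):
-
-    #include <mach.h>
-    #include <stdio.h>
-
-    int main(void)
-    {
-        vm_address_t src = 0, dst = 0;
-        vm_size_t size = 64 * vm_page_size;
-        kern_return_t kr;
-
-        if (vm_allocate(mach_task_self(), &src, size, TRUE) != KERN_SUCCESS
-            || vm_allocate(mach_task_self(), &dst, size, TRUE) != KERN_SUCCESS)
-            return 1;
-
-        ((char *) src)[0] = 42;         /* fault in one source page */
-
-        /* Marks the pages copy-on-write instead of copying bytes; nothing
-           is actually moved until one side writes.  */
-        kr = vm_copy(mach_task_self(), src, size, dst);
-        if (kr != KERN_SUCCESS) { printf("vm_copy: %d\n", kr); return 1; }
-
-        printf("%lu bytes copied virtually, dst[0] = %d\n",
-               (unsigned long) size, ((char *) dst)[0]);
-        return 0;
-    }
-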
- <braunr> so this is a good test for measuring (profiling?) our ipc overhead
- <braunr> and possibly the vm mapping operations (which could partly explain
- why the results get worse over time)
- <braunr> manuel: can you run vminfo | wc -l on your gnumach process ?
- <slpz> braunr: Yes, unless some special situation apply, like the source
- address/offset being unaligned, or if the translator decides to return
- the result in a different buffer (which I assume is not the case for
- storeio/zero)
- <manuel> braunr: 35
- <braunr> slpz: they can't be unaligned, the vm code asserts that
- <braunr> manuel: ok, this is normal
- <slpz> braunr: address/offset from read()
- <braunr> slpz: the caller provided buffer you mean ?
- <slpz> braunr: yes, and the offset of the memory_object, if it's a pager
- based translator
- <braunr> slpz: highly unlikely, the compiler chooses appropriate alignments
- for such buffers
- <slpz> braunr: in those cases, memcpy is used over vm_copy
- <braunr> slpz: and the glibc memcpy() optimized versions can usually deal
- with that
- <braunr> slpz: i don't get your point about memory objects
- <braunr> slpz: requests on memory objects always have aligned values too
- <slpz> braunr: sure, but can't deal with the user requesting non
- page-aligned sizes
- <braunr> slpz: we're considering our dd tests, for which we made sure sizes
- were page aligned
- <slpz> braunr: oh, I was talking in a general sense, not just in this dd
- tests, sorry
- <slpz> by the way, dd on the host tops at 12 GB/s with bs=2M
- <braunr> that's consistent with our other results
- <braunr> slpz: you mean, even on your i5 processor with 1.3 GiB/s on your
- hurd kvm ?
- <slpz> braunr: yes, on the GNU/Linux which is running as host
- <braunr> slpz: well that's not consistent
- <slpz> braunr: consistent with what?
- <braunr> slpz: i get roughly the same result on my host, but ten times less
- on my hurd kvm
- <braunr> slpz: what's your kernel/kvm versions ?
- <slpz> 2.6.32-5-amd64 (debian's build) 0.12.5
- <braunr> same here
- <braunr> i'm a bit clueless
- <braunr> why do i only get 130 MiB/s where you get 1.3 .. ? :)
- <slpz> well, on my laptop, where Hurd on KVM tops on 50 MB/s, Linux gets a
- bit more than 10 GB/s
- <braunr> see
- <braunr> slpz: reduce bs to 256k and test again if you have time please
- <slpz> braunr: on which system?
- <braunr> slpz: the fast one
- <braunr> (linux host)
- <slpz> braunr: Hurd?
- <slpz> ok
- <slpz> 12 GB/s
- <braunr> i get 13.3
- <slpz> same for 128k, only at 64k starts dropping
-    <slpz> maybe, on linux we're being limited by memory speed, while on Hurd
- this test is (much) more CPU-bound?
- <braunr> slpz: maybe
- <braunr> too bad processor stalls aren't easy to measure
- <slpz> braunr: that's very true. It's funny when you read a paper which
- measures performance by cycles on an old RISC processor. That's almost
- impossible to do (with reliability) nowadays :-/
-    <slpz> I wonder what throughput Hurd could achieve running bare-metal on
- this machine...
- <antrik> both the Xeon and the i5 use cores based on the Nehalem
- architecture
- <antrik> apparently Nehalem is where Intel first introduces nested page
- tables
- <antrik> which pretty much explains the considerably lower overhead of VM
- magic
- <cjuner> antrik, what are nested page tables? (sounds like the 4-level page
- tables we already have on amd64, or 2-level or 3-level on x86 pae)
- <antrik> page tables were always 2-level on x86
- <antrik> that's unrelated
- <antrik> nested page tables means there is another layer of address
- translation, so the VMM can do it's own translation and doesn't care what
-      translation, so the VMM can do its own translation and doesn't care what
- manipulations
- <braunr> antrik: do you imply it only applies to virtualized systems ?
- <antrik> braunr: yes
- <slpz> antrik: Good guess. Looks like Intel's EPT are doing the trick by
- allowing the guest OS deal with its own page faults
- <slpz> antrik: next monday, I'll try disabling EPT support in KVM on that
- machine (the fast one). That should confirm your theory empirically.
- <slpz> this also means that there're too many page faults, as we should be
- doing virtual copies of memory that is not being accessed
-    <slpz> and looking at how the value of "page faults" in "vmstat" increases
- shows that page faults are directly proportional to the number of pages
- we are asking from the translator
- <slpz> I've also tried doing a long read() directly, to be sure that "dd"
- is not doing something weird, and it shows the same behaviour.
- <braunr> slpz: dd does copy buffers
- <braunr> slpz: i told you, it's not a good test case for pure virtual copy
- evaluation
- <braunr> antrik: do you know if xen benefits from nested page tables ?
- <antrik> no idea
-
-[[!taglink open_issue_xen]]
-
- <slpz> braunr: but my small program doesn't, and still provokes a lot of
- page faults
- <braunr> slpz: are you certain it doesn't ?
- <slpz> braunr: looking at google, it looks like recent Xen > 3.4 supports
- EPT
- <braunr> ok
- <braunr> i'm ordering my new server right now, core i5 :)
-    <slpz> braunr: at least not explicitly. I need to look at MiG stubs again,
- I don't remember if they do something weird.
- <antrik> braunr: sandybridge or nehalem? :-)
- <braunr> antrik: no idea
- <antrik> does it tell a model number?
- <braunr> not yet
- <braunr> but i don't have a choice for that, so i'll order it first, check
- after
- <antrik> hehe
- <antrik> I'm not sure it makes all that much difference anyways for a
- server... unless you are running it at 100% load ;-)
- <braunr> antrik: i'm planning on running xen guests suchs as new buildd
- <antrik> hm... note though that some of the nehalem-generation i5s were
- dual-core, while all the new ones are quad
- <braunr> it's a quad
- <antrik> the newer generation has better performance per GHz and per
- Watt... but considering that we are rather I/O-limited in most cases, it
- probably won't make much difference
- <antrik> not sure whether there are further virtualisation improvements
- that could be relevant...
- <braunr> buildds spend much time running gcc, so even such improvements
- should help
- <braunr> there, server ordered :)
- <braunr> antrik: model name : Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
-
-IRC, freenode, #hurd, 2011-09-06:
-
- <slpz> youpi: what machines are being used for buildd? Do you know if they
- have EPT/RVI?
- <youpi> we use PV Xen there
- <slpz> I think Xen could also take advantage of those technologies. Not
- sure if only in HVM or with PV too.
- <youpi> only in HVM
- <youpi> in PV it does not make sense: the guest already provides the
- translated page table
- <youpi> which is just faster than anything else
-
-IRC, freenode, #hurd, 2011-09-09:
-
- <antrik> oh BTW, for another data point: dd zero->null gets around 225 MB/s
- on my lowly 1 GHz Pentium3, with a blocksize of 32k
- <antrik> (but only half of that with 256k blocksize, and even less with 1M)
- <antrik> the system has been up for a while... don't know whether it's
- faster on a freshly booted one
-
-IRC, freenode, #hurd, 2011-09-15:
-
- <sudoman>
- http://www.reddit.com/r/gnu/comments/k68mb/how_intelamd_inadvertently_fixed_gnu_hurd/
- <sudoman> so is the dd command pointed to by that article a measure of io
- performance?
- <antrik> sudoman: no, not really
- <antrik> it's basically the baseline of what is possible -- but the actual
- slowness we experience is more due to very unoptimal disk access patterns
- <antrik> though using KVM with writeback caching does actually help with
- that...
- <antrik> also note that the title of this post really makes no
- sense... nested page tables should provide similar improvements for *any*
- guest system doing VM manipulation -- it's not Hurd-specific at all
- <sudoman> ok, that makes sense. thanks :)
-
-IRC, freenode, #hurd, 2011-09-16:
-
- <slpz> antrik: I wrote that article (the one about How AMD/Intel fixed...)
- <slpz> antrik: It's obviously a bit of an exaggeration, but it's true that
-      nested pages suppose a great improvement in the performance of Hurd
- running on virtual machines
- <slpz> antrik: and it's Hurd specific, as this system is more affected by
- the cost of page faults
- <slpz> antrik: and as the impact of virtualization on the performance is
- much higher than (almost) any other OS.
-    <slpz> antrik: also, dd from /dev/zero to /dev/null is a measure of how
- fast OOL IPC is.
diff --git a/open_issues/performance/microbenchmarks.mdwn b/open_issues/performance/microbenchmarks.mdwn
deleted file mode 100644
index de3a54b7..00000000
--- a/open_issues/performance/microbenchmarks.mdwn
+++ /dev/null
@@ -1,13 +0,0 @@
-[[!meta copyright="Copyright © 2010 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-Microbenchmarks may give useful hints, or they may not.
-
-<http://www.ibm.com/developerworks/java/library/j-jtp02225.html>
diff --git a/open_issues/performance/microkernel_multi-server.mdwn b/open_issues/performance/microkernel_multi-server.mdwn
deleted file mode 100644
index 111d2b88..00000000
--- a/open_issues/performance/microkernel_multi-server.mdwn
+++ /dev/null
@@ -1,47 +0,0 @@
-[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
-
-[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
-id="license" text="Permission is granted to copy, distribute and/or modify this
-document under the terms of the GNU Free Documentation License, Version 1.2 or
-any later version published by the Free Software Foundation; with no Invariant
-Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license
-is included in the section entitled [[GNU Free Documentation
-License|/fdl]]."]]"""]]
-
-[[!tag open_issue_documentation]]
-
-Performance issues due to the microkernel/multi-server system architecture?
-
-IRC, freenode, #hurd, 2011-07-26
-
- < CTKArcher> I read that, because of its microkernel+servers design, the
- hurd was slower than a monolithic kernel, is that confirmed ?
- < youpi> the hurd is currently slower than current monolithic kernels, but
- it's not due to the microkernel + servers design
- < youpi> the microkernel+servers design makes the system call path longer
- < youpi> but you're bound by disk and network speed
- < youpi> so the extra overhead will not hurt so much
-    < youpi> except dumb applications that keep doing system calls all the time
- of course, but they are usually considered bogus
- < braunr> there may be some patterns (like applications using pipes
- extensively, e.g. git-svn) which may suffer from the design, but still in
- an acceptable range
-    < CTKArcher> so, you are saying that disk and network are slowing the
-      system down more than the longer system call path and, because of
-      that, it won't really matter?
-    < youpi> braunr: they should still be fixed because they'll suffer (even if
- less) on monolithic kernels
- < youpi> CTKArcher: yes
- < braunr> yes
- < CTKArcher> mmh
- < youpi> CTKArcher: you might want to listen to AST's talk at fosdem 10
- iirc, about minix
- < youpi> they even go as far as using an IPC for each low-level in/out
- < youpi> for security
- < braunr> this has been expected for a long time
- < braunr> which is what motivated research in microkernels
- < CTKArcher> I've already downloaded the video :)
- < youpi> and it has been more and more true with faster and faster cpus
- < braunr> but in 95, processors weren't that fast compared to other
- components as they are now
-    < youpi> while disk/mem haven't evolved so fast
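-
-The pipe pattern braunr mentions can be put in numbers with a ping-pong
-microbenchmark (a sketch; the round count is arbitrary):
-
-    #include <stdio.h>
-    #include <stdlib.h>
-    #include <sys/time.h>
-    #include <sys/wait.h>
-    #include <unistd.h>
-
-    int main(void)
-    {
-        enum { ROUNDS = 10000 };
-        int ptoc[2], ctop[2];
-        char byte = 'x';
-
-        if (pipe(ptoc) < 0 || pipe(ctop) < 0) { perror("pipe"); return 1; }
-
-        pid_t pid = fork();
-        if (pid < 0) { perror("fork"); return 1; }
-
-        if (pid == 0) {                 /* child: echo each byte back */
-            for (int i = 0; i < ROUNDS; i++) {
-                if (read(ptoc[0], &byte, 1) != 1) _exit(1);
-                if (write(ctop[1], &byte, 1) != 1) _exit(1);
-            }
-            _exit(0);
-        }
-
-        struct timeval t0, t1;
-        gettimeofday(&t0, NULL);
-        for (int i = 0; i < ROUNDS; i++) {  /* parent: send, await echo */
-            if (write(ptoc[1], &byte, 1) != 1) return 1;
-            if (read(ctop[0], &byte, 1) != 1) return 1;
-        }
-        gettimeofday(&t1, NULL);
-        waitpid(pid, NULL, 0);
-
-        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_usec - t0.tv_usec);
-        printf("%d round trips: %.1f us each\n", ROUNDS, us / ROUNDS);
-        return 0;
-    }
-
-Comparing the per-round-trip figure between GNU/Hurd and a monolithic
-kernel would put a number on the extra system-call path length.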