author    | http://etenil.myopenid.com/ <http://etenil.myopenid.com/@web> | 2011-02-15 19:38:23 +0000
committer | GNU Hurd web pages engine <web-hurd@gnu.org> | 2011-02-15 19:38:23 +0000
commit    | d47700de2813d7de38b332b65c9249a15f8bea01 (patch)
tree      | eaf0c170de3850e744c9fb1485844a833d77bf3e /user
parent    | 5c53b3d3c90d5da55a7859f1a44e7eaeaa2f12f9 (diff)
Diffstat (limited to 'user')
-rw-r--r-- | user/Etenil.mdwn | 5
1 file changed, 4 insertions, 1 deletion
diff --git a/user/Etenil.mdwn b/user/Etenil.mdwn
index e96ac699..a19aacd8 100644
--- a/user/Etenil.mdwn
+++ b/user/Etenil.mdwn
@@ -14,6 +14,8 @@ License|/fdl]]."]]"""]]
 
 Write a clusterized pagein (prefetching) mechanism in Mach.
 
+- - -
+
 ## General information on system architecture
 
 In order to implement the pagein properly, it was necessary for me to get a general idea of the I/O path that data follows in the Hurd/Mach. To accomplish this, I've investigated top-down from the [[ext2fs]] translator to Mach. This section lists the main nodes that data passes through.
@@ -35,8 +37,9 @@ VM allocation happens with a call to:
 
     kern_return_t vm_allocate (vm_task_t target_task, vm_address_t *address, vm_size_t size, boolean_t anywhere)
 
 
+- - -
 
-## Implementation idea
+## Implementation plan
 
 To start off with, I will toy with the VM (even if it breaks stuff). My initial intent is to systematically allocate more memory than requested, in the hope that the excess will be manipulated by the task in the near future, thus saving on future I/O requests. I'd also need to keep track of the pre-allocated memory so that I can pass it on to the task on demand and prefetch even more. I could also possibly time the prefetched data and deallocate it if it's not requested after a while, but that's just an idea.
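
For orientation, here is a minimal user-space sketch of the over-allocation idea described in the patch. It only exercises the vm_allocate interface quoted above from a client task; the actual proposal is to change pagein behaviour inside Mach itself. The one-extra-page padding policy and the program as a whole are hypothetical illustrations, not code from this commit; vm_allocate, vm_deallocate, vm_page_size, and mach_task_self are standard Mach interfaces.

    /* Sketch only: over-allocate a VM region by one page, assuming the
       vm_allocate signature quoted in the patch above.  The padding
       policy is hypothetical; the real work would live inside Mach's
       VM code, not in a client program like this.  */
    #include <mach.h>
    #include <stdio.h>

    int
    main (void)
    {
      vm_address_t addr = 0;
      vm_size_t requested = 3 * vm_page_size;
      /* Hypothetical prefetch policy: allocate one extra page beyond
         what was asked for, so a later request can be satisfied without
         another allocation and, eventually, without another pagein.  */
      vm_size_t padded = requested + vm_page_size;

      kern_return_t kr = vm_allocate (mach_task_self (), &addr, padded, TRUE);
      if (kr != KERN_SUCCESS)
        {
          fprintf (stderr, "vm_allocate failed: %d\n", kr);
          return 1;
        }

      printf ("%lu bytes at 0x%lx (%lu requested, %lu kept aside)\n",
              (unsigned long) padded, (unsigned long) addr,
              (unsigned long) requested, (unsigned long) (padded - requested));

      /* The extra page would normally be handed out on the next request;
         here we simply release the whole region again.  */
      vm_deallocate (mach_task_self (), addr, padded);
      return 0;
    }

An in-kernel version would keep the equivalent bookkeeping inside Mach's VM (which ranges were brought in speculatively, and when to reclaim them), which roughly corresponds to the "keep track of the pre-allocated memory" point in the plan.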