Diffstat (limited to 'user')
-rw-r--r--  user/Etenil.mdwn | 16 +++++++++++++++-
1 files changed, 15 insertions, 1 deletions
diff --git a/user/Etenil.mdwn b/user/Etenil.mdwn
index 9eff7ea7..e96ac699 100644
--- a/user/Etenil.mdwn
+++ b/user/Etenil.mdwn
@@ -18,6 +18,8 @@ Write a clusterized pagein (prefetching) mechanism in Mach.
In order to implement the pagein properly, it was necessary for me to get a general idea of the I/O path that data follows in the Hurd/Mach. To accomplish this, I've investigated top-down from the [[ext2fs]] translator to Mach. This section contains the main nodes that data passes through.
+This section is probably unnecessary for implementing the prefetcher in Mach; however, it is always interesting to understand how things work so that we can notice when they break.
+
This is based on my understanding of the system and is probably imprecise. Refer to the manuals of both Hurd and Mach for more detailed information.
### Pagers
@@ -26,5 +28,17 @@ Pagers are implemented in libpager and provide abstracted access to Mach's [[VM]
### Libstore
Libstore provides abstracted access to Mach's storage access.
-I am currently looking at the way the stores call Mach, especially for memory allocation. My intuition is that memory is allocated in Mach when the function *store_create()*. I am currently investigating this to see where in Mach would the prefetcher fit.
+I am currently looking at the way the stores call Mach, especially for memory allocation. My intuition is that memory is allocated in Mach when the function *store_create()* is called. I am currently investigating this to see how the memory allocation happens in practice.
+
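+As a reminder to myself, here is a minimal user-space sketch of the libstore calls I am looking at, assuming the *store_create()* and *store_read()* entry points declared in `<hurd/store.h>` (signatures written from memory, to be double-checked against the header; the path is just a placeholder):
+
+    #include <hurd.h>
+    #include <hurd/store.h>
+    #include <mach.h>
+    #include <fcntl.h>
+    #include <error.h>
+    #include <errno.h>
+
+    int
+    main (void)
+    {
+      /* "/tmp/somefile" is only a placeholder path.  */
+      file_t node = file_name_lookup ("/tmp/somefile", O_RDONLY, 0);
+      struct store *store;
+      void *buf = NULL;
+      size_t len = 0;
+      error_t err;
+
+      if (node == MACH_PORT_NULL)
+        error (1, errno, "file_name_lookup");
+
+      /* Create a store object backed by NODE; this is the call where I
+         suspect the memory used for paging gets set up.  */
+      err = store_create (node, 0, NULL, &store);
+      if (err)
+        error (1, err, "store_create");
+
+      /* Read roughly one page of data; BUF comes back with the usual
+         Mach out-of-line buffer semantics.  */
+      err = store_read (store, 0, vm_page_size, &buf, &len);
+      if (err)
+        error (1, err, "store_read");
+
+      return 0;
+    }
+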
+### Mach
+VM allocation happens with a call to:
+
+ kern_return_t vm_allocate (vm_task_t target_task, vm_address_t *address, vm_size_t size, boolean_t anywhere)
+
+
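+Just to get a feel for the interface, a throwaway user-space test along these lines should work (the error handling and messages are only illustrative):
+
+    #include <mach.h>
+    #include <stdio.h>
+
+    int
+    main (void)
+    {
+      vm_address_t addr = 0;
+      kern_return_t kr;
+
+      /* Let the kernel pick the address (anywhere == TRUE) and hand us
+         one zero-filled page in our own address space.  */
+      kr = vm_allocate (mach_task_self (), &addr, vm_page_size, TRUE);
+      if (kr != KERN_SUCCESS)
+        {
+          printf ("vm_allocate failed: %d\n", kr);
+          return 1;
+        }
+      printf ("got a page at 0x%lx\n", (unsigned long) addr);
+      return 0;
+    }
+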
+## Implementation idea
+To start off with, I will toy with the VM (even if it breaks stuff). My initial intent is to systematically allocate more memory than requested, in the hope that the excess will be used by the task in the near future, thus saving on future I/O requests. A rough sketch of the idea is below.
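+
+Something like this, as a pure user-space illustration of the idea (the helper name and the number of extra pages are made up; the real change would of course live inside Mach's VM code):
+
+    #include <mach.h>
+
+    /* Hypothetical helper: pad every request with a few extra pages so
+       the surplus is already there when the task asks for it.
+       vm_allocate itself rounds the size up to a page boundary.  */
+    #define PREFETCH_EXTRA_PAGES 4
+
+    static kern_return_t
+    vm_allocate_prefetch (task_t task, vm_address_t *addr, vm_size_t size)
+    {
+      vm_size_t padded = size + PREFETCH_EXTRA_PAGES * vm_page_size;
+      return vm_allocate (task, addr, padded, TRUE);
+    }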
+
+I'd also need to keep track of the pre-allocated memory so that I can pass it on to the task on demand and prefetch even more. I could also time the prefetched data and deallocate it if it is not requested after a while, but that's just an idea.
+The tricky part is to understand how memory allocation works in Mach and to create an additional struct for the prefetched data; a rough sketch of such a struct follows.
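+
+Something along these lines, with purely hypothetical field names, just to fix the idea of what would need tracking:
+
+    #include <mach.h>
+    #include <time.h>
+
+    /* Hypothetical bookkeeping for one prefetched region: which task it
+       belongs to, where it lives, how big it is, and when it was
+       prefetched so that stale entries can be deallocated later.  */
+    struct prefetch_entry
+    {
+      task_t task;                  /* task the memory was prefetched for */
+      vm_address_t address;         /* start of the prefetched region */
+      vm_size_t size;               /* length of the region in bytes */
+      time_t timestamp;             /* when the prefetch happened */
+      struct prefetch_entry *next;  /* simple linked list for now */
+    };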