From d22a3b299d00ce757237f9aee9794d0d4f2758e2 Mon Sep 17 00:00:00 2001
From: "http://etenil.myopenid.com/"
Date: Fri, 18 Feb 2011 19:50:45 +0000
Subject:

---
 user/Etenil.mdwn | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'user')

diff --git a/user/Etenil.mdwn b/user/Etenil.mdwn
index 603bbdec..a1a3373b 100644
--- a/user/Etenil.mdwn
+++ b/user/Etenil.mdwn
@@ -27,7 +27,7 @@ This is where the problem lies. Hard disks are inherently efficient at sequentia
 There are a couple of ways I could think of to solve this problem. Pages could be enlarged, but that would cause a lot more problems. Or pages must be handled by groups instead of one by one. This means the changes will also need to be applied in the way user-space processes talk to Mach.
 
 ## What's already been done
-[[hurd/user/KAM]] has already made a patch that provides basic page clustering. I have yet to understand it completely, but there are troubling changes in the patch, most notably the removal of continuations in *vm_fault* and *vm_fault_page*.
+[[user/KAM]] has already made a patch that provides basic page clustering. I have yet to understand it completely, but there are troubling changes in the patch, most notably the removal of continuations in *vm_fault* and *vm_fault_page*.
 
 So far, what I can tell is that KAM seems to have modified the memory objects in Mach so that they handle clusters of pages.
-- 
cgit v1.2.3