From eccdd13dd3c812b8f0b3d046ef9d8738df00562a Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Wed, 25 Sep 2013 21:45:38 +0200
Subject: IRC.

---
 open_issues/arm_port.mdwn | 53 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/open_issues/arm_port.mdwn b/open_issues/arm_port.mdwn
index b07df939..ebbad1a4 100644
--- a/open_issues/arm_port.mdwn
+++ b/open_issues/arm_port.mdwn
@@ -273,3 +273,56 @@ architecture.
     braunr: OK, thanks. I'm interested on it, and didn't want to
       duplicate efforts.
     little addition: it may have started, but we don't know about it
+
+
+# IRC, freenode, #hurd, 2013-09-18
+
+    <Hooligan0> as i understand ; on startup, vm_resident.c functions configure
+      the whole available memory ; but at this point the system does not split
+      space for kernel and space for future apps
+    <Hooligan0> when pages are tagged to be used by userspace ?
+    <braunr> Hooligan0: at page fault time
+    <braunr> the split is completely virtual, vm_resident deals with physical
+      memory only
+    <Hooligan0> braunr: do you think it's possible to change (at least)
+      pmap_steal_memory to mark somes pages as kernel-reserved ?
+    <braunr> why do you want to reserve memory ?
+    <braunr> and which memory ?
+    <Hooligan0> braunr: first because on my mmu i have two entry points ; so i
+      want to set kernel pages into a dedicated space that never change on
+      context switch (for best cache performance)
+    <Hooligan0> braunr: and second, because i want to use larger pages into
+      kernel (1MB) to reduce mmu work
+    <braunr> vm_resident isn't well suited for large pages :(
+    <braunr> i don't see the effect of context switch on kernel pages
+    <Hooligan0> at many times, context switch flush caches
+    <braunr> ah you want something like global pages on x86 ?
+    <Hooligan0> yes, something like
+    <braunr> how is it done on arm ?
+    <Hooligan0> virtual memory is split into two parts depending on msb bits
+    <Hooligan0> for example 3G/1G
+    <Hooligan0> MMU will use two pages tables depending on vaddr (hi-side or
+      low-side)
+    <braunr> hi is kernel, low is user ?
+    <Hooligan0> so, for the moment i've put mach at 0xC0000000 -> 0xFFFFFFFF ;
+      and want to use 0x00000000 -> 0xBFFFFFFF for userspace
+    <Hooligan0> yes
+    <braunr> ok, that's what is done for x86 too
+    <Hooligan0> 1MB pages for kernel ; and 4kB (or 64kB) pages for apps
+    <braunr> i suggest you give up the large page stuff
+    <braunr> well, you can use them for the direct physical mapping, but for
+      kernel objects, it's a waste
+    <braunr> or you can rewrite vm_resident to use something like a buddy
+      allocator but it's additional work
+    <Hooligan0> for the moment it's waste ; but with some littles changes this
+      allow only one level of allocation mapping ; -i think- it's better for
+      performances
+    <braunr> Hooligan0: it is, but not worth it
+    <Hooligan0> will you allow changes into vm_resident if i update i386 too ?
+    <braunr> Hooligan0: sure, as long as these are relevant and don't introduce
+      regressions
+    <Hooligan0> ok
+    <braunr> Hooligan0: i suggest you look at x15, since you may want to use it
+      as a template for your own changes
+    <braunr> as it was done for the slab allocator for example
+    <braunr> e.g. x15 already uses a buddy allocator for physical memory
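
The layout Hooligan0 describes, a shared translation table for the high
(kernel) part of the address space and a per-task table for the low (user)
part, selected by the top bits of the virtual address, roughly matches the
two translation table base registers of ARMv6 and later (TTBR0/TTBR1).  The
following is only a minimal C sketch of that idea, not gnumach or x15 code;
every name in it (`struct pmap`, `kernel_ttb`, `ttb_for_vaddr`,
`pmap_activate`) is a hypothetical stand-in, and the hardware register
writes are left out.

    /* Minimal sketch of the 3G/1G split described in the log: addresses at
     * or above 0xC0000000 are translated through a single, shared kernel
     * table, while the low 3 GiB go through a per-task table.  Only the
     * per-task table changes on a context switch, so kernel mappings stay
     * in place.  All names are hypothetical. */

    #include <stdint.h>

    #define KERNEL_VIRT_BASE 0xC0000000UL  /* mach at 0xC0000000 -> 0xFFFFFFFF */

    struct pmap {
        uint32_t *user_ttb;     /* per-task table for 0x00000000 -> 0xBFFFFFFF */
    };

    static uint32_t *kernel_ttb;        /* shared table for the high 1 GiB */

    /* Pick the translation table an ARM-style MMU would consult for vaddr,
     * based on its most significant bits. */
    static inline uint32_t *
    ttb_for_vaddr(const struct pmap *pmap, uintptr_t vaddr)
    {
        return vaddr >= KERNEL_VIRT_BASE ? kernel_ttb : pmap->user_ttb;
    }

    /* On a context switch, only the user-side table base is reloaded; the
     * kernel-side base is set once at boot and never touched again, which
     * is the "global pages"-like behaviour discussed above. */
    static void
    pmap_activate(const struct pmap *pmap)
    {
        /* write pmap->user_ttb into the user-side translation table base
         * register (e.g. TTBR0 on ARM); hardware access omitted here */
        (void)pmap;
    }

Because only the user-side table base is reloaded on a switch, kernel
translations behave much like global pages on x86, which is the cache
benefit being discussed; as braunr notes, 1 MB section mappings in that
kernel half mostly pay off for the direct physical mapping rather than for
ordinary kernel objects.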
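
Similarly, the pmap_steal_memory change Hooligan0 asks about amounts to
reserving physical pages before the resident-page code starts handing them
out.  Below is a sketch of that general bootstrap pattern only, with made-up
names (`boot_alloc`, `avail_start`, `avail_end`); it is not the real
vm_resident.c interface or bookkeeping.

    /* Illustrative bootstrap allocator: before the resident-page module
     * takes over the remaining physical memory, the kernel can "steal"
     * (permanently reserve) pages by bumping avail_start.  Stolen pages are
     * never entered into the free lists, so userspace can never get them.
     * Names and layout are assumptions, not gnumach's actual code. */

    #include <stddef.h>
    #include <stdint.h>

    #define PAGE_SIZE ((uintptr_t)4096)

    /* Physical range still available, set up from the boot memory map. */
    static uintptr_t avail_start;
    static uintptr_t avail_end;

    /* Round up to a page boundary. */
    static uintptr_t
    round_page(uintptr_t x)
    {
        return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
    }

    /* Reserve size bytes of physical memory for the kernel, page aligned.
     * Whatever is left in [avail_start, avail_end) afterwards is what the
     * resident-page code will later manage for everyone else. */
    static uintptr_t
    boot_alloc(size_t size)
    {
        uintptr_t addr = round_page(avail_start);

        if (addr > avail_end || size > avail_end - addr)
            return 0;   /* out of bootstrap memory */
        avail_start = addr + size;
        return addr;
    }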