From 8cee055ec4fac00e59f19620ab06e2b30dccee3c Mon Sep 17 00:00:00 2001
From: Thomas Schwinge
Date: Wed, 11 Jul 2012 22:39:59 +0200
Subject: IRC.

---
 open_issues/performance/io_system/read-ahead.mdwn | 1176 +++++++++++++++++++++
 1 file changed, 1176 insertions(+)

diff --git a/open_issues/performance/io_system/read-ahead.mdwn b/open_issues/performance/io_system/read-ahead.mdwn
index d6a98070..710c746b 100644
--- a/open_issues/performance/io_system/read-ahead.mdwn
+++ b/open_issues/performance/io_system/read-ahead.mdwn
@@ -16,6 +16,9 @@ License|/fdl]]."]]"""]]
 
 # [[community/gsoc/project_ideas/disk_io_performance]]
 
+# [[gnumach_page_cache_policy]]
+
+
 # 2011-02
 
 [[Etenil]] has been working in this area.
@@ -389,3 +392,1176 @@ License|/fdl]]."]]"""]]
     with appropriate frame size. Is that right?
     question of taste, better ask on the list
     ok
+
+
+## IRC, freenode, #hurd, 2012-06-09
+
+    hello. What fictitious pages in gnumach are needed for?
+    I mean why real page couldn't be grabbed straight, but in sometimes
+    fictitious page is grabbed first and than converted to real?
+    mcsim: iirc, fictitious pages are needed by device pagers which
+    must comply with the vm pager interface
+    mcsim: specifically, they must return a vm_page structure, but
+    this vm_page describes device memory
+    mcsim: and then, it must not be treated like normal vm_page, which
+    can be added to page queues (e.g. page cache)
+
+
+## IRC, freenode, #hurd, 2012-06-22
+
+    braunr: Ah. Patch for large storages introduced new callback
+    pager_notify_evict. User had to define this callback on his own as
+    pager_dropweak, for instance. But neal's patch change this. Now all
+    callbacks could have any name, but user defines structure with pager ops
+    and supplies it in pager_create.
+    So, I just changed notify_evict to confirm it to new style.
+    braunr: I want to changed interface of mo_change_attributes and
+    test my changes with real partitions. For both these I have to update
+    ext2fs translator, but both partitions I have are bigger than 2Gb, that's
+    why I need apply this patch.z
+    But what to do with mo_change_attributes? I need somehow inform
+    kernel about page fault policy.
+    When I change mo_ interface in kernel I have to update all programs
+    that use this interface and ext2fs is one of them.
+
+    braunr: Who do you think better to inform kernel about fault
+    policy? At the moment I've added fault_strategy parameter that accepts
+    following strategies: randow, sequential with single page cluster,
+    sequential with double page cluster and sequential with quad page
+    cluster. OSF/mach has completely another interface of
+    mo_change_attributes. In OSF/mach mo_change_attributes accepts structure
+    of parameter. This structure could have different formats depending o
+    This rpc could be useful because it is not very handy to update
+    mo_change_attributes for kernel, for hurd libs and for glibc. Instead of
+    this kernel will accept just one more structure format.
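As an aside, the fault_strategy values mcsim lists above could be encoded roughly as follows; this is a minimal illustrative sketch, and the names are invented rather than taken from the actual patch:

    /* Illustrative only: the read-fault strategies listed above, as they
       might be passed to mo_change_attributes.  */
    enum fault_strategy
    {
      FAULT_STRATEGY_RANDOM,             /* no read-ahead              */
      FAULT_STRATEGY_SEQUENTIAL_SINGLE,  /* sequential, 1-page cluster */
      FAULT_STRATEGY_SEQUENTIAL_DOUBLE,  /* sequential, 2-page cluster */
      FAULT_STRATEGY_SEQUENTIAL_QUAD     /* sequential, 4-page cluster */
    };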
+ well, like i wrote on the mailing list several weeks ago, i don't + think the policy selection is of concern currently + you should focus on the implementation of page clustering and + readahead + concerning the interface, i don't think it's very important + also, i really don't like the fact that the policy is per object + it should be per map entry + i think it mentioned that in my mail too + i really think you're wasting time on this + http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00064.html + http://lists.gnu.org/archive/html/bug-hurd/2012-04/msg00029.html + mcsim: any reason you completely ignored those ? + braunr: Ok. I'll do clustering for map entries. + no it's not about that either :/ + clustering is grouping several pages in the same transfer between + kernel and pager + the *policy* is held in map entries + mcsim: I'm not sure I properly understand your question about the + policy interface... but if I do, it's IMHO usually better to expose + individual parameters as RPC arguments explicitly, rather than hiding + them in an opaque structure... + (there was quite some discussion about that with libburn guy) + antrik: Following will be ok? kern_return_t vm_advice(map, address, + length, advice, cluster_size) + Where advice will be either random or sequential + looks fine to me... but then, I'm not an expert on this stuff :-) + perhaps "policy" would be clearer than "advice"? + madvise has following prototype: int madvise(void *addr, size_t + len, int advice); + hmm... looks like I made a typo. Or advi_c_e is ok too? + advise is a verb; advice a noun... there is a reason why both + forms show up in the madvise prototype :-) + so final variant should be kern_return_t vm_advise(map, address, + length, policy, cluster_size)? + mcsim: nah, you are probably right that its better to keep + consistency with madvise, even if the name of the "advice" parameter + there might not be ideal... + BTW, where does cluster_size come from? from the filesystem? + I see merits both to naming the parameter "policy" (clearer) or + "advice" (more consistent) -- you decide :-) + antrik: also there is variant strategy, like with inheritance :) + I'll choose advice for now. + What do you mean under "where does cluster_size come from"? + well, madvise doesn't have this parameter; so the value must come + from a different source? + in madvise implementation it could fixed value or somehow + calculated basing on size of memory range. In OSF/mach cluster size is + supplied too (via mo_change_attributes). + ah, so you don't really know either :-) + well, my guess is that it is derived from the cluster size used by + the filesystem in question + so for us it would always be 4k for now + (and thus you can probably leave it out alltogether...) + well, fatfs can use larger clusters + I would say, implement it only if it's very easy to do... if it's + extra effort, it's probably not worth it + There is sense to make cluster size bigger for ext2 too, since most + likely consecutive clusters will be within same group. + But anyway I'll handle this later. + well, I don't know what cluster_size does exactly; but by the + sound of it, I'd guess it makes an assumption that it's *always* better + to read in this cluster size, even for random access -- which would be + simply wrong for 4k filesystem clusters... 
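For reference, the call being discussed here, next to the POSIX prototype it mirrors. This is a sketch only: `vm_advice_t` is a hypothetical typedef, and the `cluster_size` argument is questioned again later in the log.

    #include <stddef.h>
    #include <mach.h>

    typedef int vm_advice_t;   /* hypothetical: random/sequential (and later normal) */

    /* POSIX, as quoted above (declared in <sys/mman.h>).  */
    int madvise (void *addr, size_t len, int advice);

    /* The Mach-level call as discussed (not an existing RPC).  */
    kern_return_t vm_advise (vm_map_t map,           /* task address map */
                             vm_address_t address,   /* start of region  */
                             vm_size_t length,       /* size of region   */
                             vm_advice_t advice,     /* access pattern   */
                             vm_size_t cluster_size);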
+ BTW, I agree with braunr that madvice() is optional -- it is way + way more important to get readahead working as a default policy first + + +## IRC, freenode, #hurd, 2012-07-01 + + youpi: Do you think you could review my code? + sure, just post it to the list + make sure to break it down into logical pieces + youpi: I pushed it my branch at gnumach repository + youpi: or it is still better to post changes to list? + posting to the list would permit feedback from other people too + mcsim: posix distinguishes normal, sequential and random + we should probably too + the system call should probably be named "vm_advise", to be a verb + like allocate etc. + youpi: ok. A have a talk with antrik regarding naming, I'll change + this later because compiling of glibc take a lot of time. + mcsim: I find it odd that vm_for_every_page allocates non-existing + pages + there should probably be at least a flag to request it or not + youpi: normal policy is synonym to default. And this could be + treated as either random or sequential, isn't it? + mcsim: normally, no + yes, the normal policy would be the default + it doesn't mean random or sequential + it's just to be a compromise between both + random is meant to make no read-ahead, since that'd be spurious + anyway + while by default we should make readahead + and sequential makes even more aggressive readahead, which usually + implies a greater number of pages to fetch + that's all + yes + well, that part is handled by the cluster_size parameter actually + what about reading pages preceding the faulted paged ? + Shouldn't sequential clean some pages (if they, for example, are + not precious) that are placed before fault page? + ? + that could make sense, yes + you lost me + and something that you wouldn't to with the normal policy + braunr: clear what has been read previously + ? + since the access is supposed to be sequential + oh + the application will proabably not re-read what was already read + you mean to avoid caching it ? + yes + inactive memory is there for that + while with the normal policy you'd assume that the application + might want to go back etc. + yes, but you can help it + yes + instead of making other pages compete with it + but then, it's for precious pages + I have to say I don't know what a precious page it + s + does it mean dirty pages? + no + precious means cached pages + "If precious is FALSE, the kernel treats the data as a temporary + and may throw it away if it hasn't been changed. If the precious value is + TRUE, the kernel treats its copy as a data repository and promises to + return it to the manager; the manager may tell the kernel to throw it + away instead by flushing and not cleaning the data" + hm no + precious means the kernel must keep it + youpi: According to vm_for_every_page. What kind of flag do you + suppose? If object is internal, I suppose not to cross the bound of + object, setting in_end appropriately in vm_calculate_clusters. + If object is external we don't know its actual size, so we should + make mo request first. And for this we should create fictitious pages. + mcsim: but how would you implement this "cleaning" with sequential + ? 
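From the application side, the three policies discussed above correspond to the standard POSIX advice values; a minimal usage sketch (the path and sizes are made up, error handling kept to a minimum):

    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int
    main (void)
    {
      size_t len = 16 * 4096;
      int fd = open ("/tmp/example", O_RDONLY);   /* made-up file */
      if (fd < 0)
        return 1;
      void *buf = mmap (NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);
      if (buf == MAP_FAILED)
        return 1;

      /* Declare the expected access pattern; the kernel decides how much
         (if any) read-ahead to perform for this mapping.  */
      posix_madvise (buf, len, POSIX_MADV_SEQUENTIAL);  /* aggressive read-ahead */
      /* posix_madvise (buf, len, POSIX_MADV_RANDOM);      no read-ahead         */
      /* posix_madvise (buf, len, POSIX_MADV_NORMAL);      default compromise    */

      munmap (buf, len);
      close (fd);
      return 0;
    }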
+ mcsim: ah, ok, I thought you were allocating memory, but it's just + fictitious pages + comment "Allocate a new page" should be fixed :) + braunr: I don't now how I will implement this specifically (haven't + tried yet), but I don't think that this is impossible + braunr: anyway it's useful as an example where normal and + sequential would be different + if it can be done simply + because i can see more trouble than gains in there :) + braunr: ok :) + mcsim: hm also, why fictitious pages ? + fictitious pages should normally be used only when dealing with + memory mapped physically which is not real physical memory, e.g. device + memory + but vm_fault could occur when object represent some device memory. + that's exactly why there are fictitious pages + at the moment of allocating of fictitious page it is not know what + backing store of object is. + really ? + damn, i've got used to UVM too much :/ + braunr: I said something wrong? + no no + it's just that sometimes, i'm confusing details about the various + BSD implementations i've studied + out-of-gsoc-topic question: besides network drivers, do you think + we'll have other drivers that will run in userspace and have to implement + memory mapping ? like framebuffers ? + or will there be a translation layer such as storeio that will + handle mapping ? + framebuffers typically will, yes + that'd be antrik's work on drm + hmm + ok + mcsim: so does the implementation work, and do you see performance + improvement? + youpi: I haven't tested it yet with large ext2 :/ + youpi: I'm going to finish now moving of ext2 to new interface, + than other translators in hurd repository and than finish memory policies + in gnumach. Is it ok? + which new interface? + Written by neal. I wrote some temporary code to make ext2 work with + it, but I'm going to change this now. + you mean the old unapplied patch? + yes + did you have a look at Karim's work? + (I have to say I never found the time to check how it related with + neal's patch) + I found only his work in kernel. I didn't see his work in applying + of neal's patch. + ok + how do they relate with each other? + (I have never actually looked at either of them :/) + his work in kernel and neal's patch? + yes + They do not correlate with each other. + ah, I must be misremembering what each of them do + in kam's patch was changes to support sequential reading in reverse + order (as in OSF/Mach), but posix does not support such behavior, so I + didn't implement this either. + I can't find the pointer to neal's patch, do you have it off-hand? + http://comments.gmane.org/gmane.os.hurd.bugs/351 + thx + I think we are not talking about the same patch from Karim + I mean lists.gnu.org/archive/html/bug-hurd/2010-06/msg00023.html + I mean this patch: + http://lists.gnu.org/archive/html/bug-hurd/2010-06/msg00024.html + Oh. + ok + seems, this is just the same + yes + from a non-expert view, I would have thought these patches play + hand in hand, do they really? + this patch is completely for kernel and neal's one is completely + for libpager. + i.e. neal's fixes libpager, and karim's fixes the kernel + yes + ending up with fixing the whole path? + AIUI, karim's patch will be needed so that your increased readahead + will end up with clustered page request? + I will not use kam's patch + is it not needed to actually get pages in together? + how do you tell libpager to fetch pages together? 
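As to braunr's last question: in the external pager interface, clustering essentially shows up as the kernel asking for more than one page in a single request. Assuming the existing `memory_object_data_request` signature, only the length argument changes; a sketch, where `object`, `control` and `cluster_start` stand for values the real fault handler would compute:

    #include <mach.h>

    /* Sketch: the kernel-side fault path asking the pager for a cluster
       instead of a single page.  The pager (libpager/ext2fs) then has to
       be able to answer the whole range, e.g. with one
       memory_object_data_supply covering it.  */
    static void
    request_cluster (memory_object_t object, mach_port_t control,
                     vm_offset_t cluster_start, vm_size_t cluster_pages)
    {
      /* Same RPC gnumach already uses for single-page faults.  */
      memory_object_data_request (object, control, cluster_start,
                                  cluster_pages * vm_page_size, VM_PROT_READ);
    }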
+ about the cluster size, I'd say it shouldn't be specified at + vm_advise() level + in other OSes, it is usually automatically tuned + by ramping it up to a maximum readahead size (which, however, could + be specified) + that's important for the normal policy, where there are typically + successive periods of sequential reads, but you don't know in advance for + how long + braunr said that there are legal issues with his code, so I cannot + use it. + did i ? + mcsim: can you give me a link to the code again please ? + see above :) + which one ? + both + they only differ by a typo + mcsim: i don't remember saying that, do you have any link ? + or log ? + sorry, can you rephrase "ending up with fixing the whole path"? + cluster_size in vm_advise also could be considered as advise + no + it must be the third time we're talking about this + mcsim: I mean both parts would be needed to actually achieve + clustered i/o + again, why make cluster_size a per object attribute ? :( + wouldn't some objects benefit from bigger cluster sizes, while + others wouldn't? + but again, I believe it should rather be autotuned + (for each object) + if we merely want posix compatibility (and for a first attempt, + it's quite enough), vm_advise is good, and the kernel selects the + implementation (and thus the cluster sizes) + if we want finer grained control, perhaps a per pager cluster_size + would be good, although its efficiency depends on several parameters + (e.g. where the page is in this cluster) + but a per object cluster size is a large waste of memory + considering very few applications (if not none) would use the "feature" + .. + (if any*) + there must be a misunderstanding + why would it be a waste of memory? + "per object" + so? + there can be many memory objects in the kernel + so? + so such an overhead must be useful to accept it + in my understanding, a cluster size per object is just a mere + integer for each object + what overhead? + yes + don't we have just thousands of objects? + for now + remember we're trying to remove the page cache limit :) + that still won't be more than tens of thousands of objects + times an integer + that's completely neglectible + braunr: Strange, Can't find in logs. Weird things are happening in + my memory :/ Sorry. + mcsim: i'm almost sure i never said that :/ + but i don't trust my memory too much either + youpi: depends + mcsim: I mean both parts would be needed to actually achieve + clustered i/o + braunr: I made I call vm_advise that applies policy to memory range + (vm_map_entry to be specific) + mcsim: good + actually the cluster size should even be per memory range + youpi: In this sense, yes + k + sorry, Internet connection lags + when changing a structure used to create many objects, keep in + mind one thing + if its size gets larger than a threshold (currently, powers of + two), the cache used by the slab allocator will allocate twice the + necessary amount + sure + this is the case with most object caching allocators, although + some can have specific caches for common sizes such as 96k which aren't + powers of two + anyway, an integer is negligible, but the final structure size + must be checked + (for both 32 and 64 bits) + braunr: ok. + But I didn't understand what should be done with cluster size in + vm_advise? Should I delete it? 
+ to me, the cluster size is a pager property + to me, the cluster size is a map property + whereas vm_advise indicates what applications want + you could have several process accessing the same file in different + ways + youpi: that's why there is a policy + isn't cluster_size part of the policy? + but if the pager abilities are limited, it won't change much + i'm not sure + cluster_size is the amount of readahead, isn't it? + no, it's the amount of data in a single transfer + Yes, it is. + ok, i'll have to check your code + shouldn't transfers permit unbound amounts of data? + braunr: than I misunderstand what readahead is + well then cluster size is per policy :) + e.g. random => 0, normal => 3, sequential => 15 + why make it per map entry ? + because it depends on what the application doezs + let me check the code + if it's accessing randomly, no need for big transfers + just page transfers will be fine + if accessing sequentially, rather use whole MiB of transfers + and these behavior can be for the same file + mcsim: the call is vm_advi*s*e + mcsim: the call is vm_advi_s_e + not advice + yes, he agreed earlier + ok + cluster_size is the amount of data that I try to read at one time. + at singe mo_data_request + *single + which, to me, will depend on the actual map + ok so it is the transfer size + and should be autotuned, especially for normal behavior + youpi: it makes no sense to have both the advice and the actual + size per map entry + to get big readahead with all apps + braunr: the size is not only dependent on the advice, but also on + the application behavior + youpi: how does this application tell this ? + even for sequential, you shouldn't necessarily use very big amounts + of transfers + there is no need for the advice if there is a cluster size + there can be, in the case of sequential, as we said, to clear + previous pages + but otherwise, indeed + but for me it's the converse + the cluster size should be tuned anyway + and i'm against giving the cluster size in the advise call, as we + may want to prefetch previous data as well + I don't see how that collides + well, if you consider it's the transfer size, it doesn't + to me cluster size is just the size of a window + if you consider it's the amount of pages following a faulted page, + it will + also, if your policy says e.g. "3 pages before, 10 after", and + your cluster size is 2, what happens ? + i would find it much simpler to do what other VM variants do: + compute the I/O sizes directly from the policy + don't they autotune, and use the policy as a maximum ? + depends on the implementations + ok, but yes I agree + although casting the size into stone in the policy looks bogus to + me + but making cluster_size part of the kernel interface looks way too + messy + it is + that's why i would have thought it as part of the pager properties + the pager is the true component besides the kernel that is + actually involved in paging ... + well, for me the flexibility should still be per application + by pager you mean the whole pager, not each file, right? + if a pager can page more because e.g. it's a file system with big + block sizes, why not fetch more ? + yes + it could be each file + but only if we have use for it + and i don't see that currently + well, posix currently doesn't provide a way to set it + so it would be useless atm + i was thinking about our hurd pagers + could we perhaps say that the policy maximum could be a fraction of + available memory? + why would we want that ? 
+ (total memory, I mean) + to make it not completely cast into stone + as have been in the past in gnumach + i fail to understand :/ + there must be a misunderstanding then + (pun not intended) + why do you want to limit the policy maximum ? + how to decide it? + the pager sets it + actually I don't see how a pager could decide it + on what ground does it make the decision? + readahead should ideally be as much as 1MiB + 02:02 < braunr> if a pager can page more because e.g. it's a file + system with big block sizes, why not fetch more ? + is the example i have in mind + otherwise some default values + that's way smaller than 1MiB, isn't it? + yes + and 1 MiB seems a lot to me :) + for readahead, not really + maybe for sequential + that's what we care about! + ah, i thought we cared about normal + "as much as 1MiB", I said + I don't mean normal :) + right + but again, why limit ? + we could have 2 or more ? + at some point you don't get more efficiency + but eat more memory + having the pager set the amount allows us to easily adjust it over + time + braunr: Do you think that readahead should be implemented in + libpager? + than needed + mcsim: no + mcsim: err + mcsim: can't answer + mcsim: do you read the log of what you have missed during + disconnection? + i'm not sure about what libpager does actually + yes + for me it's just mutualisation of code used by pagers + i don't know the details + youpi: yes + youpi: that's why we want these values not hardcoded in the kernel + youpi: so that they can be adjusted by our shiny user space OS + (btw apparently linux uses minimum 16k, maximum 128 or 256k) + that's more reasonable + that's just 4 times less :) + braunr: You say that pager should decide how much data should be + read ahead, but each pager can't implement it on it's own as there will + be too much overhead. So the only way is to implement this in libpager. + mcsim: gni ? + why couldn't they ? + mcsim: he means the size, not the actual implementation + the maximum size, actually + actually, i would imagine it as the pager giving per policy + parameters + right + like how many before and after + I agree, then + the kernel could limit, sure, to avoid letting pagers use + completely insane values + (and that's just a max, the kernel autotunes below that) + why not + that kernel limit could be a fraction of memory, then? + it could, yes + i see what you mean now + mcsim: did you understand our discussion? + don't hesitate to ask for clarification + I supposed cluster_size to be such parameter. And advice will help + to interpret this parameter (whether data should be read after fault page + or some data should be cleaned before) + mcsim: we however believe that it's rather the pager than the + application that would tell that + at least for the default values + posix doesn't have a way to specify it, and I don't think it will + in the future + and i don't think our own hurd-specific programs will need more + than that + if they do, we can slightly change the interface to make it a per + object property + i've checked the slab properties, and it seems we can safely add + it per object + cf http://www.sceen.net/~rbraun/slabinfo.out + so it would still be set by the pager, but if depending on the + object, the pager could set different values + youpi: do you think the pager should just provide one maximum size + ? or per policy sizes ? 
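A sketch of what per-policy maxima supplied by a pager could look like (a hypothetical structure with illustrative numbers; youpi answers the one-maximum-or-per-policy question just below):

    #include <mach.h>

    /* Hypothetical per-pager read-ahead limits: the pager supplies maxima,
       the kernel autotunes the actual transfer size below them and may
       additionally clamp them, e.g. to a fraction of total memory.  */
    struct pager_readahead_limits
    {
      vm_size_t normal_max;      /* e.g. 64 KiB                      */
      vm_size_t sequential_max;  /* e.g. 256 KiB, the Linux maximum  */
      /* random: no read-ahead, nothing to configure */
    };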
+ I'd say per policy size + so people can increase sequential size like crazy when they know + their sequential applications need it, without disturbing the normal + behavior + right + so the last decision is per pager or per object + mcsim: i'd say whatever makes your implementation simpler :) + braunr: how kernel knows that object are created by specific pager? + that's the kind of things i'm referring to with "whatever makes + your implementation simpler" + but usually, vm_objects have an ipc port and some properties + relatedto their pagers + -usually + the problem i had in mind was the locking protocol but our spin + locks are noops, so it will be difficult to detect deadlocks + braunr: and for every policy there should be variable in vm_object + structure with appropriate cluster_size? + if you want it per object, yes + although i really don't think we want it + better keep it per pager for now + let's imagine youpi finishes his 64-bits support, and i can + successfully remove the page cache limit + we'd jump from 1.8 GiB at most to potentially dozens of GiB of RAM + and 1.8, mostly unused + to dozens almost completely used, almost all the times for the + most interesting use cases + we may have lots and lots of objects to keep around + so if noone really uses the feature ... there is no point + but also lots and lots of memory to spend on it :) + a lot of objects are just one page, but a lof of them are not + sure + we wouldn't be doing that otherwise :) + i'm just saying there is no reason to add the overhead of several + integers for each object if they're simply not used at all + hmm, 64-bits, better page cache, clustered paging I/O :> + (and readahead included in the last ofc) + good night ! + than, probably, make system-global max-cluster_size? This will save + some memory. Also there is usually no sense in reading really huge chunks + at once. + but that'd be tedious to set + there are only a few pagers, that's no wasted memory + the user being able to set it for his own pager is however a very + nice feature, which can be very useful for databases, image processing, + etc. + In conclusion I have to implement following: 3 memory policies per + object and per vm_map_entry. Max cluster size for every policy should be + set per pager. + So, there should be 2 system calls for setting memory policy and + one for setting cluster sizes. + Also amount of data to transfer should be tuned automatically by + every page fault. + youpi: Correct me, please, if I'm wrong. + I believe that's what we ended up to decide, yes + + +## IRC, freenode, #hurd, 2012-07-02 + + is it safe to say that all memory objects implemented by external + pagers have "file" semantics ? + i wonder if the current memory manager interface is suitable for + device pagers + braunr: What does "file" semantics mean? + mcsim: anonymous memory doesn't have the same semantics as a file + for example + anonymous memory that is discontiguous in physical memory can be + contiguous in swap + and its location can change with time + whereas with a memory object, the data exchanged with pagers is + identified with its offset + in (probably) all other systems, this way of specifying data is + common to all files, whatever the file system + linux uses the struct vm_file name, while in BSD/Solaris they are + called vnodes (the link between a file system inode and virtual memory) + my question is : can we implement external device pagers with the + current interface, or is this interface really meant for files ? 
+ also + mcsim: something about what you said yesterday + 02:39 < mcsim> In conclusion I have to implement following: 3 + memory policies per object and per vm_map_entry. Max cluster size for + every policy should be set per pager. + not per object + one policy per map entry + transfer parameters (pages before and after the faulted page) per + policy, defined by pagers + 02:39 < mcsim> So, there should be 2 system calls for setting + memory policy and one for setting cluster sizes. + adding one call for vm_advise is good because it mirrors the posix + call + but for the parameters, i'd suggest changing an already existing + call + not sure which one though + braunr: do you know how mo_change_attributes implemented in + OSF/Mach? + after a quick reading of the reference manual, i think i + understand why they made it per object + mcsim: no + did they change the call to include those paging parameters ? + it accept two parameters: flavor and pointer to structure with + parameters. + flavor determines semantics of structure with parameters. + + http://www.darwin-development.org/cgi-bin/cvsweb/osfmk/src/mach_kernel/vm/memory_object.c?rev=1.1 + structure can have 3 different views and what exect view will be is + determined by value of flavor + So, I thought about implementing similar call that could be used + for various purposes. + like ioctl + "pointer to structure with parameters" <= which one ? + mcsim: don't model anything anywhere like ioctl please + memory_object_info_t attributes + ioctl is the very thing we want NOT to have on the hurd + ok attributes + and what are the possible values of flavour, and what kinds of + attributes ? + and then appears something like this on each case: behave = + (old_memory_object_behave_info_t) attributes; + ok i see + flavor could be OLD_MEMORY_OBJECT_BEHAVIOR_INFO, + MEMORY_OBJECT_BEHAVIOR_INFO, MEMORY_OBJECT_PERFORMANCE_INFO etc + i don't really see the point of flavour here, other than + compatibility + having attributes is nice, but you should probably add it as a + call parameter, not inside a structure + as a general rule, we don't like passing structures too much + to/from the kernel, because handling them with mig isn't very clean + ok + What policy parameters should be defined by pager? + i'd say number of pages to page-in before and after the faulted + page + Only pages before and after the faulted page? + for me yes + youpi might have different things in mind + the page cleaning in sequential mode is something i wouldn't do + 1/ applications might want data read sequentially to remain in the + cache, for other sequential accesses + 2/ applications that really don't want to cache anything should + use O_DIRECT + 3/ it's complicated, and we're in july + i'd rather have a correct and stable result than too many unused + features + braunr: MADV_SEQUENTIAL Expect page references in sequential order. + (Hence, pages in the given range can be aggressively read ahead, and may + be freed soon after they are accessed.) + this is from linux man + braunr: Can I at least make keeping in mind that it could be + implemented? + I mean future rpc interface + braunr: From behalf of kernel pager is just a port. 
+ That's why it is not clear for me how I can make in kernel + per-pager policy + mcsim: you can't + 15:19 < braunr> after a quick reading of the reference manual, i + think i understand why they made it per object + + http://pubs.opengroup.org/onlinepubs/009695399/functions/posix_madvise.html + POSIX_MADV_SEQUENTIAL + Specifies that the application expects to access the specified + range sequentially from lower addresses to higher addresses. + linux might free pages after their access, why not, but this is + entirely up to the implementation + I know, when but applications might want data read sequentially to + remain in the cache, for other sequential accesses this kind of access + could be treated rather normal or random + we can do differently + mcsim: no + sequential means the access will be sequential + so aggressive readahead (e.g. 0 pages before, many after), should + be used + for better performance + from my pov, it has nothing to do with caching + i actually sometimes expect data to remain in cache + e.g. before playing a movie from sshfs, i sometimes prefetch it + using dd + then i use mplayer + i'd be very disappointed if my data didn't remain in the cache :) + At least these pages could be placed into inactive list to be first + candidates for pageout. + that's what will happen by default + mcsim: if we need more properties for memory objects, we'll adjust + the call later, when we actually implement them + so, first call is vm_advise and second is changed + mo_change_attributes? + yes + there will appear 3 new parameters in mo_c_a: policy, pages before + and pages after? + braunr: With vm_advise I didn't understand one thing. This call is + defined in defs file, so that should mean that vm_advise is ordinal rpc + call. But on the same time it is defined as syscall in mach internals (in + mach_trap_table). + mcsim: what ? + were is it "defined" ? (it doesn't exit in gnumach currently) + Ok, let consider vm_map + I define it both in mach_trap_table and in defs file. + But why? + uh ? + let me see + Why defining in defs file is not enough? + and previous question: there will appear 3 new parameters in + mo_c_a: policy, pages before and pages after? + mcsim: give me the exact file paths please + mcsim: we'll discuss the new parameters after + kern/syscall_sw.c + right i see + here mach_trap_table in defined + i think they're not used + they were probably introduced for performance + and ./include/mach/mach.defs + don't bother adding vm_advise as a syscall + about the parameters, it's a bit more complicated + you should add 6 parameters + before and after, for the 3 policies + but + as seen in the posix page, there could be more policies .. + ok forget what i said, it's stupid + yes, the 3 parameters you had in mind are correct + don't forget a "don't change" value for the policy though, so the + kernel ignores the before/after values if we don't want to change that + ok + mcsim: another reason i asked about "file semantics" is the way we + handle the cache + mcsim: file semantics imply data is cached, whereas anonymous and + device memory usually isn't + (although having the cache at the vm layer instead of the pager + layer allows nice things like the swap cache) + But this shouldn't affect possibility of implementing of device + pager. 
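Backing up to the interface change agreed a few lines above: roughly, `memory_object_change_attributes` would grow three parameters. This is only an illustrative prototype; the existing arguments are assumed to stay as in the current interface, and the "don't change" sentinel name is invented.

    /* Illustrative stand-ins for the Mach types involved (the real ones
       come from <mach/memory_object.h> and friends).  */
    typedef int kern_return_t, boolean_t, memory_object_copy_strategy_t;
    typedef unsigned int mach_port_t;
    typedef int vm_advice_t;           /* hypothetical, including a
                                          VM_ADVICE_KEEP ("don't change")
                                          sentinel as youpi asks for     */

    /* Sketch of the extended call: the three new parameters discussed
       above, appended to the existing ones.  */
    kern_return_t
    memory_object_change_attributes (mach_port_t memory_control,
                                     boolean_t may_cache,
                                     memory_object_copy_strategy_t copy_strategy,
                                     mach_port_t reply_to,
                                     /* new parameters: */
                                     vm_advice_t advice,
                                     int pages_before,
                                     int pages_after);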
+ yes it may + consider how a fault is actually handled by a device + mach must use weird fictitious pages for that + whereas it would be better to simply let the pager handle the + fault as it sees fit + setting may_cache to false should resolve the issue + for the caching problem, yes + which is why i still think it's better to handle the cache at the + vm layer, unlike UVM which lets the vnode pager handle its own cache, and + removes the vm cache completely + The only issue with pager interface I see is implementing of + scatter-gather DMA (as current interface does not support non-consecutive + access) + right + but that's a performance issue + my problem with device pagers is correctness + currently, i think the kernel just asks pagers for "data" + whereas a device pager should really map its device memory where + the fault happen + braunr: You mean that every access to memory should cause page + fault? + I mean mapping of device memory + no + i mean a fault on device mapped memory should directly access a + shared region + whereas file pagers only implement backing store + let me explain a bit more + here is what happens with file mapped memory + you map it, access it (some I/O is done to get the page content in + physical memory), then later it's flushed back + whereas with device memory, there shouldn't be any I/O, the device + memory should directly be mapped (well, some devices need the same + caching behaviour, while others provide direct access) + one of the obvious consequences is that, when you map device + memory (e.g. a framebuffer), you expect changes in your mapped memory to + be effective right away + while with file mapped memory, you need to msync() it + (some framebuffers also need to be synced, which suggests greater + control is needed for external pagers) + Seems that I understand you. But how it is implemented in other + OS'es? Do they set something in mmu? + mcsim: in netbsd, pagers have a fault operatin in addition to get + and put + the device pager sets get and put to null and implements fault + only + the fault callback then calls the d_mmap callback of the specific + driver + which usually results in the mmu being programmed directly + (e.g. pmap_enter or similar) + in linux, i think raw device drivers, being implemented as + character device files, must provide raw read/write/mmap/etc.. functions + so it looks pretty much similar + i'd say our current external pager interface is insufficient for + device pagers + but antrik may know more since he worked on ggi + antrik: ^ + braunr: Seems he used io_map + mcsim: where ar eyou looking at ? the incubator ? + his master's thesis + ah the thesis + but where ? :) + I'll give you a link + http://dl.dropbox.com/u/36519904/kgi_on_hurd.pdf + thanks + see p 158 + arg, more than 200 pages, and he says he's lazy :/ + mcsim: btw, have a look at m_o_ready + braunr: This is old form of mo_change attributes + I'm not going to change it + mcsim: these are actually the default object parameters right ? + mcsim: if you don't change it, it means the kernel must set + default values until the pager changes them, if it does + yes. + mcsim: madvise() on Linux has a separate flag to indicate that + pages won't be reused. thus I think it would *not* be a good idea to + imply it in SEQUENTIAL + braunr: yes, my KMS code relies on mapping memory objects for the + framebuffer + (it should be noted though that on "modern" hardware, mapping + graphics memory directly usually gives very poor performance, and drivers + tend to avoid it...) 
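To illustrate the arrangement braunr describes (NetBSD-style pagers with a fault callback), a much simplified sketch; none of these names are actual NetBSD or gnumach declarations:

    #include <mach.h>

    struct pager;                      /* opaque, illustrative */

    /* A device pager would resolve the faulting offset to a physical
       (device) address and enter the mapping directly, pmap_enter()-style,
       instead of transferring any data.  */
    static int
    device_fault (struct pager *p, vm_offset_t offset, vm_prot_t prot)
    {
      (void) p; (void) offset; (void) prot;
      return 0;
    }

    /* Illustrative pager operations: a file pager implements get/put
       (backing-store I/O); a device pager implements only fault.  */
    struct pager_ops
    {
      int (*get) (struct pager *, vm_offset_t, vm_size_t);
      int (*put) (struct pager *, vm_offset_t, vm_size_t);
      int (*fault) (struct pager *, vm_offset_t, vm_prot_t);
    };

    static const struct pager_ops device_pager_ops =
    {
      .get = NULL,                     /* no backing-store read      */
      .put = NULL,                     /* nothing to write back      */
      .fault = device_fault,           /* map device memory directly */
    };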
+ mcsim: BTW, it was most likely me who warned about legal issues + with KAM's work. AFAIK he never managed to get the copyright assignment + done :-( + (that's not really mandatory for the gnumach work though... only + for the Hurd userspace parts) + also I'd like to point out again that the cluster_size argument + from OSF Mach was probably *not* meant for advice from application + programs, but rather was supposed to reflect the cluster size of the + filesystem in question. at least that sounds much more plausible to me... + braunr: I have no idea whay you mean by "device pager". device + memory is mapped once when the VM mapping is established; there is no + need for any fault handling... + mcsim: to be clear, I think the cluster_size parameter is mostly + orthogonal to policy... and probably not very useful at all, as ext2 + almost always uses page-sized clusters. I'm strongly advise against + bothering with it in the initial implementation + mcsim: to avoid confusion, better use a completely different name + for the policy-decided readahead size + antrik: ok + braunr: well, yes, the thesis report turned out HUGE; but the + actual work I did on the KGI port is fairly tiny (not more than a few + weeks of actual hacking... everything else was just brooding) + braunr: more importantly, it's pretty much the last (and only + non-trivial) work I did on the Hurd :-( + (also, I don't think I used the word "lazy"... my problem is not + laziness per se; but rather inability to motivate myself to do anything + not providing near-instant gratification...) + antrik: right + antrik: i shouldn't consider myself lazy either + mcsim: i agree with antrik, as i told you weeks ago + about + 21:45 < antrik> mcsim: to be clear, I think the cluster_size + parameter is mostly orthogonal to policy... and probably not very useful + at all, as ext2 almost always uses page-sized clusters. I'm strongly + advise against bothering with it + in the initial implementation + antrik: but how do you actually map device memory ? + also, strangely enough, here is the comment in dragonflys + madvise(2) + 21:45 < antrik> mcsim: to be clear, I think the cluster_size + parameter is mostly orthogonal to policy... and probably not very useful + at all, as ext2 almost always uses page-sized clusters. I'm strongly + advise against bothering with it + in the initial implementation + arg + MADV_SEQUENTIAL Causes the VM system to depress the priority of + pages immediately preceding a given page when it is faulted in. + braunr: interesting... + (about SEQUENTIAL on dragonfly) + as for mapping device memory, I just use to device_map() on the + mem device to map the physical address space into a memory object, and + then through vm_map into the driver (and sometimes application) address + space + formally, there *is* a pager involved of course (implemented + in-kernel by the mem device), but it doesn't really do anything + interesting + thinking about it, there *might* actually be page faults involved + when the address ranges are first accessed... but even then, the handling + is really trivial and not terribly interesting + antrik: it does the most interesting part, create the physical + mapping + and as trivial as it is, it requires a special interface + i'll read about device_map again + but yes, the fact that it's in-kernel is what solves the problem + here + what i'm interested in is to do it outside the kernel :) + why would you want to do that? 
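Spelled out, the sequence antrik describes above looks roughly like this (see the Device-Map documentation linked further down). It is a sketch: the physical offset and size are made up, error checking is omitted, header names are as typically found on the Hurd, and the privileged master device port is assumed to have been obtained elsewhere, e.g. via get_privileged_ports():

    #include <mach.h>
    #include <device/device.h>     /* MIG-generated device interface stubs */

    /* Map a physical address range (e.g. a framebuffer) into our own
       address space through the kernel "mem" device.  */
    static void *
    map_physical_range (mach_port_t master_device)
    {
      device_t mem;
      memory_object_t pager;
      vm_address_t addr = 0;
      const vm_size_t size = 0x00800000;            /* 8 MiB, made up */

      device_open (master_device, D_READ | D_WRITE, "mem", &mem);

      /* Wrap the physical range in a memory object ...  */
      device_map (mem, VM_PROT_READ | VM_PROT_WRITE,
                  0xfd000000 /* physical offset, made up */, size,
                  &pager, 0);

      /* ... and map that object like any other.  */
      vm_map (mach_task_self (), &addr, size, 0, TRUE /* anywhere */,
              pager, 0 /* offset */, FALSE /* copy */,
              VM_PROT_READ | VM_PROT_WRITE, VM_PROT_READ | VM_PROT_WRITE,
              VM_INHERIT_NONE);

      return (void *) addr;
    }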
+ there is no policy involved in doing an MMIO mapping + you ask for the pysical memory region you are interested in, and + that's it + whether the kernel adds the page table entries immediately or on + faults is really an implementation detail + braunr: ^ + yes it's a detail + but do we currently have the interface to make such mappings from + userspace ? + and i want to do that because i'd like as many drivers as possible + outside the kernel of course + again, the userspace driver asks the kernel to establish the + mapping (through device_map() and then vm_map() on the resulting memory + object) + hm i'm missing something + + http://www.gnu.org/software/hurd/gnumach-doc/Device-Map.html#Device-Map + <= this one ? + yes, this one + but this implies the device is implemented by the kernel + the mem device is, yes + but that's not a driver + ah + it's just the interface for doing MMIO + (well, any physical mapping... but MMIO is probably the only real + use case for that) + ok + i was thinking about completely removing the device interface from + the kernel actually + but it makes sense to have such devices there + well, in theory, specific kernel drivers can expose their own + device_map() -- but IIRC the only one that does (besides mem of course) + is maptime -- which is not a real driver either... + oh btw, i didn't know you had a blog :) + well, it would be possible to replace the device interface by + specific interfaces for the generic pseudo devices... I'm not sure how + useful that would be + there are lots of interesting stuff there + hehe... another failure ;-) + failure ? + well, when I realized that I'm speding a lot of time pondering + things, and never can get myself to actually impelemnt any of them, I had + the idea that if I write them down, there might at least be *some* good + from it... + unfortunately it turned out that I need so much effort to write + things down, that most of the time I can't get myself to do that either + :-( + i see + well it's still nice to have it + (notice that the latest entry is two years old... and I haven't + even started describing most of my central ideas :-( ) + antrik: i tried to create a blog once, and found what i wrote so + stupid i immediately removed it + hehe + actually some of my entries seem silly in retrospect as well... + but I guess that's just the way it is ;-) + :) + i'm almost sure other people would be interested in what i had to + say + BTW, I'm actually not sure whether the Mach interfaces are + sufficient to implement GEM/TTM... we would certainly need kernel support + for GART (as for any other kind IOMMU in fact); but beyond that it's not + clear to me + GEM ? TTM ? GART ? + GEM = Graphics Execution Manager. part of the "new" DRM interface, + closely tied with KMS + TTM = Translation Table Manager. does part of the background work + for most of the GEM drivers + "The Graphics Execution Manager (GEM) is a computer software + system developed by Intel to do memory management for device drivers for + graphics chipsets." hmm + (in fact it was originally meant to provide the actual interface; + but the Inter folks decided that it's not useful for their UMA graphics) + GART = Graphics Aperture + kind of an IOMMU for graphics cards + allowing the graphics card to work with virtual mappings of main + memory + (i.e. allowing safe DMA) + ok + all this graphics stuff looks so complex :/ + it is + I have a whole big chapter on that in my thesis... 
and I'm not + even sure I got everything right + what is nvidia using/doing (except for getting the finger) ? + flushing out all the details for KMS, GEM etc. took the developers + like two years (even longer if counting the history of TTM) + Nvidia's proprietary stuff uses a completely own kernel interface, + which is of course not exposed or docuemented in any way... but I guess + it's actually similar in what it does) + ok + (you could ask the nouveau guys if you are truly + interested... they are doing most of their reverse engineering at the + kernel interface level) + it seems graphics have very special needs, and a lot of them + and the interfaces are changing often + so it's not that much interesting currently + it just means we'll probably have to change the mach interface too + like you said + so the answer to my question, which was something like "do mach + external pagers only implement files ?", is likely yes + well, KMS/GEM had reached some stability; but now there are + further changes ahead with the embedded folks coming in with all their + dedicated hardware, calling for unified buffer management across the + whole pipeline (from capture to output) + and yes: graphics hardware tends to be much more complex regarding + the interface than any other hardware. that's because it's a combination + of actual I/O (like most other devices) with a very powerful coprocessor + and the coprocessor part is pretty much unique amongst peripherial + devices + (actually, the I/O part is also much more complex than most other + hardware... but that alone would only require a more complex driver, not + special interfaces) + embedded hardware makes it more interesting in that the I/O + part(s) are separate from the coprocessor ones; and that there are often + several separate specialised ones of each... the DRM/KMS stuff is not + prepared to deal with this + v4l over time has evolved to cover such things; but it's not + really the right place to implement graphics drivers... which is why + there are not efforts to unify these frameworks. funny times... + + +## IRC, freenode, #hurd, 2012-07-03 + + mcsim: vm_for_every_page should be static + braunr: ok + mcsim: see http://gcc.gnu.org/onlinedocs/gcc/Inline.html + and it looks big enough that you shouldn't make it inline + let the compiler decide for you (which is possible only if the + function is static) + (otherwise a global symbol needs to exist) + mcsim: i don't know where you copied that comment from, but you + should review the description of the vm_advice call in mach.Defs + braunr: I see + braunr: It was vm_inherit :) + mcsim: why isn't NORMAL defined in vm_advise.h ? + mcsim: i figured actually ;) + braunr: I was going to do it later when. + mcsim: for more info on inline, see + http://www.kernel.org/doc/Documentation/CodingStyle + arg that's an old one + braunr: I know that I do not follow coding style + mcsim: this one is about linux :p + mcsim: http://lxr.linux.no/linux/Documentation/CodingStyle should + have it + mcsim: "Chapter 15: The inline disease" + I was going to fix it later during refactoring when I'll merge + mplaneta/gsoc12/working to mplaneta/gsoc12/master + be sure not to forget :p + and the best not to forget is to do it asap + +way + As to inline. I thought that even if I specify function as inline + gcc makes final decision about it. + There was a specifier that made function always inline, AFAIR. 
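The specifier mcsim has in mind is presumably GCC's `always_inline` attribute; a two-function illustration of the difference from the plain hint:

    /* `static inline' is only a hint: the compiler decides (and can only
       decide freely because the symbol is not global).  */
    static inline int
    next_page (int p)
    {
      return p + 1;
    }

    /* The attribute forces inlining (GCC reports an error if it cannot).  */
    static inline __attribute__ ((always_inline)) int
    prev_page (int p)
    {
      return p - 1;
    }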
+ gcc can force a function not to be inline, yes + but inline is still considered as a strong hint + + +## IRC, freenode, #hurd, 2012-07-05 + + braunr: hello. You've said that pager has to supply 2 values to + kernel to give it an advice how execute page fault. These two values + should be number of pages before and after the page where fault + occurred. But for sequential policy number of pager before makes no + sense. For random policy too. For normal policy it would be sane to make + readahead symmetric. Probably it would be sane to make pager supply + cluster_size (if it is necessary to supply any) that w + *that will be advice for kernel of least sane value? And maximal + value will be f(free_memory, map_entry_size)? + mcsim1: I doubt symmetric readahead would be a good default + policy... while it's hard to estimate an optimum over all typical use + cases, I'm pretty sure most situtations will benefit almost exclusively + from reading following pages, not preceeding ones + I'm not even sure it's useful to read preceding pages at all in + the default policy -- the use cases are probably so rare that the penalty + in all other use cases is not justified. I might be wrong on that + though... + I wonder how other systems handle that + antrik: if there is a mismatch between pages and the underlying + store, like why changing small bits of data on an ssd is slow? + mcsim1: i don't see why not + antrik: netbsd reads a few pages before too + actually, what netbsd does vary on the version, some only mapped + in resident pages, later versions started asynchronous transfers in the + hope those pages would be there + LarstiQ: not sure what you are trying to say + in linux : + 321 * MADV_NORMAL - the default behavior is to read clusters. + This + 322 * results in some read-ahead and read-behind. + not sure if it's actually what the implementation does + well, right -- it's probably always useful to read whole clusters + at a time, especially if they are the same size as pages... that doesn't + mean it always reads preceding pages; only if the read is in the middle + of the cluster AIUI + antrik: basically what braunr just pasted + and in most cases, we will want to read some *following* clusters + as well, but probably not preceding ones + * LarstiQ nods + antrik: the default policy is usually rather sequential + here are the numbers for netbsd + 166 static struct uvm_advice uvmadvice[] = { + 167 { MADV_NORMAL, 3, 4 }, + 168 { MADV_RANDOM, 0, 0 }, + 169 { MADV_SEQUENTIAL, 8, 7}, + 170 }; + struct uvm_advice { + int advice; + int nback; + int nforw; + }; + surprising isn't it ? + they may suggest sequential may be backwards too + makes sense + braunr: what are these numbers? pages? + yes + braunr: I suspect the idea behind SEQUENTIAL is that with typical + sequential access patterns, you will start at one end of the file, and + then go towards the other end -- so the extra clusters in the "wrong" + direction do not actually come into play + only situation where some extra clusters are actually read is when + you start in the middle of a file, and thus do not know yet in which + direction the sequential read will go... + yes, there are similar comments in the linux code + mcsim1: so having before and after numbers seems both + straightforward and in par with other implementations + I'm still surprised about the almost symmetrical policy for NORMAL + though + BTW, is it common to use heuristics for automatically recognizing + random and sequential patterns in the absence of explicit madise? 
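To make the NetBSD numbers above concrete, a small sketch of how nback/nforw translate into a read window around the faulting offset (hypothetical helper; clamping to the start and end of the object is left out):

    #include <mach.h>

    /* With MADV_NORMAL (3 back, 4 forward), a fault at offset F yields the
       8-page window [F - 3 pages, F + 5 pages).  */
    static void
    readahead_window (vm_offset_t fault_offset, int nback, int nforw,
                      vm_offset_t *start, vm_offset_t *end)
    {
      *start = fault_offset - (vm_offset_t) nback * vm_page_size;
      *end = fault_offset + ((vm_offset_t) nforw + 1) * vm_page_size;
    }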
+    i don't know
+    netbsd doesn't use any, linux seems to have different behaviours
+    for anonymous and file memory
+    when KAM was working on this stuff, someone suggested that...
+    there is a file_ra_state struct in linux, for per file read-ahead
+    policy
+    now the structure is of course per file system, since they all use
+    the same address
+    (which is why i wanted it to be per pager in the first place)
+    mcsim1: as I said before, it might be useful for the pager to
+    supply cluster size, if it's different than page size. but right now I
+    don't think this is something worth bothering with...
+    I seriously doubt it would be useful for the pager to supply any
+    other kind of policy
+    braunr: I don't understand your remark about using the same
+    address...
+    braunr: pre-mapping seems the obvious way to implement readahead
+    policy
+    err... per-mapping
+    the ra_state (read ahead state) isn't the policy
+    the policy is per mapping, parts of the implementation of the
+    policy is per file system
+    braunr: How do you look at following implementation of NORMAL
+    policy: We have fault page that is current. Than we have maximal size of
+    readahead block. First we find first absent pages before and after
+    current. Than we try to fit block that will be readahead into this
+    range. Here could be following situations: in range RBS/2 (RBS -- size of
+    readahead block) there is no any page, so readahead will be symmetric; if
+    current page is first absent page than all
+    RBS block will consist of pages that are after current; on the
+    contrary if current page is last absent than readahead will go backwards.
+    Additionally if current page is approximately in the middle of the
+    range we can decrease RBS, supposing that access is random.
+    mcsim1: i think your gsoc project is about readahead, we're in
+    july, and you need to get the job done
+    mcsim1: grab one policy that works, pages before and after are
+    good enough
+    use sane default values, let the pagers decide if they want
+    something else
+    and concentrate on the real work now
+    braunr: I still don't see why pagers should mess with that... only
+    complicates matters IMHO
+    antrik: probably, since they almost all use the default
+    implementation
+    mcsim1: just use sane values inside the kernel :p
+    this simplifies things by only adding the new vm_advise call and
+    not change the existing external pager interface
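In summary, the "sane default values inside the kernel" that braunr recommends would look much like the NetBSD table quoted earlier; a closing sketch, with invented constant names and illustrative numbers:

    /* Hypothetical built-in defaults, in the shape of NetBSD's uvmadvice[]:
       applications pick the policy per mapping via vm_advise(), pagers may
       later be allowed to raise or lower the maxima.  */
    enum vm_advice
    {
      VM_ADVICE_NORMAL,
      VM_ADVICE_RANDOM,
      VM_ADVICE_SEQUENTIAL
    };

    static const struct
    {
      int pages_before;
      int pages_after;
    } vm_advice_defaults[] =
    {
      [VM_ADVICE_NORMAL]     = { 3, 4 },
      [VM_ADVICE_RANDOM]     = { 0, 0 },
      [VM_ADVICE_SEQUENTIAL] = { 0, 15 },
    };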