[[!meta copyright="Copyright © 2012, 2013 Free Software Foundation, Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
id="license" text="Permission is granted to copy, distribute and/or modify this
document under the terms of the GNU Free Documentation License, Version 1.2 or
any later version published by the Free Software Foundation; with no Invariant
Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the license
is included in the section entitled
[[GNU Free Documentation License|/fdl]]."]]"""]]

[[!tag open_issue_mig]]

Every [[message]] has an ID field, which is defined in the [[RPC]] `*.defs`
files.

[[!toc]]


# IRC, freenode, #hurd, 2012-07-12

[Extending an existing RPC.]

    create a new call, either with a new variant of vm_statistics_t, or a new
      structure with only the extra fields
    that seems cleaner indeed
    but using different names for the same thing seems so tedious and
      unnecessary :/
    it's extra effort, but it pays off
    i agree, it's the right way to do it
    but this implies some kind of versioning
    which is currently more or less done using mig subsystem numbers, and
      skipping obsolete calls in rpc definition files
    and a subsystem is like 100 calls (200 with the replies)
    at some point we should recycle them
    or use truly huge ranges
    braunr: that's not something we need to worry about until we get there --
      which is not likely to happen any time soon :-)
    "There is no more room in this interface for additional calls." in
      mach.defs
    i'll use the mach4.defs file
    but it really makes no sense at all to do such things just because we
      want to be compatible with 20-year-old software nobody uses any more
    who cares about the skips used to keep us from using the old mach 2.5
      interface ..
    (and this arbitrary limit of 100 is really ugly too)
    braunr: I agree that we don't want to be compatible with 20-year-old
      software. just Hurd stuff from the last few years is perfectly fine.
    braunr, antrik: I agree with the approach of using a new RPC/data
      structure for incompatible changes, and I also agree that recycling RPC
      slots that have been unused (skipped) for some years is fine.
    tschwinge: well, we probably shouldn't just reuse them arbitrarily; but
      rather do a mass purge if the need really arises... it would be
      confusing otherwise IMHO
    antrik: What do you understand by doing a mass purge? My idea indeed was
      to replace arbitrary "skip"s by new RPC definitions.
    a purge would be good
    along with a mig change to make subsystem and routine identifiers larger,
      i guess
    16-bit widths should do
    But what do you understand by a "purge" in this context.
    removing all the skips
    But that moves the RPC ids following after?
    yes
    that's why i think it's not a good thing, unless we also change the
      numbering
    ... which is an incompatible change for all clients.
    yes
    OK, so you'd propose a new system and deprecate the current one.
    not really new
    just larger numbers
    we must acknowledge interfaces change with time
    Yes, that's "new" enough. ;-) New in the sense that all clients use new
      interfaces.
    that's enough to completely break compatibility, yes
    at least binary
    Yes. However, I don't see an urgent need for that, do you? Why not just
      recycle a skip that has been unused for a decade?
    i don't think we should care much about that, as the only real issue i
      can see is when upgrading a system
    i don't say we shouldn't do that
    actually, my current patch does exactly this
    OK.
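[For illustration, a minimal `.defs` sketch of the numbering scheme discussed
above; the subsystem name `foo` and base ID 38000 are invented for this
example.  Each `routine` takes the next message ID, a `skip;` burns an ID so
the numbers of later routines stay stable, and the reply ID is the request ID
plus 100.]

    /* Hypothetical subsystem: the name and base ID are made up.  */
    subsystem foo 38000;

    #include <mach/std_types.defs>

    /* Request ID 38000, reply ID 38100.  */
    routine foo_get_value (
            server  : mach_port_t;
        out value   : integer_t);

    /* ID 38001: an obsolete call once lived here; skipping it keeps
       the IDs of the following routines stable.  */
    skip;

    /* Request ID 38002, reply ID 38102.  */
    routine foo_get_other (
            server  : mach_port_t;
        out other   : integer_t);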
    :-)
    purging is another topic
    but purging without making numbers larger seems a bit pointless
    as the point is allowing developers to change interfaces without breaking
      short time compability
    compatibility*
    also, interfaces, even stable, can have more than 100 calls
    (at the same time, i don't think there would ever be many interfaces, so
      using 16-bit integers for the subsystems and the calls should really be
      fine, and cleanly aligned in memory)
    tschwinge: you are right, it was a brain fart :-) no purge obviously
    but I think we should only start with filling skips once all IDs in the
      subsystem are exhausted
    braunr: the 100 is not fixed in MIG IIRC; it's a definition we make
      somewhere
    BTW, using multiple subsystems for "overflowing" interfaces is a bit
      ugly, but not to bad I'd say... so I wouldn't really consider this a
      major problem
    err... not too bad
    especially since Hurd subsystems usually are spaced 1000 apart, so there
      are some "spare" blocks between them anyways
    hm
    i'm almost sure it's related to mig
    that's how the reply id is computed
    of course it is related to MIG... but I have a vague recollection that
      this constant is not fixed in the MIG code, but rather supplied
      somewhere. might be wrong though :-)
    you mean like the 101-200 skip block in hurd/tioctl.defs?
    pinotree: exactly
    these are reserved for reply message IDs
    at 200 a new request message block begins...
    server.c: fprintf(file, "\tOutP->Head.msgh_id = InP->msgh_id + 100;\n");
    it's not even a define in the mig code :/
    meaning that in the space of a hurd subsystem there are max 500 effective
      rpc's?
    actually, ioctls are rather special, as the numbers are computed from the
      ioctl properties...
    braunr: :-(
    pinotree: how do you get this value ?
    braunr: 1000/2? :)
    ?
    why not 20000/3 ?
    pinotree: yes where do they come from ?
    ah ok sorry
    braunr: 1000 is the space of each subsystem, and each rpc takes an id +
      its replu
    *reply
    right
    500 is fine
    better than 100
    but still, 64k is way better and not harder to do
    (hey, i'm the noob in this :) )
    braunr: it's just how "we" lay out subsystems... nothing fixed about it
      really; we could just as well define new subsystems with 10000 or
      whatever if we wanted
    yes but we still have to consider this mig limit
    there are one or two odd exceptions though, with "related" subsystems
      starting at ??500...
    braunr: right. it's not pretty -- but I wouldn't consider it enough of a
      problem to invest major effort in changing this...
    agreed
    at least not while our interfaces don't change often
    which shouldn't happen any time soon
    Hmm, I also remember seeing some emails about indeed versioning RPCs (by
      Roland, I think). I can try to look that up if there's interest.
    i'm only adding a cached pages count you know :)
    (well actually, this is now a vm_stats call that can replace
      vm_statistics, and uses flavors similar to task_info)
    braunr: I don't think introducing "flavors" is a good idea
    i just did it the way others calls were done
    other*
    would you prefer a larger structure with append-only upgrades ?
    I prefer introducing new calls. it avoids an unnecessary layer of
      indirection
    flavors are not exactly RPC-over-RPC, but definitely going down that
      road...
    right
    as fetching VM statistics is not performance-critical, I would suggest
      adding a new call with only the extra stats you are introducing. then
      if someone runs an old kernel not implementing that call, the values
      are simply left blank in the caller.
    makes backward-compatibility a no-brainer
    (the alternative is a new call fetching both the traditional and the new
      stats -- but this is not necessary here, as an extra call shouldn't
      hurt)
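[For contrast, a sketch of the "larger structure with append-only upgrades"
idea mentioned above: a statistics structure where new fields are only ever
appended, with reserved slots left for future growth.  The structure and
field names are invented for illustration, not actual gnumach code.]

    #include <mach.h>   /* for integer_t */

    /* Hypothetical extensible statistics structure: fields may only
       ever be appended, so old clients still find the fields they know
       at the same offsets, and the reserved slots let new statistics be
       added without changing the structure's size on the wire.  */
    typedef struct vm_stats_data
    {
      integer_t pagesize;
      integer_t free_count;
      integer_t cached_objects;   /* appended in a later revision */
      integer_t cached_pages;     /* appended in a later revision */
      integer_t reserved[8];      /* room for future additions */
    } vm_stats_data_t;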
    antrik: all right


## IRC, freenode, #hurd, 2012-07-13

    so, should i replace old, unused mach.defs RPCs with mine, or add them to
      e.g. mach4.defs ?
    braunr: hm... actually I wonder whether we shouldn't add a gnumach.defs
      -- after all, it's neither old mach nor mach4 interfaces...
    true
    good idea
    i'll do just that
    hm, doesn't adding a new interface file require some handling in glibc ?
    simply rebuild it
    youpi: no i mean
    youpi: glibc knows about mach.defs and mach4.defs, but i guess we should
      add something so that it knows about gnumach.defs
    ah
    probably, yes
    ok
    i don't understand why these files are part of the glibc headers
    are they?
    (i mean mach_interface.h and mach4.h)
    for example
    youpi: the interface i'll add is vm_cache_statistics(task,
      &cached_objects, &cached_pages)
    if it's ok i'll commit directly into the gnumach repository
    shouldn't it rather be an int array, to make it extensible?
    like other stat functions of gnumach
    antrik was against doing that
    well, he was against using flavors
    maybe we could have an extensible array
    yes, and require additions at the end of the structure


## IRC, freenode, #hurd, 2012-07-14

    braunr: there are two reasons why the files are part of glibc. one is
      that glibc itself uses them, so it would be painful to handle
      otherwise. the other is that libc is traditionally responsible for
      providing the system interface... having said that, I'm not sure we
      should stick with that :-)
    antrik: what do you think about having a larger structure with reserved
      fields ?
    sounds a lot better than flavors, doesn't it ?
    antrik: it's in debian, yes
    grmbl, adding a new interface just for a single call is really tedious
    i'll just add it to mach4
    braunr: well, it's not unlikely there will be other new calls in the
      future... but I guess using mach4.defs isn't too bad
    braunr: as for reserved fields, I guess that is somewhat better than
      flavors; but I can't say I exactly like the idea either...
    antrik: there is room in mach4 ;p


## IRC, freenode, #hurd, 2012-07-23

    I'm not sure yet whether I'm happy with adding the RPC to mach4.defs.
    that's the only question
    yes
    (well, no, not only)
    as i now have a better view of what's involved, it may make sense to
      create a gnumach.defs file
    tschwinge: all right
    i'll create a gnumach.defs file
    braunr: Well, if there is general agreement that this is the way to go.
    braunr: In that case, I guess there's no point in being more fine-grained
      -- gnumach-vm.defs or similar -- that'd probably be over-engineering.
    If the glibc bits for libmachuser are not straight-forward, I can help
      with that of course.
    ok


## IRC, freenode, #hurd, 2012-07-27

    tschwinge: i've pushed a patch on the gnumach page_cache branch that adds
      a gnumach.defs interface
    tschwinge: if you think it's ok, i'll rewrite a formal changelog so it
      can be applied


## IRC, freenode, #hurd, 2012-09-30

    youpi: hey, didn't see you merged the page cache stats branch :)


## IRC, freenode, #hurd, 2013-01-12

    youpi: the hurd master-vm_cache_stats branch (which makes vmstat display
      some vm cache properties) is ready to be pulled

[[open_issues/mach_tasks_memory_usage]].
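[A sketch of how a client such as vmstat might use the new call, assuming the
MIG-generated C stub matches the vm_cache_statistics(task, &cached_objects,
&cached_pages) shape quoted above; the exact prototype is an assumption.]

    #include <stdio.h>
    #include <mach.h>

    /* Assumed prototype of the MIG-generated user stub; the real one
       may differ in parameter types and names.  */
    extern kern_return_t vm_cache_statistics (mach_port_t task,
                                              integer_t *cached_objects,
                                              integer_t *cached_pages);

    int
    main (void)
    {
      integer_t cached_objects = 0, cached_pages = 0;
      kern_return_t kr;

      kr = vm_cache_statistics (mach_task_self (),
                                &cached_objects, &cached_pages);
      if (kr != KERN_SUCCESS)
        /* On an old kernel lacking the call, simply leave the values
           blank, as suggested in the discussion above.  */
        fprintf (stderr, "vm_cache_statistics: error %d\n", (int) kr);
      else
        printf ("cached objects: %d, cached pages: %d\n",
                (int) cached_objects, (int) cached_pages);
      return 0;
    }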
    tschwinge: i've updated the procfs server on darnassus, you can now see
      the amount of physical memory used by the vm cache with top/htop (not
      vmstat yet)


### IRC, freenode, #hurd, 2013-01-13

    braunr: I'm not sure I understand what I'm supposed to do with the page
      cache statistics branch
    youpi: apply it ?
    can't you already do that?
    well, i don't consider myself a maintainer
    then submit it to the list for review
    hm ok
    youpi: ok, next time, i'll commit such changes directly


# Subsystems


## IRC, freenode, #hurd, 2013-09-03

    anything I need to be aware of if I want to add a new subsystem?
    is there a convention for choosing the subsystem id?
    a subsystem takes 200 IDs
    grep other subsystems in mach and the hurd to avoid collisions of course
    yes i know that ;)
    :)
    i've noticed the _notify subsystems being x+500, should I follow that?
    100 for rpc + 100 for their replies?
    teythoon: yes
    pinotree: yes
    ok
    we should really work on mig...
    ...
    :)
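[A sketch of the ID layout convention described above; the subsystem names
and base IDs are invented.  Each subsystem occupies 200 IDs (100 requests
followed by their replies at request + 100), Hurd subsystems are
conventionally spaced 1000 apart, and a related `_notify` subsystem starts at
the base + 500.]

    /* Hypothetical layout; in practice each subsystem lives in its own
       .defs file, they are shown together only to illustrate the IDs.  */
    subsystem bar 39000;         /* requests 39000-39099,
                                    replies 39100-39199 */

    subsystem bar_notify 39500;  /* requests 39500-39599,
                                    replies 39600-39699 */

    /* IDs 39200-39499 and 39700-39999 remain spare; the next unrelated
       subsystem would conventionally start at 40000.  */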