[[!meta copyright="Copyright © 2010, 2011, 2012, 2013 Free Software Foundation, Inc."]]

[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable id="license" text="Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled [[GNU Free Documentation License|/fdl]]."]]"""]]

[[!tag open_issue_glibc]]

There are a lot of reports about this issue, but no thorough analysis.

# Short Timeouts

## `elinks`

IRC, unknown channel, unknown date:

    This is related to ELinks... I've looked at the select() implementation
    for the Hurd in glibc, and it seems that giving it a short timeout could
    cause it not to report that file descriptors are ready. It sends a
    request to the Mach port of each file descriptor and then waits for
    responses from the servers. Even if the file descriptors have data for
    reading or are ready for writing, the server processes might not respond
    immediately. So if I want ELinks to check which file descriptors are
    ready, how long should the timeout be in order to ensure that all
    servers can respond in time? Or am I just imagining this problem?

## [[dbus]]

## IRC

### IRC, freenode, #hurd, 2012-01-31

    don't you find vim extremely slow lately ?
    (and not because of cpu usage but rather unnecessary sleeps)
    yes.
    wasn't there a discussion to add a minimum timeout to mach_msg for select() or something like that during the past months ?
    there was, and it was added
    that could be it
    I don't want to drop it though, some apps really need it
    as a debian patch only iirc ?
    yes
    ok
    if i'm right, the proper solution was to fix remote servers instead of client calls
    (no drop, unless the actual bug gets fixed of course)
    so i'm guessing it's just a hack in between
    not only
    with a timeout of zero, mach will just give *no* time for the servers to give an answer
    that's because the timeout is part of the client call
    so the protocol has to be rethought, both server/client side
    a suggested solution was to make it a parameter
    i mean, part of the message
    not a mach_msg parameter
    OTOH the servers should probably not be trusted to enforce the timeout.
    why ?
    they're not necessarily trusted.
    (but then again, that's not the only circumstance where that's a problem)
    there is a proposed solution for that too (trust root and self servers only by default)
    I'm not sure they're particularly easy to identify in the general case
    "they" ?
    the solutions you mean ? or the servers ?
    jkoenig: you can't trust the servers in general to provide an answer, timeout or not
    yes
    the root/self servers.
    ah
    jkoenig: you can stat the actual node before dereferencing the translator
    could they not report FD activity asynchronously to the message port? libc would cache the state
    I don't understand what you mean
    anyway, really making the timeout part of the message is not a problem
    10:10 < youpi> jkoenig: you can't trust the servers in general to provide an answer, timeout or not
    we already trust everything (e.g. read() ) into providing an answer immediately
    i don't see why
    braunr: put sleep(1) in S_io_read()
    it'll not give you an immediate answer, O_NODELAY being set or not
    well sleep is evil, but let's just say the server thread blocks
    ok
    well fix the server
    so we agree ?
    in the current security model, we trust the server to achieve the timeout
    yes
    and jkoenig's remark is more global than just select()
    that's why we must make sure we're contacting trusted servers by default
    it affects read() too
    sure
    so there's no reason not to fix select()
    that's the important point
    but this doesn't mean we shouldn't pass the timeout to the server and expect it to handle it correctly
    we keep raising issues with things, and not achieving anything, in the Hurd
    if it doesn't, then it's a bug, like in any other kernel type
    I'm not the one to convince :)
    eh, some would say it's one of the goals :)
    who's to be convinced then ?
    jkoenig: who raised the issue
    ah well, see the irc log :)
    not that I'm objecting to any patch, mind you :-)
    i didn't understand it that way
    if you can't trust the servers to act properly, it's similar to not trusting linux fs code
    no, the difference is that servers can be non-root
    while on linux they can't
    again, trust root and self
    non-root fuse mounts are not followed by default
    as with fuse
    that's still to be written
    yes
    and as I said, you can stat the actual node and then dereference the translator afterwards
    but before writing anything, we'd better agree on the solution :)
    which, again, "just" needs to be written
    err... adding a timeout to mach_msg()? that's just wrong
    (unless I completely misunderstood what this discussion was about...)

#### IRC, freenode, #hurd, 2012-02-04

    this is confirmed: the select hack patch hurts vim performance a lot
    I'll use program_invocation_short_name to make the patch even more ugly
    (of course, we really need to fix select somehow)
    could it (also) be that vim uses select() somehow "badly"?
    fsvo "badly", possibly, but still
    Could the select() stuff be the reason for a ten times slower ethernet too, e.g. scp and apt-get?
    i haven't found scp or apt-get slower myself, unlike vim
    see strace: scp does not use select
    (I haven't checked apt yet)

### IRC, freenode, #hurd, 2012-02-14

    on another subject, I'm wondering how to correctly implement select/poll with a timeout on a multiserver system :/
    i guess a timeout of 0 should imply a non blocking round-trip to servers only
    oh good, the timeout is already part of the io_select call

### IRC, freenode, #hurdfr, 2012-02-22

    the big problem with our implementation is that the select timeout is a client-side parameter
    a parameter passed directly to mach_msg
    so if you set a timeout of 0, chances are that mach_msg returns before an RPC can even complete (i.e. a full client-server round trip)
    and so when the timeout is 0 for a non-blocking call, well, you don't block, but you don't get your events either ..
    maybe lowering the timeout from 10 ms to 10 µs would improve the situation, because 10 ms is a bit much :)
    that's the historical unix system interval timer
    and mach is not preemptible, so it's not doable as things stand
    that said, it's not completely related
    well, actually it is: we'd need something similar to linux's high-res timers
    that is, either high-resolution timers, or an easily reprogrammable timer
    currently only the 8254 is programmed, and to ensure roughly correct scheduling, it is programmed once, at 10 ms, and that's it
    so yes, specifying 1 ms or 1 µs won't change anything about the interval needed to determine that the timer has expired

### IRC, freenode, #hurd, 2012-02-27

    braunr: extremely dirty hack
    I don't even want to detail :)
    oh
    does it affect vim only ? or all select users ?
    we've mostly seen it with vim
    but possibly fakeroot has some issues too
    it's very unlikely that only vim has the issue :)
    i mean, is it that dirty to switch behaviour depending on the calling program ?
    not all select users
    ew :)
    just those which do select({0,0})
    well sure
    braunr: you guessed right :)
    thanks
    anyway it's probably a good thing to do currently
    vim was getting me so mad i was using sshfs lately
    it's better than nothing yes

### IRC, freenode, #hurd, 2012-07-21

    damn, select is actually completely misdesigned :/
    iiuc, it makes servers *block*, in turn :/
    can't be right
    ok i understand it better
    yes, timeouts should be passed along with the other parameters to correctly implement non blocking select
    (or the round-trip io_select should only ask for notification requests instead of making a server thread block, but this would require even more work)
    adding the timeout in the io_select call should be easy enough for whoever wants to take over a not-too-complicated-but-not-one-liner-either task :)
    braunr: why is a blocking server thread a problem?
    antrik: handling the timeout at client side while server threads block is the problem
    the timeout must be handled along with blocking obviously
    so you either do it at server side when async ipc is available, which is the case here
    or request notifications (synchronously) and block at client side, waiting for those notifications
    braunr: are you saying the client has a receive timeout, but when it elapses, the server thread keeps on blocking?...
    antrik: no
    i'm referring to the non-blocking select issue we have
    antrik: the client doesn't block in this case, whereas the servers do
    which obviously doesn't work ..
    see http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=79358
    this is the reason why vim (and probably others) are slow on the hurd, while not consuming any cpu
    the current work around is that whenever a non-blocking select is done, it's transformed into a blocking select with the smallest possible timeout
    braunr: well, note that the issue only began after fixing some other select issue...
    it was fine before
    apparently, the issue was raised in 2000
    also, note that there is a delay between sending the io_select requests and blocking on the replies
    when machines were slow, this delay could almost guarantee a preemption between these steps, making the servers reply soon enough even for a non blocking select
    the problem occurs when sending all the requests and checking for replies is done before the servers have a chance to send the reply
    braunr: I don't know what issue was raised in 2000, but I do know that vim worked perfectly fine until last year or so. then some select fix was introduced, which in turn broke vim
    antrik: could be the timeout rounding, Aug 2 2010
    hum but, the problem wasn't with vim
    vim does still work fine (in fact, glibc is patched to check some well known process names and selectively fix the timeout)
    which is why vim is fast and view isn't
    the problem was with other services apparently
    and in order to fix them, that workaround had to be introduced
    i think it has nothing to do with the timeout rounding
    it must be the time when youpi added the patch to the debian package
    braunr: the problem is that with the patch changing the timeout rounding, vim got extremely slow. this is why the ugly hacky exception was added later...
    after reading the report, I agree that the timeout needs to be handled by the server. at least the timeout=0 case.
    vim often uses 0-time selects to check whether there's input
    client-side handling might still be OK for other timeout settings I guess
    I'm a bit ambivalent about that
    I tend to agree with Neal though: it really doesn't make much sense to have a client-side watchdog timer for this specific call, while for all other ones we trust the servers not to block...
    or perhaps not. for standard sync I/O, clients should expect that an operation could take long (though not forever); but they might use select() precisely to avoid long delays in I/O...
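The stopgap mentioned above, rewriting a non-blocking `select` into a blocking one with the smallest possible timeout, can be sketched as follows. This is an illustration of the idea only, not the actual Debian eglibc patch (which also special-cases certain programs by name); `MIN_TIMEOUT_US` is an assumed value of one 1 ms tick.

```c
/* Sketch of the client-side stopgap: a {0, 0} poll timeout is rewritten
 * into a minimal blocking wait, so the servers get at least one tick to
 * reply.  Illustration only, not the actual Debian patch; the 1 ms floor
 * is an assumption. */
#include <stddef.h>
#include <sys/time.h>

#define MIN_TIMEOUT_US 1000  /* assumed floor: one 1 ms tick */

static void adjust_timeout(struct timeval *tv)
{
    /* NULL means "block forever"; leave that alone. */
    if (tv != NULL && tv->tv_sec == 0 && tv->tv_usec == 0)
        tv->tv_usec = MIN_TIMEOUT_US;
}
```

The cost of this hack is exactly what the log complains about: every poll-style `select({0,0})` now sleeps for a tick, which is where the vim latency comes from.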
    so it makes some sense to make sure that select() really doesn't delay because of a busy server
    OTOH, unless the server is actually broken (in which case anything could happen), a 0-time select should never actually block for an extended period of time... I guess it's not wrong to trust the servers on that
    pinotree: hm... that might explain a certain issue I *was* observing with Vim on Hurd -- though I never really thought about it being an actual bug, as opposed to just general Hurd sluggishness... but it makes sense now
    antrik: http://patch-tracker.debian.org/patch/series/view/eglibc/2.13-34/hurd-i386/local-select.diff
    so I guess we all agree that moving the select timeout to the server is probably the most reasonable approach...
    braunr: BTW, I wouldn't really consider the sync vs. async IPC cases any different. the client blocks waiting for the server to reply either way...
    the only difference is that in the sync IPC case, the server might want to take some special precaution so it doesn't have to block until the client is ready to receive the reply
    but that's optional and not really select-specific I'd say
    (I'd say the only sane approach with sync IPC is probably for the server never to wait -- if the client fails to set up for receiving the reply in time, it loses...)
    and with the receive buffer approach in Viengoos, this can be done really easily and nicely :-)

#### IRC, freenode, #hurd, 2012-07-22

    antrik: you can't block in servers with sync ipc
    so in this case, "select" becomes a request for notifications
    whereas with async ipc, you can
    so it's less efficient to make a full round trip just to ask for requests when you can just do async requests (doing the actual blocking) and wait for any reply after
    braunr: I don't understand. why can't you block in servers with async IPC?
    braunr: err... with sync IPC I mean
    antrik: because select operates on more than one fd
    braunr: and what does that have to do with sync vs. async IPC?...
    maybe you are thinking of endpoints here, which is a whole different story
    traditional L4 has IPC ports bound to specific threads; so implementing select requires a separate client thread for each server. but that's not mandatory for sync IPC. Viengoos has endpoints not bound to threads
    antrik: i don't know what "endpoint" means here
    but, you can't use sync IPC to implement select on multiple fds (and thus possibly multiple servers) by blocking in the servers
    you'd block in the first and completely miss the others
    braunr: I still don't see why... or why async IPC would change anything in that regard
    antrik: well, you call select on 3 fds, each implemented by different servers
    antrik: you call a sync select on the first fd, obviously you'll block there
    antrik: if it's async, you don't block, you just send the requests, and wait for any reply
    like we do
    braunr: I think you might be confused about the meaning of sync IPC. it doesn't in any way imply that after sending an RPC request you have to block on some particular reply...
    antrik: what does sync mean then?
    braunr: you can have any number of threads listening for replies from the various servers (if using an L4-like model); or even a single thread, if you have endpoints that can listen on replies from different sources (which was pretty much the central concern in the Viengoos IPC design AIUI)
    antrik: I agree with your "so it makes some sense to make sure that select() really doesn't delay because of a busy server" (for blocking select) and "OTOH, unless the server is actually broken (in which case anything could happen), a 0-time select should never actually block" (for non-blocking select)
    youpi: regarding the select, I was thinking out loud; the former statement was mostly cancelled by my later conclusions...
    and I'm not sure the latter statement was quite clear
    do you know when it was?
    after rethinking it, I finally concluded that it's probably *not* a problem to rely on the server to observe the timeout.
    if it's really busy, it might take longer than the designated timeout (especially if the timeout is 0, hehe) -- but I don't think this is a problem
    and if it doesn't observe the timeout because it's broken/malicious, that's no more problematic than any other RPC the server doesn't handle as expected
    ok
    did somebody write down the conclusion "let's make the select timeout handled at server side" somewhere?
    youpi: well, neal already said that in a followup to the select issue Debian bug... and after some consideration, I completely agree with his reasoning (as does braunr)

#### IRC, freenode, #hurd, 2012-07-23

    antrik: i meant sync in the most common meaning, yes, the client blocking on the reply
    braunr: I think you are confusing sync IPC with sync I/O ;-)
    braunr: by that definition, the vast majority of Hurd IPC would be sync... but that's obviously not the case
    synchronous IPC means that send and receive happen at the same time -- nothing more, nothing less. that's why it's called synchronous
    antrik: yes
    antrik: so it means the client can't continue unless he actually receives
    in a pure sync model such as L4 or EROS, this means either the sender or the receiver has to block, so synchronisation can happen. which one is server and which one is client is completely irrelevant here -- this is about individual message transfer, not any RPC model on top of it
    in the case of select, i assume sender == client
    in Viengoos, the IPC is synchronous in the sense that transfer from the send buffer to the receive buffer happens at the same time; but it's asynchronous in the sense that the receiver doesn't necessarily have to be actively waiting for the incoming message
    ok, i was talking about a pure sync model
    (though in most cases it will still do so...)
    braunr: BTW, in the case of select, the sender is *not* the client.
    the reply is relevant here, not the request -- so the client is the receiver
    (the select request is boring)
    sorry, i don't understand, you seem to dismiss the select request for no valid reason
    I still don't see how sync vs. async affects the select reply receive though... blocking seems the right approach in either case
    blocking is required
    but you either block in the servers, or in the client
    (and if blocking in the servers, the client also blocks)
    i'll explain how i see it again
    there are two approaches to implementing select
    1/ send requests to all servers, wait for any reply, this is what the hurd does
    but it's possible because you can send all the requests without waiting for the replies
    2/ send notification requests, wait for a notification
    this doesn't require blocking in the servers (so if you have many clients, you don't need as many threads)
    i was wondering which approach was used by the hurd, and if it made sense to change
    TBH I don't see the difference between 1) and 2)... whether the message from the server is called an RPC reply or a notification is just a matter of definition
    I think I see though what you are getting at
    with sync IPC, if the client sent all requests and only afterwards started to listen for replies, the servers might need to block while trying to deliver the reply because the client is not ready yet
    that's one thing yes
    but even in the sync case, the client can immediately wait for replies to each individual request -- it might just be more complicated, depending on the specifics of the IPC design
    what i mean by "send notification requests" is actually more than just sending, it's a complete RPC
    and notifications are non-blocking, yes
    (with L4, it would require a separate client thread for each server contacted... which is precisely why a different mechanism was designed for Viengoos)
    seems weird though
    don't they have a portset like abstraction ?
    braunr: well, having an immediate reply to the request and a separate notification later is just a waste of resources... the immediate reply would have no information value
    no, in original L4 IPC is always directed to specific threads
    antrik: some could see the waste of resources as being the duplication of the number of client threads in the server
    you could have one thread listening to replies from several servers -- but then, replies can get lost
    i see
    (or the servers have to block on the reply)
    so, there are really no capabilities in the original l4 design ?
    though I guess in the case of select() it wouldn't really matter if replies get lost, as long as at least one is handled... would just require the listener thread to be separate from the thread sending the requests
    braunr: right. no capabilities of any kind
    that was my initial understanding too
    thanks
    so I partially agree: in a purely sync IPC design, it would be more complicated (but not impossible) to make sure the client gets the replies without the server having to block while sending replies
    arg, we need hurd_condition_timedwait (and possibly condition_timedwait) to cleanly fix io_select
    luckily, i still have my old patch for condition_timedwait :>
    bddebian: in order to implement timeouts in select calls, servers now have to use a hurd_condition_timedwait function
    is it possible that a thread both gets canceled and times out on a wait ?
    looks unlikely to me
    hm, i guess the same kind of compatibility constraints exist for hurd interfaces
    so, should we have an io_select1 ?
    braunr: I would use a more descriptive name: io_select_timeout()
    antrik: ah yes
    well, i don't really like the idea of having 2 interfaces for the same call :)
    because all select should be select_timeout :)
    but ok
    antrik: actually, having two select calls may be better
    oh it's really minor, we don't care actually
    braunr: two select calls?
    antrik: one with a timeout and one without
    the glibc would choose at runtime
    right. that was the idea.
    like with most transitions, that's probably the best option
    there is no need to pass the timeout value if it's not needed, and it's easier to pass NULL this way
    oh
    nah, that would make the transition more complicated I think
    ?
    ok :)
    this way, it becomes very easy
    the existing io_select call moves into a select_common() function
    the old variant doesn't know that the server has to return immediately; changing that would be tricky. better just use the new variant for the new behaviour, and deprecate the old one
    and the entry points just call this common function with either NULL or the given timeout
    no need to deprecate the old one
    that's what i'm saying
    and i don't understand "the old variant doesn't know that the server has to return immediately"
    won't the old variant block indefinitely in the server if there are no ready fds?
    yes it will
    oh, you mean using the old variant if there is no timeout value?
    yes
    well, I guess this would work
    well of course, the question is rather if we want this or not :)
    hm... not sure
    we need something to improve the process of changing our interfaces
    it's really painful currently
    inside the servers, we probably want to use common code anyways... so in the long run, I think it simplifies the code when we can just drop the old variant at some point
    a lot of the work we need to do involves changing interfaces, and we very often get to the point where we don't know how to do that and hardly agree on a final version :/
    ok but how do you tell the server you don't want a timeout ? a special value ? like { -1; -1 } ?
    hm... good point
    i'll do it that way for now
    it's the best way to test it
    which way you mean now?
    keeping io_select as it is, add io_select_timeout
    yeah, I thought we agreed on that part... the question is just whether io_select_timeout should also handle the no-timeout variant going forward, or keep io_select for that.
    I'm really not sure
    maybe I'll form an opinion over time :-)
    but right now I'm undecided
    i say we keep io_select
    anyway it won't change much
    we can just change that at the end if we decide otherwise
    right
    even passing special values is ok
    with a carefully written hurd_condition_timedwait, it's very easy to add the timeouts :)
    antrik, braunr: I'm wondering, another solution is to add an io_probe, i.e. the server has to return an immediate result, and the client then just waits for all results, without timeout
    that'd be a mere addition in the glibc select() call: when timeout is 0, use that, and otherwise use the previous code
    the good point is that it looks nicer in fs.defs
    are there bad points?
    (I don't have the whole issues in mind now, so I'm probably missing things)
    youpi: the bad point is duplicating the implementation maybe
    what duplication ?
    ah you mean for the select case
    yes
    although it would be pretty much the same
    that is, if probe only, don't enter the wait loop
    could that be just some ifs here and there?
    (though not making the code easier to read...)
    hm i'm not sure it's fine
    in that case, io_select_timeout looks nicer indeed :)
    my problem with the current implementation is having the timeout at the client side whereas the server side is doing the blocking
    I wonder how expensive a notification is, compared to blocking
    a blocking indeed needs a thread stack
    (and kernel thread stuff)
    with the kind of async ipc we have, it's still better to do it that way
    and all the code already exists
    having the timeout at the client side also has its advantages
    latency is more precise
    so the real problem is indeed the non blocking case only
    isn't it bound to kernel ticks anyway ?
    uh, not if your server sucks
    or is loaded for whatever reason
    ok, that's not what I understood by "precision" :)
    I'd rather call it robustness :)
    hm right
    there are several ways to do this, but the io_select_timeout one looks fine to me and is already well on its way
    and it's reliable
    (whereas i'm not sure about reliability if we keep the timeout at client side)
    btw, make the timeout nanoseconds
    ??
    pselect uses timespec, not timeval
    do we want pselect ?
    err, that's the only safe way with signals
    not only, no
    and poll is timespec also
    not only??
    you mean ppoll
    ppoll
    no, poll too
    by "the only safe way", I mean for select calls
    i understand the race issue
    ppoll is a gnu extension
    int poll(struct pollfd *fds, nfds_t nfds, int timeout);
    ah, right, I was also looking at ppoll
    anyway we can use nanosecs
    most event loops use a pipe or a socketpair
    there's no reason not to
    youpi: I briefly considered special-casing 0 timeouts last time we discussed this; but I concluded that it's probably better to handle all timeouts server-side
    I don't see why we should even discuss that
    and translate signals to writes into the pipe/socketpair
    antrik: ok
    you can't count on select() timeout precision anyways
    a few ms more shouldn't hurt any sanely written program
    braunr: "most" doesn't mean "all"
    there *are* applications which use pselect
    well mach only handles milliseconds
    and it's not going out of the standard
    mach is not the hurd
    if we change mach, we can still keep the hurd ipcs
    anyway
    again, I really don't see the point of the discussion
    is there anything *against* using nanoseconds?
    i chose the types specifically because of that :p
    but ok i can change again
    because what??
    i chose to use mach's native time_value_t because it matches timeval nicely
    but it doesn't match timespec nicely
    no it doesn't
    should i add a hurd specific time_spec_t then ?
    "how do you tell the server you don't want a timeout ? a special value ? like { -1; -1 } ?" you meant infinite blocking?
    youpi: yes
    oh right, pselect is posix
    actually posix says that there can be limitations on the maximum timeout supported, which should be at least 31 days
    -1;-1 is thus fine
    yes
    which is why i could choose time_value_t (a struct of 2 integer_t)
    well, I'd say gnumach could grow a nanosecond-precision time value
    e.g. for clock_gettime precision and such

[[clock_gettime]].

    so you would prefer me adding the time_spec_t type to gnumach rather than the hurd ?
    well, if hurd RPCs are using mach types and there's no mach type for nanoseconds, it makes sense to add one
    I don't know about the first part
    yes, some hurd interfaces also use time_value_t
    in general, I don't think Hurd interfaces should rely on a Mach timevalue. it's really only meaningful when Mach is involved...
    we could even pass the time value as an opaque struct. don't really need an explicit MIG type for that.
    opaque ?
    an opaque type would be a step backward from multi-machine support ;)
    youpi: that's a sham anyways ;-)
    what?
    ah, using an opaque type, yes :)
    probably why my head bugged while reading that
    it wouldn't be fully opaque either. it would be two ints, right? even if Mach doesn't know what these two ints mean, it still could do byte order conversion, if we ever actually supported setups where it matters...
    so uh, should this new time_spec_t be added in gnumach or the hurd ?
    youpi: you're the maintainer, you decide :p
    well, I don't like deciding when I haven't even read fs.defs :)
    but I'd say the way forward is defining it in the hurd
    and put a comment "should be our own type" above the use of the mach type
    ok
    and, by the way, is using integer_t fine wrt the 64-bits port ?
    I believe we settled on keeping integer_t a 32bit integer, like xnu does
    ok so it's not
    uh well
    why "not" ?
    keeping it 32-bits for the 32-bits userspace hurd
    but i'm talking about a true 64-bits version
    wouldn't integer_t get 64-bits then ?
    I meant we settled on a no
    like xnu does
    xnu uses 32-bits integer_t even when userspace runs in 64-bits mode ?
    because things for which we'd need 64bits then are offset_t, vm_size_t, and such
    yes
    ok
    youpi: but then what is the type to use for long integers ?
    or uintptr_t
    braunr: uintptr_t
    the mig type i mean
    type memory_object_offset_t = uint64_t; (and size)
    well that's a 64-bits type
    well, yes
    natural_t and integer_t were supposed to have the processor word size
    probably I didn't understand your question
    if we remove that property, what else has it ?
    yes, but see roland's comment on this
    ah ?
    ah, no, he just says the same
    braunr: well, it's debatable whether the processor word size is really 64 bit on x86_64... all known compilers still consider int to be 32 bit (and int is the default word size)
    not really
    as in?
    the word size really is 64-bits
    the question concerns the data model
    with ILP32 and LP64, int is always 32-bits, and long gets the processor word size
    and those are the only ones current unices support
    (which is why long is used everywhere for this purpose instead of uintptr_t in linux)
    I don't think int is 32 bit on alpha?
    (and probably some other 64 bit arches)
    also, assuming we want to maintain the ability to support single system images, do we really want RPCs with variable size types ?
    antrik: linux alpha's int is 32bit
    sparc64 too
    I don't know any 64bit port with 64bit int
    i wonder how posix will solve the year 2038 problem ;p
    time_t is a long
    the hope is that there'll be no 32bit systems by 2038 :)
    :)
    but yes, that matters to us
    the number of seconds should not be just an int
    we can force a 64-bits type then
    i tend to think we should have no variable size type in any mig interface
    youpi: so, new hurd type, named time_spec_t, composed of two 64-bits signed integers
    braunr: i added that in my prototype of monotonic clock patch for gnumach
    oh
    braunr: well, 64bit is not needed for the nanosecond part
    right
    it will be aligned anyway :p
    I know
    uh, actually linux uses long there
    pinotree: i guess your patch is still in debian ?
    youpi: well yes
    youpi: why wouldn't it ? :)
    no, never applied
    braunr: because 64bit is not needed
    ah, i see what you mean
    oh, posix says long actually
    *exactly* long
    i'll use the same sizes
    so it fits nicely with timespec
    hm
    but timespec is only used at the client side
    glibc would simply move the timespec values into our hurd specific type (which can use 32-bits nanosecs)
    and servers would only use that type
    all right, i'll do it that way, unless there are additional comments next morning :)
    braunr: we never supported federations, and I'm pretty sure we never will. the remnants of network IPC code were ripped out some years ago. some of the Hurd interfaces use opaque structs too, so it wouldn't even work if it existed. as I said earlier, it's really all a sham
    as for the timespec type, I think it's easier to stick with the API definition at RPC level too

#### IRC, freenode, #hurd, 2012-07-24

    youpi: antrik: is vm_size_t an appropriate type for a c long ? (appropriate mig type)
    I wouldn't say so. while technically they are pretty much guaranteed to be the same, conceptually they are entirely different things -- it would be confusing at least to do it that way...
    antrik: well which one then ?
    :(
    braunr: no idea TBH
    antrik_: that should have been natural_t and integer_t
    so maybe we should add new types to replace them
    braunr: actually, RPCs should never have any machine-specific types... which makes me realise that a 1:1 translation to the POSIX definition is actually not possible if we want to follow the Mach ideals
    i agree
    (well, the original mach authors used natural_t in quite a bunch of places ..)
    the mig interfaces look extremely messy to me because of this type issue
    and i just want to move forward with my work now
    i could just use 2 integer_t, that would get converted in the massive future revamp of the interfaces for the 64-bits userspace
    or 2 64-bits types
    i'd like us to agree on one of the two not too late so i can continue

#### IRC, freenode, #hurd, 2012-07-25

    braunr: well, for actual kernel calls, machine-specific types are probably hard to avoid... the problem is when they are used in other RPCs
    antrik: i opted for a hurd specific time_data_t = struct[2] of int64
    and going on with this for now
    once it works we'll finalize the types if needed
    I'm really not sure how to best handle such 32 vs. 64 bit issues in Hurd interfaces...
    you *could* consider time_t and long to be machine specific types
    well, they clearly are
    long is
    time_t isn't really
    didn't you say POSIX demands it to be long?
    we could decide to make it 64 bits in all versions of the hurd
    no
    posix requires the nanoseconds field of timespec to be long
    the way i see it, i don't see any problem (other than a little bit of storage and performance) using 64-bits types here
    well, do we really want to use a machine-independent time format, if the POSIX interfaces we are mapping do not?...
(perhaps we should; I'm just uncertain what's better in this case) this would require creating new types for that probably mach types for consistency to replace natural_t and integer_t now this concerns a totally different issue than select which is how we're gonna handle the 64-bits port because natural_t and integer_t are used almost everywhere indeed and we must think of 2 ports the 32-bits over 64-bits gnumach, and the complete 64-bits one what do we do for the interfaces that are explicitly 64 bit? what do you mean ? i'm not sure there is anything to do I mean what is done in the existing ones? like off64_t ? yeah they use int64 and unsigned64 OK. so we shouldn't have any trouble with that at least... braunr: were you adding a time_value_t in mach, but for nanoseconds? no i'm adding a time_data_t to the hurd for nanoseconds yes ah ok (maybe sure it is available in hurd/hurd_types.defs) yes it's there \o/ i mean, i didn't forget to add it there for now it's a struct[2] of int64 but we're not completely sure of that currently i'm teaching the hurd how to use timeouts cool which basically involves adding a time_data_t *timeout parameter to many functions and replacing hurd_condition_wait with hurd_condition_timedwait and making sure a timeout isn't an error on the return path * pinotree has a simplier idea for time_data_t: add a file_utimesns to fs.defs hmm, some functions have a nonblocking parameter i'm not sure if it's better to replace them with the timeout, or add the timeout parameter considering the functions involved may return EWOULDBLOCK for now i'll add a timeout parameter, so that the code requires as little modification as possible tell me your opinion on that please braunr: what functions? 
connq_listen in pflocal for example braunr: I don't really understand what you are talking about :-( some servers implement select this way : 1/ call a function in non-blocking mode, if it indicates data is available, return immediately 2/ call the same function, in blocking mode normally, with the new timeout parameter, non-blocking could be passed in the timeout parameter (with a timeout of 0) operating in non-blocking mode, i mean antrik: is it clear now ? :) i wonder how the hurd managed to grow so much code without a cond_timedwait function :/ i think i have finished my io_select_timeout patch on the hurd side :) a small step for the hurd, but a big one against vim latencies !! (which is the true reason i'm working on this haha) new hurd rbraun/io_select_timeout branch for those interested hm, my changes clashes hard with the debian pflocal patch by neal :/ clash* braunr: replace I'd say. no need to introduce redundancy; and code changes not affecting interfaces are cheap (in general, I'm always in favour of refactoring) antrik: replace what ? braunr: wow, didn't think moving the timeouts to server would be such a quick task :-) antrik: :) 16:57 < braunr> hmm, some functions have a nonblocking parameter 16:58 < braunr> i'm not sure if it's better to replace them with the timeout, or add the timeout parameter antrik: ah about that, ok #### IRC, freenode, #hurd, 2012-07-26 braunr: wrt your select_timeout branch, why not push only the time_data stuff to master? pinotree: we didn't agree on that yet ah better, with the correct ordering of io routines, my hurd boots :) and works too? :p so far yes i've spotted some issues in libpipe but nothing major i "only" have to adjust the client side select implementation now #### IRC, freenode, #hurd, 2012-07-27 io_select should remain a routine (i.e. 
synchronous) for server side stub code but should be asynchronous (send only) for client side stub code (since _hurd_select manually handles replies through a port set)

##### IRC, freenode, #hurd, 2013-02-09

io_select becomes a simpleroutine, except inside the hurd, where it's a routine to keep the receive and reply mig stub code (the server side)

#### IRC, freenode, #hurd, 2012-07-28

why are there both REPLY_PORTS and IO_SELECT_REPLY_PORT macros in the hurd .. and for the select call only :( and doing the exact same thing unless i'm mistaken the reply port is required for select anyway .. i just want to squeeze them into a new IO_SELECT_SERVER macro i don't think i can maintain the use of the existing io_select call as it is grr, the io_request/io_reply files aren't synced with the io.defs file calls like io_sigio_request seem totally unused yeah, that's a major shortcoming of MIG -- we shouldn't need to have separate request/reply defs they're not even used :/ i did something a bit ugly but it seems to do what i wanted

#### IRC, freenode, #hurd, 2012-07-29

good, i have a working client-side select now i need to fix the servers a bit :x arg, my test cases work, but vim doesn't :(( i hate select :p ah good, my problems are caused by a deadlock because of my glibc changes ah yes, found my locking problem building my final libc now * braunr crosses fingers (the deadlock issue was of course a one liner) grr deadlocks again grmbl, my deadlock is in pfinet :/ my select_timeout code makes servers deadlock on the libports global lock :/ wtf.. youpi: it may be related to the failed assertion deadlocking on mutex_unlock oO grr actually, mutex_unlock sends a message to notify other threads that the lock is ready and that's what is blocking ..
i'm not sure it's a fundamental problem here it may simply be a corruption i have several (but not that many) threads blocked in mutex_unlock and one blocked in mutex_lcok i fail to see how my changes can create such a behaviour the weird thing is that i can't reproduce this with my test cases :/ only vim makes things crazy and i suppose it's related to the terminal (don't terminals relay select requests ?) when starting vim through ssh, pfinet deadlocks, and when starting it on the mach console, the console term deadlocks no help/hints when started with rpctrace? i only get assertions with rpctrace it's completely unusable for me gdb tells vim is indeed blocked in a select request and i can't see any in the remote servers :/ this is so weird .. when using vim with the unmodified c library, i clearly see the select call, and everything works fine .... 2e27: a1 c4 d2 b7 f7 mov 0xf7b7d2c4,%eax 2e2c: 62 (bad) 2e2d: f6 47 b6 69 testb $0x69,-0x4a(%edi) what's the "bad" line ?? ew, i think i understand my problem now the timeout makes blocking threads wake prematurely but on an mutex unlock, or a condition signal/broadcast, a message is still sent, as it is expected a thread is still waiting but the receiving thread, having returned sooner than expected from mach_msg, doesn't dequeue the message as vim does a lot of non blocking selects, this fills the message queue ... #### IRC, freenode, #hurd, 2012-07-30 hm nice, the problem i have with my hurd_condition_timedwait seems to also exist in libpthread [[!taglink open_issue_libpthread]]. 
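The queue-filling race described above can be modeled in a few lines of C. This is an illustrative sketch only — the names, the structure, and the queue limit are invented for the example, not taken from cthreads or Mach: each wakeup is one message sent to the waiter's reply port, and a waiter whose `mach_msg` already returned on timeout never dequeues it, so repeated non-blocking selects slowly fill the port's queue until the sender blocks.

```c
#include <stddef.h>

enum { QUEUE_LIMIT = 8 };   /* hypothetical port queue limit */

struct wakeup_port {
    size_t pending;         /* wakeup messages queued but never received */
};

/* Waker side: returns 0 on success, -1 if the queue is full and a
 * blocking send would hang the waking (server) thread.  */
int send_wakeup(struct wakeup_port *p)
{
    if (p->pending == QUEUE_LIMIT)
        return -1;          /* a blocking mach_msg send would stall here */
    p->pending++;
    return 0;
}

/* Waiter side: a wait answered in time consumes its message; a wait
 * that returned early on timeout leaves it queued forever.  */
void timed_wait(struct wakeup_port *p, int timed_out)
{
    if (!timed_out && p->pending > 0)
        p->pending--;
}
```

Every timed-out wait thus leaks one queued message, which is why a program doing many non-blocking selects (like vim) eventually deadlocks the server on the send.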
although at a lesser degree (the implementation already correctly removes a thread that timed out from a condition queue, and there is a nice FIXME comment asking what to do with any stale wakeup message) and the only solution i can think of for now is to drain the message queue ah yes, i know have vim running with my io_select_timeout code :> but hum eating all cpu ah nice, an infinite loop in _hurd_critical_section_unlock grmbl braunr: But not this one? http://www.gnu.org/software/hurd/open_issues/fork_deadlock.html it looks similar, yes let me try again to compare in detail pretty much the same yes there is only one difference but i really don't think it matters (#3 _hurd_sigstate_lock (ss=0x2dff718) at hurdsig.c:173 instead of #3 _hurd_sigstate_lock (ss=0x1235008) at hurdsig.c:172) ok so we need to review jeremie's work tschwinge: thanks for pointing me at this the good thing with my patch is that i can reproduce in a few seconds consistently braunr: You're welcome. Great -- a reproducer! You might also build a glibc without his patches as a cross-test to see the issues goes away? right i hope they're easy to find :) Hmm, have you already done changes to glibc? Otherwise you might also simply use a Debian package from before? yes i have local changes to _hurd_select OK, too bad. braunr: debian/patches/hurd-i386/tg-hurdsig-*, I think. ok hmmmmm it may be related to my last patch on the select_timeout branch (i mean, this may be caused by what i mentioned earlier this morning) damn i can't build glibc without the signal disposition patches :( libpthread_sigmask.diff depends on it tschwinge: doesn't libpthread (as implemented in the debian glibc patches) depend on global signal dispositions ? i think i'll use an older glibc for now but hmm which one .. oh whatever, let's fix the deadlock, it's simpler and more productive anyway braunr: May be that you need to revert some libpthread patch, too. 
Or even take out the libpthread build completely (you don't need it for you current work, I think). braunr: Or, of course, you locate the deadlock. :-) hum, now why would __io_select_timeout return EMACH_SEND_INVALID_DEST :( the current glibc code just transparently reports any such error as a false positive oO hm nice, segfault through recursion "task foo destroying an invalid port bar" everywhere :(( i still have problems at the server side .. ok i think i have a solution for the "synchronization problem" (by this name, i refer to the way mutex and condition variables are implemented" (the problem being that, when a thread unblocks early, because of a timeout, another may still send a message to attempt it, which may fill up the message queue and make the sender block, causing a deadlock) s/attempt/attempt to wake/ Attempts to wake a dead thread? no attempt to wake an already active thread which won't dequeue the message because it's doing something else bddebian: i'm mentioning this because the problem potentially also exists in libpthread [[!taglink open_issue_libpthread]]. since the underlying algorithms are exactly the same (fortunately the time-out versions are not often used) for now :) for reference, my idea is to make the wake call truely non blocking, by setting a timeout of 0 i also limit the message queue size to 1, to limit the amount of spurious wakeups i'll be able to test that in 30 mins or so hum how can mach_msg block with a timeout of 0 ?? 
never mind :p unfortunately, my idea alone isn't enough for those interested in the problem, i've updated the analysis in my last commit (http://git.savannah.gnu.org/cgit/hurd/hurd.git/commit/?h=rbraun/select_timeout&id=40fe717ba9093c0c893d9ea44673e46a6f9e0c7d) #### IRC, freenode, #hurd, 2012-08-01 damn, i can't manage to make threads calling condition_wait to dequeue themselves from the condition queue :( (instead of the one sending the signal/broadcast) my changes on cthreads introduce 2 intrusive changes the first is that the wakeup port is limited to 1 port, and the wakeup operation is totally non blocking which is something we should probably add in any case the second is that condition_wait dequeues itself after blocking, instead of condition_signal/broadcast and this second change seems to introduce deadlocks, for reasons completely unknown to me :(( limited to 1 message* if anyone has an idea about why it is bad for a thread to remove itself from a condition/mutex queue, i'm all ears i'm hitting a wall :( antrik: if you have some motivation, can you review this please ? http://www.sceen.net/~rbraun/0001-Rework-condition-signal-broadcast.patch with this patch, i get threads blocked in condition_wait, apparently waiting for a wakeup that never comes (or was already consumed) and i don't understand why : :( braunr: The condition never happens? bddebian: it works without the patch, so i guess that's not the problem bddebian: hm, you could be right actually :p braunr: About what? :) 17:50 < bddebian> braunr: The condition never happens? 
although i doubt it again this problem is getting very very frustrating :( it frightens me because i don't see any flaw in the logic :( #### IRC, freenode, #hurd, 2012-08-02 ah, seems i found a reliable workaround to my deadlock issue, and more than a workaround, it should increase efficiency by reducing messaging * braunr happy congrats :) the downside is that we may have a problem with non blocking send calls :/ which are used for signals i mean, this could be a mach bug let's try running a complete hurd with the change arg, the boot doesn't complete with the patch .. :( grmbl, by changing only a few bits in crtheads, the boot process freezes in an infinite loop in somethign started after auth (/etc/hurd/runsystem i assume) #### IRC, freenode, #hurd, 2012-08-03 glibc actually makes some direct use of cthreads condition variables and my patch seems to work with servers in an already working hurd, but don't allow it to boot and the hang happens on bash, the first thing that doesn't come from the hurd package (i mean, during the boot sequence) which means we can't change cthreads headers (as some primitives are macros) *sigh* the thing is, i can't fix select until i have a condition_timedwait primitive and i can't add this primitive until either 1/ cthreads are fixed not to allow the inlining of its primitives, or 2/ the switch to pthreads is done which might take a loong time :p i'll have to rebuild a whole libc package with a fixed cthreads version let's do this pinotree: i see two __condition_wait calls in glibc, how is the double underscore handled ? where do you see it? sysdeps/mach/hurd/setpgid.c and sysdeps/mach/hurd/setsid.c i wonder if it's even used looks like we use posix/setsid.c now #ifdef noteven ? the two __condition_wait calls you pointed out are in such preprocessor block s but what does it mean ? 
no idea ok these two files should be definitely be used, they are found earlier in the vpath hum, posix/setsid.c is a nop stub i don't see anything defining "noteven" in glibc itself nor in hurd :( yes, most of the stuff in posix/, misc/, signal/, time/ are ENOSYS stubs, to be reimplemented in a sysdep hm, i may have made a small mistake in cthreads itself actually right when i try to debug using a subhurd, gdb tells me the blocked process is spinning in ld .. i mean ld.so and i can't see any debugging symbol some progress, it hangs at process_envvars eh i've partially traced my problem when a "normal" program starts, libc creates the signal thread early the main thread waits for the creation of this thread by polling its address (i.e. while (signal_thread == 0); ) for some reason, it is stuck in this loop cthread creation being actually governed by condition_wait/broadcast, it makes some sense braunr: When you say the "main" thread, do you mean the main thread of the program? bddebian: yes i think i've determined my mistake glibc has its own variants of the mutex primitives and i changed one :/ Ah it's good news for me :) hum no, that's not exactly what i described glibc has some stubs, but it's not the problem, the problem is that mutex_lock/unlock are macros, and i changed one of them so everything that used that macro inside glibc wasn't changed yes! my patched hurd now boots :) * braunr relieved this experience at least taught me that it's not possible to easily change the singly linked queues of thread (waiting for a mutex or a condition variable) :( for now, i'm using a linear search from the start so, not only does this patched hurd boot, but i was able to use aptitude, git, build a whole hurd, copy the whole thing, and remove everything, and it still runs fine (whereas usually it would fail very early) * braunr happy and vim works fine now? 
err, wait this patch does only one thing it alters the way condition_signal/broadcast and {hurd_,}condition_wait operate currently, condition_signal/broadcast dequeues threads from a condition queue and wake them my patch makes these functions only wake the target threads which dequeue themselves (a necessary requirement to allow clean timeout handling) the next step is to fix my hurd_condition_wait patch and reapply the whole hurd patch indotrucing io_select_timeout introducing* then i'll be able to tell you one side effect of my current changes is that the linear search required when a thread dequeues itself is ugly so it'll be an additional reason to help the pthreads porting effort (pthreads have the same sort of issues wrt to timeout handling, but threads are a doubly-linked lists, making it way easier to adjust) +on damn i'm happy 3 days on this stupid bug (which is actually responsible for what i initially feared to be a mach bug on non blocking sends) (and because of that, i worked on the code to make it sure that 1/ waking is truely non blocking and 2/ only one message is required for wakeups ) a simple flag is tested instead of sending in a non blocking way :) these improvments should be ported to pthreads some day [[!taglink open_issue_libpthread]] ahah ! view is now FAST ! braunr: what do you mean by 'view'? mel-: i mean the read-only version of vim aah i still have a few port leaks to fix and some polishing but basically, the non-blocking select issue seems fixed and with some luck, we should get unexpected speedups here and there so vim was considerable slow on the Hurd before? didn't know that. 
not exactly at first, it wasn't, but the non blocking select/poll calls misbehaved so a patch was introduced to make these block at least 1 ms then vim became slow, because it does a lot of non blocking select so another patch was introduced, not to set the 1ms timeout for a few programs youpi: darnassus is already running the patched hurd, which shows (as expected) that it can safely be used with an older libc i.e. servers with the additional io_select? yes k good :) and the modified cthreads which is the most intrusive change port leaks fixed braunr: Congrats:-D thanks it's not over yet :p tests, reviews, more tests, polishing, commits, packaging #### IRC, freenode, #hurd, 2012-08-04 grmbl, apt-get fails on select in my subhurd with the updated glibc otherwise it boots and runs fine fixed :) grmbl, there is a deadlock in pfinet with my patch deadlock fixed the sigstate and the condition locks must be taken at the same time, for some obscure reason explained in the cthreads code but when a thread awakes and dequeues itself from the condition queue, it only took the condition lock i noted in my todo list that this could create problems, but wanted to leave it as it is to really see it happen well, i saw :) the last commit of my hurd branch includes the 3 line fix these fixes will be required for libpthreads (pthread_mutex_timedlock and pthread_cond_timedwait) some day after the select bug is fixed, i'll probably work on that with you and thomas d #### IRC, freenode, #hurd, 2012-08-05 eh, i made dpkg-buildpackage use the patched c library, and it finished the build oO braunr: :) faked-tcp was blocked in a select call :/ (with the old libc i mean) with mine i just worked at the first attempt i'm not sure what it means it could mean that the patched hurd servers are not completely compatible with the current libc, for some weird corner cases the slowness of faked-tcp is apparently inherent to its implementation all right, let's put all these packages online eh, right 
when i upload them, i get a deadlock this one seems specific to pfinet only one deadlock so far, and the libc wasn't in sync with the hurd :/ damn, another deadlock as soon as i send a mail on bug-hurd :( grr thou shall not email aptitude seems to be a heavy user of select oh, it may be due to my script regularly changing the system time or it may not be a deadlock, but simply the linear queue getting extremely large

#### IRC, freenode, #hurd, 2012-08-06

i have bad news :( it seems there can be memory corruptions with my io_select patch i've just seen an auth server (!) spinning on a condition lock (the internal spin lock), probably because the condition was corrupted .. i guess it's simply because conditions embedded in dynamically allocated structures can be freed while there are still threads waiting ... so, yes the solution to my problem is simply to dequeue threads from both the waker when there is one, and the waiter when no wakeup message was received simple it's so obvious i wonder how i didn't think of it earlier :(- braunr: an elegant solution always seems obvious afterwards... ;-) antrik: let's hope this time, it's completely right good, my latest hurd packages seem fixed finally looks like i got another deadlock * braunr hangs himself that, or again, condition queues can get very large (e.g. on thread storms) looks like this is the case yes after some time the system recovered :( which means a doubly linked list is required to avoid pathological behaviours arg it won't be easy at all to add a doubly linked list to condition variables :( actually, just a bit messy youpi: other than this linear search on dequeue, darnassus has been working fine so far k Mmm, you'd need to bump the abi soname if changing the condition structure layout :( youpi: how are we going to solve that ? well, either bump soname, or finish transition to libpthread :) it looks better to work on pthread now to avoid too many abi changes [[libpthread]].
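The point of the doubly-linked list discussed above can be sketched briefly. With a singly-linked cthreads condition queue, a thread that times out must linearly search the queue to find its predecessor before unlinking itself; a doubly-linked queue makes self-removal O(1). The names below are illustrative, not actual cthreads or libpthread code:

```c
#include <stddef.h>

struct waiter {
    struct waiter *prev, *next;
};

struct wait_queue {
    struct waiter *head;
};

/* Enqueue a thread about to block on the condition.  */
void wq_push(struct wait_queue *q, struct waiter *w)
{
    w->prev = NULL;
    w->next = q->head;
    if (q->head)
        q->head->prev = w;
    q->head = w;
}

/* Called by the timed-out thread on itself: constant time, no
 * traversal, because the back pointer identifies the predecessor.  */
void wq_remove(struct wait_queue *q, struct waiter *w)
{
    if (w->prev)
        w->prev->next = w->next;
    else
        q->head = w->next;
    if (w->next)
        w->next->prev = w->prev;
}
```

Growing each queue node by one pointer is exactly the condition-structure layout change that would have forced the soname bump, hence the preference for finishing the libpthread transition instead.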
#### IRC, freenode, #hurd, 2012-08-07 anyone knows of applications extensively using non-blocking networking functions ? (well, networking functions in a non-blocking way) rbraun_hurd: X perhaps? it's single-threaded, so I guess it must be pretty async ;-) thinking about it, perhaps it's the reason it works so poorly on Hurd... it does ? ah maybe at the client side, right hm no, the client side is synchronous oh by the way, i can use gitk on darnassys i wonder if it's because of the select fix rbraun_hurd: If you want, you could also have a look if there's any improvement for these: http://www.gnu.org/software/hurd/open_issues/select.html (elinks), http://www.gnu.org/software/hurd/open_issues/dbus.html, http://www.gnu.org/software/hurd/open_issues/runit.html rbraun_hurd: And congratulations, again! :-) tschwinge: too bad it can't be merged before the pthread port :( rbraun_hurd: I was talking about server. most clients are probably sync. antrik: i guessed :) (thought certainly not all... multithreaded clients are not really supported with xlib IIRC) but i didn't have much trouble with X tried something pushing a lot of data? like, say, glxgears? :-) why not the problem with tests involving "a lot of data" is that it can easily degenerate into a livelock yeah, sounds about right (with the current patch i mean) the symptoms I got were general jerkiness, with occasional long hangs that applies to about everything on the hurd so it didn't alarm me another interesting testcase is freeciv-gtk... it reporducibly caused a thread explosion after idling for some time -- though I don't remember the details; and never managed to come up with a way to track down how this happens... dbus is more worthwhile pinotree: hwo do i test that ? eh? 
pinotree: you once mentioned dbus had trouble with non blocking selects it does a poll() with a 0s timeout that's the non blocking select part, yes you'll need also fixes for the socket credentials though, otherwise it won't work ootb right but, isn't it already used somehow ? rbraun_hurd: uhm... none of the non-X applications I use expose a visible jerkiness/long hangs pattern... though that may well be a result of general load patterns rather than X I guess antrik: that's my feeling antrik: heavy communication channels, unoptimal scheduling, lack of scalability, they're clearly responsible for the generally perceived "jerkiness" of the system again, I can't say I observe "general jerkiness". apart from slow I/O the system behaves rather normally for the things I do I'm pretty sure the X jerkiness *is* caused by the socket communication which of course might be a scheduling issue but it seems perfectly possible that it *is* related to the select implementation at least worth a try I'd say sure there is still some work to do on it though the client side changes i did could be optimized a bit more (but i'm afraid it would lead to ugly things like 2 timeout parameters in the io_select_timeout call, one for the client side, the other for the servers, eh) #### IRC, freenode, #hurd, 2012-08-07 when running gitk on [darnassus], yesterday, i could push the CPU to 100% by simply moving the mouse in the window :p (but it may also be caused by the select fix) braunr: that cursor might be "normal" antrik: what do you mean ? the 100% CPU antrik: yes i got that, but what would make it normal ? antrik: right i get similar behaviour on linux actually (not 100% because two threads are spread on different cores, but their cpu usage add up to 100%) antrik: so you think as long as there are events to process, the x client is running thath would mean latencies are small enough to allow that, which is actually a very good thing hehe... 
sound kinda funny :-) this linear search on dequeue is a real pain :/

#### IRC, freenode, #hurd, 2012-08-09

`screen` doesn't close a window/hangs after exiting the shell. the screen issue seems linked to select :p tschwinge: the term server may not correctly implement it tschwinge: the problem looks related to the term consoles not dying http://www.gnu.org/software/hurd/open_issues/term_blocking.html [[Term_blocking]].

### IRC, freenode, #hurd, 2012-12-05

well if i'm unable to build my own packages, i'll send you the one line patch i wrote that fixes select/poll for the case where there is only one descriptor (the current code calls mach_msg twice, each time with the same timeout, doubling the total wait time when there is no event)

#### IRC, freenode, #hurd, 2012-12-06

damn, my eglibc patch breaks select :x i guess i'll just simplify the code by using the same path for both single fd and multiple fd calls at least, the patch does fix the case i wanted it to .. :) htop and ping act at the right regular interval my select patch is :

    /* Now wait for reply messages.  */
    - if (!err && got == 0)
    + if (!err && got == 0 && firstfd != -1 && firstfd != lastfd)

basically, when there is a single fd, the code calls io_select with a timeout and later calls mach_msg with the same timeout, effectively making the maximum wait time twice what it should be ouch which is why htop and ping are "laggy" and perhaps also why fakeroot is when building libc well when building packages my patch avoids entering the mach_msg call if there is only one fd (my failed attempt didn't have the firstfd != -1 check, leading to the 0 fd case skipping mach_msg too, which is wrong since in that case there is just no wait, making applications use select/poll for sleeping consume all cpu) the second is a fix in select (yet another) for the case where a single fd is passed in which case there is one timeout directly passed in the io_select call, but then yet another in the mach_msg call that waits for replies this can account for the slowness of a bunch of select/poll users

#### IRC, freenode, #hurd, 2012-12-07

finally, my select patch works :)

#### IRC, freenode, #hurd, 2012-12-08

for those interested, i pushed my eglibc packages that include this little select/poll timeout fix on my debian repository deb http://ftp.sceen.net/debian-hurd experimental/ reports are welcome, i'm especially interested in potential regressions

#### IRC, freenode, #hurd, 2012-12-10

I have verified your double timeout bug in hurdselect.c. Since I'm also working on hurdselect I have a few questions about where the timeouts in mach_msg and io_select are implemented. Have a big problem to trace them down to actual code: mig magic again? yes see hurd/io.defs, io_select includes a waittime timeout: natural_t; parameter waittime is mig magic that tells the client side not to wait more than the timeout and in _hurd_select, you can see these lines :

    err = __io_select (d[i].io_port, d[i].reply_port,
                       /* Poll only if there's a single descriptor.  */
                       (firstfd == lastfd) ? to : 0,

"to" being the timeout previously computed and later, when waiting for replies :

    while ((msgerr = __mach_msg (&msg.head, MACH_RCV_MSG | options,
                                 0, sizeof msg, portset, to,
                                 MACH_PORT_NULL)) == MACH_MSG_SUCCESS)

the same timeout is used hope it helps Additional stuff on io-select question is at http://paste.debian.net/215401/ Sorry, should have posted it before you comment, but was disturbed.

    14:13 < braunr> waittime is mig magic that tells the client side not to wait more than the timeout

the waittime argument is a client argument only that's one of the main sources of problems with select/poll, and the one i fixed 6 months ago so there is no relation between the third argument of the client call and the third argument of the server code? no the 3rd argument at server side is undoubtedly the 4th at client side here but for the fourth argument there is? i think i've just answered that when in doubt, check the code generated by mig when building glibc as I said before, I have verified the timeout bug you solved. which code to look for RPC_*? should be easy to guess is it the same with mach_msg()? No explicit usage of the timeout there either. in the code for the function I mean. gnu_srs: mach_msg is a low level system call see http://www.gnu.org/software/hurd/gnumach-doc/Mach-Message-Call.html#Mach-Message-Call found the definition of __io_select in: RPC_io_select.c, thanks. so the client code to look for wrt RPC_ is in hurd/*.defs? what about the gnumach/*/include/*.defs? a final question: why use a timeout if there is a single FD for the __io_select call, not when there are more than one?
well, the code is obviously buggy, so don't expect me to justify wrong code but i suppose the idea was : if there is only one fd, perform a classical synchronous RPC, whereas if there are more use a heavyweight portset and additional code to receive replies exim4 didn't get fixed by the libc patch, unfortunately yes i noticed gdb can't attach correctly to exim, so it's probably something completely different i'll try the non intrusive mode ##### IRC, freenode, #hurd, 2013-01-26 ah great, one of the recent fixes (probably select-eintr or setitimer) fixed exim4 :) #### IRC, freenode, #hurd, 2012-12-11 braunr: What is the technical difference of having the delay at io_select compared to mach_msg for one FD? gnu_srs1: it's a slight optimization instead of doing a send and a receive, the same mach_msg call is used for both (for L4 guys it wouldn't be considered a slight optimization :)) #### IRC, freenode, #hurd, 2012-12-17 tschwinge: http://git.savannah.gnu.org/cgit/hurd/glibc.git/log/?h=rbraun/select_timeout_for_one_fd gnu_srs: talking about that, can you explain : "- The pure delay case is much faster now, making it necessary to introduce a delay of 1 msec when the timeout parameter is set to zero. " I meant poll with zero delay needs a delay to make sure the file descriptors are ready. Testing it now. 
for me, the "pure delay" case is the case where there is no file descriptor when the timeout is 0 is the non-blocking case and yes, you need 1ms for the non-blocking case when there are file descriptors sorry bad wording (again) (note however that this last "requirement" is very hurd specific, and due to a design issue) the work i did six months ago fixes it, but depends on pthreads for correct performances (or rather, a thread library change, but changing cthreads was both difficult and pointless) also, i intend to work on io_poll as a replacement for io_select, that fixes the "message storm" (i love these names) caused by dead-name notifications resulting from the way io_select works #### IRC, freenode, #hurd, 2012-12-19 tschwinge: i've tested the glibc rbraun/select_timeout_for_one_fd branch for a few days on darnassus now, and nothing wrong to report #### IRC, freenode, #hurd, 2012-12-20 braunr: so, shall I commit the single hurd select timeout fix to the debian package? youpi: i'd say so yes #### IRC, freenode, #hurd, 2013-01-03 gnu_srs: sorry, i don't understand your poll_timeout patch it basically reverts mine for poll only but why ? braunr: It does not revert your select patch, if there is one FD the timeout is at io_select, if not one the timeout is at mach_msg but why does it only concern poll ? (and why didn't i do it this way in the first place ?) (or maybe i did ?) there are problems with a timeout of zero for poll, depending on the implementation the FDs can result in not being ready. 
    but that's also true with select
    the cases I've tested only have problems for poll, not select
    we'll have to create test cases for both
    but your solution doesn't hold anyway
    our current workaround for this class of problems is to set a lower bound on the timeout to 1
    (which comes from a debian specific patch)
    see the test code i sent, http://lists.gnu.org/archive/html/bug-hurd/2012-12/msg00043.html, test_poll+select.c
    the patch might be incomplete though
    i know, but your solution is still wrong
    see debian/patches/hurd-i386/local-select.diff in the debian eglibc package
    and in that message I have introduced a minimum timeout for poll of 1ms.
    yes, but you shouldn't
    this is a *known* bug, and for now we have a distribution-specific patch
    in other words, we can't commit that crap upstream
    well, according to youpi there is a need for a communication to flag when the FDs are ready, not yet implemented.
    i'm not sure what you mean by that
    I don't understand what you refer to
    there is a need for a full round-trip even in the non blocking case
    which is implemented in one of my hurd branches, but awaits pthreads integration for decent scalability
    the only difference between poll and select is that select can stop the loop on error, while poll needs to continue
    youpi: don't you think the glibc select patch is incomplete ?
    incomplete in what direction?
    the minimum 1ms delay is a completely bogus workaround
    youpi: http://lists.gnu.org/archive/html/bug-hurd/2012-11/msg00001.html
    so I wouldn't say it's even completing anything :)
    hm no, never mind, it's not
    i thought it missed some cases where the delay made sense, but no
    the timeout can only be 0 if the timeout parameter is non NULL
    gnu_srs: during your tests, do you run with the debian eglibc package (including your changes), or from the git glibc ?
    I run with -37, -38, with my minimum poll changes, my 3 cases, and 3 case-poll updates.
    so you do have the debian patches
    so you normally have this 1ms hack
    which means you shouldn't need to make the poll case special
    I admit the 1ms patch is not possible to submit upstream, but it makes things work (and youpi uses it for vim)
    i'll try to reproduce your ntpdate problem with -38 when i have some time
    uh, no, vim actually doesn't use the hack :p
    gnu_srs: it's the contrary: we have to avoid it for vim

        if (strcmp (program_invocation_short_name, "vi")
            && strcmp (program_invocation_short_name, "vim")
            && strcmp (program_invocation_short_name, "vimdiff")
            && !to)
          to = 1;

    that does what we are saying
    strcmp returns 0 on equality
    aha, OK, then I don't have that hack in my code. I have tested vim a little, but cannot judge, since I'm not a vi user.
    you don't ?
    you should have it if the package patches were correctly applied
    Maybe somebody else could compile a libc with the 3-split code to test it out?
    that's another issue
    I mean the patch I sent to the list, there the vi{m} hack is not present.
    well obviously
    but i'm asking about the poll_timeout one
    Ah, OK, it's very easy to test that version too, but currently -38 maybe has a regression due to some other patch.
    that's another thing we're interested in
    Unfortunately it takes a _long_ time to build a new version of libc (several hours...)
    -38 is already built
    yes, but removing patches one by one and rebuilding.
    but then, the "regression" you mention concerns a package that wasn't really working before
    removing ?
    ah, to identify the trouble-carrying one
    ntpdate works with -37, no problem... but not with -38
    again, trace it with -38 to see on what it blocks
    as I wrote yesterday, gdb hangs the box hard and rpctrace bugs out, any ideas?
    printf
    gdb from a subhurd
    I'm suspecting the setitimer patch: without it gdb ntpdate does not freeze hard any longer, bt full: http://paste.debian.net/221491/

        Program received signal SIGINT, Interrupt.
        0x010477cc in mach_msg_trap ()
            at /usr/src/kernels/eglibc/eglibc-2.13/build-tree/hurd-i386-libc/mach/mach_msg_trap.S:2
        2       kernel_trap (__mach_msg_trap,-25,7)
        (gdb) thread apply all bt full

        Thread 6 (Thread 3158.6):
        #0  0x010477cc in mach_msg_trap ()
            at /usr/src/kernels/eglibc/eglibc-2.13/build-tree/hurd-i386-libc/mach/mach_msg_trap.S:2
        No locals.
        #1  0x01047fc9 in __mach_msg (msg=0x52fd4, option=1282, send_size=0,
            rcv_size=0, rcv_name=132, timeout=100, notify=0) at msg.c:110
                ret =
        #2  0x010ec3a8 in timer_thread () at ../sysdeps/mach/hurd/setitimer.c:90
                err =
                msg = {header = {msgh_bits = 4608, msgh_size = 32,
                  msgh_remote_port = 0, msgh_local_port = 132,
                  msgh_seqno = 78, msgh_id = 23100}, return_code = 17744699}

    setitimer.c:90:

        err = __mach_msg (&msg.header,
                          MACH_RCV_MSG|MACH_RCV_TIMEOUT|MACH_RCV_INTERRUPT,
                          0, 0, _hurd_itimer_port,
                          _hurd_itimerval.it_value.tv_sec * 1000
                          + _hurd_itimerval.it_value.tv_usec / 1000,
                          MACH_PORT_NULL);

[[alarm_setitimer]].

    gdb ? i thought ntpdate was the program freezing
    the freeze is due to -38
    yes, we know that
    but why do you say "gdb ntpdate" instead of "ntpdate" ?
    yes, ntpdate freezes: without gdb kill -9 is OK, with gdb it freezes hard (with the setitimer patch).
    we don't care much about the kill behaviour
    ntpdate does indeed make direct calls to setitimer
    without the setitimer patch: without gdb ntpdate freezes (C-c is OK), with gdb C-c gives the paste above
    sorry, i don't understand
    *what* is the problem ?
    there are two of them
    ntpdate freezing
    gdb freezing
    ok
    he's saying gdb freezing is due to the setitimer patch
    yes, that's what i understand now
    what he said earlier made me think ntpdate was freezing with -38
    better: ntpdate hangs, gdb ntpdate freezes (with the setitimer patch)
    what's the behaviour in -37, and then what is the behaviour with -38 ?
    (of both actions, so your answer should give us four behaviours)
    gnu_srs: what is the difference between "hangs" and "freezes" ?
    -37 no problem, both with and without gdb.
    you mean ntpdate doesn't freeze with -37, and does with -38 ?
    hangs: kill -9 is sufficient; freezes: reboot, checking file system, etc.
    and i really mean ntpdate, not gdb
    whatever
    the ntpdate hang (without the setitimer patch) in -38 can be due to the poll stuff: have to check further with my poll patch...

#### IRC, freenode, #hurd, 2013-01-04

    Summary of the eglibc-2.13-38 issues: without the unsubmitted-setitimer_fix.diff patch and with my poll_timeout.patch fix in http://lists.gnu.org/archive/html/bug-hurd/2012-12/msg00042.html ntpdate works again :)
    please consider reworking the setitimer patch and adding a poll case in hurdselect.c :-D
    Additional info: vim prefers to use select before poll. With the proposed changes (small, 3-split), only poll is affected by the 1ms default timeout, i.e. the current vi hack is no longer needed.
    gnu_srs: the setitimer patch looks fine, and has real grounds
    gnu_srs: your poll_timeout doesn't
    so unless you can explain where the problem comes from, we shouldn't remove the setitimer patch and add yours in addition
    09:30 < gnu_srs> only poll is affected by the 1ms default timeout, i.e. the current vi hack is no longer needed.
    that sentence is complete nonsense
    poll and select are implemented using the same rpc, which means they're both broken
    if the vi hack isn't needed, it means you broke every poll user
    btw, i think your ntpdate issue is very similar to the gitk one
    gitk currently doesn't work because of select/poll
    it does work fine with my hurd select branch though
    which clearly shows a more thorough change is required, and your hacks won't do any good
    (you may "fix" ntpdate, and break many other things)
    braunr: Why don't you try ntpdate yourself on -38 (none of my patches applied)
    you're missing the point
    the real problem is the way select/poll is implemented, both at client *and* server sides
    09:30 etc: The current implementation is slower than the 3-way patch.
    Therefore it is not needed in the current implementation (I didn't propose that either)
    hacks at the client side only are pointless, whatever you do
    slower ?
    it's not about performance but correctness
    your hack *can't* solve the select/poll correctness issue
    yes, slower on my kvm boxes...
    so it's normal that ntpdate and other applications such as gitk are broken
    if you try to fix it by playing with the timeout, you'll just break the applications that were fixed in the past by playing with the timeout another way
    can you understand that ?
    forget the timeout default, it's a side issue
    the *real* select/poll issue is that non blocking calls (timeout=0) don't have the time to make a full round trip at the server
    no it's not, it's the whole problem
    some applications work with a higher timeout, some like gitk don't
    ntpdate might act just the same
    yes of course, and I have not addressed this problem either, I'm mostly interested in the 3-way split.
    well, it looks like you're trying to ..
    to be able to submit my poll patches (not yet published)
    i suggest you postpone these changes until the underlying implementation works
    i strongly suspect the split to be useless
    we shouldn't need completely different paths just for this conformance issue
    so wait until select/poll is fixed, then test again
    Read the POSIX stuff: poll and select are different.
    i know
    their expected behaviour is
    that's what needs to be addressed
    but you can't do that now, because there are other bugs in the way
    so you'll have a hard time making sure your changes do fix your issues, because the problems might be caused by the other problems
    since you are the one who knows best, why don't you implement everything yourself.
    well, i did
    and i'm just waiting for the pthreads hurd to spread before adapting my select branch

[[libpthread]].
    it won't fix the conformance issue, but it will fix the underlying implementation (the rpc)
    and then you'll have reliable results for the tests you're currently doing
    why not even trying out the cases I found to have problems??
    because i now know why you're observing what you're observing
    i don't need my eyes to see it to actually imagine it clearly
    when i start gitk and it's hanging, i'm not thinking 'oh my, i need to hack glibc select/poll !!!'
    because i know the problem
    i know what needs to be done, i know how to do it, it will be done in time
    please try to think the same way ..
    you're fixing problems by pure guessing, without understanding what's really happening
    (10:59:17 AM) braunr: your hack *can't* solve the select/poll correctness issue: which hack? "please consider removing setitimer because it blocks my ntpdate"
    gnu_srs: all your select related patches
    the poll_timeout, the 3-way split, they just can't
    changes need to be made at the server side too
    you *may* have fixed the conformance issue related to what is returned, but since it gets mixed with the underlying implementation problems, your tests aren't reliable
    well, some of the test code is from gnulib, their code is not reliable?
    their results aren't
    why is that so hard to understand for you ?
    (11:08:05 AM) braunr: "please consider removing setitimer because it blocks my ntpdate": It's not my ntpdate, it's a program that fails to run on -38, but does on -37!
    it doesn't mean glibc -37 is right
    it just means the ntpdate case seems to be handled correctly
    a correct implementation is *ALWAYS* correct
    if there is one wrong case, it's not, and we know our select/poll implementation is wrong
    no of course not, and the ntpdate implementation is not correct? file a bug upstream then.
    you're starting to say stupid things again
    ntpdate and gnulib tests can't be right if they use code that isn't right
    it doesn't mean they'll always fail either, the programs that fail are those for which select/poll behaves wrong
    same thing for setitimer btw
    we know it was wrong, and i think it was never working actually
    where are the missing test cases then? maybe you should publish correct code so we can try it out?
    i have, but there are dependencies that prevent using it right now
    which is why i'm waiting for pthreads hurd to spread
    pthreads provide the necessary requirements for select to be correctly implemented at server side
    well, conformance with your code could be tested on Linux, kFreeBSD, etc ?
    i'm not writing test units
    s/code/test code/
    the problem is *NOT* the test code
    the problem is some of our system calls
    it's the same for ntpdate and gitk and all other users
    then the gnulib, ntpdate, gitk code is _not_ wrong
    no, but their execution is, and thus their results
    which is ok, they're tests
    they're here precisely to tell us if one case works
    they must all pass to hope their subject is right
    so, again, since there are several problems with our low level calls, you may have fixed one, but still suffer from others
    so even if you did fix something, you may not consider the test failure as an indication that your fix is wrong
    but if you try to make your changes fix everything just to have results that look valid, it's certain to fail, since you only fix the client side, and it's *known* the server side must be changed too
    do you consider unsubmitted-single-hurdselect-timeout.diff and local-select.diff a hack or not?
    the first isn't, since it fixes the correctness of the call for one case, at the cost of some performance
    the second clearly is
    which is the difference between unsubmitted-* and local-*
    and my proposal to modify the first is a hack?
    yes
    it reverts a valid change to "make things work" whereas we know the previous behaviour was wrong
    that's close to the definition of a hack
    "make things work" meaning breaking some applications?
    yes
    in this case, those using poll with one file descriptor and expecting a timeout, not twice the timeout
    well, your change isn't really a revert
    hum, yes actually it is
    the timeout is correct
    no, it looks correct
    how did you test it ?
    and same question as yesterday: why only poll ?
    see the code I mentioned before
    no i won't
    it doesn't explain anything
    I have not found any problems with select, only poll (yes, this is a user perspective)
    that's what i call "pure guessing"
    you just can't explain why it fixes things
    because you don't know
    you don't understand what's really happening in the system
    there is a good idea in your change
    but merely for performance, not correctness
    (your change makes it possible to save the mach_msg receive if the io_select blocked, which is good, but not required)

See also [[alarm_setitimer]].

#### IRC, freenode, #hurd, 2013-01-22

    youpi: Maybe it's overkill to have a separate case for DELAY; but it enhances readability (and simplifies a lot too)
    but it reduces factorization
    if select is already supposed to behave the same way as delay, there is no need for separate code
    OK, I'll make a two-way split then. What about POLL and nfds=0, timeout != 0?
    gnu_srs: handle nfds=0 as a pure timeout
    as the linux man page describes it
    makes sense, and as other popular systems do it, it's better to do it the same way
    and i disagree with you, factorization doesn't imply less readability
    So you agree with me to have a special case for DELAY? Coding style is a matter of taste: for me "case a: case b:" etc. is more readable than "if then elseif then else ..."
    it's not coding style
    avoiding duplication is almost always best, whatever the style
    i don't see the need for a special delay case
    it's the same mach_msg call (for now)
    gnu_srs: i'd say the only reason to duplicate is when you can't do otherwise
    ways of coding then... And I agree with the idea of avoiding code duplication, ever heard of Literate Programming?
    we'll need a "special case" when the timeout is handled at the server side, but it's like two lines ..

#### IRC, freenode, #hurd, 2013-02-11

    braunr: the libpthread hurd_cond_timedwait_np looks good to me

##### IRC, freenode, #hurd, 2013-02-15

    braunr: does cond_timedwait_np depend on the cancellation fix?
    yes
    ok
    the timeout fix
    so I also have to pull that into my glibc build
    (i fixed cancellation too because the cleanup routine had to be adjusted anyway)
    ah, and I need the patched hurd package too
    if unsure, you can check my packages
    ok, not for tonight then
    i listed the additional patches in the changelog
    yep, I'll probably use them

#### IRC, freenode, #hurd, 2013-02-11

    braunr: I don't understand one change in glibc:

        - err = __io_select (d[i].io_port, d[i].reply_port, 0, &type);
        + err = __io_select (d[i].io_port, d[i].reply_port, type);

    youpi: the waittime parameter has been removed
    where? when?
    in the hurd branch
    in the defs?
    yes
    I don't see this change, only the addition of io_select_timeout
    hum
    also, io_select_timeout should be documented along io_select in hurd.texi
    be6e5b86bdb9055b01ab929cb6b6eec49521ef93

        Selectively compile io_select{,_timeout} as a routine

        * hurd/io.defs (io_select_timeout): Declare as a routine if
        _HURD_IO_SELECT_ROUTINE is defined, or a simpleroutine otherwise.
        (io_select): Likewise.  In addition, remove the waittime timeout
        parameter.
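The commit message just quoted describes a conditional MIG declaration; roughly, io.defs ends up with something along these lines (a sketch only, with the argument list partly elided, not the literal text of the change):

```
/* Sketch: glibc defines _HURD_IO_SELECT_ROUTINE before processing
   io.defs and gets a routine, i.e. a synchronous send + receive stub.
   Every other user gets a simpleroutine, whose stub only sends the
   request and never waits for the reply.  */
#ifdef _HURD_IO_SELECT_ROUTINE
routine
#else
simpleroutine
#endif
io_select (
	io_object: io_t;
	replyport reply: reply_port_t;
	/* ... select type argument, without the old waittime ... */);
```

A routine stub maps onto a combined send|receive `mach_msg()` call, while a simpleroutine stub is a plain send, which is what `_hurd_select` wants when it fires off one request per descriptor and collects the replies itself.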
    ah, it's in another commit
    yes, perhaps misplaced
    that's the kind of thing i want to polish
    my main issue currently is that time_data_t is passed by value
    i'm trying to pass it by address
    I don't know the details of routine vs simpleroutine
    it made sense for me to remove the waittime parameter at the same time as adding the _HURD_IO_SELECT_ROUTINE macro, since waittime is what allows glibc to use a synchronous RPC in an asynchronous way
    is it only a matter of timeout parameter?
    simpleroutine sends a message
    routine sends and receives
    by having a waittime parameter, _hurd_select could make io_select send a message and return before having a reply
    ah, that's why in glibc you replaced MACH_RCV_TIMED_OUT by 0
    yes
    it seems a bit odd to have a two-face call
    it is
    can't we just keep it as such?
    no
    damn
    well, we could, but it really wouldn't make any sense
    why not?
    because the way select is implemented implies io_select doesn't expect a reply (except for the single fd case, but that's an optimization)
    that's how it is already, yes?
    yes
    well, yes and no
    that's complicated :)
    there are two passes
    let me check before saying anything ;p
    :)
    in the io_select(timeout=0) case, can it ever happen that we receive an answer?
    i don't think it is
    you mean non blocking right ?
    not infinite timeout
    I mean calling io_select with the timeout parameter being set to 0
    so yes, non blocking
    no, i think we always get MACH_RCV_TIMED_OUT
    for me non-blocking can mean a lot of things :)
    ok, i was thinking mach_msg here
    ok
    so, let's not consider the single fd case
    the first pass simply calls io_select with a timeout 0 to send messages
    I don't think it's useful to try to optimize it
    it'd only lead to bugs :)
    me neither
    yes (as was shown :) )
    what seems useful to me however is to optimize the io_select call with a waittime parameter: the generated code is an RPC (send | receive), whereas, as a simpleroutine, it becomes a simple send
    ok
    my concern is that, as you change it, you change the API of the __io_select() function (from libhurduser)
    yes
    but glibc is the only user
    and actually no
    i mean i change the api at the client side only
    that's what I mean
    remember that io.defs is almost full
    "full" ?
    i'm almost certain it becomes full with io_select_timeout
    there is a practical limit of 100 calls per interface iirc, since the reply identifiers are request + 100
    are we at it already?
    i remember i had problems with it, so probably
    but anyway, I'm not thinking about introducing yet another RPC, but getting a reasonable state of io_select
    i'll have to check that limit
    it looks wrong now
    or was it 50
    i don't remember :/
    i understand
    but what i can guarantee is that, while the api changes at the client side, it doesn't at the server side
    ideally, the client api of io_select could be left as it is, and libc use it as a simpleroutine
    sure, I understand that
    which means glibc, whether patched or not, still works fine with that call
    yes it could
    that's merely a performance optimization
    my concern is that an API depends on the presence of _HURD_IO_SELECT_ROUTINE, and backward compatibility being brought by defining it! :)
    yes
    i personally don't mind much
    I'd rather avoid the clutter
    what do you mean ?
    anything that avoids this situation
    like just using timeout = 0
    well, in that case, we'll have both a useless timeout at the client side and another call for truly passing a timeout
    that's also weird
    how so a useless timeout at the client side?
    22:39 < youpi> - err = __io_select (d[i].io_port, d[i].reply_port, 0, &type);
    0 here is the waittime parameter
    that's a 0-timeout
    and it will have to be 0
    yes
    that's confusing
    ah, you mean the two io_select calls?
    yes
    but isn't that necessary for the several-fd case, anyway?
    ?
    if the io_select calls are simpleroutines, this useless waittime parameter can just be omitted, like i did
    don't we *have* to make several calls when we select on several fds?
    sure
    but i don't see how it's related
    well then I don't see what optimization you are doing, except dropping a parameter, which does not bring much to my standard :)
    a simpleroutine makes mach_msg take a much shorter path
    that the 0-timeout doesn't take?
    yes
    it's a send | receive
    ok, but that's why I asked before
    so there are a bunch of additional checks until the timeout is handled
    whether timeout=0 means we can't get a receive, and thus the kernel could optimize
    that's not the same thing :)
    ok
    it's a longer path to the same result
    I'd really rather see glibc building its own private simpleroutine version of io_select
    iirc we already have such kind of thing
    ok
    well, there are io_request and io_reply defs, but i haven't seen them used anywhere
    but agreed, we should do that
    braunr: the prototype for io_select seems bogus in the io_request, id_tag is no more since ages :)
    youpi: yes
    youpi: i'll recreate my hurd branch with only one commit, without the routine/simpleroutine hack, and with time_data_t passed by address
    and perhaps other very minor changes
    braunr: the firstfd == -1 test needs a comment
    or better, i'll create a v2 branch to make it easy to compare them
    ok
    braunr: actually it's also the other branch of the if which needs a comment: "we rely on servers implementing the timeout"
    youpi: ok

        - (msg.success.result & SELECT_ALL) == 0)

    why removing that test?
    you also need to document the difference between got and ready
    hm, i'll have to remember
    i wrote this code like a year ago :)
    almost
    AIUI, got is the number of replies
    but i think it has to do with error handling
    and

        +         if (d[i].type)
        +           ++ready;

    while ready is the number of successful replies
    is what replaces it
    youpi: yes
    the poll wrapper already normalizes the timeout parameter to _hurd_select
    no
    you probably don't
    the whole point of the patch is to remove this ugly hack
    youpi: ok so
    23:24 < youpi> - (msg.success.result & SELECT_ALL) == 0)
    when a request times out
    ah, right
    we could get a result with no event and no error
    and this is what makes got != ready
    tell that to the source, not me :)
    sure :)
    i'm also saying it to myself ... :)
    right, using io_select_request() is only an optimization, which we can do later
    what i currently do is remove the waittime parameter from io_select
    what we'll do instead (soon) is leave the parameter there to keep the API unchanged
    but always use a waittime of 0 to make the mach_msg call non blocking
    then we'll try to get the io_request/io_reply definitions back so we can have simpleroutine (send only) versions of the io RPCs
    and we'll use io_select_request (without a waittime)
    youpi: is that what you understood too ?
    yes
    (and we can do that later)
    gnu_srs: does it make more sense for you ?
    this change is quite sparse, so it's not easy to get the big picture
    it requires changes in libpthread, the hurd, and glibc
    the libpthread change can be almost forgotten
    it's just yet another cond_foo function :)
    well, not if he's building his own packages
    right
    ok, apart from the io_select_request() and documenting the newer io_select_timeout(), the changes seem good to me
    youpi: actually, a send | timeout takes the slow path in mach_msg
    and i actually wonder if send | receive | timeout = 0 can get a valid reply from the server
    but the select code already handles that, so it shouldn't be much of a problem
    k

##### IRC, freenode, #hurd, 2013-02-12

    hum, io_select_timeout actually has to be a simpleroutine at the client side :/
    grmbl
    ah?
    otherwise it blocks
    how so? routines wait for replies even with timeout 0?
    there is no waittime for io_select_timeout
    adding one would be really weird
    oh, sorry, I thought you were talking about io_select
    it would be more interesting to directly use io_select_timeout_request
    but this means additional and separate work to make the request/reply defs up to date and used
    personally i don't mind, but it matters for wheezy
    youpi: i suppose it's not difficult to add .defs to glibc, is it ?
    i mean, make glibc build the stub code
    it's probably not difficult indeed
    ok, then it's better to do that first
    yes
    there's faultexec for instance
    in hurd/Makefile
    ok
    or rather, apparently it'd be simply user-interfaces
    it'll probably be linked into libhurduser
    but with an odd-enough name it shouldn't matter
    youpi: adding io_request to the list does indeed build the RPCs :)
    i'll write a patch to sync io/io_reply/io_request
    youpi: oh, by the way, i'm having a small issue with the io_{reply,request} interfaces
    the generated headers both share the same enclosing macro (_io_user)
    so i'm getting compiler warnings
    we could fix that quickly in mig, couldn't we?
    youpi: i suppose, yes, just mentioning

##### IRC, freenode, #hurd, 2013-02-19

    in the hurdselect.c code, I'd rather see td[0]. rather than td->
    ok
    otherwise it's frown-prone (it has just made me frown :) )
    yes, that looked odd to me too, but at the same time, i didn't want it to seem to contain several elements
    I prefer it to look like there could be several elements (and then the reader has to find out how many, i.e. 1), rather than it to look like the pointer is not initialized
    right
    I'd also rather move that code further, so the preparation can set timeout to 0 (needed for poll)
    how about turning your branch into a tg branch?
    feel free to add your modifications on top of it
    sure
    ok, I'll handle these then
    youpi: i made an updated changelog entry in the io_select_timeout_v3 branch
    could you rather commit that to the t/io_select_timeout branch I've just created?
    i mean, i did that a few days ago (in the .topmsg file)
    ah, k

##### IRC, freenode, #hurd, 2013-02-26

    youpi: i've just pushed a rbraun/select_timeout_pthread_v4 branch in the hurd repository that includes the changes we discussed yesterday
    untested, but easy to compare with the previous version

##### IRC, freenode, #hurd, 2013-02-27

    braunr: io_select_timeout seems to be working fine here
    braunr: I feel like uploading them to debian-ports, what do you think?
    youpi: the packages i rebuilt last night work fine too

# See Also

See also [[select_bogus_fd]] and [[select_vs_signals]].