author     Thomas Schwinge <thomas@schwinge.name>    2011-07-08 18:02:38 +0200
committer  Thomas Schwinge <thomas@schwinge.name>    2011-07-08 18:02:38 +0200
commit     9f26077b50c2c621d3b73ffadb1010f0bcc52380 (patch)
tree       a4ab1c4df7889c5038ede3427995a73fd406168b
parent     67d369a640f64a7076bbd1605b712c41879bd94f (diff)
IRC.
-rw-r--r--  open_issues/libpthread_pthread_key_create_reuse.mdwn   47
-rw-r--r--  open_issues/rework_gnumach_ipc_spaces.mdwn            220
2 files changed, 267 insertions, 0 deletions
diff --git a/open_issues/libpthread_pthread_key_create_reuse.mdwn b/open_issues/libpthread_pthread_key_create_reuse.mdwn
new file mode 100644
index 00000000..11c9a821
--- /dev/null
+++ b/open_issues/libpthread_pthread_key_create_reuse.mdwn
@@ -0,0 +1,47 @@
+[[!meta copyright="Copyright © 2011 Free Software Foundation, Inc."]]
+
+[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
+id="license" text="Permission is granted to copy, distribute and/or modify this
+document under the terms of the GNU Free Documentation License, Version 1.2 or
+any later version published by the Free Software Foundation; with no Invariant
+Sections, no Front-Cover Texts, and no Back-Cover Texts.  A copy of the license
+is included in the section entitled [[GNU Free Documentation
+License|/fdl]]."]]"""]]
+
+[[!meta title="libpthread: pthread_key_create, reuse"]]
+
+IRC, FreeNode, #hurd, 2011-07-02
+
+[[!tag open_issue_libpthread]]
+
+    < pinotree> hm, maybe i found a libpthread bug
+    * pinotree tries a testcase
+    < pinotree> yesssss, found the bug :)
+    < pinotree> youpi: it's a problem of the key reuse in pthread_key_create()
+    < youpi> it doesn't reset it?
+    < youpi> were you looking at the licq issue?
+    < pinotree> no, gtest
+    < youpi> k
+    < youpi> licq has a failing threadspecific issue
+    < youpi> [ FAILED ] ThreadSpecificData.dataDeletedWhenThreadExits
+    < pinotree> basically, pthread_key_delete() does not delete the key values
+      from the "thread_specifics" ihash
+    < pinotree> but those were new keys, so i'm not sure it is allowed to
+      return values of previous keys?
+    < pinotree> after all, the actual key value is an implementation detail,
+      applications shouldn't care about it being reused
+    < pinotree> (imho)
+    < youpi> Upon key creation, the value NULL shall be associated with the new
+      key in all active threads.
+    < youpi> ok, so we have to clear it in all threads
+    < youpi> that's a pity
+    < pinotree> or just remove the entry from the hash on key removal
+    < youpi> pinotree: from all the hashes, you mean?
+    < pinotree> youpi: from how i see it, adding a snippet like
+      http://paste.debian.net/121690/ in pthread_key_delete() should do the job
+    < youpi> that only drops from the current thread
+    < pinotree> ah hm, other threads
+    < youpi> we need to drop from all threads
+    < youpi> that's the pity part
+    < pinotree> youpi: the licq case could look like a similar issue, at a
+      veeery quick glance
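
The bug discussed above boils down to this: pthread_key_delete() leaves the old
per-thread values in each thread's "thread_specifics" hash, so when
pthread_key_create() later hands out the same key slot again, a thread can see
a stale value instead of the NULL that POSIX mandates for a freshly created
key.  A minimal sketch of a test that would expose such a reuse bug follows;
it is not the actual gtest or licq test case, only an illustration:

    /* Sketch of a test for the key-reuse problem described above.  It only
       checks that a freshly created key starts out NULL even when an earlier
       key with a value was deleted and its slot may be reused. */
    #include <assert.h>
    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
        pthread_key_t key;
        int value = 42;

        /* Create a key and associate a value with it in this thread. */
        assert(pthread_key_create(&key, NULL) == 0);
        assert(pthread_setspecific(key, &value) == 0);

        /* Delete the key; the stored value must become unreachable. */
        assert(pthread_key_delete(key) == 0);

        /* Create another key.  If the implementation reuses the old slot
           without clearing the per-thread hashes, the stale pointer shows up
           here; POSIX requires NULL for a newly created key. */
        assert(pthread_key_create(&key, NULL) == 0);
        if (pthread_getspecific(key) != NULL)
            printf("BUG: new key returned a stale value\n");
        else
            printf("OK: new key starts out NULL\n");

        pthread_key_delete(key);
        return 0;
    }

As the log notes, a fix in pthread_key_delete() has to drop the value from
every thread's hash, not only the calling thread's, which is why the pasted
snippet above was not sufficient.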
diff --git a/open_issues/rework_gnumach_ipc_spaces.mdwn b/open_issues/rework_gnumach_ipc_spaces.mdwn
index b7cda227..f5d40abd 100644
--- a/open_issues/rework_gnumach_ipc_spaces.mdwn
+++ b/open_issues/rework_gnumach_ipc_spaces.mdwn
@@ -318,3 +318,223 @@ IRC, freenode, #hurd, 2011-04-23
 <antrik> braunr: idr?
 <braunr> antrik: a data structure used to map integers to pointers
 <braunr> http://fxr.watson.org/fxr/source/lib/idr.c?v=linux-2.6
+
+2011-06-18
+
+    < braunr> hmm, i'm having a problem with integrating my radix tree code in
+      gnumach
+    < braunr> inserting into such a tree can trigger memory allocation
+    < braunr> so commonly, the tree is loaded with nodes before insertion,
+      usually if it requires strong locking
+    < braunr> ipc spaces are locked using "simple locks" (which are spin locks)
+    < braunr> but spin locks are noops on UP, and gnumach is always UP ..
+    < braunr> so, should i still include preloading code, even if it'll end up
+      dead code ?
+    < antrik> hm... I think we discussed this before; but isn't gnumach
+      supposed to be SMP-capable, minus bugs?...
+    < braunr> it is
+    < braunr> but ofc, if i choose not to include preloading, i'll write
+      #errors so that the day gnumach is built for SMP again, such support
+      will be included
+    < antrik> oh, sorry, I think I misread. what is UP?
+    < braunr> uniprocessor
+    < antrik> well, if it's just bugs forcing the current UP state, I think
+      saying that gnumach is always UP is a stretch...
+    < braunr> sure, but it's a practical consideration
+    < antrik> does the locking complicate stuff? or is it just performance
+      considerations?
+    < braunr> no it's about correctness and completeness
+    < braunr> if you don't preload a tree before locking
+    < braunr> and memory allocation occurs while you're holding a simple lock
+    < braunr> and memory allocation requires the kernel to sleep
+    < braunr> you're screwed
+    < braunr> but i hate the idea of including code that won't be used and
+      which won't be easy to test
+    < braunr> so i'm wondering if it's ok for now to just put this in a TODO
+      comment and write it when the time is right
+    < braunr> or if i should spend the week adding this and tweaking the
+      userspace implementation to "emulate" spin locks
+    < antrik> well, it's a tricky situation. on one hand, it seems stupid to
+      include handling for something that presently isn't used, and it's not
+      clear when it will be. on the other hand, I'd rather not see additional
+      problems introduced that will make fixing SMP even harder...
+    < braunr> that's why i'm asking here
+    < antrik> of course, you could resolve this question by fixing SMP
+      first... ;-)
+    < braunr> ew
+    < antrik> well, I guess it would be best first to make the code work... and
+      we can still decide about the locking thing before it goes mainline I'd
+      say?
+    < braunr> "make the code work" ?
+    < antrik> I mean make gnumach work with your radix tree code
+    < braunr> without preloading then
+    < antrik> yeah... as a first step... I guess adding it later won't be any
+      harder than adding it right now?
+    < braunr> not much
+    < braunr> testing is what requires time really
+
+2011-06-27
+
+    < braunr> ok, here is the radix tree code:
+      http://git.sceen.net/rbraun/libbraunr.git/
+    < braunr> the preloading stuff will be added in the kernel only, as it's
+      really pointless and not easily doable in userspace
+    < youpi> preloading?
+    < braunr> youpi: yes, preloading
+    < braunr> radix trees allocate external nodes
+    < youpi> well, providing a url at some random time of some random day is
+      not a great way to get eyes on it :)
+    < braunr> and ipc spaces are locked when inserting/allocating names
+    < braunr> we normally don't need preloading in gnumach
+    < braunr> since there is no preemption nor SMP
+    < braunr> but in case someone changes that, i'd like the code to be mostly
+      ready
+    < braunr> and correctly handle those ugly simple locks
+    < braunr> youpi: is what i say clear enough or do you need more background
+      on what is done ?
+    < youpi> about preloading?
+    < braunr> yes
+    < youpi> I guess it means allocating nodes in advance?
+    < braunr> yes
+    < youpi> k
+    < braunr> before locking the ipc spaces
+
+2011-06-28
+
+    < braunr> antrik: i think i won't write the code for the preloading stuff
+      actually
+    < braunr> antrik: it's not very difficult, but i really hate the idea of
+      not being able to reliably test it
+    < braunr> antrik: and i'd rather concentrate on integrating the radix tree
+      code in gnu mach now
+    < braunr> (i've already removed much code, even some files which weren't
+      actually used before my changes !)
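
The preloading idea braunr describes above -- allocate the radix tree nodes an
insertion might need before taking the simple (spin) lock, because the
allocator may sleep and sleeping under a spin lock is not allowed -- can be
sketched as follows.  The cache structure, the lock, and the function names
are illustrative placeholders, not the actual gnumach or libbraunr interfaces
(a pthread mutex stands in for the kernel's simple lock):

    /* Sketch of the "preload, then lock" pattern: reserve nodes while
       sleeping is still allowed, then insert under the lock without ever
       calling the allocator.  All names here are illustrative. */
    #include <pthread.h>
    #include <stdlib.h>

    #define PRELOAD_MAX 4                   /* worst-case nodes per insertion */

    struct node_cache {
        void *nodes[PRELOAD_MAX];
        int nr;
    };

    /* May allocate and therefore sleep: call with no spin lock held. */
    static int tree_preload(struct node_cache *cache)
    {
        while (cache->nr < PRELOAD_MAX) {
            void *node = malloc(64);        /* stand-in for the kernel allocator */
            if (node == NULL)
                return -1;
            cache->nodes[cache->nr++] = node;
        }
        return 0;
    }

    /* Consumes only preloaded nodes, so it never sleeps: safe under the lock. */
    static int tree_insert_preloaded(struct node_cache *cache,
                                     unsigned long name, void *right)
    {
        void *node = cache->nr > 0 ? cache->nodes[--cache->nr] : NULL;
        if (node == NULL)
            return -1;
        /* ... link the node into the radix tree for (name, right) ... */
        (void)name; (void)right;
        return 0;
    }

    static pthread_mutex_t space_lock = PTHREAD_MUTEX_INITIALIZER;

    int space_insert(struct node_cache *cache, unsigned long name, void *right)
    {
        if (tree_preload(cache) != 0)       /* 1. allocate, may sleep      */
            return -1;
        pthread_mutex_lock(&space_lock);    /* 2. only then take the lock  */
        int error = tree_insert_preloaded(cache, name, right);
        pthread_mutex_unlock(&space_lock);  /* 3. unused nodes stay cached */
        return error;
    }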
+    < braunr> hmm, i won't be able not to write the preloading code after all
+    < antrik> braunr: not able not to write? how's that?
+    < braunr> antrik: it's actually required
+    < braunr> there are three functions, ipc_entry_get, ipc_entry_alloc, and
+      ipc_entry_grow_table
+    < braunr> ipc_entry_get cannot allocate memory
+    < braunr> if it fails, ipc_entry_grow_table is called, which will allocate
+      memory
+    < braunr> ipc_entry_alloc calls both of them depending on the result of
+      ipc_entry_get
+    < braunr> this is the equivalent of the preloading thing i had in mind
+    < braunr> not a bad thing after all
+    < braunr> the only thing i'm afraid of are the "optimized" versions of
+      those ipc functions in the so-called fast paths
+    < braunr> i'm afraid if i don't deal right with those, the kernel may end
+      up using mostly slow paths
+    < braunr> but considering the purpose of those fast paths was merely to
+      avoid the overhead of function calls and some locking functions, it
+      shouldn't be that bad
+    < braunr> this is such a mess eh
+    < antrik> hurray microoptimisations ;-)
+    < braunr> there, the preload functions are done, easy :)
+    < antrik> braunr: seems you spent less time implementing it than pondering
+      whether you should implement it ;-)
+    < braunr> well, i couldn't implement it correctly before knowing what
+      should have been done exactly
+    < braunr> and there are still other problems :/
+    < braunr> and the other problems make me reconsider if this was useful at
+      all eh
+    < braunr> youpi: i'm unable to find where ipc tree entries are released
+      except in ipc_entry_alloc_name(), which could mean they're leaked ...
+    < braunr> youpi: would you have time to take a look ?
+    < youpi> they aren't in ipc_entry_dealloc() ?
+    < braunr> no .....
+    < youpi> it's not so improbable that they're only freed when the task
+      quits
+    < braunr> i don't see that either
+    < braunr> i only see them released in ipc_entry_alloc_name()
+    < braunr> so maybe they're reused
+    < braunr> but i'm not sure about that when reading the code
+    < braunr> oh wait, yes, they are :/
+    < braunr> my bad
+    < youpi> in the ipc_splay_tree_* functions I guess?
+    < braunr> yes
+    < braunr> it's just surprising to see them allocated outside the tree code
+      only
+    < braunr> but released in both the entry and the splay tree code ...
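
The split braunr describes above is essentially preloading already built into
the IPC entry code: ipc_entry_get() is the path that never allocates,
ipc_entry_grow_table() is the path that does, and ipc_entry_alloc() retries
after growing.  A rough sketch of that control flow follows; the signatures
are simplified stand-ins, since the real gnumach functions deal with the space
lock and return Mach error codes:

    /* Rough sketch of the ipc_entry_alloc() control flow described above.
       Simplified: the real gnumach code retries with the space unlocked
       while the table is grown, and uses kern_return_t error codes. */
    #include <stddef.h>

    struct ipc_space;
    struct ipc_entry;

    /* Never allocates memory: returns a free entry from the table, or NULL. */
    struct ipc_entry *ipc_entry_get(struct ipc_space *space);

    /* Grows the table; the only step that allocates and may sleep. */
    int ipc_entry_grow_table(struct ipc_space *space);

    struct ipc_entry *ipc_entry_alloc(struct ipc_space *space)
    {
        for (;;) {
            /* Fast path: grab a free slot without touching the allocator. */
            struct ipc_entry *entry = ipc_entry_get(space);
            if (entry != NULL)
                return entry;

            /* Slow path: grow the table, then retry the fast path. */
            if (ipc_entry_grow_table(space) != 0)
                return NULL;            /* out of memory */
        }
    }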
+
+2011-06-29
+
+    < braunr> hmm i missed an important thing :/
+    < braunr> and it's scary
+    < braunr> it looks like the splay tree is mainly used when names are
+      provided
+    < braunr> whereas the entry table is used when names are allocated
+    < braunr> which means the table is the main ipc data structure, even for
+      tasks with lots of rights
+    < braunr> i can make my root ext2fs have more than 10k rights, and i see
+      the ipc table grow along that number ...
+    < braunr> now the table has 15k+ entries
+    < braunr> IOW there is no point to put the radix tree code in gnumach :(
+    < antrik> braunr: what do you mean by "provided" and "allocated"?
+    < antrik> and what is that table you are talking about?
+    < braunr> antrik: provided means the user space task gives the name of the
+      new right
+    < braunr> antrik: allocated means the kernel generates it
+    < braunr> antrik: the table i'm talking about is is_table in struct
+      ipc_space
+    < braunr> 55 * Every space has a non-NULL is_table with is_table_size
+      entries.
+    < braunr> 56 * A space may have a NULL is_tree.  is_tree_small records the
+    < braunr> 57 * number of entries in the tree that, if the table were to
+      grow
+    < braunr> 58 * to the next larger size, would move from the tree to the
+      table.
+    < braunr> here is the description which misled me (in addition to the
+      obscure code)
+    < braunr> 50 * Spaces hold capabilities for ipc_object_t's (ports and
+      port sets).
+    < braunr> 51 * Each ipc_entry_t records a capability.  Most capabilities
+      have
+    < braunr> 52 * small names, and the entries are elements of a table.
+    < braunr> 53 * Capabilities can have large names, and a splay tree holds
+    < braunr> 54 * those entries.  The cutoff point between the table and the
+      tree
+    < braunr> 55 * is adjusted dynamically to minimize memory consumption.
+    < antrik> ah, so the rights with a low name are in a linear table, and
+      only those with "random" high names are stored in the splay tree
+      instead?
+    < antrik> seems a rather complex design... I guess though there isn't much
+      room for further optimisation there :-(
+    < antrik> (well, except for code size optimisation -- which could in fact
+      make a considerable difference...)
+    < braunr> well there are problems with this approach, but most don't
+      concern performance
+    < braunr> when the table gets big (close to the page size or more), it
+      gets remapped when reallocated
+    < braunr> which will incur some penalty because of the tlb
+    < braunr> but it's annoying even for small tables
+    < braunr> the initial table size is 4 entries, and from what i can see,
+      most tables are 128 entries wide when tasks are destroyed
+    < braunr> an obvious simple optimization is to set a larger default size
+    < braunr> the same applies for the dead name tables
+    < braunr> those reallocations are a pain, and they're due to this design
+    < braunr> they can also fail because of fragmentation
+    < braunr> there would be a point to radix trees if they would replace all
+      that, and not just the splay tree
+    < braunr> but this would cause a lot of changes in a lot of places, and in
+      particular the "optimized" fast paths i mentioned yesterday
+    < braunr> we'll see how they perform in x15 :>
+    < braunr> there is a slight noticeable improvement when increasing the
+      initial size of the entry table
+    < antrik> braunr: well, if you use them in a completely different
+      implementation, there will be no way of telling whether they make a
+      difference
+    < antrik> how did you test the improvement?
+    < braunr> antrik: no actually it's completely negligeable
+    < braunr> hm
+    < braunr> is that a valid word ? :)
+    < braunr> negligible
+    < braunr> youpi: did you see my comments about the ipc stuff earlier
+      today ?
+    < braunr> youpi: well to make things short, when port names are allocated,
+      the right they refer to is allocated from the ipc table
+    < braunr> youpi: the splay tree is only used for user provided names
+    < braunr> youpi: i had tables as large as the number of rights in a space
+      (i could easily reach 20k)
+    < braunr> youpi: whereas the splay trees had at most ~40 entries ..
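
For reference, the layout discussed above can be condensed into a small
sketch: entries with small (typically kernel-allocated) names live in the
linear is_table, and entries with large user-provided names go through the
is_tree splay tree.  The types and the lookup below are simplified
illustrations of that split, not the actual gnumach definitions:

    /* Simplified picture of the is_table / is_tree split described above.
       Illustrative only: the real struct ipc_space and lookup logic in
       gnumach are more involved (locking, free lists, dead names, ...). */
    #include <stddef.h>

    struct ipc_entry {
        void *ie_object;                /* the capability this entry records */
    };

    struct ipc_space {
        struct ipc_entry *is_table;     /* dense array for small names       */
        size_t is_table_size;           /* number of slots currently mapped  */
        void *is_tree;                  /* splay tree for larger names       */
    };

    /* Small names index the table directly; everything else has to take the
       slower path through the tree. */
    struct ipc_entry *space_lookup(struct ipc_space *space, size_t name)
    {
        if (name < space->is_table_size)
            return &space->is_table[name];

        /* ... search space->is_tree for 'name' (omitted) ... */
        return NULL;
    }

This also shows why growing is_table means reallocating and remapping one
contiguous array -- the TLB and fragmentation cost complained about above --
and why a radix tree would only pay off if it replaced both structures rather
than just the splay tree.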