path: root/hurd/libihash.mdwn
author	Thomas Schwinge <thomas@codesourcery.com>	2014-02-26 15:20:53 +0100
committer	Thomas Schwinge <thomas@codesourcery.com>	2014-02-26 15:20:53 +0100
commit	93060a3967ef66873d6246b0b1228c57aed2d9e4 (patch)
tree	0dda55d9eaa0fdf687acc80ac2329bdf42c6a652 /hurd/libihash.mdwn
parent	ca63bd2d33b3d28eabd50ad58577b52a1fc9eba0 (diff)
parent	c4ad3f73033c7e0511c3e7df961e1232cc503478 (diff)
Merge remote-tracking branch 'feldtkeller.SCHWINGE/master'
Conflicts:
	news/2011-q2.mdwn
	open_issues/glibc.mdwn
	open_issues/versioning.mdwn
Diffstat (limited to 'hurd/libihash.mdwn')
-rw-r--r--	hurd/libihash.mdwn	54
1 file changed, 39 insertions(+), 15 deletions(-)
diff --git a/hurd/libihash.mdwn b/hurd/libihash.mdwn
index be20fa58..57588d55 100644
--- a/hurd/libihash.mdwn
+++ b/hurd/libihash.mdwn
@@ -1,4 +1,4 @@
-[[!meta copyright="Copyright © 2009, 2010, 2011, 2012, 2013 Free Software
+[[!meta copyright="Copyright © 2009, 2010, 2011, 2012, 2013, 2014 Free Software
Foundation, Inc."]]
[[!meta license="""[[!toggle id="license" text="GFDL 1.2+"]][[!toggleable
@@ -22,23 +22,44 @@ License|/fdl]]."]]"""]]
# Open Issues
- * [[viengoos libhurd-ihash|microkernel/viengoos/projects/new_hash_function]]
+## Collisions
- IRC, freenode, #hurd, 2008/2009
+Viengoos: [[microkernel/viengoos/projects/new_hash_function]].
- <neal> so, we need a new ihash implementation
- <neal> marcusb: When 80% full, the collision rate is very high.
- <neal> marcusb: I tested using 512mb / 4096 entries
- <neal> marcusb: Changing the load factor to 30% resulted in my program
- running more than an order of magnitude faster.
- <marcusb> yeah, it shouldn't get so full
- <marcusb> don't we do an exponential back-off in the array size?
- <marcusb> of course it's clear we can do much better
- <marcusb> the ihash algo is very simple
- <marcusb> I'm not even sure it makes much sense to have a generic
- library
- * [[community/gsoc/project_ideas/object_lookups]]
+### IRC, freenode, #hurd, 2008/2009
+
+ <neal> so, we need a new ihash implementation
+ <neal> marcusb: When 80% full, the collision rate is very high.
+ <neal> marcusb: I tested using 512mb / 4096 entries
+ <neal> marcusb: Changing the load factor to 30% resulted in my program
+ running more than an order of magnitude faster.
+ <marcusb> yeah, it shouldn't get so full
+ <marcusb> don't we do an exponential back-off in the array size?
+ <marcusb> of course it's clear we can do much better
+ <marcusb> the ihash algo is very simple
+ <marcusb> I'm not even sure it makes much sense to have a generic
+ library
+
+
+## Reader-Writer Locks
+
+### IRC, freenode, #hurd, 2013-12-09
+
+ <teythoon> btw, why don't we use rwlocks for serializing access to our
+ hash tables ?
+ <braunr> teythoon: we definitely could
+ <teythoon> ok
+ <braunr> teythoon: we definitely could use rcu *whistles*
+ <teythoon> should we ?
+ <braunr> i don't know
+ <teythoon> yeah, ofc
+ <braunr> rwlocks have some overhead compared to mutexes
+ <braunr> and our mutexes are already quite expensive
+ <braunr> our condition variables are also not optimized
+
+
+# [[community/gsoc/project_ideas/Object_Lookups]]
# Alternatives?
@@ -64,3 +85,6 @@ License|/fdl]]."]]"""]]
* <http://www.azillionmonkeys.com/qed/hash.html>
* CCAN's htable, idtree
+
+ * Not actually use a hashing data structure; see [[libports]], *Open Issues*,
+ *IRC, freenode, #hurd, 2013-11-14*.