Hello,
In my previous version of ParallelHashList, even though I used
lock striping, synchronizing access to the counter that tracks
the number of entries in the hashtable reintroduced the scalability
problem of exclusive locking: this counter was a hot field, because
every mutative operation needed to access it. In version 1.31, ParallelHashList
maintains an independent entry counter for each segment of the
hashtable, each guarded by its own lock, which scales better.
I have also changed ParallelHashList to use only 100 lightweight
MREWs (multiple-readers, exclusive-writer locks); this lowers memory
consumption while still allowing multiple threads to read and write
concurrently. Lock striping with 100 lightweight MREWs allows up to
100 parallel writes, and this upper bound on parallel writes does not
limit parallel reads, so both performance and scalability are good.
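To make the idea concrete, here is a minimal sketch of lock striping with per-segment entry counters. This is an illustration in Python, not the author's Pascal code; the class and method names are hypothetical, and plain exclusive locks stand in for the MREWs. The point it shows is that each segment updates its own counter under its own lock, so there is no single hot counter serializing all writers.

```python
import threading

class StripedCounterMap:
    """Sketch of lock striping with one entry counter per segment.
    Hypothetical illustration; not the ParallelHashList implementation."""

    def __init__(self, num_segments=100):
        self.num_segments = num_segments
        self.segments = [{} for _ in range(num_segments)]
        self.locks = [threading.Lock() for _ in range(num_segments)]
        # One counter per segment instead of one global hot counter.
        self.counts = [0] * num_segments

    def _segment(self, key):
        # Map the key to one of the stripes.
        return hash(key) % self.num_segments

    def put(self, key, value):
        i = self._segment(key)
        with self.locks[i]:            # only this segment is locked
            if key not in self.segments[i]:
                self.counts[i] += 1    # per-segment counter update
            self.segments[i][key] = value

    def get(self, key):
        i = self._segment(key)
        with self.locks[i]:
            return self.segments[i].get(key)

    def count(self):
        # The total size is the sum of the per-segment counters.
        return sum(self.counts)
```

Two writers touching keys in different segments take different locks and never contend on a shared counter; only a full `count()` reads across segments.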
Description:
A parallel HashList with O(1) best-case and O(log(n)) worst-case
access that uses lock striping with 100 lightweight MREWs
(multiple-readers, exclusive-writer locks), allowing multiple threads
to read and write concurrently. The lock striping allows up to 100
parallel writes, and this upper bound on parallel writes does not
limit parallel reads. ParallelHashList also maintains an independent
entry counter, with its own lock, for each segment of the hashtable,
again for better scalability.
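For readers unfamiliar with the MREW primitive the description relies on, here is a minimal sketch of a multiple-readers, exclusive-writer lock. This is a hypothetical Python illustration of the general technique, not the lightweight MREW shipped with ParallelHashList: any number of readers may hold the lock at once, while a writer waits until it has exclusive access.

```python
import threading

class MREW:
    """Sketch of a multiple-readers / exclusive-writer lock.
    Hypothetical illustration; not the ParallelHashList MREW."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0       # number of active readers
        self._writer = False    # True while a writer holds the lock

    def acquire_read(self):
        with self._cond:
            while self._writer:          # readers wait out any writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:       # last reader wakes waiting writers
                self._cond.notify_all()

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers > 0:
                self._cond.wait()        # writer needs exclusivity
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

With one such lock per stripe, reads on a segment proceed in parallel with each other, and only a write to that same segment is exclusive.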
You can download ParallelHashList version 1.31 from:
http://pages.videotron.com/aminer/

Language: FPC Pascal v2.2.0+ / Delphi 5+:
http://www.freepascal.org/

Operating Systems: Win, Linux and Mac (x86).
and please take a look at the benchmarks here:
http://pages.videotron.com/aminer/parallelhashlist/queue.htm

Note: when I ran those benchmarks, there were not yet enough items
organized as self-balancing trees in the individual chains of the
hashtable, so almost all items were found and inserted in O(1), and
the parallel part in the Amdahl equation was not much bigger than
the serial part. But you will notice in practice that as soon as you
have more items in the chains of the hashtable, organized as
self-balancing trees with an O(log(n)) worst case, the parallel part
of the Amdahl equation becomes bigger, and you will get better
performance and scalability than the numbers in the graph of the
benchmarks
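The Amdahl argument above can be checked numerically. The sketch below uses the standard form of Amdahl's law, where p is the fraction of the work that parallelizes and n is the number of parallel writers; the function name and sample values of p are my own illustration, not measurements from the benchmarks.

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup of a workload whose parallel
    fraction is p when run on n parallel units."""
    return 1.0 / ((1.0 - p) + p / n)

# With up to 100 parallel writes, growing the parallel fraction
# (e.g. because chain lookups do more parallel work) raises the
# achievable speedup:
for p in (0.90, 0.99):
    print(p, amdahl_speedup(p, 100))
```

So as the per-operation parallel work grows relative to the serial part, the same 100-way striping yields a noticeably higher speedup, which is the effect the note describes.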
...
Thank you.
Amine Moulay Ramdane.