
Author Topic: A new and scalable FIFO fair lock  (Read 29712 times)

aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #15 on: April 23, 2014, 01:27:16 am »
Hello,


You have to know that a TicketSpinlock with a proportional backoff has a problem: with 4 threads on 4 cores the TicketSpinlock does its job very well, but as soon as you use more threads than cores it stops scaling. This problem does not happen with a simple Spinlock with a backoff; a simple Spinlock with a backoff keeps scaling even when the number of threads is greater than the number of cores. That's why I have used a simple Spinlock with a backoff inside my SemaMonitor and used it inside this fast concurrent FIFO queue, which still reaches a throughput of 6.4 million transactions per second even with my SemaMonitor in the path. Just test it yourself and you will notice that it's very fast, and the pushed and popped items are still handled in FIFO order, so I think that this fast concurrent FIFO queue is great.
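
To make the idea concrete, here is a minimal sketch of a test-and-set spinlock with exponential backoff. It is only an illustration of the general idea, not the exact code inside my SemaMonitor; the names TBackoffSpinlock, AcquireSpinlock and ReleaseSpinlock are made up for the example, and InterLockedExchange is the standard FPC intrinsic:

==

// Minimal sketch of a test-and-set spinlock with exponential backoff
// (illustration only, not the exact code of my SemaMonitor).
type
  TBackoffSpinlock = record
    FLock: longint;    // 0 = free, 1 = held
  end;

procedure AcquireSpinlock(var s: TBackoffSpinlock);
var
  delay, i: integer;
begin
  delay := 1;
  // Try to grab the lock; on failure, pause for a growing number of cycles
  // so the waiters do not hammer the cache line.
  while InterLockedExchange(s.FLock, 1) <> 0 do
  begin
    for i := 1 to delay do
      asm pause end;
    if delay < 1024 then delay := delay * 2;   // exponential backoff, capped
  end;
end;

procedure ReleaseSpinlock(var s: TBackoffSpinlock);
begin
  InterLockedExchange(s.FLock, 0);   // release the lock
end;

==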


You can download this fast concurrent FIFO queue from:

http://pages.videotron.com/aminer/



Thank you,
Amine Moulay Ramdane.





aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #16 on: April 23, 2014, 01:43:05 am »
Hello,

Question:

Amine, why must we use this fast concurrent FIFO queue instead of simply using a two-lock concurrent FIFO queue, that is, one lock around push() and one lock around pop()?


Answer:

With the two-lock method, that is, one lock around push() and one around pop(), you get less parallelism; the fast FIFO queue that I am presenting to you has more parallelism and it is very fast.
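
For reference, here is a rough sketch of the classic two-lock FIFO queue that I am comparing against: a linked list with a dummy node, one lock for push() and one lock for pop(). The names below are only illustrative, this is not code taken from my units:

==

// Rough sketch of the classic two-lock FIFO queue, shown only for comparison.
uses SyncObjs;

type
  PNode = ^TNode;
  TNode = record
    value: longint;
    next: PNode;
  end;

var
  FHead, FTail: PNode;                    // FHead points to a dummy node
  FHeadLock, FTailLock: TCriticalSection; // one lock per end of the queue

procedure InitQueue;
begin
  New(FHead);             // dummy node
  FHead^.next := nil;
  FTail := FHead;
  FHeadLock := TCriticalSection.Create;
  FTailLock := TCriticalSection.Create;
end;

procedure Push(v: longint);
var n: PNode;
begin
  New(n);
  n^.value := v;
  n^.next := nil;
  FTailLock.Acquire;      // only pushers contend on this lock
  FTail^.next := n;
  FTail := n;
  FTailLock.Release;
end;

function Pop(var v: longint): boolean;
var old: PNode;
begin
  FHeadLock.Acquire;      // only poppers contend on this lock
  if FHead^.next = nil then
  begin
    FHeadLock.Release;
    result := false;      // queue is empty
    exit;
  end;
  old := FHead;
  FHead := FHead^.next;   // the next node becomes the new dummy
  v := FHead^.value;
  FHeadLock.Release;
  Dispose(old);
  result := true;
end;

==

As you can see, one push and one pop can run in parallel, but all pushers serialize on one lock and all poppers on the other; that is the parallelism limit I am talking about.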


You can download this fast concurrent FIFO queue version 1.0 from:

http://pages.videotron.com/aminer/


Thank you,
Amine Moulay Ramdane.





aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #17 on: April 23, 2014, 02:21:20 am »

Hello,


Question:

Amine, we are kind of lost: what is this waitfree concurrent FIFO queue, and what is a lockfree concurrent FIFO queue?


Answer:

We have to be smart and understand that waitfree and lockfree algorithms maximize parallelism, but a lockfree algorithm suffers from starvation and is less efficient than other algorithms in a scenario of high contention. A lockfree algorithm also cannot be used in realtime systems where every operation must complete in a finite number of steps within a finite time; a lockfree algorithm cannot guarantee that. And what about waitfree? As I said, a waitfree algorithm also maximizes parallelism, but in addition it is starvation-free: it finishes an operation such as the push() or pop() of a concurrent FIFO queue in a finite number of steps, and it stays efficient in a scenario with high contention. So I have presented to you a concurrent FIFO queue that is waitfree, but the number of threads has to be fixed in order to calculate the number of steps that push() and pop() take in a realtime system.


I have also presented to you a faster concurrent FIFO queue that is not waitfree, but as I said before it has more parallelism than the two-lock algorithm, the items are still pushed and popped in FIFO order, and it scales well even when the number of threads is greater than the number of cores.
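
To make the difference concrete, here is a tiny illustration with a shared counter. It is not taken from my queues; it only shows why a CAS retry loop is lockfree but not waitfree, while a single atomic fetch-and-add is waitfree (InterLockedCompareExchange and InterLockedIncrement are the standard FPC intrinsics):

==

// Illustration only: a lockfree update versus a waitfree update of a counter.
var
  counter: longint = 0;

// Lockfree: the CAS can fail and be retried an unbounded number of times
// under contention, so an individual thread has no bound on its steps
// (it can starve), even though the system as a whole keeps making progress.
procedure LockfreeIncrement;
var old: longint;
begin
  repeat
    old := counter;
  until InterLockedCompareExchange(counter, old + 1, old) = old;
end;

// Waitfree: a single atomic fetch-and-add completes in a bounded number of
// steps for every thread, with no retry loop at all.
procedure WaitfreeIncrement;
begin
  InterLockedIncrement(counter);
end;

==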

You can download all those concurrent FIFO queues from my website:

http://pages.videotron.com/aminer/




Thank you,
Amine Moulay Ramdane.



aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #18 on: April 23, 2014, 08:40:54 pm »
Hello,

About the fast concurrent FIFO queue 1.0: as you have noticed, I have used a Ticket Spinlock with a proportional backoff in my other concurrent FIFO queues, and it gave a throughput of 3.2 million transactions per second on my 2.4 GHz quad-core. That is a decent throughput, but since those queues use a Ticket Spinlock they cannot scale to a number of threads greater than the number of cores; the 3.2 million transactions per second were measured with the number of threads equal to the number of cores. This is the weakness of the Ticket Spinlock, of array-based locks, and of queue locks such as the MCS and CLH locks: they do not scale when the number of threads is greater than the number of cores.

So I have decided to use a Spinlock with a backoff inside my SemaMonitor; the backoff greatly amortizes the cache-line transfers, so even though I am using my SemaMonitor inside this fast concurrent FIFO queue it gave me a throughput of 6.4 million transactions per second on my 2.4 GHz quad-core, which is a very good throughput. In addition, this simple spinlock with a backoff lets my concurrent FIFO queue scale even when the number of threads is greater than the number of cores. But note that, since I am using a backoff inside the simple spinlock used by my SemaMonitor, under high contention some threads can be held back by the backoff from time to time while other threads run; this can be a problem for scenarios that must not allow it, and if you need to avoid it, just use my other concurrent FIFO queues.
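
For comparison, here is a sketch of the general shape of a ticket spinlock with proportional backoff. It is only an illustration, not the code of my units, and the names are made up for the example; it shows why this lock is FIFO fair but stops scaling once the thread holding the next ticket gets preempted:

==

// Sketch of a ticket spinlock with proportional backoff (illustration only).
// Each thread takes a ticket and spins roughly in proportion to how far its
// ticket is from the one being served; if the thread holding the current
// ticket is preempted because there are more threads than cores, everybody
// behind it keeps waiting.
type
  TTicketLock = record
    NextTicket, NowServing: longint;
  end;

procedure AcquireTicket(var L: TTicketLock);
var
  myTicket, distance, i: longint;
begin
  myTicket := InterLockedIncrement(L.NextTicket) - 1;   // take a ticket
  while L.NowServing <> myTicket do
  begin
    distance := myTicket - L.NowServing;   // how many holders are ahead of us
    for i := 1 to 40 * distance do         // proportional backoff
      asm pause end;
  end;
end;

procedure ReleaseTicket(var L: TTicketLock);
begin
  InterLockedIncrement(L.NowServing);      // serve the next ticket in FIFO order
end;

==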


You can download all those concurrent FIFO queues from my website:

http://pages.videotron.com/aminer/




Thank you,
Amine Moulay Ramdane.



aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #19 on: April 23, 2014, 09:19:09 pm »
Hello,


I have studied deeply the concurrent FIFO queue that uses the bakery algorithm by Chris Thomasson, and I have written it in Object Pascal with all the alignment and cache padding done. I have noticed that it scored 4.8 million transactions per second on the pop() side on my 2.4 GHz quad-core, so even though it scores more on the push() side, under high contention the overall throughput is limited by the pop() side; the Chris Thomasson concurrent FIFO queue is therefore limited to 4.8 million transactions per second under high contention on my 2.4 GHz quad-core... But here is the great news: I have added my SemaMonitor, which uses a simple spinlock with a backoff, to this concurrent FIFO queue, and this time it scored 6.4 million transactions per second on the pop() side on my 2.4 GHz quad-core, which is more than the Chris Thomasson concurrent FIFO queue that uses the bakery algorithm...

So I think that this fast concurrent FIFO queue that uses my SemaMonitor is still great and useful...

You can download all those concurrent FIFO queues from my website:

http://pages.videotron.com/aminer/




Thank you,
Amine Moulay Ramdane.





aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO queue version 1.0
« Reply #20 on: April 23, 2014, 09:45:09 pm »

Hello,

Here is the Chris Thomasson concurrent FIFO queue that uses the bakery algorithm:

https://groups.google.com/forum/#!topic/lock-free/acjQ3-89abE/discussion



Thank you,
Amine Moulay Ramdane.


aminer

  • Hero Member
  • *****
  • Posts: 956
A fast concurrent FIFO Queue that uses a two locks algorithm...
« Reply #21 on: April 24, 2014, 08:36:26 pm »

Hello,

A fast concurrent FIFO Queue that uses a two locks algorithm...


Author: Amine Moulay Ramdane


Description:

A concurrent FIFO queue that satisfies many requirements: it is FIFO fair, it efficiently minimizes cache-coherence traffic, and it is energy efficient on the pop() side: when there are no items in the queue it does not spin-wait but blocks on my SemaMonitor.

You have three options for choosing the kind of lock; just look inside defines.inc. If you want my array-based lock called AMLock, uncomment the AMLock option inside defines.inc; if you want the Ticket Spinlock, uncomment the TicketSpinlock option; if you want the Spinlock, uncomment the Spinlock option. The Spinlock gives better performance under contention: it scored 12.5 million transactions per second on my 2.4 GHz quad-core, while the Ticket Spinlock option scored 3.2 million transactions per second. The Spinlock scales even when the number of threads is greater than the number of cores; the TicketSpinlock and AMLock do not scale in that case, and the Ticket Spinlock and the scalable AMLock are optimal when the number of threads equals the number of cores.
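
For example, choosing the lock in defines.inc looks roughly like this; the exact spelling of the conditional symbols is an assumption here, so verify them against the defines.inc in the zipfile:

==

// Illustrative content of defines.inc: exactly one of the three options
// should be left uncommented (the symbol names are assumed, check the file).

//{$DEFINE AMLock}           // my scalable array based lock
//{$DEFINE TicketSpinlock}   // ticket spinlock with proportional backoff
{$DEFINE Spinlock}           // simple spinlock with backoff (fastest under contention)

==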

The size of the queue must be passed to the constructor and it must be a power of 2.
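
The reason for the power-of-2 requirement is that the ring index can then be computed with a cheap AND mask instead of a modulo, for example:

==

program MaskDemo;
// Why the size must be a power of 2: the index into the ring buffer can be
// computed with an AND mask instead of a modulo (illustrative numbers).
const
  QueueSize = 1024;                  // must be a power of 2
  QueueMask = QueueSize - 1;         // 1023 = all low bits set
var
  position: longint;
begin
  position := 5000;
  writeln(position and QueueMask);   // same result as position mod QueueSize
end.

==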


You can download this fast concurrent FIFO Queue from:

http://pages.videotron.com/aminer/


Please take a look at the test.pas Object Pascal demo inside the zipfile, then compile and run it...

Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/

Operating Systems: Windows, Mac OSX , Linux , Unix...


Required FPC switches: -O3 -Sd -dFPC -dFreePascal

-Sd for delphi mode....

{$DEFINE CPU32} and {$DEFINE Windows32} for 32 bit systems

{$DEFINE CPU64} and {$DEFINE Windows64} for 64 bit systems




Thank you,
Amine Moulay Ramdane.


aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A fast concurrent FIFO Queue that uses a two locks algorithm...
« Reply #22 on: April 24, 2014, 08:48:17 pm »

Hello,

As you have noticed, I have implemented a fast concurrent FIFO queue that uses a two-lock algorithm together with my SemaMonitor (a combination of a semaphore and an eventcount). I have benchmarked my concurrent FIFO queue using my SemaMonitor and also using the Windows semaphore, and I have noticed that with my SemaMonitor the queue is about 2x faster than with the Windows semaphore.
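
For readers who want to see what the wait/signal interface looks like, here is a hypothetical minimal Windows-only semaphore wrapper with the same interface the queue expects. It is not my SemaMonitor (which combines a semaphore with an eventcount and is faster in my benchmark); it only illustrates the interface, and it uses the standard Windows API:

==

// Hypothetical minimal semaphore wrapper exposing the wait/signal interface
// used by the queue (Windows only, illustration; not my SemaMonitor).
uses Windows;

type
  TSimpleSema = class
  private
    FHandle: THandle;
  public
    constructor Create;
    destructor Destroy; override;
    procedure wait;      // block until signal was called at least once
    procedure signal;    // wake one waiting thread
  end;

constructor TSimpleSema.Create;
begin
  FHandle := CreateSemaphore(nil, 0, MaxLongint, nil);
end;

destructor TSimpleSema.Destroy;
begin
  CloseHandle(FHandle);
  inherited Destroy;
end;

procedure TSimpleSema.wait;
begin
  WaitForSingleObject(FHandle, INFINITE);
end;

procedure TSimpleSema.signal;
begin
  ReleaseSemaphore(FHandle, 1, nil);
end;

==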


So I hope you will be satisfied with this fast concurrent FIFO queue, because it satisfies many requirements.

You can download this fast concurrent FIFO Queue from:

http://pages.videotron.com/aminer/


Thank you,
Amine Moulay Ramdane.
 

aminer

  • Hero Member
  • *****
  • Posts: 956
A new algorithm for a very fast concurrent FIFO queue
« Reply #23 on: April 24, 2014, 11:55:14 pm »
Hello,

Chris Thomasson has not allowed me to make his concurrent FIFO queue that uses the bakery algorithm available to the Object Pascal community, but don't worry: I have thought about it all day and I have finally been able to invent a new and very fast concurrent FIFO queue that is better than the two-lock algorithm and better than my other concurrent FIFO queue that uses a ticket mechanism on the pop() side. My new algorithm eliminates the ticket mechanism and uses only an atomic increment on the push() side, which allows the queue to scale even when the number of threads is greater than the number of cores. On the pop() side it uses only a CAS, and I have benchmarked it: it is very fast, as fast as the Chris Thomasson concurrent FIFO queue that uses the bakery algorithm. You also have to know that my new and very fast concurrent FIFO queue has more parallelism than the two-lock algorithm, so it is a much better design than the two-lock algorithm.

A very fast concurrent FIFO Queue version 1.0

Author: Amine Moulay Ramdane

Description:

A very fast concurrent FIFO queue that satisfies many requirements: it has more parallelism than the two-lock algorithm, it is FIFO fair, it efficiently minimizes cache-coherence traffic, and it is energy efficient on the pop() side if you set the wait parameter to true in the constructor: when there are no items in the queue it does not spin-wait but blocks on my SemaMonitor. When the wait parameter of the constructor is set to false it uses only an atomic increment on the push() side and a CAS on the pop() side, so it is very fast.

You have three options for choosing the kind of lock; just look inside defines.inc. If you want my array-based lock called AMLock, uncomment the AMLock option inside defines.inc; if you want the Ticket Spinlock, uncomment the TicketSpinlock option; if you want the Spinlock, uncomment the Spinlock option. The Spinlock gives better performance under contention: it scored 6.4 million transactions per second on my 2.4 GHz quad-core, while the Ticket Spinlock option scored 2.6 million transactions per second. The Spinlock scales even when the number of threads is greater than the number of cores; the TicketSpinlock and AMLock do not scale in that case, and the Ticket Spinlock and the scalable AMLock are optimal when the number of threads equals the number of cores. When the wait parameter of the constructor is false, the queue scales even when the number of threads is greater than the number of cores.

The size of the queue must be passed to the constructor and it must be a power of 2.
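
Here is a hypothetical usage example; the exact constructor signature and parameter order are assumptions on my part, so check test.pas in the zipfile for the real one:

==

// Hypothetical usage (the constructor parameters shown here are assumptions,
// check test.pas in the zipfile for the real signature).
var
  q: TWQueue;
  v: long;
begin
  // size must be a power of 2; wait = true makes pop() block on the
  // SemaMonitor instead of returning false when the queue is empty.
  q := TWQueue.Create(1024, true);
  if q.push(123) then
    writeln('pushed');
  if q.pop(v) then
    writeln('popped ', v);
end;

==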

You can download this new algorithm of a very fast concurrent FIFO queue from:

http://pages.videotron.com/aminer/

Please take a look at the test.pas Object Pascal demo inside the zipfile, then compile and run it...

Language: FPC Pascal v2.2.0+ / Delphi 7+: http://www.freepascal.org/

Operating Systems: Windows, Mac OSX , Linux , Unix...


Required FPC switches: -O3 -Sd -dFPC -dFreePascal

-Sd for delphi mode....

{$DEFINE CPU32} and {$DEFINE Windows32} for 32 bit systems

{$DEFINE CPU64} and {$DEFINE Windows64} for 64 bit systems


Thank you, 
Amine Moulay Ramdane.

aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A new algorithm for a very fast concurrent FIFO queue
« Reply #24 on: April 25, 2014, 01:04:59 am »
Hello,

After inventing all the variants of my scalable RWLock, a new algorithm for a very fast concurrent FIFO queue, AMLock (a scalable array-based lock), my Semcondvar (a combination of a semaphore and a condition variable), and SemaMonitor (a combination of a semaphore and an eventcount), I am now thinking about my next invention, so stay tuned, my dear programmers...


You can download all my programming inventions from:

http://pages.videotron.com/aminer/


You are free to use them and to port them to other languages...




Thank you for your time,

Amine Moulay Ramdane.





aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A new algorithm for a very fast concurrent FIFO queue
« Reply #25 on: April 25, 2014, 02:16:41 am »
Hello,

Question:

Amine, how can we know that the algorithm of your new concurrent FIFO queue works? Is it formally proven?


Answer:

You can simply reason about the algorithm to show that it is correct; I will do it in front of you...

You can download the source code of the algorithm from here:

http://pages.videotron.com/aminer/

We begin with the push() method; its source code looks like this:


===

function TWQueue.push(tm : long):boolean;
var lastHead,newtemp:long;
    i,j:integer;
begin
  // Refuse the push when the queue is considered full
  // (fsize already leaves a safety margin, see below).
  if getlength >= fsize then
  begin
    result:=false;
    exit;
  end;

  // Atomically reserve a slot: fetch-and-increment of "head".
  newTemp:=LockedIncLong(head);
  lastHead:=newtemp-1;

  // Wait until the reserved cell is free (flag = 0)...
  repeat
    asm pause end;
  until fcount1^[lastHead and fMask].flag=0;
  // ...then write the item and publish it by setting the flag to 1.
  setObject(lastHead,tm);
  fcount1^[lastHead and fMask].flag:=1;
  // Wake a blocked popper when the queue is in blocking mode.
  if fwait then sema.signal;
  result:=true;
end;

===


you have to know that "fsize" is fixed to the length of the queue
minus a "margin" that is equal to 1000 , that means that it's limited to 1000 threads in total that can run at the same time the push(), but you can vary the margin to higher the number of threads that can
run the push at the same time...

So if getlength equal fsize-1 that means that 1000 threads can cross
because "if getlength >= fsize" can be false at the same time, but
even if it is false at the same time , there a margin of 1000 threads,
so my reasonning is correct here.
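For example, with illustrative numbers: a queue of 65536 cells and a margin of 1000 gives fsize = 65536 - 1000 = 64536, so even if 1000 pushing threads all pass the "getlength >= fsize" test at the same instant, they only consume the 1000 spare cells and never overwrite an unread item.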

Now we look at the rest of the code...

Every cell of the array-based queue looks like this:

type cell = record
    obj:long;          // the item stored in this cell
    flag:long;         // 0 = cell is empty, 1 = cell holds an item
    {$IFDEF CPU32}
    cache:typecache2;  // padding up to a full cache line on 32-bit
    {$ENDIF CPU32}
    {$IFDEF CPU64}
    cache:typecache3;  // padding up to a full cache line on 64-bit
    {$ENDIF CPU64}
    end;

It is padded to a cache-line size and the array is aligned on 64 bytes in order to avoid false sharing...
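
To show what the padding achieves, here is an illustrative 32-bit variant with the sizes spelled out; the real typecache2/typecache3 types are defined in the downloaded unit and may differ:

==

// Illustration of the padding idea for a 64-byte cache line on a 32-bit
// system (the size below is only an example, not the real typecache2).
type
  typecache2_example = array[1..56] of byte;   // 64 - 2*sizeof(longint) = 56

  cell_example = record
    obj:  longint;               // 4 bytes of payload
    flag: longint;               // 4 bytes of payload
    cache: typecache2_example;   // padding so each cell fills its own cache line
  end;

==

With one cell per cache line, a producer writing cell i and a consumer reading cell j never invalidate each other's cache line, which is exactly the false sharing we want to avoid.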

After that we increment the "head" like this with an atomic increment...

newTemp:=LockedIncLong(head);
lastHead:=newtemp-1;

and if fcount1^[lastHead and fMask].flag=0, that means there is no item in the cell, so we write the item into the cell like this:

setObject(lastHead,tm);

and after that we set the flag of the cell to 1 so that pop() knows it can read from it:

fcount1^[lastHead and fMask].flag:=1;

So as you have noticed, I have reasoned about and explained the push() side, and I think my reasoning is correct here.


Now let's look at the pop() side; here is its code:

===

function TWQueue.pop(var obj:long):boolean;
var lastTail,newtemp,newtemp1,newtemp2 : long;
begin
  // In blocking mode, wait on the SemaMonitor until an item was pushed.
  if fwait then sema.wait;

  // Lockfree claim of a tail index: advance "tail" with a CAS,
  // but never move it past "head".
  repeat
    newtemp1:=tail;
    newtemp2:=newtemp1+1;
    if newtemp2<=head then
     else
      begin
       result:=false;   // queue is empty
       exit;
      end;
    if CAS(tail,newtemp1,newtemp2) then break;
    sleep(0);
  until false;

  lastTail:=newtemp1;
  // Wait until the producer has published the item in our cell (flag = 1)...
  repeat
    asm pause end;
  until fcount1^[lastTail and fMask].flag=1;
  // ...then read it and mark the cell free again.
  obj:=getObject(lastTail);
  fcount1^[lastTail and fMask].flag:=0;
  result:=true;
end;

==

The very first thing to know on the pop() side is that we must increment the "tail" every time, but how can we increment the tail without going past the "head"? This is why in my invention I have used a lockfree mechanism like this:

==

repeat

newtemp1:=tail;
newtemp2:=newtemp1+1;

if newtemp2<=head then
 else
  begin
   result:=false;
   exit;
  end;
if CAS(tail,newtemp1,newtemp2) then break;
sleep(0);
until false;

==

It's like in lockfree algorithms: I copy and remember the "tail" in the newtemp1 variable and then increment it, so if newtemp1+1 is less than or equal to "head" we are fine and have not gone past the head; after that, with a CAS we test that tail is still equal to newtemp1 before incrementing the "tail", so my reasoning is correct here.

After that we have to wait to see if there is an item in the cell by
doing this:

repeat
asm pause end;
until fcount1^[lastTail and fMask].flag=1;


Once there is an item in the cell we get it and set the cell's flag to 0 to tell the push() threads that they can push to this cell again:


obj:=getObject(lastTail);
fcount1^[lastTail and fMask].flag:=0;



So as you have noticed, this is my new algorithm, and I have presented my reasoning to show you that it is correct; I think my algorithm is correct.



Thank you,
Amine Moulay Ramdane.

aminer

  • Hero Member
  • *****
  • Posts: 956
Re: A new algorithm for a very fast concurrent FIFO queue
« Reply #26 on: April 25, 2014, 02:58:20 am »
Hello...

I want to share with you this beautiful song from Morocco; it is like my new concurrent FIFO queue algorithm. I was born in Morocco and I am Moroccan...

https://www.youtube.com/watch?v=EDW4fQ-68mA&hd=1


Thank you,
Amine Moulay Ramdane.

aminer

  • Hero Member
  • *****
  • Posts: 956
Here is my new algorithm
« Reply #27 on: April 25, 2014, 10:04:22 pm »
Hello...


Please read all of the following post to better understand my new algorithm...

Yesterday I presented to you my algorithm of a concurrent FIFO queue, but it was not working correctly on the pop() side; look at the pop():

==
function TWQueue.pop(var obj:long):boolean;
var lastTail,newtemp,newtemp1,newtemp2 : long;
begin
  if fwait then sema.wait;
  result:=true;

  // Claim a tail index with a CAS first...
  repeat
    newtemp1:=tail;
    newtemp2:=newtemp1+1;
    if newtemp2<=head then
     else
      begin
       result:=false;
       exit;
      end;
    if CAS(tail,newtemp1,newtemp2) then break;
    sleep(0);
  until false;

  // ...and only afterwards wait for the item and read it:
  // this ordering is the problem described below.
  lastTail:=newtemp1;
  repeat
    if fcount1^[lastTail and fMask].flag=1 then break;
    sleep(0);
  until false;
  obj:=getObject(lastTail);
  fcount1^[lastTail and fMask].flag:=0;
end;
==

I am updating the tail with a CAS first and getting the item only after that; that is not correct, because another popping thread can get in between and it will not work, so I have decided to invent another algorithm that solves this problem. Here it is:

We begin with the push() method; its source code looks like this:


===

function TWQueue.push(tm : long):boolean;
var lastHead,newtemp:long;
    i,j:integer;
begin
  // Refuse the push when the queue is considered full
  // (fsize already leaves a safety margin, see below).
  if getlength >= fsize then
  begin
    result:=false;
    exit;
  end;

  // Atomically reserve a slot: fetch-and-increment of "head".
  newTemp:=LockedIncLong(head);
  lastHead:=newtemp-1;

  // Wait until the reserved cell is free (flag = 0)...
  repeat
    asm pause end;
  until fcount1^[lastHead and fMask].flag=0;
  // ...then write the item and publish it by setting the flag to 1.
  setObject(lastHead,tm);
  fcount1^[lastHead and fMask].flag:=1;
  // Wake a blocked popper when the queue is in blocking mode.
  if fwait then sema.signal;
  result:=true;
end;

===


you have to know that "fsize" is fixed to the length of the queue
minus a "margin" that is equal to 1000 , that means that it's limited to 1000 threads in total that can run at the same time the push(), but you can vary the margin to higher the number of threads that can
run the push() at the same time...

So if getlength equal fsize-1 that means that 1000 threads can cross
because "if getlength >= fsize" can be false at the same time, but
even if it is false at the same time , there a margin of 1000 threads,
so my reasonning is correct here.

Now we look at the rest of the code...

Every cell of the array-based queue looks like this:

type cell = record
    obj:long;          // the item stored in this cell
    flag:long;         // 0 = cell is empty, 1 = cell holds an item
    {$IFDEF CPU32}
    cache:typecache2;  // padding up to a full cache line on 32-bit
    {$ENDIF CPU32}
    {$IFDEF CPU64}
    cache:typecache3;  // padding up to a full cache line on 64-bit
    {$ENDIF CPU64}
    end;

It is padded to a cache-line size and the array is aligned on 64 bytes in order to avoid false sharing...

After that we increment the "head" like this with an atomic increment...

newTemp:=LockedIncLong(head);
lastHead:=newtemp-1;

and if fcount1^[lastHead and fMask].flag=0, that means there is no item in the cell, so we write the item into the cell like this:

setObject(lastHead,tm);

and after that we set the flag of the cell to 1 so that pop() knows it can read from it:

fcount1^[lastHead and fMask].flag:=1;

So as you have noticed, I have reasoned about and explained the push() side, and I think my reasoning is correct here.


Now here is the new pop() method...

==
function TWQueue.pop(var obj:long):boolean;
var lastTail,newtemp1,newtemp2: long;
    i,t:integer;
begin
  // In blocking mode, wait on the SemaMonitor until an item was pushed.
  if fwait then sema.wait;
  result:=true;

  repeat
    newtemp1:=tail;
    newtemp2:=newtemp1+1;
    if newtemp2<=head then
     else
      begin
       result:=false;   // queue is empty
       exit;
      end;
    // Wait until the cell at our candidate index holds an item,
    // or until another popper has already advanced the tail.
    repeat
      if fcount1^[newtemp1 and fMask].flag=1 then break;
      sleep(0);
    until ((false) or (newtemp1<>tail)) ;

    // Read the item and free the cell, then try to publish the new tail;
    // if the CAS fails, another popper won the race, so back off and retry.
    obj:=getObject(newtemp1);
    fcount1^[newtemp1 and fMask].flag:=0;
    if CAS(tail,newtemp1,newtemp1+1) then break;
    sleep(0);
    t:=backoff.delay;
    for i:=0 to 40*t do asm pause end;
  until false;
end;


==


So as you have noticed, we have to reason about this algorithm to show that everything is correct, so please follow along with me...

The very first thing to know on the pop() side is that we must increment the "tail" every time, but how can we increment the tail without going past the "head"? This is why in my invention I have used a lockfree mechanism. It's like in lockfree algorithms: I copy and remember the "tail" in the newtemp1 variable and then increment it, so if newtemp1+1 is less than or equal to "head" we are fine and have not gone past the head; after that, with a CAS we test that tail is still equal to newtemp1 before incrementing the "tail", so my reasoning is correct here. But you have to know that the threads on the push() side set the flag to 1 in their corresponding cells after they have stored their items, so on the pop() side we test with a "repeat until" that fcount1^[newtemp1 and fMask].flag equals 1, and if it is 1 we continue; after that we get our item from the cell and set fcount1^[newtemp1 and fMask].flag to 0. Notice that even if we set fcount1^[newtemp1 and fMask].flag to 0 before incrementing tail, that is not a problem, because we have a margin of 1000 on the push() side: the threads on the push() side stop pushing when "getlength >= fsize", and fsize is equal to the length of the queue minus a "margin" of 1000.

So as you have noticed, I have reasoned about my algorithm, and I think it is correct now.


Finally, I have benchmarked my algorithm without using my SemaMonitor, and it gives 2x more throughput on the pop() side than the Chris Thomasson concurrent queue that uses the bakery algorithm: my new algorithm scored 10 million pop() transactions per second and 6.4 million push() transactions per second on my 2.4 GHz quad-core. That's very fast, it scales even when the number of threads is greater than the number of cores, and it has more parallelism than the two-lock algorithm.


You can download my new algorithm of a very fast concurrent FIFO queue
from:

http://pages.videotron.com/aminer/



Thank you,
Amine Moulay Ramdane.

aminer

  • Hero Member
  • *****
  • Posts: 956
Re: Here is my new algorithm
« Reply #28 on: April 25, 2014, 11:34:29 pm »
Hello...

Read Chris Thomasson here; my reply follows:

https://groups.google.com/forum/#!topic/comp.programming.threads/Fxr7v-RF9Q4

Hello Chris Thomasson, you have suggested to me to use the Relay Detector, but you have to know that good reasoning is also effective; it's like a formal proof: what you have to do is concentrate on the very important and difficult parts, while the easy parts are easy to prove...

What are the difficult parts to prove in my algorithm?

Look inside the source code at:

http://pages.videotron.com/aminer/

The first difficult part is the lockfree mechanism...

So notice that my lockfree mechanism keeps the tail from going past the "head"; I have just explained to you that it works correctly, and since I know very well what lockfree algorithms are, you can be confident that it works. The other difficult part is that I set fcount1^[newtemp1 and fMask].flag to 0 inside the lockfree transaction, before incrementing "tail", on the pop() side; we can justify this one easily because the push() side will never go beyond "fsize + margin", that is, fsize + 1000. The remaining part is the push() side, which is easy to prove, and I will not prove it here because I know it works correctly. So I think you can be confident in my algorithm; I think it is correct.

You can download my new algorithm of a very fast concurrent FIFO queue
from:

http://pages.videotron.com/aminer/


Thank you,
Amine Moulay Ramdane.



 

aminer

  • Hero Member
  • *****
  • Posts: 956
Re: Here is my new algorithm
« Reply #29 on: April 26, 2014, 12:16:27 am »

Hello,


Here is my webpage that explains my new algorithm; I have cleaned up all the typos, and I think my explanation is clearer now...

http://pages.videotron.com/aminer/CQueue1.htm



Thank you,
Amine Moulay Ramdane.


 
