
Author Topic: Can this algorithm be made more efficient?  (Read 18684 times)

segfault

  • Full Member
  • ***
  • Posts: 107
Can this algorithm be made more efficient?
« on: July 04, 2018, 12:00:46 pm »
The algorithm computes something called the "common language effect size" (CLES) in statistics. The core of it is simple: given two lists of numbers or "scores", you compare all pairs across the lists; if the score from list1 is higher than the score from list2 you add 1 to the numerator of the effect-size fraction, and if the scores are equal you add 0.5 to the numerator. Finally you divide the numerator by the total number of comparisons, length(list1) * length(list2).

Code: Pascal
const
  n1 = 12;
  n2 = 12;
  set1 : array[1..n1] of real = (18.7,20.3,20.8,18.3,16.4,16.8,
                                 17.2,19.1,17.9,19.8,18.2,19.1);
  set2 : array[1..n2] of real = (17.6,21.2,19.1,17.5,16.9,16.4,
                                 17.7,19.2,17.5,21.4,17.6,18.8);
var
  i,j   : integer;
  num,
  cles  : real;
begin
  num := 0;
  for i := 1 to n1 do
    for j := 1 to n2 do
      if set1[i] > set2[j] then
        num := num + 1
      else if set1[i] = set2[j] then
        num := num + 0.5;
  cles := num / (n1 * n2);
  writeln('Effect Size : ', cles:3:2)
end.

Two versions of this algorithm are available on GitHub in Python. The above algorithm is included but is described as "naive"; however, not knowing Python, I don't understand how the other, more efficient one works. With just a small number of values in the lists it's not too bad, but with thousands it will be very slow. Any suggestions? Thanks.

RayoGlauco

  • Full Member
  • ***
  • Posts: 176
  • Beers: 1567
Re: Can this algorithm be made more efficient?
« Reply #1 on: July 04, 2018, 12:49:09 pm »
It is very likely that sorting the two lists before starting to compare will allow a much more efficient algorithm.
To err is human, but to really mess things up, you need a computer.

Nitorami

  • Sr. Member
  • ****
  • Posts: 481
Re: Can this algorithm be made more efficient?
« Reply #2 on: July 04, 2018, 01:45:19 pm »
The naive approach determines the CL exactly by comparing each pair of samples. As the CL is a statistical measure, this should not normally be required. If you know the statistical distribution of your samples, you can apply a closed formula based on the mean and standard deviation, both of which can be calculated rather cheaply. See here:
http://core.ecu.edu/psyc/wuenschk/docs30/CL.pdf
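
For reference, a minimal sketch of that closed-form approach, assuming the McGraw & Wong formula CL = Phi((m1 - m2) / sqrt(s1^2 + s2^2)) described in that paper, with the normal CDF approximated via Abramowitz & Stegun 7.1.26 (Mean and StdDev come from FPC's Math unit):

Code: Pascal
uses Math;

// normal CDF via the Abramowitz & Stegun 7.1.26 approximation of erf
function NormalCDF(z: double): double;
const
  p  = 0.3275911;
  a1 = 0.254829592;  a2 = -0.284496736; a3 = 1.421413741;
  a4 = -1.453152027; a5 = 1.061405429;
var
  x, t, erfx: double;
begin
  x := Abs(z) / Sqrt(2.0);
  t := 1.0 / (1.0 + p * x);
  erfx := 1.0 - ((((a5*t + a4)*t + a3)*t + a2)*t + a1) * t * Exp(-x*x);
  if z >= 0 then
    Result := 0.5 * (1.0 + erfx)
  else
    Result := 0.5 * (1.0 - erfx);
end;

// CL = Phi((m1 - m2) / sqrt(s1^2 + s2^2)); only valid for roughly normal data
function NormalCL(const x1, x2: array of double): double;
begin
  Result := NormalCDF((Mean(x1) - Mean(x2)) /
                      Sqrt(Sqr(StdDev(x1)) + Sqr(StdDev(x2))));
end;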

segfault

  • Full Member
  • ***
  • Posts: 107
Re: Can this algorithm be made more efficient?
« Reply #3 on: July 04, 2018, 06:23:21 pm »
The naive approach determines the CL exactly by comparing each pair of samples. As the CL is a statistical measure, this should not normally be required. If you know the statistical distribution of your samples, you can apply a closed formula based on the mean and standard deviation, both of which can be calculated rather cheaply. See here:
http://core.ecu.edu/psyc/wuenschk/docs30/CL.pdf

Thanks, but my data usually isn't normal, so I need a nonparametric approach.

segfault

  • Full Member
  • ***
  • Posts: 107
Re: Can this algorithm be made more efficient?
« Reply #4 on: July 04, 2018, 06:23:47 pm »
It is very likely that sorting the two lists before starting to compare will allow a much more efficient algorithm.

I'll give it a try, thanks.

engkin

  • Hero Member
  • *****
  • Posts: 3112
Re: Can this algorithm be made more efficient?
« Reply #5 on: July 04, 2018, 06:50:58 pm »
To me this:
Code: Pascal
  for i := 1 to n1 do
    for j := 1 to n2 do
      if set1[i] > set2[j] then
        num := num + 1
      else if set1[i] = set2[j] then
        num := num + 0.5;

looks equivalent to:
Code: Pascal
  for i := 1 to n1 do
    for j := 1 to n2 do
      num := num + 0.5 + sign(set1[i] - set2[j])/2;

or:
Code: Pascal
  for i := 1 to n1 do
    for j := 1 to n2 do
      num := num + 1 + sign(set1[i] - set2[j]);
  num := num * 0.5;

or:
Code: Pascal
  for i := 1 to n1 do
    for j := 1 to n2 do
      num := num + sign(set1[i] - set2[j]);
  cles := 0.5*(num + (n1*n2)) / (n1 * n2);

Notice that num now does not need to be real. And sign could be optimized for the floating-point type used.
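
For instance, a sketch of the last variant with an integer accumulator (assuming Sign from the Math unit, which returns -1, 0 or 1 for a real argument):

Code: Pascal
  // assumes: uses Math; and num declared as Int64 instead of real
  num := 0;
  for i := 1 to n1 do
    for j := 1 to n2 do
      num := num + Sign(set1[i] - set2[j]); // Sign returns -1, 0 or 1
  cles := 0.5 * (num + n1 * n2) / (n1 * n2);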

1stEdit:
If the two sets were ordered, as RayoGlauco had suggested, I think you might not need the inner loop. IDK!

2ndEdit:
Actually you only need to sort *one* of the two sets.
« Last Edit: July 04, 2018, 07:06:15 pm by engkin »

howardpc

  • Hero Member
  • *****
  • Posts: 4144
Re: Can this algorithm be made more efficient?
« Reply #6 on: July 04, 2018, 07:13:54 pm »
It is very likely that sorting the two lists before starting to compare will allow a much more efficient algorithm.
I don't agree. Sorting the lists will just waste time and make no difference at all, since it does not reduce the number of brute-force comparisons needed; it simply does the comparisons in a different order, completely unnecessarily. I was quite wrong - sorting clearly makes an immediate difference.

Presumably "efficient" here means "faster". In the absence of an alternative to the brute-force algorithm, I see two possible ways to speed up the brute-force approach.
One is to speed up the comparisons. For instance, if all data are reals with one decimal place of precision, you could convert all the incoming data to integers ten times the size and do integer comparisons (which for loops of tens of thousands I would expect to be slightly quicker). The final proportion is unchanged, since only the outcomes of the comparisons matter.

Another (probably only worth the effort for tens of thousands) would be to reduce the number of comparisons by listing duplicates (for this you would need to sort the lists) and caching the comparison results. This would avoid repeating a series of comparisons whose outcome is already known and can simply be looked up. But this is very data-dependent: if your data has few duplicates (or none), this approach gains no speed and instead makes things slower because of all the set-up preliminaries needed.
« Last Edit: July 04, 2018, 10:15:10 pm by howardpc »

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9792
  • Debugger - SynEdit - and more
    • wiki
Re: Can this algorithm be made more efficient?
« Reply #7 on: July 04, 2018, 08:05:04 pm »
I would expect it to be possible to solve the problem in O(n + m) if both lists are sorted. (I know it should be written just O(n).)

Sorted low to high:
- start at the first n, comparing it to each m, as long as n <= m
- if the next n is equal to the current n, re-use the last iteration's result
- if the next n is bigger than the current n, then m continues from the last position (all previous m are known to be smaller; adjust the result accordingly)
Both datasets are traversed exactly once.

Sorting can be done in O(n log n).

The current task runs in O(n*m), which is considerably worse than two sorts plus an optimized compare.
« Last Edit: July 04, 2018, 08:08:04 pm by Martin_fr »

VTwin

  • Hero Member
  • *****
  • Posts: 1215
  • Former Turbo Pascal 3 user
Re: Can this algorithm be made more efficient?
« Reply #8 on: July 04, 2018, 08:23:46 pm »
One sort drops the comparison count in this example from 144 to 92:

Code: Pascal
function cles2(x1, x2: Vector; var cnt: integer): double;
var
  i, j, n1, n2: integer;
  num : double;
begin
  n1 := Length(x1);
  n2 := Length(x2);
  cnt := 0;
  Sort(x2);
  num := 0;
  for i := 0 to n1-1 do begin
    for j := 0 to n2-1 do begin
      cnt += 1;
      if x1[i] > x2[j] then
        num := num + 1
      else if x1[i] = x2[j] then
        num := num + 0.5
      else
        break; // x2 is sorted: every remaining x2[j] is larger, so stop early
    end;
  end;
  result := num / (n1 * n2);
end;

Note this is for a zero-based vector (dynamic array). I did not attempt optimizing with integer addition as per engkin, or a second sort as Martin suggests. The count does not include the comparisons required for sorting (in my case, heapsort).

How much more or less costly is Sign(x1 - x2) versus (x1 > x2)?
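
One crude way to find out would be a micro-benchmark; a hypothetical sketch (timing via GetTickCount64 from SysUtils; the constant operands are only illustrative, a real test should use array data to defeat the optimizer):

Code: Pascal
uses SysUtils, Math;
var
  i: integer;
  t0: QWord;
  acc: Int64;
  a, b: double;
begin
  a := 1.5; b := 2.5; acc := 0;
  t0 := GetTickCount64;
  for i := 1 to 100000000 do
    acc := acc + Sign(a - b);      // the Sign(x1 - x2) version
  writeln('Sign: ', GetTickCount64 - t0, ' ms (', acc, ')');
  acc := 0;
  t0 := GetTickCount64;
  for i := 1 to 100000000 do
    if a > b then acc := acc + 1;  // the plain comparison version
  writeln('>   : ', GetTickCount64 - t0, ' ms (', acc, ')');
end.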
« Last Edit: July 04, 2018, 08:44:57 pm by VTwin »
“Talk is cheap. Show me the code.” -Linus Torvalds

Free Pascal Compiler 3.2.2
macOS 12.1: Lazarus 2.2.6 (64 bit Cocoa M1)
Ubuntu 18.04.3: Lazarus 2.2.6 (64 bit on VBox)
Windows 7 Pro SP1: Lazarus 2.2.6 (64 bit on VBox)

Abelisto

  • Jr. Member
  • **
  • Posts: 91
Re: Can this algorithm be made more efficient?
« Reply #9 on: July 04, 2018, 10:52:10 pm »
Really nice topic!  :)

Merging both arrays into a single one and then sorting it makes it (at least in my tests) much faster. Note that the QuickSortSet3 procedure is modified a bit so that a value from set2 sorts before equal values from set1.
OS: Linux Mint + MATE, Compiler: FPC trunk (yes, I am risky!), IDE: Lazarus trunk

VTwin

  • Hero Member
  • *****
  • Posts: 1215
  • Former Turbo Pascal 3 user
Re: Can this algorithm be made more efficient?
« Reply #10 on: July 04, 2018, 11:30:28 pm »
Here is a better version following Martin's idea:

Code: Pascal
  1. function cles4(x1, x2: Vector; var cnt: integer): double;
  2. var
  3.   i, j, j0, n1, n2: integer;
  4.   incr, sum, num: double;
  5. begin
  6.   Sort(x1); // or require sorted input
  7.   Sort(x2);
  8.   n1 := Length(x1);
  9.   n2 := Length(x2);
  10.   cnt := 0;
  11.   num := 0.0;
  12.   i := 0;
  13.   j := 0;
  14.   j0 := 0;   // resume point in x2 for the next x1[i]
  15.   while i < n1 do begin
  16.     j := j0;
  17.     sum := j;  // x2[0..j0-1] are already known to be smaller
  18.     while j < n2 do begin
  19.       cnt += 1;
  20.       if x1[i] > x2[j] then
  21.         incr := 1.0
  22.       else if x1[i] = x2[j] then
  23.         incr := 0.5
  24.       else begin  // first x2[j] larger than x1[i]: stop scanning
  25.         j += 1;
  26.         break;
  27.       end;
  28.       sum += incr;
  29.       j += 1;
  30.     end;
  31.     num += sum;
  32.     i += 1;
  33.     if i < n1 then begin
  34.       if (x1[i] = x1[i-1]) then begin  // duplicate in x1: reuse sum
  35.         cnt += 1;
  36.         num += sum;
  37.         i += 1;
  38.       end else
  39.         j0 := j-1;
  40.     end;
  41.   end;
  42.   result := num / n1;
  43.   result := result / n2;
  44. end;

24 compares, including line 35 (edited).  n + m instead of n * m! :)

EDIT: lines 42-43 divide in two steps to avoid integer overflow in n1 * n2
« Last Edit: July 06, 2018, 01:01:33 am by VTwin »
“Talk is cheap. Show me the code.” -Linus Torvalds

Free Pascal Compiler 3.2.2
macOS 12.1: Lazarus 2.2.6 (64 bit Cocoa M1)
Ubuntu 18.04.3: Lazarus 2.2.6 (64 bit on VBox)
Windows 7 Pro SP1: Lazarus 2.2.6 (64 bit on VBox)

Martin_fr

  • Administrator
  • Hero Member
  • *
  • Posts: 9792
  • Debugger - SynEdit - and more
    • wiki
Re: Can this algorithm be made more efficient?
« Reply #11 on: July 05, 2018, 12:30:58 am »
24 compares, including line 35 (edited).  n + m instead of n * m! :)

The real difference will come when you work with large data.
Say both lists have 1,000 elements.

Originally that would be 1,000,000 compares.

But:
sort (1000 * 10 {= log2 1000}): 10,000
sort (1000 * 10 {= log2 1000}): 10,000
1000 + 1000 = 2,000
altogether: 22,000

That is a lot less. It is approx 1/40
(even if one compare may now take a bit longer)

and double the size of both lists:
Originally that would be 4,000,000 compares.
Sorted takes 48,000,
approx 1/80


And that is based on the sorting worst case. Many sorting methods have a worst case of O(n log n), but on average perform at an even better level.

VTwin

  • Hero Member
  • *****
  • Posts: 1215
  • Former Turbo Pascal 3 user
Re: Can this algorithm be made more efficient?
« Reply #12 on: July 05, 2018, 01:27:31 am »
Yes, that is huge. Not that I have had occasion to use this particular statistic, but I commonly calculate statistics on bootstrap arrays of 5000 or more data points. Minimizing iterations is always a good place to start. I am impressed by your insight into this problem.

And that is based on the sorting worst case. Many sorting methods have a worst case of O(n log n), but on average perform at an even better level.

I generally use heapsort, which has a worst case of O(n log n). It is also in-place, and is easy to code. I understand quicksort can be faster, but I've never had an issue throwing big arrays at heapsort.
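
For anyone curious, a minimal in-place heapsort sketch for a zero-based array of Double (names and layout are just for illustration, not the exact routine I use):

Code: Pascal
procedure SiftDown(var a: array of double; root, last: integer);
var
  child: integer;
  tmp: double;
begin
  while 2*root + 1 <= last do begin
    child := 2*root + 1;                     // left child
    if (child < last) and (a[child] < a[child+1]) then
      inc(child);                            // pick the larger child
    if a[root] >= a[child] then
      exit;                                  // heap property holds
    tmp := a[root]; a[root] := a[child]; a[child] := tmp;
    root := child;
  end;
end;

procedure HeapSort(var a: array of double);
var
  i: integer;
  tmp: double;
begin
  for i := (High(a) - 1) div 2 downto 0 do   // build a max-heap
    SiftDown(a, i, High(a));
  for i := High(a) downto 1 do begin         // move the current max to the end
    tmp := a[0]; a[0] := a[i]; a[i] := tmp;
    SiftDown(a, 0, i - 1);
  end;
end;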
“Talk is cheap. Show me the code.” -Linus Torvalds

Free Pascal Compiler 3.2.2
macOS 12.1: Lazarus 2.2.6 (64 bit Cocoa M1)
Ubuntu 18.04.3: Lazarus 2.2.6 (64 bit on VBox)
Windows 7 Pro SP1: Lazarus 2.2.6 (64 bit on VBox)

segfault

  • Full Member
  • ***
  • Posts: 107
Re: Can this algorithm be made more efficient?
« Reply #13 on: July 05, 2018, 10:21:50 am »
Wow! Thanks guys. I admit I was sceptical that sorting would make any difference since, as howardpc initially suggested, on the face of it sorting wouldn't reduce the number of comparisons needed. How wrong I was!

Is there a sorting unit so that I can just call quicksort (or heapsort) instead of writing it from scratch? I had a look in the docs but couldn't see anything...

For those who might be interested in knowing more about the statistic, I've attached a paper which describes it in detail along with some extensions of it. The authors have also made available a suite of programs written in R, as well as a separate program for calculating the confidence interval (also attached). If your data is normally distributed there's a much quicker way of calculating it (as Nitorami mentioned). See:

https://www.leeds.ac.uk/educol/documents/00002182.htm


taazz

  • Hero Member
  • *****
  • Posts: 5368
Re: Can this algorithm be made more efficient?
« Reply #14 on: July 05, 2018, 11:48:19 am »
Wow! Thanks guys. I admit I was sceptical that sorting would make any difference since, as howardpc initially suggested, on the face of it sorting wouldn't reduce the number of comparisons needed. How wrong I was!

Is there a sorting unit so that I can just call quicksort (or heapsort) instead of writing it from scratch?
Here is one of many reference implementations I keep. I don't know how efficient it is, but it is a good starting point.
Code: Pascal
Procedure QuickSort(Left, Right: LongInt; var X: array of LongInt);
Var
  i, j : LongInt;
  tmp, pivot : LongInt;  { tmp & pivot are the same type as the elements of the array }
begin
  i := Left;
  j := Right;
  pivot := X[(Left + Right) shr 1]; // pivot := X[(Left + Right) div 2]
  Repeat
    While pivot > X[i] Do i := i+1;
    While pivot < X[j] Do j := j-1;
    If i <= j Then Begin
      tmp := X[i];
      X[i] := X[j];
      X[j] := tmp;
      j := j-1;
      i := i+1;
    End;
  Until i > j;
  If Left < j Then QuickSort(Left, j, X);
  If i < Right Then QuickSort(i, Right, X);
end;
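
Note the element type here is LongInt; for the real-valued scores in this thread you would declare X, tmp and pivot as Double instead. A hypothetical call on a dynamic array named scores (open-array parameters are zero-based):

Code: Pascal
  QuickSort(0, High(scores), scores);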
Good judgement is the result of experience … Experience is the result of bad judgement.

OS : Windows 7 64 bit
Laz: Lazarus 1.4.4 FPC 2.6.4 i386-win32-win32/win64

 
