### Author Topic: Can this algorithm be made more efficient?  (Read 7557 times)

#### segfault

• Jr. Member
• Posts: 62
##### Re: Can this algorithm be made more efficient?
« Reply #15 on: July 05, 2018, 01:14:25 pm »
Thanks taazz.

#### 440bx

• Sr. Member
• Posts: 487
##### Re: Can this algorithm be made more efficient?
« Reply #16 on: July 05, 2018, 02:30:20 pm »
You got a lot of good and very complete suggestions.  I'd like to add a couple more.

I recommend you get a copy of Robert Sedgewick's Algorithms book.  If you get the 2nd edition, which you can probably find very cheap since it is two editions behind, the algorithms are presented in Pascal pseudocode.  That makes it really easy and quick to implement the algorithms he presents.  He also discusses each algorithm's running time, its worst case, and any other details one should be aware of when choosing an algorithm.  Of course, every flavor of sort is included, along with a lucid presentation of the pros and cons of each.  Normal people breathe air; programmers breathe algorithms.

A PDF copy of Sedgewick's 4th edition (C pseudocode) is available for free on GitHub and from many educational institutions.  Can't say the same for the 2nd edition, which is my favorite.

In addition to the above, which you weren't asking for, I'm under the impression that the problem you present may be susceptible to further optimization.  Since my experience with the problem is limited to the code you posted, the following suggestions are simply food for thought; IOW, you need to determine whether they apply to what you want to do.

The suggestion you were given of sorting the arrays is a good one.  If both arrays are sorted, then instead of using a (short-circuited) linear search for each element of one array in the other, it looks like it would be possible to use a slightly modified binary search.  If there is no match, the crossed indexes would tell you how many elements are greater and how many are lower than the value you searched for.  Knowing this, it might be possible to simply calculate cnt, num and sum.  Same thing if there is a match, except that there is an element that is equal and the calculation can be adjusted accordingly.
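To illustrate the idea (a hypothetical sketch, not code from this thread; the function name is made up), a binary search can be written to return the number of elements strictly below the key, so the greater/lower counts fall out of the final index without any scanning:

Code: Pascal  [Select]
```pascal
// Hypothetical sketch: returns how many elements of A (sorted
// ascending) are strictly less than Key. The counts of equal and
// greater elements can then be derived without a linear scan.
function RankLess(const A: array of Integer; Key: Integer): Integer;
var
  Lo, Hi, Mid: Integer;
begin
  Lo := 0;
  Hi := Length(A);          // half-open search range [Lo, Hi)
  while Lo < Hi do
  begin
    Mid := (Lo + Hi) div 2;
    if A[Mid] < Key then
      Lo := Mid + 1
    else
      Hi := Mid;
  end;
  Result := Lo;             // number of elements < Key
end;
```

A second search with `<=` in place of `<` gives the count of elements not greater than Key; the difference between the two results is the number of equal elements.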

Another thing that comes to mind, which might be a possibility to further optimize the algorithm, is to not use real numbers.  The numbers you show in your array are well within the range of a 32-bit integer, and comparing integers is faster than comparing reals.  If integers are usable, it may even be possible to tweak the algorithm to use XMM (SSE) instructions, which would allow you to compare multiple values at once.  This last thought might end up complicating things to the point that it isn't worth it but, it's worth looking into just in case.

Lastly, while Quicksort is often the fastest sorting algorithm, its worst case is O(n²).  Avoiding the worst case requires more complex pivot-selection schemes, and Quicksort's performance can degrade very quickly if pivot selection is less than optimal.  Heapsort is usually not as fast but it is less "temperamental" than Quicksort.  There is reasonably good pseudocode for a Heapsort on Wikipedia.  There are plenty of binary search examples around too (there is likely one on Wikipedia... one-stop shop.)

Hopefully, there is something in this message you can use.

« Last Edit: July 05, 2018, 02:39:29 pm by 440bx »

#### taazz

• Hero Member
• Posts: 5363
##### Re: Can this algorithm be made more efficient?
« Reply #17 on: July 05, 2018, 02:57:47 pm »
Here, have a heapsort as well, just in case:
Code: Pascal  [Select]
procedure BottomUpHeapsort(var Data: array of integer);

   procedure Sink(Index, Arraylength: integer);
   var
      item, leftChild, sinkIndex, rightChild, parent: integer;
      done: boolean;
   begin
      sinkIndex := Index;
      item := Data[Index];
      done := False;

      while not done do begin // search the sink path and move up all items
         leftChild := (sinkIndex * 2) + 1;
         rightChild := (sinkIndex * 2) + 2;

         if rightChild <= Arraylength then begin
            if Data[leftChild] < Data[rightChild] then begin
               Data[sinkIndex] := Data[rightChild];
               sinkIndex := rightChild;
            end
            else begin
               Data[sinkIndex] := Data[leftChild];
               sinkIndex := leftChild;
            end;
         end
         else begin
            done := True;

            if leftChild <= Arraylength then begin
               Data[sinkIndex] := Data[leftChild];
               sinkIndex := leftChild;
            end;
         end;
      end;

      // move up the current item
      Data[sinkIndex] := item;
      done := False;

      while not done do begin
         parent := (sinkIndex - 1) div 2;
         if (Data[parent] < Data[sinkIndex]) and (parent >= Index) then begin
            item := Data[parent];
            Data[parent] := Data[sinkIndex];
            Data[sinkIndex] := item;
            sinkIndex := parent;
         end
         else
            done := True;
      end;
   end;

var
   x, b: integer;
begin
   // first make the array a heap
   for x := (High(Data) - 1) div 2 downto Low(Data) do
      Sink(x, High(Data));

   // do the bottom-up heapsort
   for x := High(Data) downto Low(Data) + 1 do begin
      b := Data[x];
      Data[x] := Data[Low(Data)];
      Data[Low(Data)] := b;
      Sink(Low(Data), x - 1);
   end;
end;

// alternative

program HeapSortDemo;

type
  TIntArray = array[4..15] of integer;

var
  data: TIntArray;
  i: integer;

procedure siftDown(var a: TIntArray; start, ende: integer);
var
  root, child, swap: integer;
begin
  root := start;
  // child index is computed relative to the array base low(a)
  while root * 2 - low(a) + 1 <= ende do
  begin
    child := root * 2 - low(a) + 1;
    if (child + 1 <= ende) and (a[child] < a[child + 1]) then
      inc(child);
    if a[root] < a[child] then
    begin
      swap := a[root];
      a[root] := a[child];
      a[child] := swap;
      root := child;
    end
    else
      exit;
  end;
end;

procedure heapify(var a: TIntArray);
var
  start, count: integer;
begin
  count := length(a);
  start := low(a) + count div 2 - 1;
  while start >= low(a) do
  begin
    siftDown(a, start, high(a));
    dec(start);
  end;
end;

procedure heapSort(var a: TIntArray);
var
  ende, swap: integer;
begin
  heapify(a);
  ende := high(a);
  while ende > low(a) do
  begin
    swap := a[low(a)];
    a[low(a)] := a[ende];
    a[ende] := swap;
    dec(ende);
    siftDown(a, low(a), ende);
  end;
end;

begin
  Randomize;
  writeln('The data before sorting:');
  for i := low(data) to high(data) do
  begin
    data[i] := Random(high(data));
    write(data[i]:4);
  end;
  writeln;
  heapSort(data);
  writeln('The data after sorting:');
  for i := low(data) to high(data) do
    write(data[i]:4);
  writeln;
end.
Good judgement is the result of experience … Experience is the result of bad judgement.

OS : Windows 7 64 bit
Laz: Lazarus 1.4.4 FPC 2.6.4 i386-win32-win32/win64

#### Martin_fr

• Hero Member
• Posts: 5109
##### Re: Can this algorithm be made more efficient?
« Reply #18 on: July 05, 2018, 03:11:29 pm »
The binary search idea sounds interesting, but it is not always better.

I assume you mean that instead of iterating over the inner array, you binary-search the position for each outer element (with the boundaries for the binary search determined by the result of the previous search).

Worst case you need to access each element in the inner array (e.g. the outer is 2,4,6,8,... and the inner is 1,3,5,7,...); then the binary search becomes O(n log n) ("outer * log inner") instead of O(n) ("outer + inner").

The only case where the binary search can save a little is when the two arrays are of grossly different size.
If the outer array has 100 elements and the inner has 10,000, then you do 100 binary searches of O(log n) each (about 14 comparisons for 10,000 elements), which makes 1,400 comparisons instead of 10,000.

So it is only for special cases.
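For concreteness, the merge-style pass described above might be sketched like this (hypothetical code, not from this thread; it assumes both arrays are sorted ascending and counts, over all pairs, how many inner elements sit below each outer element):

Code: Pascal  [Select]
```pascal
// Hypothetical sketch of the linear merge pass: both arrays sorted
// ascending. The inner pointer k only ever moves forward, so the
// total work is O(outer + inner), not O(outer * inner).
function CountPairsBelow(const Outer, Inner: array of Integer): Int64;
var
  i, k: Integer;
begin
  Result := 0;
  k := 0;
  for i := 0 to High(Outer) do
  begin
    // advance past all inner elements smaller than Outer[i]
    while (k <= High(Inner)) and (Inner[k] < Outer[i]) do
      Inc(k);
    Inc(Result, k);   // exactly k inner elements are < Outer[i]
  end;
end;
```

With Outer = 2,4,6,8 and Inner = 1,3,5,7 (the worst case above), k is advanced once per outer step and the total is 1+2+3+4 = 10 pairs.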
« Last Edit: July 05, 2018, 03:16:15 pm by Martin_fr »

#### 440bx

• Sr. Member
• Posts: 487
##### Re: Can this algorithm be made more efficient?
« Reply #19 on: July 05, 2018, 04:55:50 pm »
Quote
The binary search idea sounds interesting, but it is not always better.

I assume you mean instead of iterating over the inner array, you binary search the point for each outer (boundaries for the binary search determined by the result of the previous search).

I would very rarely claim that a particular algorithm is _always_ better than another.

In this particular case, if (and that's a big if) a binary search is applicable instead of a linear search, then the binary search will be faster even if the arrays have a dissimilar number of elements.  What is critical is the number of elements, not the differential between the two.

For instance, with two arrays of 100 elements each, a linear search of each element of one array in the other will take an average of 100 * 50 comparisons to do all the processing he needs.  That is 5,000 comparisons.

Using a binary search, doing the searches will need _at most_ 14 * 100 comparisons, that's a nice saving.

Now, if the arrays were only 20 to 30 elements each, I'd consider the speed improvement too small to justify the added complexity of a binary search compared to the simplicity of a linear search.  (presuming his application only has to do the search once per execution.)

In the case you present of 100 elements in one array and 10,000 in the other, the 100-element array can be binary-searched in _at most_ 7 comparisons and the 10,000-element array in 14 comparisons.  The choice of which array to search is easy to determine.

10,000 * 7 = 70,000 when searching the small array, vs.

14 * 100 = 1,400 when searching the large array,

and that is the worst case in both instances.

It's the number of elements in the arrays that is important, not the difference in the number of elements between the arrays.  Of course, as the above calculation clearly shows, the direction can be very important when the differential in the number of elements is substantial (more than double).

Anyway, only the OP knows for sure if the binary search is applicable to the problem he is trying to solve but, if it can be used, it would result in a nice speed improvement.

Also, I would not use the result of any previous search; there is no need for that.  Every search yields a value for the upper and lower indexes (used in the binary search), and the values of those indexes at the end of the search are sufficient to determine whether there was a match and, if there wasn't, where the not-found value falls.  That _seems_ to be enough to calculate the value of the variables using a simple multiplication for the result of that search, accumulated onto the results of the previous searches (instead of one accumulation per element as in the linear search).

The OP will figure out if this could work for him.

#### Martin_fr

• Hero Member
• Posts: 5109
##### Re: Can this algorithm be made more efficient?
« Reply #20 on: July 05, 2018, 06:03:41 pm »
We are only talking about sorted lists, so in all examples we can ignore the work to sort them, because that is the same in each approach.

Also, what must be compared are the Big O figures. They tell you how resource usage goes up as the input grows.

It is not uncommon that the algorithm with the better Big O performance will do worse for small input sets. But usually you don't care, as it is only about milliseconds. With large datasets, though, the difference can be minutes, even hours.

Quote
For instance, with two arrays of 100 elements each, a linear search of one into the other will yield an average of 100 * 50 comparisons to do all the processing he needs to do.  That is 5000 comparisons.

Using a binary search, doing the searches will need _at most_ 14 * 100 comparisons, that's a nice saving.
Sorry, but with 2 sorted lists the linear search does 200 (100 + 100) comparisons. See my first post on that: the inner array is traversed just once overall (and NOT once per outer iteration).

So that compares 200 (linear) to 1400 (binary)

--------------------------
If you compare yours to the original (unoptimized) solution, then yes that is a saving. But then (forgive me) that is pointless. Since you need a sorted list, you may as well go for the solution with 2 sorted lists (as in my initial post). It is simpler to implement, and faster. (Except for the case with grossly differently sized lists)

Quote
Now, if the arrays were only 20 to 30 elements each, I'd consider the speed improvement too small to justify the added complexity of a binary search compared to the simplicity of a linear search.  (presuming his application only has to do the search once per execution.)
I think we both agree, small sets don't matter. They are fast enough either way.

Quote
In the case you present of 100 elements in one array and 10,000 in the other. The 100 element array can be binary-searched in _at most_ 7 comparisons and the 10,000 element array in 14 comparisons.  The choice of which array to search is easy to determine.

10,000 * 7     = 70,000  when searching the small array  vs

14       * 100  = 1400    when searching the large array

and that is the worst case in both instances.

It's the number of elements in the arrays that is important not the difference in the number of elements between the arrays.  Of course, as the above calculation clearly shows, the direction can be very important when the differential in the number of elements is substantial  (more than double.)
Reread my post. I did those numbers.

And I did the numbers for both arrays having the same size (well, I did the Big O).

It shows clearly that the binary search is way worse, if the arrays are the same size.

Both arrays at 1000 entries (comparisons AFTER sorting):
- non binary: 2000 comparisons (1000 + 1000 each list traversed exactly once)
- binary: 10000 comparisons
(10 needed to search 1000, needs to be done for each of the 1000 outer)
(actually it can be optimized to around 9000, still too much)

#### VTwin

• Hero Member
• Posts: 573
• Former Turbo Pascal 3 user
##### Re: Can this algorithm be made more efficient?
« Reply #21 on: July 05, 2018, 06:22:02 pm »
Quote
Wow! thanks guys. I admit I was sceptical that sorting would make any difference since, as howardpc initially suggested, on the face of it sorting wouldn't reduce the number of comparisons needed. How wrong I was!

Is there a sorting unit so that I can just call quicksort (or heapsort) instead of writing it from scratch? I had a look in the docs but couldn't see anything...

You are welcome. I had fun working on it. Here is a heapsort:

Code: Pascal  [Select]
{ HeapSort
  Sorts array a in ascending order using the heapsort algorithm.
  See Paulo Roma, 22/04/2008.
  http://en.wikipedia.org/wiki/Heapsort
  http://www2.hawaii.edu/~copley/665/HSApplet.html }
procedure HeapSort(var a: Vector);
var
  n: integer;

  procedure Swap(var a, b: double);
  var
    temp: double;
  begin
    temp := a;
    a := b;
    b := temp;
  end;

  procedure SiftDown(start, last: integer); // operates on outer a
  var
    root, child: integer;
  begin
    root := start;
    while (root * 2 + 1 <= last) do begin
      child := root * 2 + 1;
      if (child < last) and (a[child] < a[child + 1]) then
        child := child + 1;
      if a[root] < a[child] then begin
        Swap(a[root], a[child]);
        root := child;
      end else
        break;
    end;
  end;

  procedure Heapify; // operates on outer a
  var
    start: integer;
  begin
    start := (n - 1) div 2;
    while (start >= 0) do begin
      SiftDown(start, n - 1);
      start := start - 1;
    end;
  end;

var
  last: integer;
begin
  n := Length(a);
  Heapify;
  last := n - 1;
  while (last > 0) do begin
    Swap(a[last], a[0]);
    last := last - 1;
    SiftDown(0, last);
  end;
end;

where Vector = array of double;
“Talk is cheap. Show me the code.” -Linus Torvalds

macOS 10.11.6: Lazarus 2.0.1 svn 60414 (64 bit Cocoa)
Ubuntu 18.04.2: Lazarus 2.0.0 (64 bit on VBox)
Windows 7 Pro SP1: Lazarus 2.0.0 (64 bit on VBox)

#### 440bx

• Sr. Member
• Posts: 487
##### Re: Can this algorithm be made more efficient?
« Reply #22 on: July 05, 2018, 06:57:13 pm »
Quote
Also what must be compared are the Big O figures. They tell you how the resource usage goes up, when the input grows.
...
It is not uncommon that the algorithm with the better Big O performance, will do worse for small input sets. But usually you don't care, as it is only about milliseconds. But with large datasets the diff can be minutes, even hours.

That's why I used actual values: to show that with tables of 10,000 and 100 elements it does make a difference.  Admittedly, at current processor speeds of 3 GHz and up the difference won't be very large, but it will grow very quickly (with the square) as the number of elements increases.

Quote
Sorry, but with 2 sorted lists, the linear search does 200 (100 + 100) comparisons. See my first post on that. The inner array is traversed overall just once (and NOT once per outer iteration).

So that compares 200 (linear) to 1400 (binary)

We are obviously not seeing the problem the same way.  In the OP's post he's got one loop inside another, and the algorithm may check the entire inner array for every value in the outer one (that's the worst case, but it should not be ignored just because it is the worst case).  I don't see how you arrive at a value of 200 when every element of one array has to be looked up in the other.

Quote
Since you need a sorted list, you may as well go for the solution with 2 sorted lists (as in my initial post). It is simpler to implement, and faster.

I agree with the above.  It's obvious that, as long as the lists have any substantial number of elements, sorting both makes sense.  That said, if a binary search is used, only one needs to be sorted: the one that will be searched.

Quote
(Except for the case with grossly differently sized lists)

I don't know why you insist on that.  Except in extreme cases, such as 1 or 2 elements in one list and 1000 in the other, the differential in element count only affects the decision of which list should be the one searched.

Quote
I think we both agree, small sets don't matter. They are fast enough either way.
Yes, we do agree on that.

Quote
Reread my post. I did those numbers.

I saw that; I just elaborated on it with actual element counts to make the point clear.

Quote
It shows clearly that the binary search is way worse, if the arrays are the same size.

I don't see any foundation for that rather surprising conclusion, unless you want to consider two tables with fewer than 10 or 20 elements each, or some extreme case where one table has only 1 or 2 elements.

I believe the source of our diverging views is that I still see elements of one array being searched for in the other.  The process is basically what a selection sort does; it is O(n²), which will quickly degrade performance as the total number of elements increases.  To be clear about the element counts, I am operating under the impression that the OP is going to deal with tables of 100 elements or more.  If that isn't the case and the number is actually significantly lower, then a linear search _may_ offer better performance.

One thing I want to reiterate: by the looks of it, the algorithm really could use a binary search, but that is a conclusion that depends on the size of the arrays.  The OP gives the impression that the arrays will be of some substance, otherwise he would not have asked about optimizing.

Peace.

#### VTwin

• Hero Member
• Posts: 573
• Former Turbo Pascal 3 user
##### Re: Can this algorithm be made more efficient?
« Reply #23 on: July 05, 2018, 07:17:46 pm »
Regarding further optimizations of cles4, I ran a few tests on arrays of random numbers. Example results with array sizes, cl, and time (including the heapsorts)  in milliseconds:

1e3, 0.515014, 0 ms
1e4, 0.503789, 4
1e5, 0.500842, 46
1e6, 0.499887, 654
1e7, 0.499971, 11250

It does start to lag above about a hundred thousand array elements, but even a million elements can be compared in less than one second. I suspect it should be fine for most applications.

Note that I edited the algorithm to avoid integer overflow for big arrays.

EDIT: I accidentally sorted each array twice the numbers are better now.
EDIT: Optimization level 3 gives better numbers, changed to reflect.
« Last Edit: July 06, 2018, 01:10:53 am by VTwin »
“Talk is cheap. Show me the code.” -Linus Torvalds

macOS 10.11.6: Lazarus 2.0.1 svn 60414 (64 bit Cocoa)
Ubuntu 18.04.2: Lazarus 2.0.0 (64 bit on VBox)
Windows 7 Pro SP1: Lazarus 2.0.0 (64 bit on VBox)

#### segfault

• Jr. Member
• Posts: 62
##### Re: Can this algorithm be made more efficient?
« Reply #24 on: July 05, 2018, 07:20:11 pm »
Quote
I recommend you get a copy of Robert Sedgewick's Algorithms book.  If you get the 2nd edition, which you can probably get very cheap since it is 2 editions behind, the algorithms are presented in Pascal pseudocode.  That makes it really easy and quick to implement the algorithms he presents.

This is great, thanks! I managed to find a good copy of the 2nd edition on amazon for just a few quid.

@ VTwin, thanks for posting the heapsort.

#### 440bx

• Sr. Member
• Posts: 487
##### Re: Can this algorithm be made more efficient?
« Reply #25 on: July 05, 2018, 07:34:09 pm »
Quote
This is great, thanks! I managed to find a good copy of the 2nd edition on amazon for just a few quid.

That's a great book. I'm pleased you found a copy; it's likely to become your second right arm.
« Last Edit: July 05, 2018, 08:48:53 pm by 440bx »

#### Martin_fr

• Hero Member
• Posts: 5109
##### Re: Can this algorithm be made more efficient?
« Reply #26 on: July 05, 2018, 08:33:10 pm »
To make it clear (and I thought I already had): I am not comparing small input data. All my statements hold true for input with thousands or millions of entries.

Quote
I agree with the above.  It's obvious that, as long as the lists have any substantial number of elements, sorting both make sense.  That said, if a binary search is used, then only one needs to be sorted, the one that will be searched.

Ok, what are you comparing against?

The original calculation with NO sort at all?
If yes, then against that, the binary search is an improvement, yes.

The one I described here http://forum.lazarus-ide.org/index.php/topic,41781.msg290546.html#msg290546
Quote
sorted low to high:
- start at first n, comparing to each m, as long as n <= m
- if next n is equal to current n, re-use last iterations result
- if next n is bigger than current n, then m continues from last pos (all previous m are known smaller / adjust result accordingly)
both datasets are traversed exactly once.

If so, then explain to me, given the numbers I already calculated:
Quote
Both arrays at 1000 entries (comparisons AFTER sorting):
- non binary: 2000 comparisons (1000 + 1000 each list traversed exactly once)
- binary: 10000 comparisons
(10 needed to search 1000, needs to be done for each of the 1000 outer)
(actually it can be optimized to around 9000, still too much)
How is the binary search with 10,000 comparisons faster than the flat search with only 2,000?

If you grow the number of elements, the difference will become larger, with the binary search cost growing faster.

#### juanba

• Newbie
• Posts: 3
##### Re: Can this algorithm be made more efficient?
« Reply #27 on: July 05, 2018, 08:47:01 pm »
Hi.
Since this is the first time I have posted to this forum (or any other forum, for that matter), I can only hope that everything goes well.
I have attached a file with a function which seems to work and is two or three times faster than the original. Its main features are:
- Sorts both arrays to reduce the number of comparisons.
- Rather than doing the floating point calculations inside the loop it counts how many comparisons result in set1 being greater than set2 and how many result in equal values. The calculation of num is done at the end.
Code: Pascal  [Select]
function CalcNum2: Real;
const
  n1 = 12;
  n2 = 12;
var
  i, k, Aux_k: integer;
  num, cles: Real;
  mayor, igual: integer;
begin
  QuickSort(Set1, 1, n1);
  QuickSort(Set2, 1, n2);
  i := 1;                              // Set1 index
  k := 1;                              // Set2 index
  mayor := 0;                          // number of comparisons resulting in >
  igual := 0;                          // number of comparisons resulting in =
  repeat
    while (i <= n1) and (Set1[i] < Set2[k]) do
      Inc(i);
    if i > n1 then
      break;
                                       // if i > n1, we are done
                                       // Otherwise find the greatest k for which
                                       // it still holds that Set1[i] >= Set2[k]
    while (k <= n2) and (Set1[i] >= Set2[k]) do
      Inc(k);
                                       // Move back 1 position
    Dec(k);
                                       // Travel through all values of k
                                       // for which Set1[i] >= Set2[k], down to 1
    for Aux_k := k downto 1 do
      if Set1[i] > Set2[Aux_k] then
        Inc(mayor)                     // Assumed to be faster than Real arithmetic
      else
        Inc(igual);
                                       // Move to the next i
    Inc(i);
  until i > n1;
  num := igual * 0.5 + mayor;
  cles := num / (n1 * n2);
  Result := cles;
end;

The QuickSort algorithm can be found, as has been pointed out, in Sedgewick and, IIRC, in Niklaus Wirth's Algorithms + Data Structures = Programs.

#### Martin_fr

• Hero Member
• Posts: 5109
##### Re: Can this algorithm be made more efficient?
« Reply #28 on: July 05, 2018, 09:41:15 pm »
Quote
Hi.
Since this is the first time I post to this forum (or any other forum for that matter) I can only hope that everything goes well.
I have attached a file with a function which seems to work and is two or three times faster than the original. Its main features are:
- Sorts both arrays to reduce the number of comparisons.
- Rather than doing the floating point calculations inside the loop it counts how many comparisons result in set1 being greater than set2 and how many result in equal values. The calculation of num is done at the end.

Welcome.

Actually, a good idea.
There was mention of converting the input data in the arrays to integers (but that may not be an option).

But doing the sum as integers is a good idea.
You can do it with just one counter, too: add 1 (instead of 0.5) and 2 (instead of 1.0), and at the end divide by 2.
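For illustration, that single-counter trick might look like this (a hypothetical sketch; shown on the plain double loop purely to isolate the counting idea, not on juanba's optimized traversal):

Code: Pascal  [Select]
```pascal
// Hypothetical sketch of the single-counter idea: a "greater" pair
// adds 2 and an "equal" pair adds 1 to an integer total, and one
// division by 2 at the end recovers the 1.0 / 0.5 weighting.
function PairScore(const S1, S2: array of Integer): Real;
var
  i, k, cnt: Integer;
begin
  cnt := 0;
  for i := 0 to High(S1) do
    for k := 0 to High(S2) do
      if S1[i] > S2[k] then
        Inc(cnt, 2)          // "greater" contributes 1.0
      else if S1[i] = S2[k] then
        Inc(cnt, 1);         // "equal" contributes 0.5
  Result := (cnt / 2) / (Length(S1) * Length(S2));
end;
```

All the arithmetic inside the loop is integer; the only floating-point operations happen once, at the end.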

#### 440bx

• Sr. Member
• Posts: 487
##### Re: Can this algorithm be made more efficient?
« Reply #29 on: July 05, 2018, 09:51:05 pm »
@Juanba

Welcome to the forums. Nice of you to post some code on your very first post.

@Martin_fr

I have to admit (with some embarrassment on my part) that I only glanced quickly at your suggestion and missed the fact that you save the result of the previous search.  You are indeed correct that your solution will likely be faster than a binary search _as long as_ the number of elements doesn't go "too high".

What do I mean by "too high"?  You have to sort _both_ arrays to implement your solution.  At O(n log n) and 1000 elements in each array, that is roughly 10,000 compares that I save by sorting just one array.  That additional (n log n) grows significantly faster than the number of additional comparisons required by a binary search.  IOW, you are absolutely correct that below a certain number of array elements (in the ballpark of 50,000) your suggestion will usually perform better.

All that said, once the number of elements in one of the arrays exceeds that number, your algorithm suffers.  It has a particularly vexing downside: its performance degrades significantly as the number of array elements increases.  There are 2 sorts instead of 1, the algorithm is more complex and it doesn't gracefully handle the case you mention of one array having quite a few more elements than the other (now I see why you brought that up).

In contrast, the binary search is extremely stable and really simple: doubling the number of elements in the array being searched adds just 1 comparison.

Anyway, your calculations were absolutely correct.  I didn't understand them because I failed to notice a critical part of your suggestion.

Personally, if I were programming the solution, I would stick with the binary search because, in spite of not having fully optimal performance for "small" numbers of array elements, the algorithm scales well as the number of elements increases, and it is simple.

Also, and probably as a matter of personal programming style, the optimization you are using could be used with the binary search too, but I would not implement it because halving the search area only saves 1 comparison.

Sorry for missing a critical part of what you wrote and consequently missing your point until now.

Quote
But doing the sum as integer is a good idea.
You can do it with just one counter too: Add 1 (instead of 0.5) and 2 (instead of 1.0) and in the end divide by 2.

That is a nice optimization: simple, logical and sensible.