The challenge is to sort an integer array containing only ones and zeros with the lowest complexity possible.
As always, I am interested in any improvements suggested, but in this question I am most interested in whether the complexity I have assigned to the algorithm is correct (and whether it is the lowest possible). I have included two implementations with different running times; I believe they are still both the same complexity. The larger purpose in posting this is to improve or verify my understanding of complexity and big-O notation.
I assign the complexity of this first algorithm as O(n), since it looks at every value in the array once, and only once: the indices i and j only ever move toward each other, so between them they take at most n steps in total, even though the loops are nested.
Algorithm I
/// <param name="value">A list of values to sort where all values are either one or zero.</param>
/// <returns>Sorted value array.</returns>
public static int[] SortOnesZeros(int[] value)
{
    int j = value.Length - 1;
    int i = 0;

    while (i < j)
    {
        if (value[i] < 0 || value[i] > 1 || value[j] < 0 || value[j] > 1)
        {
            throw new Exception("Value array can only contain 1's and 0's.");
        }
        while (value[i] == 0) { i++; }
        while (value[j] == 1) { j--; }
        if (j > i && value[i] > value[j])
        {
            int temp = value[j];
            value[j] = value[i];
            value[i] = temp;
            j--;
            i++;
        }
    }
    return value;
}
I have a second implementation that I believe is also O(n), even though it runs through the array twice, because the passes are not nested. I believe this is true because O(2n) reduces to O(n): multiplied constants are not considered in big-O complexity analysis. It also does more work in practice because it rewrites every value on the second pass. No need to improve on this algorithm; I believe that the first algorithm is already better:
Algorithm II
public static int[] SortOnesZerosAlternate(int[] values)
{
    int countOfZeros = 0;
    foreach (int value in values)
    {
        if (value == 0)
        {
            countOfZeros++;
        }
        if (value < 0 || value > 1)
        {
            throw new Exception("Value array can only contain 1's and 0's.");
        }
    }
    for (int i = 0; i < countOfZeros; i++)
    {
        values[i] = 0;
    }
    for (int i = countOfZeros; i < values.Length; i++)
    {
        values[i] = 1;
    }
    return values;
}
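As an aside, the two fill loops in the second implementation can also be written with Array.Fill, which keeps the same two-pass O(n) shape; the sketch below (with a hypothetical class name, not part of the original code) is just an equivalent formulation:

```csharp
using System;

public static class CountingDemo
{
    // Counting-style sketch equivalent to SortOnesZerosAlternate:
    // one pass to count the zeros, then one fill per run of values.
    public static int[] Sort(int[] values)
    {
        int zeros = 0;
        foreach (int v in values)
        {
            if (v < 0 || v > 1)
            {
                throw new Exception("Value array can only contain 1's and 0's.");
            }
            if (v == 0) { zeros++; }
        }
        // Fill the first 'zeros' slots with 0, the rest with 1.
        Array.Fill(values, 0, 0, zeros);
        Array.Fill(values, 1, zeros, values.Length - zeros);
        return values;
    }

    public static void Main()
    {
        Console.WriteLine(string.Join(",", Sort(new[] { 1, 0, 1, 0, 0 })));
        // prints 0,0,0,1,1
    }
}
```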
EDIT ***************************************
@superb rain is correct: the first algorithm has a mistake where, under the right conditions, it reads indices that are outside the bounds of the array. This issue occurs on any array where all the values are the same. I have fixed this in my own code by making the change below:
while (i < j && value[i] == 0) { i++; }
while (i < j && value[j] == 1) { j--; }
For example, the input {0, 0} triggers the bug: the unguarded inner loops run i off the end of the array.
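A minimal repro of just that failure mode (a standalone sketch with a hypothetical class name, not the full method) shows the unguarded loop throwing and the guarded loop stopping at j:

```csharp
using System;

public static class Repro
{
    public static void Main()
    {
        int[] value = { 0, 0 };
        int i = 0;
        try
        {
            // Original, unguarded scan: on an all-zeros array, i walks
            // past the last index and the next read throws.
            while (value[i] == 0) { i++; }
        }
        catch (IndexOutOfRangeException)
        {
            Console.WriteLine("unguarded loop ran off the end");
        }

        // With the fix, the bound check stops the scan at j.
        i = 0;
        int j = value.Length - 1;
        while (i < j && value[i] == 0) { i++; }
        Console.WriteLine($"guarded loop stopped at i = {i}"); // prints i = 1
    }
}
```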