Thomas Bushnell, BSG (thomb) wrote,

CS Problem

Just corrected the Wikipedia article on radix sort, which contained the (incorrect) statement that radix sort's linear performance is illusory because there is also a factor for the number of digits. (It's incorrect because comparison sorts carry the same factor, so radix sort still comes out ahead by a factor of log n.)

And that got me to remembering "funny sort" which was offered in the UNM CS curriculum's algorithms class back in the day (where "day" is about 1986).

It's a recursive algorithm that, according to the lore, was created accidentally by a student who wanted to figure out a way to do merge sort without requiring extra space. It works like this, on an array X of n elements:

1. Let p be about n/2.
2. Funny sort X[0..p-1].
3. Funny sort X[p..n-1]. So far, just like merge sort.
4. If X[0] > X[p], then swap them.
5. Funny sort X[1..n-1].

Step 4 is justified because, thanks to steps 2 and 3, either X[0] or X[p] must be the least element of the whole X; after the swap, X[0] holds the minimum, so sorting the remainder in step 5 finishes the job. So we clearly have a correct algorithm.
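The steps above translate almost directly into code. This is a minimal sketch (the function name and half-open index convention are my own choices, not from the original description):

```python
def funny_sort(x, lo=0, hi=None):
    """Sort x[lo:hi] in place using the funny-sort recursion."""
    if hi is None:
        hi = len(x)
    n = hi - lo
    if n <= 1:
        return
    p = lo + n // 2              # step 1: split point, about n/2
    funny_sort(x, lo, p)         # step 2: sort the left half
    funny_sort(x, p, hi)         # step 3: sort the right half
    if x[lo] > x[p]:             # step 4: one of these two heads is the
        x[lo], x[p] = x[p], x[lo]  # minimum of the whole range
    funny_sort(x, lo + 1, hi)    # step 5: sort everything but the minimum
```

Because of the quasi-polynomial running time discussed below, this is only practical for very small arrays.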

So the problem is: what's the time complexity of this algorithm in terms of comparisons? How many times is step 4 executed? It's pretty bad, but I can't recall the derivation. The recurrence is pretty weird.
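The recurrence for the comparison count can be read off the steps, counting only the single comparison in step 4 (a sketch, ignoring floors and ceilings):

```latex
C(1) = 0, \qquad
C(n) = C\!\left(\lfloor n/2 \rfloor\right)
     + C\!\left(\lceil n/2 \rceil\right)
     + C(n-1) + 1 .
```

The C(n-1) term is what makes it weird: it gets unrolled roughly n/2 times before the problem size halves, which is what pushes the count up to the quasi-polynomial growth mentioned in the linked argument.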

[I found a reference!
I'm delighted that it's still there twenty years later. And, to boot, it is apparently "silly sort". Either the name changed, or my memory is incorrect. Anyhow, there is an argument there that the time complexity is "roughly O(n^(log n))".]