*also* have the same factor, so radix sort is still better by a factor of log *n*.)

And that got me to remembering "funny sort" which was offered in the UNM CS curriculum's algorithms class back in the day (where "day" is about 1986).

It's a recursive algorithm that, according to the lore, was created accidentally by a student who wanted to figure out a way to do merge sort without requiring extra space. It works like this, on an array *X* of *n* elements:

1. Let p be about n/2.

2. Funny sort X[0..p-1].

3. Funny sort X[p..n-1]. So far, just like merge sort.

4. If X[0] > X[p], then swap them.

5. Funny sort X[1..n-1].

Step 4 is justified because, thanks to steps 2 and 3, either X[0] or X[p] must be the least element of the whole of X; after the (possible) swap, X[0] is in its final place, and step 5 sorts the rest. So we clearly have a correct algorithm.
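The five steps above can be sketched directly in Python (the function name and the half-open slice convention are mine; the inclusive ranges in the steps become `lo:hi` bounds here):

```python
def silly_sort(a, lo=0, hi=None):
    """Recursively sort a[lo:hi] in place, following the five steps."""
    if hi is None:
        hi = len(a)
    n = hi - lo
    if n <= 1:
        return
    p = lo + n // 2                # step 1: p is about n/2
    silly_sort(a, lo, p)           # step 2: sort the left half
    silly_sort(a, p, hi)           # step 3: sort the right half
    if a[lo] > a[p]:               # step 4: the smaller of a[lo], a[p]
        a[lo], a[p] = a[p], a[lo]  #         is the minimum; put it first
    silly_sort(a, lo + 1, hi)      # step 5: sort everything after it
```

For example, calling `silly_sort` on `[5, 2, 9, 1]` leaves `[1, 2, 5, 9]`. Note that step 5 recurses on n-1 elements, so the recursion depth is linear in n, unlike plain merge sort.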

So the problem is: what's the time complexity of this algorithm in terms of comparisons? How many times is step 4 executed? It's pretty bad, but I can't recall the derivation. The recurrence is pretty weird.
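Reading the comparison count straight off the steps gives the recurrence C(n) = C(floor(n/2)) + C(ceil(n/2)) + C(n-1) + 1, with C(0) = C(1) = 0: one comparison at step 4 plus the three recursive calls. A quick memoized sketch (names are mine) tabulates it, so you can at least watch the growth even without a closed form:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def comparisons(n):
    """Step-4 comparisons for funny sort on n elements:
    C(n) = C(floor(n/2)) + C(ceil(n/2)) + C(n-1) + 1, C(0) = C(1) = 0."""
    if n <= 1:
        return 0
    p = n // 2
    return comparisons(p) + comparisons(n - p) + comparisons(n - 1) + 1

for n in (2, 4, 8, 16, 32, 64):
    print(n, comparisons(n))
```

The first few values are C(2) = 1, C(4) = 6, C(8) = 41, C(16) = 406, which already looks much worse than n log n.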

[I found a reference! http://www.cs.unm.edu/~storm/miscPrograms.html.

I'm delighted that it's still there twenty years later. And, to boot, it is apparently called "silly sort". Either the name changed, or my memory is incorrect. Anyhow, there is an argument there that the time complexity is "roughly O(n^(log n))".]
