Description
The current documentation for the `slice::sort_by_key` and `slice::sort_unstable_by_key` methods claims that, for a slice of `n` elements, the method runs in `O(m n log(m n))` time in the worst case, where the key function is `O(m)`.
Lines 257 to 258 in d28a464
```rust
/// This sort is stable (i.e., does not reorder equal elements) and `O(m n log(m n))`
/// worst-case, where the key function is `O(m)`.
```
Although this is not wrong, the bound is not tight: as far as I understand, it can be replaced with `O(m n log n)`.
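For what it's worth (my own expansion, not from the docs), splitting the logarithm makes the gap between the two bounds explicit:

```latex
O(m n \log(m n)) = O(m n \log m + m n \log n)
```

The extra `m n log m` term only dominates when `log m` grows faster than `log n`, i.e. when `m` is superpolynomial in `n`; for polynomially related `m` and `n` the two bounds coincide, but `O(m n log n)` is the tighter statement in general.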
Brief explanation, relying on another doc comment's claim that `merge_sort`, which `slice::sort_by_key` relies on, runs in `O(n log n)` worst-case:

Since its running time is `O(n log n)`, `merge_sort` can call `is_less` only `O(n log n)` times. Each call to `is_less` makes two calls to the key function, so the total time spent in the key function is `O((2 n log n) * m) = O(m n log n)`.

Because the rest of `merge_sort` takes `O(n log n)`, as the doc states, the total running time is `O(n log n + m n log n) = O(m n log n)`.
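As a quick empirical sanity check (my own sketch, not part of the libraries involved; `count_key_calls` is a hypothetical helper), one can count how often the key function actually runs during `sort_by_key`. It is invoked at most twice per comparison, so the number of invocations grows like `n log n`, while `m` only scales the cost of each individual invocation:

```rust
use std::cell::Cell;

/// Sorts `v` via `sort_by_key` with an instrumented key function and
/// returns how many times the key function was invoked.
/// (Hypothetical helper, for illustration only.)
fn count_key_calls(v: &mut Vec<u64>) -> usize {
    let calls = Cell::new(0usize);
    v.sort_by_key(|&e| {
        calls.set(calls.get() + 1);
        e // a trivial O(1) key; an O(m) key would multiply each call's cost by m
    });
    calls.get()
}

fn main() {
    let n = 1000usize;
    // Simple LCG so the input is scrambled without external crates.
    let mut x: u64 = 12345;
    let mut v: Vec<u64> = (0..n)
        .map(|_| {
            x = x
                .wrapping_mul(6364136223846793005)
                .wrapping_add(1442695040888963407);
            x >> 33
        })
        .collect();

    let calls = count_key_calls(&mut v);
    assert!(v.windows(2).all(|w| w[0] <= w[1]));

    // At least n - 1 comparisons are needed, and the total stays within
    // a small constant times n * log2(n): the key-call count itself does
    // not carry a log m factor.
    assert!(calls >= n - 1);
    assert!(calls <= 40 * n);
    println!("key calls for n = {n}: {calls}");
}
```

The assertions reflect the argument above: the `m` in the tightened bound comes only from the per-call cost of the key function, never from the number of calls.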
The exact same argument applies to `sort_unstable_by_key` + `sort::quicksort`.