Definition
A sorting algorithm is an algorithm that puts elements of a list in a certain order.
Basics
The most-used orders in sorting algorithms are numerical order and lexicographical order. Efficient sorting is important for optimizing the use of other algorithms (such as search and merge algorithms) that require sorted lists to work correctly; it is also often useful for canonicalizing data and for producing human-readable output. More formally, the output must satisfy two conditions:
 The output is in nondecreasing order (each element is no smaller than the previous element according to the desired total order);
 The output is a permutation, or reordering, of the input.
Since the dawn of computing, the sorting problem has attracted a great deal of research, perhaps due to the complexity of solving it efficiently despite its simple, familiar statement. For example, bubble sort was analyzed as early as 1956. Although many consider it a solved problem, useful new sorting algorithms are still being invented (for example, library sort was first published in 2004). Sorting algorithms are prevalent in introductory computer science classes, where the abundance of algorithms for the problem provides a gentle introduction to a variety of core algorithm concepts, such as big O notation, divide and conquer algorithms, data structures, randomized algorithms, best, worst and average case analysis, time-space tradeoffs, and lower bounds.
Classification
Sorting algorithms used in computer science are often classified by:
 Computational complexity (worst, average and best behaviour) of element comparisons in terms of the size of the list (n). For typical sorting algorithms, good behavior is O(n log n) and bad behavior is Ω(n²). Ideal behavior for a sort is O(n). Sorting algorithms which only use an abstract key comparison operation always need Ω(n log n) comparisons in the worst case.
 Computational complexity of swaps (for "in place" algorithms).
 Memory usage (and use of other computer resources). In particular, some sorting algorithms are "in place", such that only O(1) or O(log n) memory is needed beyond the items being sorted, while others need to create auxiliary locations for data to be temporarily stored.
 Recursion: some algorithms are recursive, some are non-recursive, and others may be implemented either way (e.g., merge sort).
 Stability: stable sorting algorithms maintain the relative order of records with equal keys (i.e., values). See below for more information.
 Whether or not they are a comparison sort. A comparison sort examines the data only by comparing two elements with a comparison operator.
 General method: insertion, exchange, selection, merging, etc. Exchange sorts include bubble sort and quicksort. Selection sorts include shaker sort and heapsort.
Stability
Stable sorting algorithms maintain the relative order of records with equal keys (i.e., sort key values). That is, a sorting algorithm is stable if whenever there are two records R and S with the same key and with R appearing before S in the original list, R will appear before S in the sorted list.
When equal elements are indistinguishable, such as with integers, or more generally, any data where the entire element is the key, stability is not an issue. However, assume that the following pairs of numbers are to be sorted by their first component:
(4, 1) (3, 7) (3, 1) (5, 6)
In this case, two different results are possible: one which maintains the relative order of records with equal keys, and one which does not:
(3, 7) (3, 1) (4, 1) (5, 6) (order maintained)
(3, 1) (3, 7) (4, 1) (5, 6) (order changed)
Unstable sorting algorithms may change the relative order of records with equal keys, but stable sorting algorithms never do so. Unstable sorting algorithms can be specially implemented to be stable. One way of doing this is to artificially extend the key comparison, so that comparisons between two objects with otherwise equal keys are decided using their order in the original data as a tiebreaker. Remembering this original order, however, often incurs an additional space cost.
Sorting based on a primary, secondary, tertiary, etc. sort key can be done by any sorting method, taking all sort keys into account in comparisons (in other words, using a single composite sort key). If a sorting method is stable, it is also possible to sort multiple times, each time with one sort key. In that case the keys need to be applied in order of increasing priority.
Example: sorting pairs of numbers as above by first, then second component:
(4, 1) (3, 7) (3, 1) (4, 6) (original)
(4, 1) (3, 1) (4, 6) (3, 7) (after sorting by second component)
(3, 1) (3, 7) (4, 1) (4, 6) (after sorting by first component)
On the other hand:
(3, 7) (3, 1) (4, 1) (4, 6) (after sorting by first component)
(3, 1) (4, 1) (4, 6) (3, 7) (after sorting by second component; order by first component is disrupted)
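Because Python's built-in sort is stable, this multiple-pass technique can be demonstrated directly (a minimal sketch using the pair data from the example above):

    # Python's list.sort() is stable, so sorting twice, least-significant
    # key first, gives the same result as one sort on a composite key.
    pairs = [(4, 1), (3, 7), (3, 1), (4, 6)]
    pairs.sort(key=lambda p: p[1])  # secondary key first
    pairs.sort(key=lambda p: p[0])  # then primary key
    # pairs == [(3, 1), (3, 7), (4, 1), (4, 6)], the same result as
    # sorted(pairs, key=lambda p: (p[0], p[1]))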
List of sorting algorithms
In this table, n is the number of records to be sorted. The columns "Average" and "Worst" give the time complexity in each case, under the assumption that the length of each key is constant, and that therefore all comparisons, swaps, and other needed operations can proceed in constant time. "Memory" denotes the amount of auxiliary storage needed beyond that used by the list itself, under the same assumption. These are all comparison sorts.
Name | Average | Worst | Memory | Stable | Method | Other notes
Bubble sort | — | O(n²) | O(1) | Yes | Exchanging | Times are for best variant
Cocktail sort | — | O(n²) | O(1) | Yes | Exchanging |
Comb sort | O(n log n) | O(n log n) | O(1) | No | Exchanging | Small code size
Gnome sort | — | O(n²) | O(1) | Yes | Exchanging |
Selection sort | O(n²) | O(n²) | O(1) | No | Selection |
Insertion sort | O(n + d) | O(n²) | O(1) | Yes | Insertion | d is the number of inversions, which is O(n²)
Shell sort | — | O(n log² n) | O(1) | No | Insertion |
Binary tree sort | O(n log n) | O(n log n) | O(n) | Yes | Insertion | When using a self-balancing binary search tree
Library sort | O(n log n) | O(n²) | O(n) | Yes | Insertion |
Merge sort | O(n log n) | O(n log n) | O(n) | Yes | Merging |
In-place merge sort | O(n log n) | O(n log n) | O(1) | Yes | Merging |
Heapsort | O(n log n) | O(n log n) | O(1) | No | Selection |
Smoothsort | — | O(n log n) | O(1) | No | Selection |
Quicksort | O(n log n) | O(n²) | O(log n) | No | Partitioning | Naïve variants use O(n) space
Introsort | O(n log n) | O(n log n) | O(log n) | No | Hybrid | Used in most implementations of the STL
Patience sorting | — | O(n²) | O(n) | No | Insertion | Finds all the longest increasing subsequences within O(n log n)
This table describes sorting algorithms that are not comparison sorts. As such, they are not limited by the Ω(n log n) lower bound. The complexities below are in terms of n, the number of items to be sorted, k, the size of each key, and s, the chunk size used by the implementation. Many of them are based on the assumption that the key size is large enough that all entries have unique key values, and hence that n ≪ 2^k.
Name | Average | Worst | Memory | Stable | n ≪ 2^k? | Notes
Pigeonhole sort | O(n + 2^k) | O(n + 2^k) | O(2^k) | Yes | Yes |
Bucket sort | O(n·k) | O(n²·k) | O(n·k) | Yes | No | Assumes uniform distribution of elements from the domain in the array.
LSD Radix sort | O(n·k/s) | O(n·k/s) | O(n) | Yes | No |
Counting sort | O(n + 2^k) | O(n + 2^k) | O(n + 2^k) | Yes | Yes |
MSD Radix sort | O(n·k/s) | O(n·(k/s)·2^s) | O((k/s)·2^s) | No | No |
Spreadsort | O(n·k/log n) | O(n·(k − log n)^0.5) | O(n) | No | No | Asymptotics are based on the assumption that n ≪ 2^k, but the algorithm does not require this.
This table describes some sorting algorithms that are impractical for real-life use due to extremely poor performance or a requirement for specialized hardware.
Name | Average | Worst | Memory | Stable | Comparison | Other notes
Bogosort | O(n × n!) | ∞ | O(1) | No | Yes | Average time using Knuth shuffle
Bozo sort | O(n × n!) | ∞ | O(1) | No | Yes | Average time is asymptotically half that of bogosort
Stooge sort | O(n^2.71) | O(n^2.71) | O(log n) | No | Yes |
Bead sort | N/A | N/A | — | N/A | No | Requires specialized hardware
Simple pancake sort | O(n) | O(n) | O(log n) | No | Yes | Count is number of flips.
Sorting networks | O(log n) | O(log n) | O(n·log n) | Yes | No | Requires a custom circuit of size O(n·log n)
Summaries of popular sorting algorithms
Bubble sort is a simple sorting algorithm. It works by repeatedly stepping through the list to be sorted, comparing each pair of adjacent items and swapping them if they are in the wrong order. The pass through the list is repeated until no swaps are needed, which indicates that the list is sorted. The algorithm gets its name from the way smaller elements "bubble" to the top of the list. Because it only uses comparisons to operate on elements, it is a comparison sort.
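A minimal sketch of this process in Python (repeated passes of adjacent compare-and-swap; the early-exit flag implements "until no swaps are needed"):

    def bubble_sort(a):
        """Sort the list a in place by repeated adjacent swaps."""
        n = len(a)
        swapped = True
        while swapped:
            swapped = False
            for i in range(n - 1):
                if a[i] > a[i + 1]:  # adjacent pair in the wrong order
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            n -= 1  # the largest remaining element has bubbled into place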
Selection sort is a simple sorting algorithm that improves on the performance of bubble sort. It works by first finding the smallest element using a linear scan and swapping it into the first position in the list, then finding the second smallest element by scanning the remaining elements, and so on. Selection sort is unusual in that its running time is not affected by the prior ordering of the list: it performs the same number of comparisons regardless of input because of its simple structure. It also requires at most n − 1 swaps, and hence just Θ(n) memory writes, which is optimal for any sorting algorithm. Thus it can be very attractive if writes are the most expensive operation, but otherwise selection sort will usually be outperformed by insertion sort or the more complicated algorithms.
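A sketch in the same style (note the single swap per pass, which is where the Θ(n) bound on writes comes from):

    def selection_sort(a):
        """Sort the list a in place with at most n - 1 swaps."""
        for i in range(len(a) - 1):
            smallest = i
            for j in range(i + 1, len(a)):  # linear scan of the unsorted rest
                if a[j] < a[smallest]:
                    smallest = j
            if smallest != i:
                a[i], a[smallest] = a[smallest], a[i]  # one swap per pass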
Insertion sort is a simple sorting algorithm that is relatively efficient for small lists and mostly-sorted lists, and is often used as part of more sophisticated algorithms. It works by taking elements from the list one by one and inserting them in their correct position into a new sorted list. In arrays, the new list and the remaining elements can share the array's space, but insertion is expensive, requiring all following elements to be shifted over by one. Insertion sort works just as its name suggests: it inserts each item into its proper place in the final list. The simplest implementation requires two list structures: the source list and the list into which sorted items are inserted. To save memory, most implementations use an in-place sort that works by moving the current item past the already sorted items and repeatedly swapping it with the preceding item until it is in place. Shell sort (see below) is a variant of insertion sort that is more efficient for larger lists. This method is much more efficient than bubble sort, though it has more constraints.
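A sketch of the in-place variant described above:

    def insertion_sort(a):
        """Sort the list a in place, growing a sorted prefix one item at a time."""
        for i in range(1, len(a)):
            item = a[i]
            j = i - 1
            while j >= 0 and a[j] > item:  # shift larger items one slot right
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = item  # insert the item into its proper place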
Shell sort was invented by Donald Shell in 1959. It improves upon bubble sort and insertion sort by moving out-of-order elements more than one position at a time. One implementation can be described as arranging the data sequence in a two-dimensional array and then sorting the columns of the array using insertion sort. Although this method is inefficient for large data sets, it is one of the fastest algorithms for sorting small numbers of elements (sets with fewer than 1,000 or so elements). Another advantage of this algorithm is that it requires relatively small amounts of memory.
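A sketch using Shell's original gap sequence (n/2, n/4, ..., 1; better gap sequences exist and change the asymptotics):

    def shell_sort(a):
        """Gapped insertion sort: out-of-order elements move several positions at once."""
        gap = len(a) // 2
        while gap > 0:
            for i in range(gap, len(a)):  # insertion sort with stride gap
                item = a[i]
                j = i
                while j >= gap and a[j - gap] > item:
                    a[j] = a[j - gap]
                    j -= gap
                a[j] = item
            gap //= 2  # finishes with gap 1, i.e., a plain insertion sort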
Merge sort takes advantage of the ease of merging already sorted lists into a new sorted list. It starts by comparing every two elements (i.e., 1 with 2, then 3 with 4...) and swapping them if the first should come after the second. It then merges each of the resulting lists of two into lists of four, then merges those lists of four, and so on, until at last two lists are merged into the final sorted list. Of the algorithms described here, this is the first that scales well to very large lists.
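A top-down recursive sketch (the text above describes the equivalent bottom-up process; both perform the same merging work):

    def merge_sort(a):
        """Return a new sorted list by splitting, recursing, and merging."""
        if len(a) <= 1:
            return a
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:  # <= keeps equal keys in order (stable)
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])  # at most one side has leftovers
        merged.extend(right[j:])
        return merged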
Heapsort is a much more efficient version of selection sort. It also works by determining the largest (or smallest) element of the list, placing that at the end (or beginning) of the list, then continuing with the rest of the list, but accomplishes this task efficiently by using a data structure called a heap, a special type of binary tree. Once the data list has been made into a heap, the root node is guaranteed to be the largest element. When it is removed and placed at the end of the list, the heap is rearranged so the largest element remaining moves to the root. Using the heap, finding the next largest element takes O(log n) time, instead of O(n) for a linear scan as in simple selection sort. This allows heapsort to run in O(n log n) time.
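A sketch with an explicit sift-down; the array itself stores the heap (the children of index i live at 2i + 1 and 2i + 2):

    def heapsort(a):
        """Sort the list a in place using a max-heap built inside the array."""
        def sift_down(root, end):
            while 2 * root + 1 <= end:
                child = 2 * root + 1
                if child + 1 <= end and a[child] < a[child + 1]:
                    child += 1  # pick the larger of the two children
                if a[root] < a[child]:
                    a[root], a[child] = a[child], a[root]
                    root = child
                else:
                    return
        n = len(a)
        for start in range(n // 2 - 1, -1, -1):
            sift_down(start, n - 1)  # heapify: the root is now the maximum
        for end in range(n - 1, 0, -1):
            a[0], a[end] = a[end], a[0]  # move the maximum to the end
            sift_down(0, end - 1)  # restore the heap on the shrunk prefix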
Quicksort is a divide and conquer algorithm which relies on a partition operation: to partition an array, we choose an element, called a pivot, move all smaller elements before the pivot, and move all greater elements after it. This can be done efficiently in linear time and in place. We then recursively sort the lesser and greater sublists. Efficient implementations of quicksort (with in-place partitioning) are typically unstable sorts and somewhat complex, but are among the fastest sorting algorithms in practice. Together with its modest O(log n) space usage, this makes quicksort one of the most popular sorting algorithms, available in many standard libraries. The most complex issue in quicksort is choosing a good pivot element; consistently poor choices of pivots can result in drastically slower O(n²) performance, but if at each step we choose the median as the pivot, it runs in O(n log n) time.
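A sketch using the simple Lomuto partition with the last element as pivot (a naive pivot choice kept for clarity; practical implementations choose pivots more carefully, e.g., at random or by median-of-three):

    def quicksort(a, lo=0, hi=None):
        """Sort a[lo..hi] in place by partitioning around a pivot."""
        if hi is None:
            hi = len(a) - 1
        if lo >= hi:
            return
        pivot = a[hi]
        i = lo
        for j in range(lo, hi):  # move smaller elements before the pivot
            if a[j] < pivot:
                a[i], a[j] = a[j], a[i]
                i += 1
        a[i], a[hi] = a[hi], a[i]  # the pivot lands in its final position
        quicksort(a, lo, i - 1)  # recursively sort the lesser sublist
        quicksort(a, i + 1, hi)  # and the greater sublist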
Bucket sort, or bin sort, is a sorting algorithm that works by partitioning an array into a number of buckets. Each bucket is then sorted individually, either using a different sorting algorithm or by recursively applying the bucket sort algorithm. It is a distribution sort, and is a cousin of radix sort in its most-to-least-significant-digit flavour. Bucket sort is a generalization of pigeonhole sort. Since bucket sort is not a comparison sort, the Ω(n log n) lower bound is inapplicable. The computational complexity estimates involve the number of buckets.
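A sketch for keys assumed to be uniformly distributed floats in [0, 1) (the bucket index is just the scaled key; any key range with a similar mapping works):

    def bucket_sort(a, n_buckets=10):
        """Distribute into buckets, sort each bucket, then concatenate."""
        buckets = [[] for _ in range(n_buckets)]
        for x in a:
            buckets[int(x * n_buckets)].append(x)  # partition by value range
        result = []
        for b in buckets:
            result.extend(sorted(b))  # sort each bucket individually
        return result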
Radix sort is an algorithm that sorts a list of fixed-size numbers of length k in O(n·k) time by treating them as bit strings. We first sort the list by the least significant bit while preserving their relative order using a stable sort. Then we sort them by the next bit, and so on from right to left, until the list ends up sorted. Most often, the counting sort algorithm is used to accomplish the bitwise sorting, since the number of values a bit can take is small.
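A sketch of the one-bit-at-a-time version described above (practical implementations take several bits per pass, the chunk size s in the table, with counting sort on each chunk; key_bits is an assumed fixed key width):

    def radix_sort(a, key_bits=32):
        """LSD radix sort of non-negative integers, one bit per pass."""
        for bit in range(key_bits):
            # A stable partition on the current bit: the zeros keep their
            # relative order, then the ones keep theirs.
            zeros = [x for x in a if not (x >> bit) & 1]
            ones = [x for x in a if (x >> bit) & 1]
            a = zeros + ones
        return a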
Memory usage patterns and index sorting
When the size of the array to be sorted approaches or exceeds the available primary memory, so that (much slower) disk or swap space must be employed, the memory usage pattern of a sorting algorithm becomes important, and an algorithm that might have been fairly efficient when the array fit easily in RAM may become impractical. In this scenario, the total number of comparisons becomes (relatively) less important, and the number of times sections of memory must be copied or swapped to and from the disk can dominate the performance characteristics of an algorithm. Thus, the number of passes and the localization of comparisons can be more important than the raw number of comparisons, since comparisons of nearby elements to one another happen at system bus speed (or, with caching, even at CPU speed), which, compared to disk speed, is virtually instantaneous.
For example, the popular recursive quicksort algorithm provides quite reasonable performance with adequate RAM, but because it recursively scans and swaps over large portions of the array, it becomes much less practical when the array does not fit in RAM: it may cause a number of slow copy or move operations to and from disk. In that scenario, another algorithm may be preferable even if it requires more total comparisons.
One way to work around this problem, which works well when complex records (such as in a relational database) are being sorted by a relatively small key field, is to create an index into the array and then sort the index, rather than the entire array. (A sorted version of the entire array can then be produced with one pass, reading from the index, but often even that is unnecessary, as having the sorted index is adequate.) Because the index is much smaller than the entire array, it may fit easily in memory where the entire array would not, effectively eliminating the diskswapping problem. This procedure is sometimes called "tag sort".
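A sketch of the idea (the records and key field here are hypothetical; only the small index array is permuted, never the records themselves):

    # Large records sorted by a small key field via an index ("tag sort").
    records = [("carol", "...large payload..."),
               ("alice", "...large payload..."),
               ("bob",   "...large payload...")]
    index = sorted(range(len(records)), key=lambda i: records[i][0])
    # index == [1, 2, 0]; records[index[0]] is the first record in key order,
    # and one pass over index yields the fully sorted sequence if needed.
    for i in index:
        print(records[i][0])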
Another technique for overcoming the memory-size problem is to combine two algorithms in a way that takes advantage of the strengths of each to improve overall performance. For instance, the array might be subdivided into chunks of a size that will fit easily in RAM (say, a few thousand elements), the chunks sorted using an efficient algorithm (such as quicksort or heapsort), and the results merged as in merge sort. This is more efficient than just doing merge sort in the first place, and it requires less physical RAM (to be practical) than a full quicksort on the whole array.
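A minimal in-memory sketch of the combination (in practice each sorted chunk would be written to disk and streamed back during the merge; heapq.merge performs the k-way merge lazily):

    import heapq

    def chunked_sort(data, chunk_size=4096):
        """Sort fixed-size chunks independently, then k-way merge the runs."""
        runs = [sorted(data[i:i + chunk_size])  # each chunk fits in RAM
                for i in range(0, len(data), chunk_size)]
        return list(heapq.merge(*runs))  # merge the sorted runs, merge-sort style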
Techniques can also be combined. For sorting very large sets of data that vastly exceed system memory, even the index may need to be sorted using an algorithm or combination of algorithms designed to perform reasonably with virtual memory, i.e., to reduce the amount of swapping required.
Topics of Interest
External sorting is a term for a class of sorting algorithms that can handle massive amounts of data. External sorting is required when the data being sorted do not fit into the main memory of a computing device (usually RAM) and must instead reside in the slower external memory (usually a hard drive). External sorting typically uses a sort-merge strategy. In the sorting phase, chunks of data small enough to fit in main memory are read, sorted, and written out to a temporary file. In the merge phase, the sorted subfiles are combined into a single larger file.
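A sketch of the sort-merge strategy for a line-oriented text file (assumes every line ends with a newline; lines_per_run stands in for "small enough to fit in main memory"):

    import heapq
    import itertools
    import tempfile

    def external_sort(input_path, output_path, lines_per_run=100_000):
        """Sort a file too large for RAM: sorted runs, then a k-way merge."""
        runs = []
        with open(input_path) as f:
            while True:
                # Sorting phase: read a chunk that fits in memory, sort it,
                # and write it out as a temporary sorted run.
                chunk = list(itertools.islice(f, lines_per_run))
                if not chunk:
                    break
                chunk.sort()
                run = tempfile.TemporaryFile(mode="w+")
                run.writelines(chunk)
                run.seek(0)
                runs.append(run)
        # Merge phase: combine the sorted runs into a single output file.
        with open(output_path, "w") as out:
            out.writelines(heapq.merge(*runs))
        for run in runs:
            run.close()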
Source: Wikipedia (All text is available under the terms of the GNU Free Documentation License and the Creative Commons Attribution-ShareAlike License.)
