Welcome, Gophers! In this lesson we'll cover the Quick Sort algorithm — one of the most widely used sorting algorithms out there.
Overview
Quick Sort uses a technique called divide-and-conquer. The basic idea is dead simple:
- Pick a pivot — Choose an element from the array (often the last element, but it can be random).
- Partition — Rearrange the array so that everything smaller than the pivot goes to the left, and everything larger goes to the right.
- Recurse — Apply the same process to the left and right partitions.
Each time we partition, the pivot element ends up in its final sorted position. We keep recursing until every subarray has at most one element, at which point the whole array is sorted.
How It Looks
Picture the pivot as a divider: after one partition pass, every smaller element sits to its left, every larger element to its right, and the pivot itself never moves again.
Time Complexity
Quick Sort’s complexity depends on how well we pick our pivots:
- Best case: O(n log n) — when the pivot splits the array roughly in half each time
- Average case: O(n log n) — this is what you’ll see in practice
- Worst case: O(n²) — when the pivot is always the smallest or largest element (rare with random pivots)
The space complexity is O(log n) on average, due to the recursive call stack; a run of unlucky pivots can push it to O(n).
When to Use Quick Sort
Quick Sort is an excellent default choice for general-purpose sorting. It’s typically faster than Merge Sort in practice because of better cache locality, even though they share the same average time complexity.
The main downside? It’s not stable — equal elements might get reordered. If you need stability, Merge Sort is the better pick.
