Expanded Documentation

Overview

VisualSorting is an interactive platform for learning and evaluating sorting algorithms. It combines real-time visualization, operation metrics, and controlled input generation so users can connect theoretical complexity with observed behavior.

Visual Feedback System

The visualizer uses a consistent color coding system throughout all modes:

Yellow - Comparison

When two elements are being compared to determine their relative order, they are highlighted in yellow. This represents the fundamental operation of comparison-based sorting algorithms.

Red - Swap/Write

When elements are being swapped or written to new positions, they flash red. This represents memory writes—often the most expensive operation in terms of performance.

Green - Sorted

Elements that have reached their final sorted position turn green. Watch as the green "wave" spreads through the array as the algorithm progresses.

Blue - Unsorted

Elements in their unsorted state appear in the default blue color. The density and height of these bars represent the values in your array.

Educational Philosophy

Beyond visual demonstration, the platform supports evidence-based reasoning about algorithm design:

  • Why some algorithms are faster: See how divide-and-conquer approaches tackle large problems efficiently
  • Trade-offs between algorithms: Observe how some use more comparisons but fewer swaps, or vice versa
  • Best and worst cases: Experiment with different data distributions to see when algorithms excel or struggle
  • Memory usage patterns: Understand why some algorithms need extra space while others work in-place

Recommended Learning Path

Start with simple algorithms (Bubble, Selection, Insertion) to understand basic concepts. Then progress to efficient algorithms (Merge, Quick, Heap) to see how computer scientists optimize sorting. Finally, explore exotic algorithms (Radix, Bucket, Tim) to appreciate specialized solutions.

Quick Start Guide

Use the following workflow to run your first reproducible comparison in under a minute.

Basic Workflow

  1. Generate a dataset: Create an initial array from the selected distribution.
  2. Select an algorithm: Choose the implementation you want to inspect.
  3. Set scale and speed: Tune array size and playback delay for your use case.
  4. Run the visualization: Start sorting and monitor live comparison/write counts.
  5. Repeat with controls held constant: Change one variable at a time for valid comparisons.

Adjusting Array Size

The size slider controls how many elements are in your array (default range: 10-400 elements). Consider these guidelines:

  • Small arrays (10-30): Best for learning. You can follow individual elements and understand each step.
  • Medium arrays (50-100): Good for comparing algorithm patterns. You'll see characteristic shapes emerge.
  • Large arrays (200-400): Best for appreciating efficiency differences. Slow algorithms become visibly slower.

Understanding Speed Settings

The speed slider controls the delay between visualization steps:

  • Slow (left): ~500ms between steps. Perfect for step-by-step analysis and teaching.
  • Medium (center): ~50ms between steps. Balanced view of algorithm behavior.
  • Fast (right): ~1ms between steps. Shows overall patterns quickly. May appear instant for efficient algorithms.

Performance Note: Very large arrays (300+) with slow algorithms (Bubble, Selection) at slow speeds may take several minutes to complete. Use the "Stop" button if needed.

Interface Tour

The interface is organized by task: visualization, stepwise inspection, comparative racing, benchmarking, custom implementation, and analytics.

Navigation Tabs

The top navigation bar provides access to six distinct modes:

Visualizer

The main sorting visualization mode. Watch algorithms sort arrays in real-time with full control over speed, size, and algorithm selection.

Step Mode

Execute sorting one step at a time. Perfect for understanding exactly what happens at each iteration. Includes pseudocode highlighting.

Compare

Race up to 25 algorithms against each other on identical data. See which algorithms finish first and compare their statistics.

Benchmark

Run performance tests without visualization delays. Get accurate timing data, comparison counts, and write-operation counts for objective comparisons.

Custom

Write your own sorting algorithms with JavaScript, Python, C#, or Blockly. The sandbox environment provides helper functions for comparisons, swap/write operations, and visualization.

Analytics

View complexity charts, session statistics, and access the algorithm encyclopedia. Export your session data for further analysis.

Statistics Panel

The floating statistics panel (top-left during visualization) shows:

  • Comparisons (Comp): How many times two elements have been compared. This is the primary metric for comparison-based sorts.
  • Writes (Swaps/Writes): How many times elements are moved or overwritten. Important for understanding memory-write cost.

Advanced Options Panel

Click "Advanced Options" to expand additional settings (covered in detail in the Features section).

All Features

Explore the full range of features available in the Advanced Options panel and throughout the application:

Array Distribution Types

Control how the initial array is generated to test algorithms under different conditions:

Random

Completely random values with uniform distribution. This is the "average case" for most algorithms and provides the most realistic general-purpose test.

Nearly Sorted

~90% of elements are in sorted order with ~10% randomly displaced. This is common in real-world data (e.g., mostly sorted logs with recent additions). Algorithms like Insertion Sort and Tim Sort excel here.

Reversed

Array is sorted in descending order—the worst case for many algorithms. Bubble Sort and Insertion Sort perform terribly here, while Quick Sort with poor pivot selection degrades to O(n²).

Few Unique

Only 5 distinct values repeated throughout the array. Tests how algorithms handle duplicates. Three-way partitioning algorithms (like Dutch National Flag) shine here.

Already Sorted

Array is already in perfect ascending order. The "best case" for adaptive algorithms. Bubble Sort and Insertion Sort achieve O(n) here, while non-adaptive algorithms still do full work.
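The five distributions above can be sketched as simple generators. The exact parameters (displacement rate for Nearly Sorted, the five values for Few Unique) are assumptions for illustration, not the app's actual constants:

```javascript
// Sketches of the five input distributions (parameters are illustrative).
function randomArray(n) {
  return Array.from({ length: n }, () => Math.floor(Math.random() * 100) + 1);
}

function nearlySorted(n, displaced = 0.1) {
  const a = Array.from({ length: n }, (_, i) => i + 1);
  // Swap roughly 10% of positions out of place.
  for (let k = 0; k < Math.floor(n * displaced); k++) {
    const i = Math.floor(Math.random() * n);
    const j = Math.floor(Math.random() * n);
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

function reversed(n) {
  return Array.from({ length: n }, (_, i) => n - i); // descending
}

function fewUnique(n, values = 5) {
  return Array.from({ length: n }, () => (Math.floor(Math.random() * values) + 1) * 20);
}

function alreadySorted(n) {
  return Array.from({ length: n }, (_, i) => i + 1); // ascending
}
```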

Audio Features

Audio feedback provides an additional signal channel for operation cadence and value movement:

  • Pitch Mapping: Each element's value is mapped to a musical pitch. Higher values produce higher tones, lower values produce lower tones.
  • Comparison Sounds: Short tones play when elements are compared, creating a melodic pattern unique to each algorithm.
  • Completion Sound: A satisfying ascending scale plays when sorting completes.
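A plausible sketch of the pitch mapping: values are interpolated exponentially between a low and high frequency, so equal value steps sound like equal musical intervals. The 200-1200 Hz range is an assumption, not the app's actual setting; in the app this frequency would drive a Web Audio oscillator.

```javascript
// Hypothetical pitch mapping: value 1-100 to an exponential frequency curve.
function valueToFrequency(value, min = 200, max = 1200) {
  const t = (value - 1) / 99;            // normalize 1-100 to 0-1
  return min * Math.pow(max / min, t);   // exponential interpolation
}
```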

Visual Options

  • Show Bar Values: Display the numeric value on each bar. Helpful for small arrays when you want to track specific numbers.
  • Rainbow Mode: Color bars based on their value (HSL color mapping). Creates beautiful visual patterns, especially during Radix and Bucket sorts.
  • 3D Mode: Add depth perspective to the bar visualization. Purely aesthetic but provides a fresh visual experience.
  • Colorblind Modes: Protanopia and Tritanopia-friendly color schemes ensure accessibility for users with color vision deficiencies.

Save/Load System

Preserve your arrays for consistent testing:

  • Save Array: Store the current array state to browser localStorage. Persists between sessions.
  • Load Array: Restore a previously saved array. Great for comparing how different algorithms handle identical data.

Per-Tab Onboarding Tours

Guided tours are tailored by tab and explain controls, expected outputs, and common workflows for each mode.

Dynamic WebGL Particle Background

The landing area includes an optimized WebGL particle layer for visual polish while keeping computational overhead low.

Advanced Audio Synthesizer

A configurable synthesis engine maps values to notes and applies envelope shaping for clearer operation feedback during playback.

Time-Scrubbing Engine

Execution history can be scrubbed forward or backward, enabling detailed inspection of local transitions and decision points.

More Customization

Customization options include accessibility palettes, alternate rendering styles, and save/load or import workflows for repeatable experiments.

Keyboard Shortcuts

Power users can control the visualizer entirely from the keyboard:

  • Space: Start/Stop sorting (Visualizer, Step Mode)
  • G: Generate new array (Visualizer)
  • S: Stop active sort (Visualizer)
  • Right Arrow: Next step (Step Mode)
  • Left Arrow: Previous step (Step Mode)
  • R: Reset to beginning (Step Mode)
  • P: Play/Pause autoplay (Step Mode)
  • 1-6: Switch tabs (1=Visualizer, 2=Step, etc.) (Global)
  • ?: Toggle keyboard hints (Global)

Mobile Users: Swipe left/right to navigate between tabs. Touch controls work throughout the interface.

Custom Arrays

Load your own data for sorting visualization instead of using randomly generated arrays. This feature is available in three modes with different size limits optimized for each use case.

Visualizer Mode (Max 500 Elements)

In the main Visualizer tab, expand "Advanced Options" to find the Custom Array section:

  • Text Input: Enter comma-separated numbers directly (e.g., 5, 3, 8, 1, 9, 2)
  • File Upload: Click "Upload File" to load from JSON, TXT, or CSV files
  • Automatic Normalization: Values are automatically scaled to 1-100% for proper visualization
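The normalization step can be sketched as a linear rescale into the 1-100 range used for bar heights. The target range comes from the text above; the handling of constant input (all values equal) is an assumption:

```javascript
// Scale arbitrary input values into the 1-100 range used for bar heights.
function normalize(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (min === max) return values.map(() => 50); // flat input: mid-height bars (assumption)
  return values.map(v => 1 + 99 * (v - min) / (max - min));
}
```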

Compare Mode (Max 500 Elements)

Race algorithms on your own data instead of random arrays:

  • Click the "📁 Load Array" button in the race controls
  • All racers will use identical copies of your custom array
  • Click the "✕" button to clear and return to random arrays

Benchmark Mode (Max 100,000 Elements)

For serious performance testing with real-world data:

  • Select "Custom Array" from the Distribution dropdown
  • Upload a file with up to 100,000 elements
  • Benchmarks run at full speed without visualization delays

Supported File Formats

JSON (.json)

Plain arrays: [5, 3, 8, 1]
Or objects with array property: {"data": [5, 3, 8, 1]}
Supports array, data, or values keys.

Text (.txt)

Numbers separated by any whitespace, commas, semicolons, or newlines:
5 3 8 1 or 5,3,8,1 or one number per line.

CSV (.csv)

Standard comma-separated values. Only numeric values are extracted; headers and non-numeric data are ignored.
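A hedged sketch of a loader covering the three formats. The accepted JSON keys follow the text above; the token-extraction approach for .txt and .csv is an assumption about how non-numeric data gets ignored:

```javascript
// Parse an uploaded array file by extension.
function parseArrayFile(name, text) {
  if (name.endsWith(".json")) {
    const data = JSON.parse(text);
    if (Array.isArray(data)) return data;
    // Objects may hold the array under "array", "data", or "values".
    for (const key of ["array", "data", "values"]) {
      if (Array.isArray(data[key])) return data[key];
    }
    throw new Error("No array found in JSON");
  }
  // .txt / .csv: extract every numeric token, ignore headers and text.
  return (text.match(/-?\d+(\.\d+)?/g) || []).map(Number);
}
```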

Benchmarking

The Benchmark tab provides objective performance measurements by running algorithms at full speed in a Web Worker (background thread). This ensures accurate timing without UI rendering overhead.

How Benchmarking Works

  1. Array Generation: For each test run, a fresh array is generated according to your selected distribution (or your custom array is used)
  2. Identical Copies: Each algorithm receives an identical copy of this array, ensuring fair comparison
  3. Full-Speed Execution: Algorithms run without any visualization delays—measuring actual computational performance
  4. Statistics Collection: Time (milliseconds), comparisons, and write operations are tracked precisely
  5. Multiple Runs: If you specify multiple test runs, results are averaged for statistical significance

Configuration Options

  • Array Size (10 - 100,000): 1,000-10,000 is enough for meaningful comparisons. Larger sizes expose O(n²) algorithm weaknesses.
  • Test Runs (1 - 50): 3-5 runs provide good averages. More runs smooth out system variance.
  • Distribution (Random, Nearly Sorted, Reversed, Few Unique, Sorted): Test with "Random" first, then explore edge cases with the other distributions.

Understanding the Score

Benchmark scores are based on relative speed: the fastest algorithm in a run gets 100, and others are scaled proportionally by average time.

  • Execution Time (primary score driver): Average wall-clock time to complete sorting. The most practical real-world metric.
  • Comparisons (reported in results): Number of element comparisons. Reflects algorithmic efficiency for comparison-based sorts, but does not directly change the score.
  • Swaps/Writes (reported in results): Number of memory writes. Important for flash memory and for understanding algorithm behavior, but does not directly change the score.
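One plausible reading of the scoring rule, as code: the fastest average time maps to 100 and the rest scale inversely with their average time (rounding is an assumption):

```javascript
// Relative-speed scores: fastest algorithm gets 100, others scale by time.
function benchmarkScores(avgTimesMs) {
  const fastest = Math.min(...avgTimesMs);
  return avgTimesMs.map(t => Math.round(100 * fastest / t));
}
```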

Exporting Results

After benchmarking, export your results for further analysis:

  • CSV Export: Spreadsheet-compatible format for Excel, Google Sheets, or data analysis tools
  • JSON Export: Structured data format for programmatic analysis or archival
  • Export Average vs All: Choose to export averaged results or individual run data

⚠️ Large Array Warning: Arrays above 50,000 elements with slow algorithms (Stooge Sort, Bogo Sort) may freeze your browser. These are excluded by default—enable them manually only with small arrays.

Algorithm Race (Compare)

The Compare tab runs up to 25 algorithms on identical input arrays in synchronized panels, enabling direct visual comparison of completion speed and operation profile.

Features

  • Live Statistics: Each panel shows real-time comparison and write-operation counts.
  • Finish Detection: Panels turn green when their algorithm completes.
  • Ranking: Gold, silver, and bronze badges appear for the top 3 finishers.
  • Pause/Resume: Pause the race at any point to analyze current state.

Custom Algorithms

The Custom tab is a sandbox for authoring and validating user-defined sorting logic with worker-isolated execution and operation logging for deterministic playback.

Supported Languages

  • JavaScript: Runs natively via Web Workers.
  • Python: Backed by Pyodide (Python via WebAssembly).
  • C#: Experimental lightweight syntax translation in the worker runtime.
  • Blockly (Visual Logic): Build sorting logic visually with drag-and-drop blocks for intuitive, code-free learning.

Available API

  • compare(i, j): Highlights the elements at indices i and j in yellow and returns the result of comparing them. Increments the comparison count.
  • swap(i, j): Swaps the elements at i and j directly in the main array and highlights them red.
  • set(i, val): Updates the value at index i to val. Counted as a single write operation.
  • n / length: The length of the array.
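A minimal custom algorithm expressed through the documented sandbox primitives. Here compare and swap are stubbed over a local array so the sketch runs standalone; in the Custom tab the worker provides them, and they also drive the highlighting and stats counters. The sign convention of the stubbed compare (positive means out of order) is an assumption:

```javascript
// Bubble Sort written purely against the sandbox API (compare, swap, n).
const arr = [5, 3, 8, 1, 9, 2];
const n = arr.length;
const compare = (i, j) => arr[i] - arr[j];                     // stub (assumption)
const swap = (i, j) => { [arr[i], arr[j]] = [arr[j], arr[i]]; }; // stub

for (let end = n - 1; end > 0; end--) {
  for (let i = 0; i < end; i++) {
    if (compare(i, i + 1) > 0) swap(i, i + 1); // adjacent pair out of order
  }
}
// arr is now sorted ascending
```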

Collapsible Console

The integrated console surfaces runtime logs and exception messages from the sandbox to support efficient debugging and iteration.

Bubble Sort

O(n²) O(1)

Repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. Named because small elements "bubble" toward the front. A classic educational algorithm, but rarely used in practice because of its quadratic number of comparisons and swaps.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Selection Sort

O(n²) O(1)

Finds the minimum element from the unsorted part and puts it at the beginning. Simple but inefficient for large lists. Makes the minimum possible number of swaps, making it useful when write-operations are extremely expensive.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Double Selection Sort

O(n²) O(1)

Selects both the minimum and maximum in each pass, placing them at the beginning and end respectively. This simultaneous bounds-finding roughly halves the number of passes compared with standard Selection Sort.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Insertion Sort

O(n²) O(1)

Builds the sorted array one element at a time by inserting each element into its correct position. Highly efficient for small or nearly sorted data, which is why it often serves as the base case of more complex algorithms.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes
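The insertion step described above can be sketched in a few lines; this is a textbook version, not necessarily the app's exact implementation:

```javascript
// Insertion Sort: shift larger elements right, then drop the key into the gap.
function insertionSort(a) {
  for (let i = 1; i < a.length; i++) {
    const key = a[i];
    let j = i - 1;
    while (j >= 0 && a[j] > key) { // shift elements greater than key
      a[j + 1] = a[j];
      j--;
    }
    a[j + 1] = key;                // insert into the gap (stable, in-place)
  }
  return a;
}
```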

Cocktail Shaker Sort

O(n²) O(1)

A variant of Bubble Sort that traverses the list in alternating directions. The backward passes eliminate the "turtle" problem of small values stuck near the end.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Gnome Sort

O(n²) O(1)

Works like a garden gnome sorting flower pots: it moves forward while adjacent items are in order and steps back to fix them when they are not, combining the simplicity of Bubble Sort with the insertion behavior of Insertion Sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Odd-Even Sort

O(n²) O(1)

Alternates between odd-indexed and even-indexed passes, comparing and swapping adjacent pairs. Originally designed for, and exceptionally well suited to, parallel processing architectures.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Cycle Sort

O(n²) O(1)

Minimizes the total number of memory writes by rotating each element directly into its final position. Theoretically optimal in write count, making it ideal for EEPROM or flash memory where writes are expensive.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Pancake Sort

O(n²) O(1)

Sorts by repeatedly flipping sub-arrays, like arranging pancakes by size: the only permitted operation is reversing a prefix of the array (a spatula flip).

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Comb Sort

O(n²/2^p) O(1)

Improves on Bubble Sort by comparing elements a gap apart, shrinking the gap by a factor of about 1.3 each pass. The large early gaps eliminate turtles (small values near the end) far faster than adjacent swaps.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Strand Sort

O(n²) O(n)

Pulls sorted "strands" from the input and merges them into the output list. Very efficient for partially sorted data. Particularly effective when dealing with linked lists rather than arrays.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Bingo Sort

O(n²) O(1)

A variant of Selection Sort optimized for duplicate-heavy arrays: each pass places every occurrence of the current extreme value at once, so runs of equal values cost only a single pass.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Exchange Sort

O(n²) O(1)

Compares each position against every later position, swapping when out of order. Often confused with Bubble Sort, but it performs systematic global comparisons rather than purely adjacent ones.

✓ Stats

  • Best: O(n²)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Merge Sort

O(n log n) O(n)

Divides the array in half, sorts each half, then merges them. A textbook example of the divide-and-conquer paradigm, guaranteeing O(n log n) performance regardless of input.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No
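The split/recurse/merge structure described above, as a compact sketch (a textbook version, not necessarily the app's implementation):

```javascript
// Merge Sort: split in half, sort each half recursively, merge the results.
function mergeSort(a) {
  if (a.length <= 1) return a;
  const mid = a.length >> 1;
  const left = mergeSort(a.slice(0, mid));
  const right = mergeSort(a.slice(mid));
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    // <= keeps equal elements in their original order (stability)
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}
```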

3-Way Merge Sort

O(n log₃ n) O(n)

Splits the array into three parts instead of two, reducing recursion depth by branching into thirds, though the shallower tree is partly offset by extra comparison overhead in each merge.

✓ Stats

  • Best: O(n log₃ n)
  • Worst: O(n log₃ n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Quick Sort

O(n log n) O(log n)

Picks a pivot, partitions around it, and recursively sorts the partitions. One of the fastest general-purpose sorts in practice. Employs an elegant partitioning scheme that exhibits excellent cache locality.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes
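The pivot-and-partition scheme described above can be sketched with Lomuto partitioning, one common textbook choice; the app's implementation may use a different partition scheme or pivot selection:

```javascript
// In-place Quick Sort with Lomuto partitioning (last element as pivot).
function quickSort(a, lo = 0, hi = a.length - 1) {
  if (lo >= hi) return a;
  const pivot = a[hi];
  let p = lo;
  for (let i = lo; i < hi; i++) {
    if (a[i] < pivot) { [a[i], a[p]] = [a[p], a[i]]; p++; }
  }
  [a[p], a[hi]] = [a[hi], a[p]]; // place pivot between the partitions
  quickSort(a, lo, p - 1);
  quickSort(a, p + 1, hi);
  return a;
}
```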

Dual-Pivot Quick Sort

O(n log n) O(log n)

Uses two pivots to partition into three segments. Used by Java Arrays.sort() for primitives. Fewer swaps on average. Significantly reduces recursive depth and handles duplicate keys robustly.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

IntroSort

O(n log n) O(log n)

Starts as Quick Sort, switches to Heap Sort when the recursion gets too deep, and finishes small partitions with Insertion Sort. Used by the C++ STL; it achieves fast average performance while capping the worst case at O(n log n).

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Heap Sort

O(n log n) O(1)

Builds a max-heap from the array, then repeatedly extracts the maximum into the growing sorted suffix. Guaranteed O(n log n) with no extra memory, since the heap lives inside the array itself.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes
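The two phases described above (heapify, then repeated extraction) can be sketched as follows; this is a standard textbook version:

```javascript
// Heap Sort: build a max-heap in place, then move the root to the end
// of the shrinking unsorted region.
function heapSort(a) {
  const siftDown = (start, end) => {
    let root = start;
    while (2 * root + 1 <= end) {
      let child = 2 * root + 1;                       // left child
      if (child + 1 <= end && a[child + 1] > a[child]) child++; // pick larger child
      if (a[root] >= a[child]) return;                // heap property holds
      [a[root], a[child]] = [a[child], a[root]];
      root = child;
    }
  };
  for (let i = (a.length >> 1) - 1; i >= 0; i--) siftDown(i, a.length - 1);
  for (let end = a.length - 1; end > 0; end--) {
    [a[0], a[end]] = [a[end], a[0]];  // extract max into the sorted suffix
    siftDown(0, end - 1);
  }
  return a;
}
```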

Shell Sort

O(n^(4/3)) O(1)

Generalizes Insertion Sort by allowing exchanges of elements far apart, using a decreasing sequence of gap values so items travel long distances in early passes. Its exact complexity depends on the gap sequence.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n^(3/2))

⚠ Info

  • Stable: No
  • In-Place: Yes

Tim Sort

O(n log n) O(n)

A hybrid of Merge Sort and Insertion Sort: it finds natural runs in the data, extends short ones, and merges them with a sophisticated merge strategy. Used by Python and Java because it exploits the partial order present in real-world data.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Bitonic Sort

O(n log² n) O(1)

Creates bitonic sequences then merges them. Highly parallelizable — designed for hardware implementations and GPU sorting. Sorts independently of input data variations, maintaining a rigid sequence of comparisons.

✓ Stats

  • Best: O(n log² n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Circle Sort

O(n log² n) O(log n)

Compares elements equidistant from both ends, swapping them if out of order, and repeats until a full pass makes no swaps. The concentric comparisons produce a distinctive, non-obvious visual pattern.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Smoothsort

O(n log n) O(1)

A variation of Heap Sort that uses a heap built on Leonardo numbers, letting it adapt smoothly down to O(n) on nearly sorted inputs.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Patience Sorting

O(n log n) O(n)

A distribution sort based on the rules of the card game patience (solitaire): it deals elements into "piles" whose structure also identifies the longest increasing subsequence.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Tournament Sort

O(n log n) O(n)

Similar to Heap Sort, it builds a tournament tree: a binary tree that simulates a knockout tournament, repeatedly extracting the winner.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Tree Sort

O(n log n) O(n)

Builds a binary search tree from the elements, then reads them back with an in-order traversal. Degrades when the tree becomes unbalanced.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Block Sort

O(n log n) O(1)

A stable merge sort that achieves strictly O(1) auxiliary space by replacing buffer-based merges with in-place block rotations.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: Yes

Pattern-Defeating Quicksort

O(n log n) O(log n)

A modern hybrid that augments Quick Sort with branchless block partitioning, pattern detection, and a Heap Sort fallback, defeating the adversarial inputs that degrade plain Quick Sort.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Library Sort

O(n log n) O(n)

A gapped insertion sort: like a librarian leaving empty shelf space for future books, it keeps gaps in the array so insertions do not trigger cascading shifts.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Drop-Merge Sort

O(n log n) O(n)

A heavily adaptive algorithm that "drops" out-of-order elements from the array, sorts them separately, and merges them back.

✓ Stats

  • Best: O(n)
  • Worst: O(n log n)

⚠ Info

  • Stable: Yes
  • In-Place: No

Cartesian Tree Sort

O(n log n) O(n)

Builds a Cartesian tree over the input and uses it as a min priority queue to emit elements in sorted order.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Weak-Heap Sort

O(n log n) O(n)

Loosens the heap's structural constraints into a "weak heap", which slightly reduces the number of comparisons and branches.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Ford-Johnson

O(n log n) O(n)

Merge-insertion sort, designed to approach the theoretical minimum number of comparisons for a given n.

✓ Stats

  • Best: O(n log n)
  • Worst: O(n log n)

⚠ Info

  • Stable: No
  • In-Place: No

Batcher's Odd-Even Mergesort

O(n log² n) O(1)

A sorting-network approach: it transforms the sequential merge step into a parallelizable network of comparators with a fixed comparison pattern.

✓ Stats

  • Best: O(n log² n)
  • Worst: O(n log² n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Counting Sort

O(n+k) O(k)

Counts the occurrences of each value and uses prefix arithmetic to place elements directly, abandoning comparisons entirely. Very fast when the range k is not much larger than n, at the cost of an auxiliary counting array.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n+k)

⚠ Info

  • Stable: Yes
  • In-Place: No
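The count/prefix-sum/place steps described above, sketched for non-negative integers (the backwards placement pass is what makes it stable):

```javascript
// Counting Sort for non-negative integers up to k.
function countingSort(a, k = Math.max(...a)) {
  const counts = new Array(k + 1).fill(0);
  for (const v of a) counts[v]++;                          // count occurrences
  for (let i = 1; i <= k; i++) counts[i] += counts[i - 1]; // prefix sums = end positions
  const out = new Array(a.length);
  for (let i = a.length - 1; i >= 0; i--) out[--counts[a[i]]] = a[i]; // stable placement
  return out;
}
```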

Radix Sort (LSD)

O(d(n+k)) O(n+k)

Sorts by processing digits from least significant to most significant, using counting sort as a subroutine. Separates processing into individual digit buckets, avoiding any direct whole-element comparisons.

✓ Stats

  • Best: O(d(n+k))
  • Worst: O(d(n+k))

⚠ Info

  • Stable: Yes
  • In-Place: No

Radix Sort (MSD)

O(d(n+k)) O(n+k)

Like LSD Radix Sort but processes digits from the most significant first, working top-down. This allows early termination on already-sorted data and handles variable-length keys naturally.

✓ Stats

  • Best: O(d(n+k))
  • Worst: O(d(n+k))

⚠ Info

  • Stable: Yes
  • In-Place: No

Bucket Sort

O(n+k) O(n)

Distributes elements into buckets covering uniform numeric ranges, sorts each bucket individually (usually with Insertion Sort), then concatenates the buckets.

✓ Stats

  • Best: O(n+k)
  • Worst: O(n²)

⚠ Info

  • Stable: Yes
  • In-Place: No

Gravity (Bead) Sort

O(S) O(n·m)

Simulates beads falling under gravity on an abacus: each value is represented as a row of beads that drop into place, so the sorted order emerges naturally. Here S is the sum of the input values and m is the maximum value.

✓ Stats

  • Best: O(n)
  • Worst: O(S)

⚠ Info

  • Stable: No
  • In-Place: No

Pigeonhole Sort

O(n+N) O(N)

A distribution sort suited to cases where the number of elements n and the range of possible values N are roughly equal: each value maps directly to its own "hole".

✓ Stats

  • Best: O(n+N)
  • Worst: O(n+N)

⚠ Info

  • Stable: Yes
  • In-Place: No

Flashsort

O(n) O(n)

An efficient distribution sort that estimates each element's final location through statistical interpolation, yielding near-linear performance on uniformly distributed data.

✓ Stats

  • Best: O(n)
  • Worst: O(n²)

⚠ Info

  • Stable: No
  • In-Place: Yes

Stooge Sort

O(n^2.71) O(log n)

Recursively sorts the first 2/3, the last 2/3, then the first 2/3 again. Named after The Three Stooges; the overlapping recursion makes it extremely slow (about O(n^2.71)) and purely educational.

✓ Stats

  • Best: O(n^2.71)
  • Worst: O(n^2.71)

⚠ Info

  • Stable: No
  • In-Place: Yes

Sleep Sort

O(max(n)) O(n)

Sorts by timeouts: each value schedules a timer proportional to itself and is appended when the timer fires, offloading the work entirely to the OS scheduler.

✓ Stats

  • Best: O(max(n))
  • Worst: O(max(n))

⚠ Info

  • Stable: No
  • In-Place: No

Slowsort

O(n^log n) O(n)

The canonical "multiply and surrender" algorithm: it deliberately pessimizes divide and conquer, producing an aggressively inefficient O(n^log n) runtime.

✓ Stats

  • Best: O(n^log n)
  • Worst: O(n^log n)

⚠ Info

  • Stable: No
  • In-Place: Yes

Stalin Sort

O(n) O(1)

Ruthlessly deletes any element that violates the sorted order. Runs in O(n), but is destructive: the result is a sorted subsequence, not a permutation of the input.

✓ Stats

  • Best: O(n)
  • Worst: O(n)

⚠ Info

  • Stable: Yes
  • In-Place: Yes
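The whole algorithm fits in a single pass; note that the output may be shorter than the input:

```javascript
// Stalin Sort: keep only elements that do not break nondecreasing order.
function stalinSort(a) {
  const out = [];
  for (const v of a) {
    if (out.length === 0 || v >= out[out.length - 1]) out.push(v);
  }
  return out;
}
```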

Bogo Sort

O(n*n!) O(1)

Randomly shuffles the array and checks whether it is sorted, repeating until it succeeds. Expected running time is factorial. Also known as "stupid sort" or "monkey sort"; use with very small arrays only.

✓ Stats

  • Best: O(n)
  • Worst: O(∞)

⚠ Info

  • Stable: No
  • In-Place: Yes