# Big O Notation

Understanding time and space complexity analysis for algorithms.
## What is Big O Notation?

Big O notation is a mathematical notation that describes the performance or complexity of an algorithm. It is most commonly used to describe the worst-case scenario, and it tells us how the runtime or space requirements grow as the input size increases.

Big O gives us a high-level understanding of an algorithm's time or space complexity, allowing us to compare different approaches and choose the most efficient one.
## Why is Big O Important?
- 📊 Performance Prediction: Understand how your algorithm scales with larger inputs
- ⚖️ Algorithm Comparison: Compare different solutions objectively
- 🎯 Interview Essential: Critical for technical interviews at top companies
- 💡 Design Decisions: Make informed choices about data structures and algorithms
## Common Time Complexities

Here are the most common Big O complexities, ordered from best to worst:
### O(1) - Constant Time

The algorithm takes the same amount of time regardless of input size.

```javascript
// Example: Array access by index
function getFirstElement(arr) {
  return arr[0]; // Always takes the same time
}

// Example: Hash table lookup
const map = new Map([['key', 'value']]);
map.get('key'); // O(1) lookup
```
### O(log n) - Logarithmic Time

The algorithm's time increases logarithmically with input size. Very efficient!

```javascript
// Example: Binary search (requires a sorted array)
function binarySearch(arr, target) {
  let left = 0;
  let right = arr.length - 1;
  while (left <= right) {
    const mid = Math.floor((left + right) / 2);
    if (arr[mid] === target) return mid;
    if (arr[mid] < target) left = mid + 1;
    else right = mid - 1;
  }
  return -1;
}

// With each iteration, we cut the search space in half.
// For 1000 items, only ~10 comparisons are needed!
```
### O(n) - Linear Time

The algorithm's time increases linearly with input size.

```javascript
// Example: Finding the max in an unsorted array
function findMax(arr) {
  let max = arr[0];
  for (let i = 1; i < arr.length; i++) {
    if (arr[i] > max) {
      max = arr[i];
    }
  }
  return max;
}
// Must check every element once
```
### O(n log n) - Linearithmic Time

Common in efficient sorting algorithms like merge sort (and quicksort on average; quicksort's worst case is O(n²)).

```javascript
// Example: Merge Sort
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  return merge(left, right);
}

function merge(left, right) {
  const result = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    if (left[i] < right[j]) {
      result.push(left[i++]);
    } else {
      result.push(right[j++]);
    }
  }
  // Append whichever half still has elements remaining
  return result.concat(left.slice(i)).concat(right.slice(j));
}
```
### O(n²) - Quadratic Time

Common with nested loops. Becomes slow with larger inputs.

```javascript
// Example: Bubble Sort
function bubbleSort(arr) {
  const n = arr.length;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n - i - 1; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]];
      }
    }
  }
  return arr;
}
// For 1000 items, this performs on the order of n² ≈ 1,000,000 operations!
```
### O(2ⁿ) - Exponential Time

Very slow. The work roughly doubles with each additional input element. Avoid when possible!

```javascript
// Example: Naive Fibonacci (inefficient)
function fibonacci(n) {
  if (n <= 1) return n;
  return fibonacci(n - 1) + fibonacci(n - 2);
}
// Exponential growth: fibonacci(50) makes tens of billions of
// recursive calls. This is why we need dynamic programming.
```
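Since the text points to dynamic programming as the fix, here is a minimal memoized sketch (the `fibonacciMemo` name and the `Map` cache are illustrative choices, not from the text above). Each value is computed once, bringing the time down to O(n) at the cost of O(n) space:

```javascript
// Memoized Fibonacci: caches each computed value so every
// subproblem is solved only once - O(n) time, O(n) space.
function fibonacciMemo(n, memo = new Map()) {
  if (n <= 1) return n;
  if (memo.has(n)) return memo.get(n);
  const result = fibonacciMemo(n - 1, memo) + fibonacciMemo(n - 2, memo);
  memo.set(n, result);
  return result;
}

fibonacciMemo(50); // returns 12586269025 almost instantly
```

The same idea works bottom-up with a loop and two variables, which also drops the space to O(1).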
## Big O Complexity Chart

Approximate number of operations for each input size:
| Complexity | Name | n=10 | n=100 | n=1000 |
|---|---|---|---|---|
| O(1) | Constant | 1 | 1 | 1 |
| O(log n) | Logarithmic | 3 | 7 | 10 |
| O(n) | Linear | 10 | 100 | 1,000 |
| O(n log n) | Linearithmic | 30 | 700 | 10,000 |
| O(n²) | Quadratic | 100 | 10,000 | 1,000,000 |
| O(2ⁿ) | Exponential | 1,024 | 1.27e+30 | 💀 Impossible |
## Space Complexity

Big O also describes space (memory) complexity: how much additional memory an algorithm needs.

### O(1) Space - Constant

```javascript
// Uses a fixed amount of space
function sum(a, b) {
  return a + b; // No extra space beyond the inputs
}
```
### O(n) Space - Linear

```javascript
// Creates a new array proportional to the input size
function double(arr) {
  const result = [];
  for (let i = 0; i < arr.length; i++) {
    result.push(arr[i] * 2);
  }
  return result;
}
```
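For contrast, a hypothetical in-place variant of the same operation (not from the text above) drops the extra space to O(1), because it mutates the input instead of allocating a new array:

```javascript
// Doubling in place: no new array is allocated, so the extra
// space is O(1) regardless of input size. Note it mutates arr.
function doubleInPlace(arr) {
  for (let i = 0; i < arr.length; i++) {
    arr[i] *= 2;
  }
  return arr;
}

doubleInPlace([1, 2, 3]); // → [2, 4, 6]
```

This is a common time/space trade-off: the in-place version saves memory but destroys the original data.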
## Big O, Big Omega, and Big Theta

### Big O (O)

Upper Bound - the running time grows no faster than this. Most commonly used, typically to describe the worst case.

"This algorithm will never be slower than..."

### Big Omega (Ω)

Lower Bound - the running time grows at least this fast.

"This algorithm will take at least this long..."

### Big Theta (Θ)

Tight Bound - the upper and lower bounds match, so the growth rate is pinned down exactly. Note that Θ is a tight bound, not the average case.

"This algorithm will always be around..."
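To see how the bounds differ on the same algorithm, consider a linear-search sketch (a hypothetical helper, not defined elsewhere in this article):

```javascript
// Linear search: best case Ω(1) when the target is the first
// element; worst case O(n) when the target is last or absent.
// Because best and worst cases differ, no single Θ bound
// describes all inputs - but the worst case alone is Θ(n).
function linearSearch(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i; // best case: found immediately
  }
  return -1; // worst case: scanned every element
}
```
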
## Rules for Calculating Big O

### 1. Drop Constants

O(2n) → O(n), O(500) → O(1)

```javascript
// This is O(n), not O(2n)
function example(arr) {
  for (let i = 0; i < arr.length; i++) { } // O(n)
  for (let i = 0; i < arr.length; i++) { } // O(n)
  // Total: O(n + n) = O(2n) = O(n)
}
```
### 2. Drop Non-Dominant Terms

O(n² + n) → O(n²), O(n + log n) → O(n)

```javascript
// This is O(n²), not O(n² + n)
function example(arr) {
  for (let i = 0; i < arr.length; i++) { } // O(n)
  for (let i = 0; i < arr.length; i++) {
    for (let j = 0; j < arr.length; j++) { } // O(n²)
  }
  // Total: O(n + n²) = O(n²)
}
```
### 3. Different Inputs → Different Variables

```javascript
// This is O(a + b), not O(n)
function example(arr1, arr2) {
  for (let i = 0; i < arr1.length; i++) { } // O(a)
  for (let i = 0; i < arr2.length; i++) { } // O(b)
  // Total: O(a + b)
}
```
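A common companion to this rule, worth noting: when the loops over different inputs are *nested* rather than sequential, the variables multiply instead of add. A sketch with a hypothetical `allPairs` helper:

```javascript
// Nested loops over different inputs multiply: O(a * b), not O(n²)
function allPairs(arr1, arr2) {
  const pairs = [];
  for (let i = 0; i < arr1.length; i++) {   // runs a times
    for (let j = 0; j < arr2.length; j++) { // runs b times per i
      pairs.push([arr1[i], arr2[j]]);
    }
  }
  return pairs; // produces a * b pairs in total
}

allPairs([1, 2], ['x', 'y', 'z']).length; // 2 * 3 = 6
```
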
## 💡 Key Takeaways
- ✓ Big O describes worst-case performance as input size grows
- ✓ Focus on the dominant term and drop constants
- ✓ O(1) is best, O(2ⁿ) is worst - aim for O(log n) or O(n) when possible
- ✓ Nested loops usually mean O(n²) or worse
- ✓ Divide and conquer approaches often achieve O(log n) or O(n log n)
- ✓ Consider both time AND space complexity in your solutions