Learn Big O Notation

Introduction#

If you’ve ever taken a computer science course or dived into coding interviews, you’ve likely come across Big O notation. It’s one of those concepts that can feel a bit abstract at first but is crucial for understanding the efficiency of algorithms.

In this post, we’ll break down Big O into simple parts, covering what it means, why it’s important, and how to apply it practically when analyzing the performance of your code.


What is Big O Notation?#

Big O notation is a way to describe how the runtime or space requirements of an algorithm scale as the input size grows. In simpler terms, it helps us answer questions like:

  • How much longer will this algorithm take if I double the input size?
  • How much more memory will it use if I add more data?

It provides an upper bound on an algorithm’s growth rate, describing the worst-case scenario. This lets us compare different algorithms and determine which one is more efficient for large inputs.


Why is Big O Notation Important?#

Efficiency is crucial in programming. If you’re working with small datasets, performance might not be a concern, but as your input size grows, the performance of your algorithm can become a bottleneck. Big O allows us to:

  • Optimize: Identify which parts of the code could be improved.
  • Compare: Make better decisions when choosing between multiple algorithms.
  • Predict: Understand how well your algorithm will perform in the worst-case scenario.

Common Big O Classes#

Let’s look at some of the most common Big O classes you’ll encounter, from best (fastest) to worst (slowest):

  1. O(1) — Constant Time
    The algorithm’s runtime is unaffected by the input size. No matter how big or small the input, it runs in the same amount of time.

    Example:
    Accessing an element in an array by index: arr[5].

  2. O(log n) — Logarithmic Time
    The algorithm’s runtime grows logarithmically with the input size. This is common in algorithms that divide the problem in half each time, like binary search.

    Example:
    Binary search on a sorted array.

  3. O(n) — Linear Time
    The runtime grows linearly with the input size. If you double the input, the time it takes to process it also doubles.

    Example:
    Looping through an array:

    for i in arr:
        print(i)
  4. O(n log n) — Linearithmic Time
    This is slightly worse than O(n), but still efficient for many problems. Algorithms like merge sort and quicksort (average case) have this time complexity.

    Example:
    Sorting algorithms like mergesort or quicksort.

  5. O(n²) — Quadratic Time
    As the input size grows, the runtime grows quadratically. This is common in algorithms that involve nested loops.

    Example:
    Selection sort or bubble sort, both of which compare pairs of elements in nested loops:

    for i in range(len(arr)):
        for j in range(i + 1, len(arr)):
            print(arr[i], arr[j])
  6. O(2ⁿ) — Exponential Time
    The runtime doubles with each additional input element. Algorithms with this complexity are highly inefficient for large inputs.

    Example:
    Solving the Tower of Hanoi or generating all subsets of a set.

  7. O(n!) — Factorial Time
    This is the slowest and least efficient. Algorithms with factorial complexity are usually impractical for even moderately large inputs.

    Example:
    Solving the Traveling Salesman Problem using brute force.
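The O(log n) class above is the easiest to see in code. Here is a minimal binary search sketch (the function name binary_search is my own; it assumes the input list is sorted and returns the target’s index, or -1 if it is absent). Each iteration discards half of the remaining range, so the loop runs O(log n) times:

```python
def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2       # middle of the current search range
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1           # discard the lower half
        else:
            hi = mid - 1           # discard the upper half
    return -1                      # target not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3
```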


How to Analyze Big O#

Now that we know the common time complexities, let’s see how we analyze the Big O of an algorithm. Here are some steps:

  1. Focus on the Largest Term
    When determining Big O, we only care about the largest term because it dominates the growth rate as the input size increases. For example:

    • O(3n² + 5n + 100) simplifies to O(n²).
  2. Ignore Constants
    Constants don’t matter in Big O notation, so we drop any coefficients or terms that don’t significantly affect growth as input increases.

    • O(2n) becomes O(n).
  3. Look at Loops
    Loops typically give away the complexity:

    • A single loop over an array: O(n)
    • Nested loops: O(n²)

    Consider the work being done inside the loops to determine if there’s additional complexity.
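The three steps above can be sketched in one hypothetical function (the name count_work is my own) that counts its own operations: a single loop does n units of work and a nested loop does n² units, so the total n + n² simplifies to O(n²) once we keep only the dominant term:

```python
def count_work(arr):
    ops = 0
    for x in arr:          # single loop: n operations -> O(n)
        ops += 1
    for i in arr:          # nested loops: n * n operations -> O(n^2)
        for j in arr:
            ops += 1
    return ops             # total: n + n^2, which simplifies to O(n^2)

print(count_work([1, 2, 3]))  # 3 + 9 = 12
```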


Space Complexity: Don’t Forget Memory!#

Big O also applies to memory usage, known as space complexity. It tells you how much memory an algorithm uses relative to the input size.

  • O(1) space means your algorithm uses a constant amount of memory regardless of input size.
  • O(n) space means memory usage grows linearly with input size.
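As a sketch of the difference (both function names are my own), compare reversing a list in place, which needs only O(1) extra memory, with building a reversed copy, which allocates O(n) extra memory:

```python
def reverse_in_place(arr):
    # O(1) extra space: only two index variables, no new list.
    i, j = 0, len(arr) - 1
    while i < j:
        arr[i], arr[j] = arr[j], arr[i]
        i += 1
        j -= 1
    return arr

def reversed_copy(arr):
    # O(n) extra space: allocates a second list the same size as the input.
    return [arr[k] for k in range(len(arr) - 1, -1, -1)]
```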

Examples in Code#

Let’s walk through some simple examples to identify their time complexity.

  1. Constant Time (O(1))

    def get_first(arr):
        return arr[0]

    No matter the size of arr, this function always takes the same time.

  2. Linear Time (O(n))

    def print_all(arr):
        for i in arr:
            print(i)

    The time grows linearly with the size of arr.

  3. Quadratic Time (O(n²))

    def print_pairs(arr):
        for i in arr:
            for j in arr:
                print(i, j)

    Nested loops mean the runtime grows quadratically as the input size increases.
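For contrast, here is a sketch of the O(2ⁿ) case mentioned earlier: generating all subsets of a list (the function name all_subsets is my own). Every element either joins a subset or doesn’t, so the result doubles with each element, giving 2ⁿ subsets:

```python
def all_subsets(arr):
    subsets = [[]]  # start with the empty subset
    for x in arr:
        # For each element, add a copy of every existing subset with x appended,
        # doubling the total -> 2^n subsets overall.
        subsets += [s + [x] for s in subsets]
    return subsets

print(len(all_subsets([1, 2, 3])))  # 2**3 = 8
```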


Conclusion#

Understanding Big O notation is a key skill for any developer, especially when working with algorithms and large datasets. It helps you write efficient code and make informed decisions about which algorithms to use. Remember, it’s all about identifying patterns in how algorithms behave as the input grows.

Now that you’ve learned the basics, go out and practice! Analyze the time and space complexity of your own code and start thinking about how you can optimize it.


Further Reading#

  • “Introduction to Algorithms” by Cormen, Leiserson, Rivest, and Stein.
  • “Grokking Algorithms” by Aditya Y. Bhargava.
Author: Memet Zx
Published: 2024-01-27
https://zxce3.net/posts/learn-big-o/