Big O Notation

Ali Samir
Aug 6, 2023
2 min read

# A brief overview of Big O notation


📌Big O notation is a mathematical notation that describes an algorithm's complexity, or running time. It lets us compare the efficiency of algorithms by analyzing how their time and space requirements grow as the input size increases. In other words, it tells us how quickly an algorithm's resource requirements increase as the input grows larger.


📌In Big O notation, the notation "O" stands for "order of", and is followed by a mathematical expression that describes the algorithm's performance.


📌For example, an algorithm with a runtime of O(n) has a linear runtime, meaning that the time it takes to execute the algorithm grows linearly with the input size n. An algorithm with a runtime of O(n^2) has a quadratic runtime, meaning that the time it takes to execute the algorithm grows quadratically with the input size n.
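As an illustration (these functions are not from the original post), a linear-time scan and a quadratic-time pairwise check might look like this in Python:

```python
def contains(items, target):
    # O(n): in the worst case we inspect every element once,
    # so the runtime grows linearly with len(items).
    for item in items:
        if item == target:
            return True
    return False


def has_duplicate(items):
    # O(n^2): the nested loops compare every pair of elements,
    # so the runtime grows quadratically with len(items).
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work for `contains`, but roughly quadruples it for `has_duplicate`.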


📌Some commonly used Big O notations include:

  • O(1) - Constant time

  • O(log n) - Logarithmic time

  • O(n) - Linear time

  • O(n log n) - Linearithmic time

  • O(n^2) - Quadratic time

  • O(2^n) - Exponential time
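To make the list concrete, here is one small Python example for several of these classes (an illustrative sketch, not code from the original post):

```python
def get_first(items):
    # O(1): a single operation, regardless of input size.
    return items[0]


def binary_search(sorted_items, target):
    # O(log n): each step halves the remaining search range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1


def total(items):
    # O(n): touches each element exactly once.
    s = 0
    for x in items:
        s += x
    return s


def all_pairs(items):
    # O(n^2): nested iteration produces every ordered pair.
    return [(a, b) for a in items for b in items]


def fib(n):
    # O(2^n): naive recursion recomputes subproblems exponentially.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```

O(n log n) typically shows up in efficient comparison-based sorting, such as Python's built-in `sorted()`.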


⚡Big O notation is essential because it helps us analyze the efficiency of algorithms and decide which one best fits a given problem. It allows us to make informed decisions about the trade-offs between time and space complexity and to choose the most appropriate algorithm for the situation at hand.

Data Structures & Algorithms