
Big O Notation Explained

Last Updated | Jan 19, 2021 By Kingsley Ijomah
We measure the quality of the code we write by making sure we choose the right step-by-step actions ( the algorithm ) to solve the problem. We know it is the right algorithm by how long it takes to execute ( its time complexity ) as the input grows larger. Big-O notations are used to represent time complexity, and we are going to explore 7 of these notations.

Big-O notation and the word algorithm used to be scary topics for me, so I avoided them, and a lot of people do the same. I later found out that the topic is not complex and in fact helps you become a really good and confident programmer really quickly.

In this post, I will help break this down so any beginner can understand it and start applying it immediately to how they think and write code.

Big O notation is used in computer science to describe the worst-case performance of an algorithm, and an algorithm is simply a step-by-step plan of execution for solving a given problem.

A function's Big-O notation is determined by how its running time grows as the input grows. How much slower is it if we give it a list of 1000 things to work on instead of a list of 1 thing?
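
To make that question concrete, here is a rough sketch ( not from the original post ) that times a simple membership check on lists of different sizes. The exact numbers depend on your machine, but the running time clearly grows with the size of the input:

code
import time

# time a worst-case linear scan: -1 is never in the list,
# so every element has to be checked before giving up
for size in (1_000, 100_000, 10_000_000):
    data = list(range(size))
    start = time.perf_counter()
    found = -1 in data
    elapsed = time.perf_counter() - start
    print(f"n = {size:>10,}: {elapsed:.6f} seconds")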

Below we will explore different notations used to describe time complexity.


O(1) - Constant Runtime

An algorithm is said to run in constant time when, no matter the size of the input, the run time is always the same.

A good example of this is returning the value at a given index in a list, as shown below:

code
'''
Return item in a given index location
'''

def find_item(index, the_list):
    return the_list[index]


print(find_item(1, [2, 9, 4, 27, 14, 5, 1]))  # 9

The Big-O notation for this is O(1), constant, even in the worst case.

What this means is no matter how big our input is, it always takes the same amount of time to execute the function.

O(n) - Linear Runtime

An algorithm is said to have a linear time complexity when the running time increases linearly with the size of the input data. This occurs when the algorithm must examine all values in the input data.

For example, checking whether an item exists in a list:

code
def find_item(item, the_list):
    for x in the_list:
        if item == x:
            return True
    return False


print( find_item(1, [2, 9, 4, 27, 14, 5, 1]) ) #output => True

Big-O describes the worst-case scenario, so looking at the example code above, the code has to iterate through the entire list if the item is the last one in the list or does not exist at all.

The Big-O notation would be O(n), which reads as "order n", where 'n' represents the total number of items in the list.

O(log n) - Logarithmic Runtime

An algorithm is said to have a logarithmic time complexity when it reduces the size of the input data at each step (it doesn't need to step through all values of the input data). The classic example is the binary search algorithm, which works on a sorted list: every comparison halves the range that still has to be searched, so even a list of 1,000,000 items needs only about 20 comparisons:

code
def binary_search(data, target):
    low = 0
    high = len(data) - 1

    while low <= high:
        mid = (low + high) // 2
        if target == data[mid]:
            return True
        elif target < data[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return False


data = [1, 4, 5, 6, 7, 8, 9, 10, 12, 13, 14, 17]
target = 13
print( binary_search(data, target) ) #output => True
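
To see the logarithmic growth directly, here is a small sketch ( not from the original post ) that counts how many comparisons the same algorithm makes for different input sizes:

code
def binary_search_steps(data, target):
    # same algorithm as above, but returns the number of comparisons made
    low, high, steps = 0, len(data) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if target == data[mid]:
            return steps
        elif target < data[mid]:
            high = mid - 1
        else:
            low = mid + 1
    return steps


for size in (8, 1_000, 1_000_000):
    sorted_data = list(range(size))
    # searching for -1 (never found) forces the worst case;
    # the comparison count grows with log2(n), not with n itself
    print(f"{size:>9,} items -> {binary_search_steps(sorted_data, -1)} comparisons")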

O(n log n) - Linearithmic Runtime

An algorithm is said to have a linearithmic time complexity when it performs a logarithmic-time operation once for each of the n items in the input. It is commonly seen in sorting algorithms, e.g. merge sort.

Another example: for every value in dataA ( O(n) ), use binary search ( O(log n) ) to look for the same value in dataB:

code
result = []                                      # collect one result per item in dataA
for value in dataA:                              # O(n) iterations
    result.append(binary_search(dataB, value))   # each search is O(log n); dataB must be sorted
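
Merge sort itself is worth a quick look. Below is a minimal sketch ( not from the original post ): the list is split in half over and over ( about log n levels of splitting ), and each level does an O(n) merge, giving O(n log n) overall:

code
def merge_sort(data):
    # base case: a list of 0 or 1 items is already sorted
    if len(data) <= 1:
        return data

    mid = len(data) // 2
    left = merge_sort(data[:mid])    # recursively sort each half (about log n levels deep)
    right = merge_sort(data[mid:])

    # merge the two sorted halves back together: O(n) work per level
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged


print(merge_sort([5, 2, 9, 1, 7]))  # output: [1, 2, 5, 7, 9]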

O(n²) - Quadratic Runtime

An algorithm is said to have a quadratic time complexity when the number of operations grows at a rate of n²: if the input size is 2 it performs 4 operations, if the input size is 8 it performs 64 operations, and so on. For example:

code
data = [1, 2, 3]
for x in data:          # n iterations
    for y in data:      # another n iterations for every x => n * n operations
        print(x, y)

Bubble sort is a great example of quadratic time complexity since for each value it needs to compare to all other values in the list.
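
Here is a minimal bubble sort sketch ( not from the original post ) that shows where the n² comes from: the outer loop makes a pass per element, and each pass compares adjacent pairs across the list:

code
def bubble_sort(data):
    n = len(data)
    for i in range(n):                # up to n passes
        for j in range(n - i - 1):    # up to n comparisons per pass => roughly n * n operations
            if data[j] > data[j + 1]:
                # swap adjacent items that are out of order
                data[j], data[j + 1] = data[j + 1], data[j]
    return data


print(bubble_sort([5, 2, 9, 1, 7]))  # output: [1, 2, 5, 7, 9]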

O(2ⁿ) - Exponential Runtime

An algorithm is said to have an exponential time complexity when the number of calculations it performs doubles with each additional element added to the input.

Example of exponential runtime: the powerset ( finding all the subsets of a set ).

To understand the powerset, let's imagine you are buying a pizza. The store has different toppings you can choose from, like mushroom, bacon, pepperoni and pineapple. Let's represent each topping as A, B, C, D. You can also select no toppings.

With this you can choose any combination of toppings, and the powerset gives you all the possible combos. 4 toppings end up giving 16 different combos. Let's see an example of computing the powerset:

code
'''
Sample expectation from powerset:
notice the output doubling as the input grows by 1 each time; the empty list
below represents a pizza with no toppings as an option.

powerset(["a"]): empty pizza [] plus a pizza with topping "a"
powerset(["a","b"]): empty pizza [] plus toppings "a", "b", and the "ab" combo...
'''

def powerset(seq):
    # base case: a single item yields itself and the empty set
    if len(seq) <= 1:
        yield seq
        yield []
    else:
        # every subset of the rest of the list is yielded twice:
        # once with the first item added, once without -- doubling the output
        for item in powerset(seq[1:]):
            yield [seq[0]] + item
            yield item


print( list(powerset(["a"])) ) # output: [['a'], []]
print( list(powerset(["a","b"])) ) # output: [['a', 'b'], ['b'], ['a'], []]
print( list(powerset(["a","b","c"])) ) # output: [['a', 'b', 'c'], ['b', 'c'], ['a', 'c'], ['c'], ['a', 'b'], ['b'], ['a'], []]
print( list(powerset(["a","b","c", "d"])) ) # output: [['a', 'b', 'c', 'd'], ['b', 'c', 'd'], ['a', 'c', 'd'], ['c', 'd'], ['a', 'b', 'd'], ['b', 'd'], ['a', 'd'], ['d'], ['a', 'b', 'c'], ['b', 'c'], ['a', 'c'], ['c'], ['a', 'b'], ['b'], ['a'], []]

The above is O(2ⁿ), exponential runtime, because each time we introduce a new topping, the output doubles in size compared to the previous combination count.

O(n!) - Factorial Runtime

An algorithm is said to have a factorial time complexity when its running time grows like the factorial of the input size ( the product of all positive integers up to and including that number ), for example:

code
'''
2!  = 2 x 1 = 2
3!  = 3 x 2 x 1 = 6
4!  = 4 x 3 x 2 x 1 = 24
5!  = 5 x 4 x 3 x 2 x 1 = 120
6!  = 6 x 5 x 4 x 3 x 2 x 1 = 720
7!  = 7 x 6 x 5 x 4 x 3 x 2 x 1 = 5,040
8!  = 8 x 7 x 6 x 5 x 4 x 3 x 2 x 1 = 40,320
20! = 2,432,902,008,176,640,000
'''
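
As a quick sanity check, here is a short sketch ( not from the original post ) that reproduces these numbers with Python's built-in math.factorial:

code
import math

# factorial values grow far faster than the input size
for n in (2, 3, 4, 5, 6, 7, 8, 20):
    print(f"{n}! = {math.factorial(n):,}")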

As you can see, it grows pretty quickly; you want to stay away from algorithms that have this O(n!) running time!

An example of where this shows up is the permutations of a string: given a string, compute all the different words that can be formed from its characters.

code
def permutations(remaining, candidate=""):
    # when no characters remain, candidate holds one complete permutation
    if len(remaining) == 0:
        print(candidate)

    for i in range(len(remaining)):
        # pick the i-th character and recurse on whatever is left
        newCandidate = candidate + remaining[i]
        newRemaining = remaining[0:i] + remaining[i + 1:]

        permutations(newRemaining, newCandidate)


s = "ABC"
permutations(s) #output: ABC ACB BAC BCA CAB CBA
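
For comparison, Python's standard library provides the same operation via itertools.permutations; a quick usage note ( not from the original post ):

code
from itertools import permutations as std_permutations

# each permutation comes back as a tuple of characters, so join them into strings
print(["".join(p) for p in std_permutations("ABC")])
# output: ['ABC', 'ACB', 'BAC', 'BCA', 'CAB', 'CBA']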

Final Thoughts

So that concludes this post. It is not an easy read, I can admit that, but it covers a solid foundation for any beginner, and I hope the Big O notations are not as scary as they used to be.

We covered constant O(1), linear O(n), logarithmic O(log n), linearithmic O(n log n), quadratic O(n²), exponential O(2ⁿ) and factorial O(n!) time complexity.

To help you remember them, commit these examples to memory:

O(1) - Constant Runtime : no matter the size of the input, the run time is always the same. ( return an item in a list by its index )

O(n) - Linear Runtime : the input size affects execution time; in the worst case all elements have to be examined. ( check whether an item exists in a list )

O(log n) - Logarithmic Runtime : reduces the size of the input at each step ( binary search )

O(n log n) - Linearithmic Runtime : performs a logarithmic operation for every element of the input ( merge sort )

O(n²) - Quadratic Runtime : grows at a rate of n²; if the input is 2 it will do 4 operations ( bubble sort )

O(2ⁿ) - Exponential Runtime : every new input element doubles the number of operations ( powerset - finding all the subsets of a set )

O(n!) - Factorial Runtime : grows extremely quickly, like a factorial! ( permutations of a string )

That concludes this post on Big-O notation; please share it if you found it useful.

Happy coding...

MY STORY

My name is Kingsley Ijomah, and I am the founder of CODEHANCE, an online education platform built with you in mind, a place where I express my gratitude to a skill ( coding ) that has changed my life completely.
