Today, as usual, I've been busy updating my repos and delving deeper into the world of Data Structures and Algorithms, using both Java and Python. However, today's blog post is special. Instead of just sharing my journey, I'll attempt to shed some light on a concept I've recently started to understand: Big O notation. I promise to keep this as beginner-friendly as possible, and to provide examples in both Python and Java. So, let's dive in!
Big O: Intro
In computer science, Big O notation is used to describe the performance or complexity of an algorithm. Specifically, it describes how an algorithm's running time or space requirements grow as the input size grows, and it is most often used to characterize the worst-case scenario.
Big O: Worst Case
Big O gives us the upper limit of time complexity, i.e., the maximum time an algorithm might take. For example, in a list of unsorted numbers, searching for the largest number would require going through the entire list, making it an O(n) operation, where n represents the number of elements in the list.
Big O: O(n)
This notation implies that the time complexity of an algorithm grows linearly with the size of the input. Let's see a Python and Java example:
Python:
def find_max(numbers):
    maximum = numbers[0]
    for number in numbers:
        if number > maximum:
            maximum = number
    return maximum
Java:
public int findMax(int[] numbers) {
    int maximum = numbers[0];
    for (int number : numbers) {
        if (number > maximum) {
            maximum = number;
        }
    }
    return maximum;
}
Both of these functions have a time complexity of O(n) because they iterate over each element once.
Big O: Drop Constants
When using Big O notation, we drop constant factors, because they don't change how the running time grows with n. Thus, O(2n) simplifies to O(n): an algorithm doing twice the work is still linear, and the constant 2 matters less and less as n becomes large.
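To see why the constant drops, here is a small sketch (the `collect_twice` helper is hypothetical, just for illustration): it loops over the same list twice, doing roughly 2n steps, yet we still classify it as O(n) because doubling the input still just doubles the work.

```python
def collect_twice(items):
    """Does roughly 2n operations, but is still O(n)."""
    results = []
    for item in items:       # n operations
        results.append(item)
    for item in items:       # another n operations
        results.append(item)
    return results           # 2n items collected in total
```

The growth rate is what Big O captures, not the exact step count.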
Big O: O(n^2)
This represents an algorithm whose performance is directly proportional to the square of the size of the input data. A typical example is a nested loop:
Python:
def print_pairs(numbers):
    for i in numbers:
        for j in numbers:
            print(i, j)
Java:
public void printPairs(int[] numbers) {
    for (int i : numbers) {
        for (int j : numbers) {
            System.out.println(i + ", " + j);
        }
    }
}
Big O: Drop Non-Dominants
In the case of O(n^2 + n), we can drop the non-dominant term (n), leaving us with O(n^2), since the quadratic term will dominate for larger inputs.
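To make that concrete, here is a hypothetical `count_operations` function that runs a single loop (n steps) followed by a nested loop (n² steps). Its exact operation count is n² + n, but we describe it simply as O(n²):

```python
def count_operations(numbers):
    """Performs n + n^2 operations; classified as O(n^2)."""
    ops = 0
    for n in numbers:        # n steps
        ops += 1
    for i in numbers:        # n * n steps
        for j in numbers:
            ops += 1
    return ops               # n + n^2 in total
```

For n = 1,000, the nested loop contributes 1,000,000 steps against the single loop's 1,000, which is why the n term is dropped.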
Big O: O(1)
This represents constant time complexity. No matter the size of the input, the algorithm will always take the same amount of time.
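A classic O(1) operation is indexing into a list: it takes the same time whether the list holds ten elements or ten million. A minimal sketch (the `get_first` helper is just for illustration):

```python
def get_first(items):
    """O(1): indexing a Python list takes constant time,
    regardless of how many elements the list holds."""
    return items[0]
```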
Big O: O(log n)
O(log n) denotes an algorithm whose running time grows logarithmically as the input grows: roughly, doubling the input adds only one extra step. Binary search, which halves the remaining search space on every comparison, is the classic O(log n) algorithm.
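Here is a minimal binary search in Python. It requires a sorted list, and each pass through the loop discards half of the remaining candidates, so a million elements need only about twenty comparisons:

```python
def binary_search(sorted_numbers, target):
    """O(log n): each comparison halves the remaining search space.
    Returns the index of target, or -1 if it is not present."""
    low, high = 0, len(sorted_numbers) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_numbers[mid] == target:
            return mid
        elif sorted_numbers[mid] < target:
            low = mid + 1    # target is in the upper half
        else:
            high = mid - 1   # target is in the lower half
    return -1                # target not found
```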
Big O: Different Terms for Inputs
When an algorithm takes multiple independent inputs, we use a different term for each rather than lumping them into a single n. For example, a function that checks whether a value exists in a 2D matrix would be O(n * m), where n and m represent the matrix's dimensions.
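A sketch of that idea (the `contains_value` helper is hypothetical): in the worst case it visits every cell of an n-row by m-column matrix, so its complexity is O(n * m), not O(n²), since the two dimensions can vary independently.

```python
def contains_value(matrix, target):
    """O(n * m): may visit every cell of an
    n-row by m-column matrix in the worst case."""
    for row in matrix:        # n rows
        for value in row:     # m columns in each row
            if value == target:
                return True
    return False
```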
Big O: Lists
Data structures, such as lists in Python or arrays in Java, also have their operations evaluated by Big O notation. For example, appending to the end of a Python list is an amortized O(1) operation, but inserting at the beginning is O(n), because every existing element must shift one position to the right.
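Those two list operations side by side:

```python
numbers = [1, 2, 3]

numbers.append(4)     # amortized O(1): added at the end, nothing shifts
numbers.insert(0, 0)  # O(n): every existing element shifts right by one

print(numbers)  # [0, 1, 2, 3, 4]
```

On small lists the difference is invisible, but inserting at the front of a large list inside a loop can quietly turn an O(n) algorithm into an O(n²) one.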
Big O: Wrap Up
Understanding Big O notation is crucial for evaluating the efficiency of algorithms. It might seem a bit overwhelming at first, but with practice, it becomes an invaluable tool for any developer's toolkit. Remember, it's not about mastering everything overnight, but rather taking small, consistent steps towards a greater understanding.
Stay tuned for more posts on my journey through Python, Java, and the fascinating world of Data Structures and Algorithms!