What is Big-Omega Notation (Ω)?
Big-Omega notation defines the lower bound of an algorithm in terms of time complexity.
That means Big-Omega notation indicates the minimum time an algorithm requires for any input of size n, which is why it is commonly used to describe the best case of an algorithm's time complexity.
Big-Omega Notation can be defined as follows…
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If there exist constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all n >= n0, then we can represent f(n) as Ω(g(n)).
f(n) = Ω(g(n))
Example:
Consider the following f(n) and g(n)…
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Ω(g(n)), then we must find constants C > 0 and n0 >= 1 such that f(n) >= C g(n) for all n >= n0.
f(n) >= C g(n)
⇒3n + 2 >= C n
The above condition holds for C = 1 and all n >= 1.
By using Big-Omega notation, we can represent the time complexity as follows…
3n + 2 = Ω(n)
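If you want to sanity-check the constants chosen above, a short script can evaluate the inequality over a range of sample values. This is only a spot check, not a formal proof, and the helper names f and g are just for this sketch:

```python
# Spot check (not a proof) that f(n) = 3n + 2 stays at or above
# C * g(n) = 1 * n for every n >= n0 = 1, as claimed in the example.

def f(n):
    return 3 * n + 2

def g(n):
    return n

C, n0 = 1, 1  # constants chosen in the worked example
assert all(f(n) >= C * g(n) for n in range(n0, 1001))
print("3n + 2 >= 1*n holds for n = 1..1000, so 3n + 2 = Omega(n)")
```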
What is Big-Theta Notation (Θ)?
Big-Theta notation defines a tight bound on an algorithm in terms of time complexity.
That means Big-Theta notation bounds the running time from both above and below by the same order of growth, so it describes the exact order of growth of an algorithm's time complexity.
Big-Theta Notation can be defined as follows…
Consider a function f(n) as the time complexity of an algorithm and g(n) as its most significant term. If there exist constants C1 > 0, C2 > 0 and n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0, then we can represent f(n) as Θ(g(n)).
f(n) = Θ(g(n))
Example:
Consider the following f(n) and g(n)…
f(n) = 3n + 2
g(n) = n
If we want to represent f(n) as Θ(g(n)), then we must find constants C1 > 0, C2 > 0 and n0 >= 1 such that C1 g(n) <= f(n) <= C2 g(n) for all n >= n0.
C1 g(n) <= f(n) <= C2 g(n)
⇒C1 n <= 3n + 2 <= C2 n
The above condition holds for C1 = 1, C2 = 4, and all n >= 2.
By using Big-Theta notation we can represent the time complexity as follows…
3n + 2 = Θ(n)
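As with the Omega example, the chosen constants can be spot-checked numerically. Again, this only evaluates sample values under the constants C1 = 1, C2 = 4, n0 = 2 from the example; it does not replace the algebraic argument:

```python
# Spot check (not a proof) of the Theta "sandwich" C1*n <= 3n + 2 <= C2*n
# using the constants from the worked example: C1 = 1, C2 = 4, n0 = 2.

def f(n):
    return 3 * n + 2

C1, C2, n0 = 1, 4, 2
assert all(C1 * n <= f(n) <= C2 * n for n in range(n0, 1001))
print("1*n <= 3n + 2 <= 4*n holds for n = 2..1000, so 3n + 2 = Theta(n)")
```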
The efficiency of an algorithm depends on two parameters:
- Time Complexity
- Space Complexity
Time Complexity: Time complexity is defined as the number of times a particular instruction set is executed rather than the total time taken. This is because the total time taken also depends on external factors such as the compiler used, the processor's speed, etc.
Space Complexity: Space Complexity is the total memory space required by the program for its execution.
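To make the time-complexity definition concrete, the sketch below counts how many times the basic operation runs rather than measuring seconds. The counting helper is hypothetical, added only for illustration:

```python
# Time complexity counts executions of the basic operation, not wall-clock
# time. Here the basic operation is one addition per list element.

def sum_list(values):
    operations = 0          # number of additions performed
    total = 0
    for v in values:
        total += v
        operations += 1     # one basic operation per element
    return total, operations

_, ops_small = sum_list(list(range(10)))
_, ops_large = sum_list(list(range(1000)))
print(ops_small, ops_large)  # 10 and 1000: the count grows linearly with n
```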
Types of Time Complexity:
- Best Time Complexity: the input for which the algorithm takes the least (minimum) time. The best case gives the lower bound of an algorithm. Example: in linear search, the best case occurs when the search key is at the first position of a large array (see the sketch after this list).
- Average Time Complexity: take all random inputs, calculate the computation time for each, and then divide the total by the number of inputs.
- Worst Time Complexity: the input for which the algorithm takes the longest (maximum) time. The worst case gives the upper bound of an algorithm. Example: in linear search, the worst case occurs when the search key is at the last position of a large array (or is not present at all).
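A minimal linear-search sketch, with a comparison counter added purely for illustration, shows the best and worst cases described above:

```python
# Linear search with a comparison counter. Best case: the key is at index 0
# (1 comparison). Worst case: the key is at the last index or absent
# (n comparisons).

def linear_search(data, key):
    comparisons = 0
    for i, value in enumerate(data):
        comparisons += 1
        if value == key:
            return i, comparisons
    return -1, comparisons

data = list(range(1, 1001))
print(linear_search(data, 1))      # (0, 1)       -> best case
print(linear_search(data, 1000))   # (999, 1000)  -> worst case
```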