Sorting: Ep 07a - Big O Notation

Uploaded by lcc0612 on 29.01.2010

Hello YouTube, and welcome back to sorting algorithms!
Please watch episode one, which introduces this series.
Today's episode is a special fill-in episode that goes into somewhat greater detail about the Big O Notation.
Admittedly, this episode should have come much earlier, but it wasn't so important before, when we were primarily discussing the less efficient algorithms.
The time has come, however, where we should know a little more about the Big O Notation than we already do,
so I will take this opportunity to devote some time to the subject as a whole.
First of all, what exactly IS the Big O Notation?
Now, we know some algorithms are more efficient than others, but we need that to be somehow quantifiable.
The Big O Notation is a form of mathematical terminology that does this.
It is an algebraic representation of the number of steps the algorithm needs to go through before the result is eventually attained.
Since sorting algorithms also run through a certain number of items,
the number of steps can also be taken as the number of items the algorithm needs to go through.
This is then represented in terms of "n", which stands for the number of items to be sorted by the algorithm.
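To make the idea of counting steps in terms of n concrete, here is a minimal Python sketch (not from the video) that tallies the neighbour comparisons made by a bubble-sort-style pair of nested loops; the function name and structure are my own illustration:

```python
def count_steps(n):
    """Count neighbour comparisons made by bubble-sort-style nested loops over n items."""
    steps = 0
    for i in range(n):              # one outer pass per item
        for j in range(n - 1 - i):  # compare remaining neighbouring pairs
            steps += 1              # each comparison is one "step"
    return steps

# The step count grows on the order of n²: doubling n roughly quadruples the work.
print(count_steps(10))  # 45
print(count_steps(20))  # 190
```

Notice that the count depends only on n, the number of items, which is exactly what the notation captures.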
Now, we've seen quite a few algorithms in action already, and you'll realise that most of the time,
an n² algorithm, for example, does not necessarily use up all of the n² time it was supposed to take.
It *could* take the full n² time, but it doesn't necessarily have to.
Hence, it is O(n²) in the *WORST CASE*.
Let's say we pass an already sorted list to Bubble sort.
All it does is compare all the neighbours, decide that no swapping is required, conclude the list is sorted, and finish off.
This is considered the best case, since bubble sort must go through a list at least once from start to finish.
Since the algorithm has only done one pass through a list of n items, under the Big O Notation,
the best case for Bubble Sort is actually simply O(n).
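That best-case behaviour can be demonstrated with a small Python sketch (my own illustration, not code from the video): a bubble sort that stops as soon as a pass makes no swaps, together with a pass counter:

```python
def bubble_sort(items):
    """Bubble sort with early exit; returns (sorted list, number of passes made)."""
    items = list(items)
    passes = 0
    while True:
        passes += 1
        swapped = False
        for i in range(len(items) - 1):
            if items[i] > items[i + 1]:
                items[i], items[i + 1] = items[i + 1], items[i]
                swapped = True
        if not swapped:  # no swaps: the list is sorted, so finish off
            break
    return items, passes

# An already sorted list needs just one pass over the n items: the O(n) best case.
print(bubble_sort([1, 2, 3, 4, 5])[1])  # 1
```

On a sorted input the counter reads 1, matching the single start-to-finish pass described above.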
So you see, for every algorithm there is a Best Case and Worst Case time complexity as represented by the Big O Notation.
We generally do not quote the best case value, since how often will a sorting algorithm receive a completely sorted list as input?
However, receiving a worst case input is just as improbable.
The most practical value to quote would be an average value.
In terms of sorting algorithms, this refers to how the algorithm would fare when processing "normal", "real world" data.
Now, it so happens that for the sorting algorithms we've seen so far, the worst case time complexity is the same as the average time complexity;
the optimizations don't change the basic nature of the sorting algorithm.
From this point on, as we cover the more efficient sorting algorithms, there may be distinctions between the average and worst case time complexities,
which is why this episode comes in right on time.
Of course, this also saves me the trouble of having to recap the Big O Notation in each episode,
I'll just point viewers here!
Alright, that's all there is for this episode.
Please join me on the next episode of sorting algorithms, where we take a look at Quicksort.
You're watching 0612 TV.