0:00

So when we analyze the running time of an algorithm, or look at multiple algorithms and try to compare their running times, we usually focus on the running time as the input size grows. We don't focus on the running time for very small values, because if the input size is 2, most algorithms are probably going to do an okay job, regardless of whether they are exponential, polynomial, or linear. It's not going to make a big difference in practice if the input size is 2, or 4, or 5. The question is: as the input grows in size, how do these algorithms and their running times start differing? Does the gap between the running times start opening, and so on?

So this is why, when we analyze running time, we focus on the order of growth of the functions that describe the running times. If I have the running times of four algorithms, each given as a function of the input size, what we really focus on is how these functions compare to each other as the input size starts growing.

Again, for an input size of 2 or 4 or 5, I don't really care which one is smallest or which one is biggest. I'm not going to choose an algorithm based on how it performs on an input of size 2, because when we deal with real-life problems, we are not going to be dealing with inputs of size 2. In real-life problems, we're going to be looking at inputs of very large sizes most of the time, and it matters to us how the algorithm is going to perform when the input size gets larger. This brings us to the concept of growth of functions: how do they grow?

To illustrate this, I'm going to imagine a scenario of having one problem for which four different algorithms A, B, C, D were developed, okay? Four students developed four algorithms for the same problem. I'm going to assume all of them are correct, and I will look at the running times. I asked the students to analyze the running times of these algorithms, and they came back to me.

Â 2:08

The student who developed algorithm A said his algorithm runs in log n time. Okay? It's a log of n. The student who came up with algorithm B told me that her algorithm runs in linear time: it takes O(n) time, on the order of n. The third student told me that the algorithm runs in n squared time. And the fourth student told me that her algorithm runs in 2 to the n time, okay? So, here on the board, I'm showing a table illustrating these four running-time functions for the four algorithms A, B, C, D. Again, one algorithm takes log n, and the others take n, n squared, and 2 to the n.

And I want to look at how these functions grow. Again, as I said, for small inputs it's not going to matter; any algorithm I choose is probably going to be fine. The question is how these algorithms start differing in terms of running time as the input size grows. To answer that question, I will do a very simple exercise: I will try just a few input sizes and see what the functions evaluate to. So I'm going to try three input sizes: n equal 2, n equal 16, and n equal 256.

Again, let me remind you once more that even 256, which is much larger than 2, is still very small compared to the sizes of inputs in the real-life problems that we solve computationally. By any standard, these are still very small input sizes. But for the sake of this exercise of looking at how these four algorithms fare in terms of running time, it is sufficient to look at these numbers.

Now, for each one of these numbers, I'm going to evaluate the running-time function for all the algorithms. So if the input size is 2, algorithm A is going to execute one operation, B is going to execute two operations, C is going to execute four, and D is going to execute four.

With today's computers, and even with computers from 20 years ago, these are not going to make any difference in practice, okay? One, two, four operations: regardless of the algorithm, the machine is going to execute them and give you the answer immediately.
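The full table the lecture builds on the board, the operation counts for all three input sizes, can be reproduced with a short sketch (Python here, purely for illustration, not part of the lecture):

```python
import math

# Operation counts for the four algorithms, per the lecture:
# A: log2(n), B: n, C: n**2, D: 2**n.
for n in (2, 16, 256):
    print(f"n={n:>3}  A={int(math.log2(n))}  B={n}  "
          f"C={n ** 2}  D={2 ** n:.3e}")
```

At n = 2 all four counts are tiny; at n = 16 even D's 65,536 operations are harmless; only at n = 256 does D's count explode to roughly 1.16 × 10^77.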

Â 4:36

Now take input size 16: algorithm A performs 4 operations, B performs 16, C performs 256, and D performs 65,536. With today's computers, all of these still terminate in fractions of a second, okay? So the input size increased, and the running times started increasing; D is the largest and A is the smallest in terms of running time, but we are still fine. Okay? I mean, it's not a big deal to run an algorithm that takes this many operations, because again, today's computers can do this very, very quickly. Again, this is going to be fractions of a second.

Now let's look at the third input size, 256. At n equals 256, algorithm A performs on the order of eight operations.

Â 5:21

Algorithm B performs on the order of 256 operations, and C is going to take on the order of 65,000. Now, algorithm D is going to take on the order of 10 to the 77 operations, whatever the name of that number is; I don't know if there's a name for such a number. But it is going to take on the order of 10 to the 77 operations, okay?

Now we start asking questions, okay? The input size is 256. With today's computers, the differences among A, B, and C are not significant: yes, eight operations is going to be faster, but 65,000 is still going to be fractions of a second. But how long is it going to take to do 10 to the 77 operations? That is the number of operations algorithm D is going to perform, okay?

So let's just make some simple assumptions. Let's assume that I have a very powerful computer that can do 10 billion operations a second. Ten billion operations a second. Now I ask: how much time is this going to take? If I can do 10 billion operations a second, then dividing the operation count by 10 billion gives me the number of seconds. Dividing that number by 60 gives me the minutes, dividing by 60 again gives me the hours, dividing by 24 gives me the days, and dividing by 365 gives me the number of years.

If you do that exercise, you will find that, even if your computer can do 10 billion operations a second, this is going to take more than 10 to the 59 years.
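That arithmetic can be checked in a few lines (a sketch assuming the hypothetical 10-billion-operations-per-second machine from the lecture):

```python
ops = 2 ** 256                          # operations for algorithm D at n = 256
seconds = ops / 10_000_000_000          # at 10 billion operations per second
years = seconds / (60 * 60 * 24 * 365)  # seconds -> hours -> days -> years
print(f"about {years:.1e} years")       # well over 10**59 years
```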

Â 7:04

Okay? It's going to take more than 10 to the 59 years, and that's assuming the computer can do 10 billion operations a second. Okay?

Now, 10 to the 59 years. There are two issues. First of all, to the best of my knowledge, the laptops and desktops that we use today cannot do 10 billion operations a second. That's number one. Second, 256 is still a very small input size; in practice we'll be dealing with much larger inputs, using computers that process fewer than 10 billion operations a second. You get the point: if under these simplifying, generous assumptions I get 10 to the 59 years just to handle an input of size 256, you can imagine what's going to happen in practice, okay?

So this is why we are interested in the growth of functions: I'm interested in how the running time grows as the input size grows. An input of size 2 is not what interests me; that's not the input size that's going to concern me. What's going to concern me is: as the input size grows, how does the running-time function grow?

Take algorithm A: as I go from 2 to 16 to 256, its function grows sublinearly, right? If I draw a plot with the input size on the x-axis and the running time on the y-axis, marking 2, 16, and 256 on the x-axis, and suppose I first draw the linear function, the line y equals x, passing through (2, 2) and (16, 16), then the log n function actually grows slower than that. Slower than linearly.

Algorithm B takes this linear running time. The quadratic one, n squared, grows faster than that, but 2 to the n starts growing very fast: on the plot, here is where 2 to the n shoots up, this is n squared, this is n, and this is log n.

So when I look at the growth of functions in these terms, I can see that for algorithm A, as the input size grows, its running time grows. Of course it will grow; I cannot think of many interesting algorithms whose running time does not grow with the input size. So that is not the question. The running time is going to grow as the input size grows; the question is, how does it grow, okay?

Log n is a nice growth; we would love an algorithm whose growth is like this. We would also love an algorithm whose running time grows linearly. We can tolerate, and would be happy enough with, quadratic time, n squared. But an algorithm that takes 2 to the n is not going to be very practical, okay? Again, as I said, if n is 256 and you can handle 10 billion operations per second, it's going to take 10 to the 59 years. What is the point of such an algorithm? Even if it is correct, even if it is elegant, even if it is written in whatever programming language, it's useless.

Â 10:30

Okay? So this is why we focus on the growth of functions, and why they are of interest to us. Now, as we focus on the growth of functions and start looking at the running time as n becomes bigger and bigger, or, as we say mathematically, as n goes to infinity, what we can do is start focusing on the dominating terms in the functions. Okay? So if I have growing functions with multiple terms in them, all I really need to think about are the dominating terms. To illustrate that, let's consider two algorithms whose running times are 1,000 n and n squared.

Â 11:20

What I mean by focusing on dominating terms as n goes to infinity is this: as n goes to infinity, the dominating term here is going to be the n squared, not the n. Okay? How do I know that? If n is 1, of course 1,000 n is bigger than n squared. If n is 2, it is bigger. If n is 3, bigger. Four, bigger. And so on.

At some point, though, I will reach a value of n after which n squared is much bigger than 1,000 n. That point is actually n equals 1,000. When n equals 1,000, the two are equal; but when n equals 1,001, n squared becomes bigger, at 1,002 bigger, at 1,003 bigger, and so on. So after n equals 1,000, n squared becomes the dominant one. Okay?
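The crossover just described can be verified directly (a small Python sketch):

```python
def f(n): return 1000 * n   # first running time: 1000n
def g(n): return n ** 2     # second running time: n squared

assert f(999) > g(999)      # before n = 1000, 1000n is still bigger
assert f(1000) == g(1000)   # the two are equal exactly at n = 1000
assert f(1001) < g(1001)    # past that point, n squared dominates
```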

So if I really have an algorithm whose running time is, let's say, 1,000 n plus n squared plus 7 n cubed plus n to the 4th,

Â 12:19

I can simply say that this running time is going to be dominated by n to the 4th. This is what we mean by focusing on dominating terms and ignoring lower-order terms.
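One way to see that n to the 4th dominates is to look at the ratio of the whole function to n to the 4th; as n grows, the ratio heads to 1 (a quick Python sketch):

```python
def f(n):
    # The running time from the lecture: 1000n + n^2 + 7n^3 + n^4.
    return 1000 * n + n ** 2 + 7 * n ** 3 + n ** 4

for n in (10, 100, 10_000):
    print(n, f(n) / n ** 4)  # ratio shrinks toward 1 as n grows
```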

Further, when I compare 5 n squared to n squared, I can ignore the constant, because the n squared is the dominating part rather than the 5; the 5 is not going to make a big difference. Okay?
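The same kind of check shows why the constant 5 can be ignored: 5 n squared stays a fixed multiple of n squared at every n, while, say, n squared compared to n pulls away without bound (again just an illustrative sketch):

```python
for n in (10, 1000, 100_000):
    # 5n^2 / n^2 is always exactly 5; n^2 / n grows like n.
    print(n, (5 * n ** 2) / n ** 2, (n ** 2) / n)
```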

So when we look at the growth of functions, we are really interested in the running time as the input size grows to infinity. And as the input size grows to infinity, we can focus on the dominating terms, start ignoring multiplicative constant factors, and start ignoring lower-order terms.