We now have a randomized rounding procedure that takes the solution of the linear programming relaxation and turns it into an integral solution. Now, let us estimate the cost of that solution and relate it to the LP optimum. How good is the output collection of sets? It all depends. First, we write the value of the output, the cost of the output, algebraically. The cost of the output is the sum, over every set S in the output, of the cost of S. This is a random quantity, because whether S is in the output or not depends on the random choices. Sometimes it is large, sometimes it is small. And so the question now becomes, as often for randomized algorithms: how good is this quantity on average? How good is it typically? For that, we rewrite the quantity sum of c sub S over S in the output as the sum, over every set S in the universe, of c sub S times a quantity that is 1 if S is in the output cover and 0 if it is not. This is the indicator function: it takes value 1 if S is in the output and 0 if S is not in the output. It is just notation. With it, the sum over every S in the output of c sub S becomes the sum over every set S of c sub S times this indicator function. What is this equal to on average? What is its expected value? Here, we use linearity of expectation, a very handy tool in the analysis of algorithms. When you have two random variables A and B, the expected value of A + B is the sum of the expected values. The expectation of the sum is the sum of the expectations. Expectation and sum commute; you can take them in whatever order you want. Here, we have an expectation of a sum. As soon as we see this, we automatically want to exchange the two: we put the sum on the outside and the expectation on the inside. You are allowed to do this. This is a valid transformation.
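The rounding procedure described above can be sketched in a few lines of Python. This is a minimal sketch, not the lecture's own code; the names `sets`, `costs`, and `x` (the fractional LP values) are assumptions chosen for illustration.

```python
import random

def randomized_round(sets, costs, x):
    """Randomized rounding for set cover: include each set S in the
    output independently with probability x[S], the LP value of S.

    sets  -- iterable of set identifiers
    costs -- dict mapping each set to its cost c_S
    x     -- dict mapping each set to its fractional LP value x_S
    """
    # random.random() returns a value in [0.0, 1.0), so a set with
    # x_S = 1.0 is always taken and one with x_S = 0.0 never is.
    output = [S for S in sets if random.random() < x[S]]
    cost = sum(costs[S] for S in output)  # the random quantity analyzed below
    return output, cost
```

Note that the cost returned is random: running the function twice on the same input can give different collections and different costs, which is exactly why the analysis looks at the expected value.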
So, the expected cost of the output is the sum, over every set S, of the expected value of the indicator function times c sub S. Let's focus on this expected value. Next property of expectation: for any constant lambda and any random variable X, the expected value of lambda X equals lambda times the expected value of X. Multiplication by a constant and expectation commute. Here, we have the expected value of an indicator function times c sub S. C sub S is a constant, and so we can take it out of the expectation. So we do this; we are naturally drawn to doing this. Now we have to compute the expectation of something simple: the expectation, for one particular set S, of the indicator function that is 1 if S is in the output and 0 if S is not in the output. It is easy to verify that the expectation of the indicator function of an event is exactly equal to the probability of that event occurring. In fact, that is a well-known fact of elementary probability theory. So we now have to compute, for a particular set S, the probability that it is in the output collection of sets, and that probability is exactly x sub S. We put S in the output with probability 80% if x sub S is 0.8, 70% if it is 0.7, and in general with probability exactly x sub S. So now we can combine that line of reasoning, and what do we deduce? We deduce that the expected cost of the output, the expected cost of the collection of sets that the algorithm gives us, is equal to what? The sum, over every set S, of x sub S times c sub S. Now what? Look at this. You recognize this. This quantity is exactly the objective function of the linear programming relaxation. It is exactly the value of the linear program. And what do we know? Because it is a relaxation, we know that this is at most OPT. And so the output is a collection of sets whose cost, in expectation, is at most OPT, the unknown optimal value.
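The identity derived above, expected rounded cost equals the LP objective sum of x sub S times c sub S, can be checked empirically by simulation. The sketch below is illustrative only; the example costs and fractional values are made up, and `trials` controls the Monte Carlo accuracy.

```python
import random

def lp_objective(costs, x):
    """The LP relaxation's objective: sum over S of c_S * x_S."""
    return sum(costs[S] * x[S] for S in costs)

def avg_rounded_cost(costs, x, trials=100_000):
    """Average cost of the randomized rounding over many trials.

    By linearity of expectation and E[indicator] = probability,
    this average should approach lp_objective(costs, x).
    """
    random.seed(0)  # fixed seed so the estimate is reproducible
    total = 0.0
    for _ in range(trials):
        total += sum(costs[S] for S in costs if random.random() < x[S])
    return total / trials
```

For example, with `costs = {'a': 3.0, 'b': 5.0, 'c': 2.0}` and `x = {'a': 0.8, 'b': 0.5, 'c': 1.0}`, the LP objective is 6.9, and the simulated average cost converges to that same value as the number of trials grows.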
This was the reason why we chose this particular rounding, why we decided to round to 1 with probability exactly x sub S: precisely so as to get that objective. And so, now we know that our rounding procedure produces a collection of sets whose expected cost is at most OPT. It remains to see how much coverage we obtain with that collection.