Now, let us take a closer look at Grover's algorithm. The first question we might ask is why we choose this vector s as the initial state. Well, as we observed in the last lecture, the Grover iteration moves us towards the desired state Omega by a very particular angle, 2 Theta, where Theta is Pi over 2 minus the angle between s and Omega. We need to know the value of this angle to compute the number of iterations. Now, we don't know anything about Omega, but we must know Theta. So we have to choose some vector whose angle to Omega we will know, and the only such vector is s, because it makes equal angles with all the basis vectors of the space.

The second possible question is: what if we have more than just one special point Omega? What if we have, say, k points for which f returns one? Well, the algorithm doesn't change, but we need to recalculate the number of iterations for this case, because instead of one vector we now have k vectors. The angle Theta here becomes greater, so the vector s sits higher. The sine of this angle is the square root of k over N, and the number of iterations in this case is Pi divided by 4 Theta, which is Pi times the square root of N divided by 4 times the square root of k. Please note that to use Grover's algorithm, we must know the number of special points in order to calculate the number of iterations. Grover's algorithm does not tolerate extra work: if we do more iterations than needed, we pass the point of interest and go further, lowering the probability of a correct measurement result.

The last question is: why do we choose this operator U_s? There are plenty of operators in this space, infinitely many to be exact. Why this one? The answer is again very simple: we need some vector to reflect over in order to move in the desired direction, but we don't know which direction is desired. If we had some vector s' right here, in the middle of the way, then we could reflect over it and reach Omega in just one step.
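The claims above can be checked with a small classical simulation. The sketch below (my own illustration, not from the lecture) tracks all N amplitudes directly: each Grover iteration flips the sign of the marked amplitudes (the oracle) and then reflects every amplitude about the mean (the reflection over s). It shows that the success probability peaks near Pi times the square root of N over k, divided by 4, iterations, and drops again if we overshoot:

```python
import math

def grover_success_prob(N, k, iterations):
    """Classically simulate Grover's algorithm on N basis states with k
    marked items; return the probability of measuring a marked state."""
    # Start in the uniform superposition s.
    amps = [1 / math.sqrt(N)] * N
    marked = set(range(k))  # which items are marked does not affect the math
    for _ in range(iterations):
        # Oracle: flip the sign of the marked amplitudes.
        amps = [-a if i in marked else a for i, a in enumerate(amps)]
        # Diffusion: reflect every amplitude about the mean (reflection over s).
        mean = sum(amps) / N
        amps = [2 * mean - a for a in amps]
    return sum(amps[i] ** 2 for i in marked)

N, k = 1024, 1
best = round(math.pi * math.sqrt(N / k) / 4)  # optimal iteration count, here 25
print(grover_success_prob(N, k, best))      # close to 1
print(grover_success_prob(N, k, 2 * best))  # overshooting: close to 0
```

Running it with twice the optimal number of iterations makes the "does not tolerate extra work" point concrete: the state rotates past Omega and the success probability collapses.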
But we don't have a vector in the middle of the way, because we don't know the way yet. Again, we choose to reflect over the only thing we know in this space: the equally weighted sum of all the basis vectors.

Now, to understand Grover's algorithm even better, let's fantasize a little. I have just told you that if we had some vector in the middle of the way, then we could reflect over it and reach the desired state in just one move. We don't have this vector at the beginning of our iterations, but we do have it in the middle of the algorithm's work. So, out of 12 iterations, we have s6 here, in the middle of the way. We don't know the direction, we don't know the way, but the system itself knows the way. The problem is that there is nothing to reflect over it: there is no vector s anymore, because s6 is itself the state of the system. But we can prepare the initial state in another system, and what we desire now is to use this vector s6 from the first system as an operator in the second system. If we use the vector s6 as an operator, we can reflect the vector s over it and get very close, again at the angle Theta, to the state Omega. To do this, we would need to implement the so-called von Neumann architecture for the quantum computer. Then we could omit half of the iterations, and, even more, the vector s6 can itself be obtained from the vector s3 in the very same way. If we continue this reasoning further, we can say that instead of Grover's square root of N iterations, we would have the binary logarithm of N, which is the desired exponential speedup. But the problem is that it is still an open question whether the von Neumann architecture can be implemented for the quantum computer, and for the standard model of quantum computing Grover proved, long ago, that his algorithm is optimal. This proof of optimality of Grover's algorithm is going to be the topic of our next two lectures.
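The "reflect over the midpoint" fantasy is easy to verify in the two-dimensional plane spanned by Omega and the rest of the space. The sketch below (my own illustration of the geometry, with a hypothetical midpoint vector that the algorithm does not actually have) places s at angle Theta to the "bad" axis, Omega at angle Pi/2, and reflects s over the vector halfway between them. A single reflection lands exactly on Omega:

```python
import math

def reflect(v, axis):
    """Reflect 2-D vector v over the line spanned by the unit vector `axis`:
    R v = 2 (v . axis) axis - v."""
    dot = v[0] * axis[0] + v[1] * axis[1]
    return (2 * dot * axis[0] - v[0], 2 * dot * axis[1] - v[1])

N = 1024
theta = math.asin(1 / math.sqrt(N))      # angle between s and the "bad" axis
s = (math.cos(theta), math.sin(theta))   # initial state; Omega is (0, 1)
mid_angle = (theta + math.pi / 2) / 2    # hypothetical midpoint vector s'
s_mid = (math.cos(mid_angle), math.sin(mid_angle))

result = reflect(s, s_mid)
print(result)  # approximately (0, 1), i.e. Omega, in one reflection
```

The same geometry explains the lecture's halving idea: reflecting s over s6 (the state after half the iterations) lands close to Omega, because s6 sits roughly midway between s and Omega in this plane.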