As you've seen in the previous video, controllable generation is achieved by manipulating the noise vector z that's fed into the generator. In this video, I'll show you the intuition behind that process. First I'll review how to interpolate between two GAN outputs, and then you'll learn how to manipulate noise vectors in order to control desired features in your outputs. Controllable generation and interpolation are somewhat alike. With interpolation, you get intermediate examples between two generated observations. In practice, interpolation lets you see how one image morphs into another, like in this GIF, where each digit from zero to nine morphs into the following one. It's pretty cool. What happens is that you get intermediate examples between the targets by manipulating the inputs from Z-space, which is just the name for the vector space of the noise vectors. You'll see later that this is the same idea behind controllable generation. Just to be clear, Z_1 and Z_2 are the two dimensions of this Z-space that you're looking at right now. As an example, there's a noise vector V_1 and a noise vector V_2, where V_1 could have a Z_1 value of, let's say, 5 and a Z_2 value of, let's say, 10, making it the vector (5, 10), and V_2 has smaller values, 4 and 2, so it's the vector (4, 2). That's what Z_1 and Z_2 are: just dimensions of the Z-space, while the actual vectors V_1 and V_2 represent concrete points in that Z-space. When you feed V_1 into the generator, it produces this image here, and when you feed V_2 into the generator, it produces this image there. If you want to get intermediate images between these two, you can interpolate between their two input vectors, V_1 and V_2, in the Z-space. This interpolation is often a linear interpolation, though of course there are other ways to interpolate between two vectors.
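The linear interpolation described above can be sketched in a few lines of NumPy. This is a minimal illustration, not code from the course: `interpolate_noise` is a hypothetical helper name, and the commented-out `generator` call stands in for whatever trained GAN generator you are using.

```python
import numpy as np

def interpolate_noise(v1, v2, n_steps):
    """Linearly interpolate between two noise vectors in Z-space.

    Returns an array of shape (n_steps, z_dim) whose rows move
    from v1 to v2 in equal increments.
    """
    alphas = np.linspace(0.0, 1.0, n_steps)[:, None]  # shape (n_steps, 1)
    return (1.0 - alphas) * v1 + alphas * v2

# The V_1 and V_2 from the example above (a 2-D Z-space):
v1 = np.array([5.0, 10.0])
v2 = np.array([4.0, 2.0])

intermediates = interpolate_noise(v1, v2, n_steps=5)
# Each row is an intermediate noise vector; feeding the rows to the
# generator would produce the morphing sequence between the two images:
# images = generator(intermediates)
```

The first row is exactly V_1, the last row is exactly V_2, and the middle row (4.5, 6.0) is their midpoint in Z-space.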
Then you can take all these intermediate vectors and see what they produce from the generator: the generator takes this vector and produces that image, this vector, that image, and this vector, that image, giving you this gradient between the two images. Controllable generation also uses changes in the Z-space, taking advantage of how modifications to the noise vectors are reflected in the output of the generator. For example, with one noise vector you could get a picture of a woman with red hair, and with another noise vector you could get a picture of the same woman but with blue hair. The difference between these two noise vectors is just the direction in which you have to move in Z-space to modify the hair color of your generated images. In controllable generation, your goal is to find these directions for the different features you care about, for example, modifying hair color. But don't worry about finding that exact direction yet; I'll show you how in the following lectures. With this known direction d (let's call this direction d) in your Z-space, you can now control the features of the output of your GAN, which is really exciting. This means that if you generate an image of a man with red hair, produced by the same generator g with this input noise vector here, V_1, you can modify the hair color of the man in the image by adding the direction vector d you found earlier to the noise vector, creating this new noise vector here, V_1 + d, passing that into your generator, and getting a resulting image where his hair is now blue. To sum up, in controllable generation you need to find the directions in the Z-space related to changes of the desired features in the output of your GAN. With known directions, controllable generation works by moving the noise vector in different directions in that Z-space.
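Once a direction d is known, the controllable-generation step is just vector addition in Z-space. Here's a minimal sketch, assuming a 64-dimensional Z-space; `shift_noise` is a hypothetical helper name, the random `d` stands in for a real learned feature direction, and the commented-out `generator` call is again a placeholder for your trained GAN.

```python
import numpy as np

def shift_noise(v, d, step=1.0):
    """Move a noise vector along a feature direction d in Z-space.

    step controls how strongly the feature changes: step=0 leaves the
    image unchanged, step=1 applies the full shift (e.g. red -> blue hair),
    and larger steps push the feature further.
    """
    return v + step * d

rng = np.random.default_rng(0)
z_dim = 64                        # assumed noise dimensionality
v1 = rng.standard_normal(z_dim)   # noise vector that yields, say, red hair
d = rng.standard_normal(z_dim)    # placeholder for a learned hair-color direction

v1_shifted = shift_noise(v1, d)   # this is V_1 + d
# blue_hair_image = generator(v1_shifted[None, :])
```

Note that the generator itself is untouched: all the control comes from where you place the input in Z-space.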
Up next, you'll learn about some challenges related to controllable generation and how to find directions in the Z-space with known effects on the generated outputs.