Similar to the bisection technique, we can use this fairly ancient technique known as the Golden Section Search Method. It's an iterative process used to find the minimum of a function. You need a lower bound, an upper bound, and some way to evaluate your function. It can also be used to find the maximum of f(x), but only if the function is first flipped about the x-axis, so we would take the minimum of -f(x). The process works as follows. We have our curve, and we first choose a lower bound, which we call little a in the Golden Section Search Method. We also choose an upper bound, b. We're assuming that the minimum lies somewhere between a and b. Next, we calculate something known as the golden ratio, which is (√5 − 1)/2, or about 0.62. Then we calculate the value d, which is the golden ratio times (b − a). Now this is going to be an iterative process. During each iteration, we're going to change either little a or little b, but not both. So it's similar to the bisection technique in that you're changing either the lower or the upper bound during each iteration. After we have d, we calculate a parameter known as x1, which is a distance d above a. We then go down a distance d from b to compute x2. Next, we find f(x1) and f(x2), and we compare those two function values. If f(x1) is less than f(x2), as shown in this diagram, then we're going to eliminate everything to the left of x2. We just cross all that out, similar to bisection, where you're eliminating a lot of your interval. Now in this case we're not quite eliminating 50% as we did in the bisection method; we're eliminating about 38%, which is 1 minus the golden ratio. If this happens, then x2 becomes the new a, and there's no change in b. The way the math works out, x1 actually becomes the new x2, but this is not as important.
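The single update step described above can be sketched in Python. This is a hypothetical helper for illustration, not part of the screencast's Excel sheet:

```python
import math

def golden_step(f, a, b):
    """One iteration of golden section search: narrow the interval [a, b].

    Returns the updated (a, b). Either a or b changes, never both.
    """
    gr = (math.sqrt(5) - 1) / 2          # golden ratio, about 0.618
    d = gr * (b - a)
    x1 = a + d                           # interior point a distance d above a
    x2 = b - d                           # interior point a distance d below b
    if f(x1) < f(x2):
        return x2, b                     # eliminate the left ~38%: x2 is the new a
    else:
        return a, x1                     # eliminate the right ~38%: x1 is the new b
```

Calling this repeatedly shrinks the bracket by a factor of about 0.618 per iteration, since the surviving interval always has length d = 0.618(b − a).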
The golden ratio was used because it minimized the math involved: you could simply take x1 and reuse it as the next x2. But we're not really limited by processing speed, so we're just going to recalculate x1 anyway. The second scenario is if f(x1) is greater than f(x2), as shown in the red curve here. If that's the case, then we eliminate all x greater than x1. When this happens, x1 becomes the new b, and there's no change in a, because we're not going to change the lower bound. So let's go through an example. We want to find the minimum of the following function between x = 0 and x = 10. So a is 0 and b is 10; those are our lower and upper bounds. Next, we calculate the golden ratio, about 0.62. We calculate d, which is 0.62 times the interval (b − a); in this case, that's 6.2. We then calculate x1, which is a distance d above a, so that's 6.2. We subtract d from b to calculate x2, which is about 3.8. Next we compare f(x1) to f(x2). Since f(x1) on the right is greater than f(x2), we eliminate everything to the right of x1, and x1 becomes the new b. So we move on to the next iteration. Our interval starts at 0 and goes up to 6.2, which is left over from the previous iteration. After we calculate d, we can calculate x1 and x2, and we find x1 = 3.8 and x2 = 2.4. Since f(x1), just like in the previous iteration, is greater than f(x2), we eliminate everything to the right of x1, and x1 becomes our new upper bound. Let's do another iteration. We calculate d to be 2.4, and we calculate our new x1 and x2. In this case, f(x1) is less than f(x2), so we eliminate everything to the left of x2, and x2 becomes our new lower bound, a. We move on to the next iteration and calculate the new parameters. Here we see that f(x1) is again less than f(x2), like the previous iteration.
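We can reproduce the hand-worked iterations above with a short Python loop. Note the transcript only gives the function's derivative, 2x − 6, so f(x) = x² − 6x (plus any constant) is an assumed but consistent choice:

```python
import math

# Assumed function: the transcript gives only f'(x) = 2x - 6,
# so f(x) = x**2 - 6*x is one function consistent with that.
f = lambda x: x**2 - 6*x

gr = (math.sqrt(5) - 1) / 2   # golden ratio, about 0.618
a, b = 0.0, 10.0              # initial bounds from the example

for i in range(4):
    d = gr * (b - a)
    x1, x2 = a + d, b - d
    print(f"iter {i+1}: a={a:.2f} b={b:.2f} x1={x1:.2f} x2={x2:.2f}")
    if f(x1) < f(x2):
        a = x2                # eliminate everything left of x2
    else:
        b = x1                # eliminate everything right of x1
```

The first two iterations shrink the upper bound (b = 6.18, then 3.82), and the next two raise the lower bound, matching the sequence walked through above to rounding.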
We eliminate everything to the left of x2 and keep everything to the right; x2 becomes the new a. This process continues, and as we keep going, the overall interval gets smaller and smaller. After 10 to 20 iterations, we can just take the average of x1 and x2, and that's a good estimate of the minimum. So let me show you how to solve this in Excel. We put in our initial lower bound and our initial upper bound. I'm also computing the golden ratio up here, and I'm going to name this cell GR so we can use it easily in our formulas. For d on this first iteration, I've put in d equal to the golden ratio times (b − a), x1 = a + d, and x2 = b − d. Then I've placed a function value here, f(x1): I'm just applying the function f to x1. Because it's a relative reference, I can do Ctrl+Copy and Ctrl+Paste, and now the f(x2) computation is using x2. Similar to what we did in the bisection technique, I'm going to have an IF statement here. (And I don't know, sometimes when I'm recording these screencasts, it does some funky things; I don't know why it's making this really big H5.) If f(x1) is less than f(x2), then we eliminate the left 38%, and the new a is the old x2; otherwise, there's no change in a. For b, we can say: if f(x1) is less than f(x2), again we're removing the left 38%, so there's no change in b; but if that's false, then x1 becomes the new b. Now everything else is good to go, so I copy this row down. I can take the entire row and double-click to drag it down. You notice that after ten iterations, we're converging on something close to 3. For those of you who know calculus, you could take the first derivative of this function, 2x − 6, set it equal to 0, and see that the minimum is at x = 3.
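The whole Excel procedure, including taking the average of x1 and x2 at the end, can be sketched as one Python function (the function name and default iteration count are my own choices):

```python
import math

def golden_section_min(f, a, b, iterations=20):
    """Golden section search for the minimum of f on [a, b].

    Mirrors the spreadsheet: each row updates either a or b, and the
    final estimate is the average of the last interior points x1 and x2.
    """
    gr = (math.sqrt(5) - 1) / 2
    for _ in range(iterations):   # x1, x2 from the last pass survive the loop
        d = gr * (b - a)
        x1, x2 = a + d, b - d
        if f(x1) < f(x2):
            a = x2                # same rule as the Excel IF for column a
        else:
            b = x1                # same rule as the Excel IF for column b
    return (x1 + x2) / 2

# Assumed example function (transcript gives only f'(x) = 2x - 6)
x_min = golden_section_min(lambda x: x**2 - 6*x, 0, 10)
```

Since each iteration keeps about 62% of the interval, 20 iterations shrink a width-10 bracket to under 0.001, so the returned estimate lands very close to the true minimum at x = 3.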
So I like to take the average of these two at the very end and say that that's our minimum. That's the x value corresponding to the minimum; this over here is the y value, the actual minimum of the function. Okay, so that's how we can use the Golden Section Search technique to find the minimum of a function. And typically it's not something easy like this; it's some big, complicated formula.