Now, since the L1 norm is convex, we can turn this constrained optimization problem into an unconstrained one.

We can relax it by bringing the norm up into the objective, weighted by a parameter lambda, called the regularization parameter, or the Lagrange multiplier.

Lambda is chosen so that the solution of this optimization problem, which is a function of lambda, satisfies the constraint: the L1 norm of the solution should be less than or equal to S.
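As a minimal sketch of this relaxation, the unconstrained problem min ||Ax - b||² + λ||x||₁ can be solved with iterative soft-thresholding (ISTA); the matrix A, vector b, and the value of lambda below are all illustrative assumptions, not data from the lecture.

```python
import numpy as np

# Relaxed, unconstrained problem:
#   min_x ||A x - b||_2^2 + lam * ||x||_1
# solved with ISTA (iterative soft-thresholding).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]              # sparse ground truth
b = A @ x_true

def ista(A, b, lam, n_iter=1000):
    L = 2 * np.linalg.norm(A, 2) ** 2     # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - 2 * A.T @ (A @ x - b) / L  # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

x_hat = ista(A, b, lam=0.1)
```

Larger lambda shrinks the L1 norm of the estimate, which is how the solver is steered toward satisfying the constraint ||x||₁ ≤ S.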

We have looked at relaxed problems like this back in weeks six and seven, when we talked about recovery. There, for example, we solved the constrained least squares problem, which results in this minimization.

The difference, of course, is that there we used the L2 norm for the stabilizing function, while here we use the L1 norm.

When it comes to considerations about lambda, however, they are similar.

For example, when lambda becomes very small and goes to zero, we are just solving a least squares problem, minimizing the data-fidelity norm.

While, on the other hand, when lambda becomes very large,

we end up with x equals zero as a solution.
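The two extremes of lambda can be seen in the simplest one-dimensional case, min (x - b)² + λ|x|, whose minimizer is the soft-threshold soft(b, λ/2); the value of b below is an illustrative assumption.

```python
import numpy as np

# Minimizer of  min_x (x - b)^2 + lam * |x|  is soft(b, lam / 2).
def soft(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

b = 3.0
x_small = soft(b, 1e-9 / 2)   # lambda -> 0: least squares solution, x = b
x_large = soft(b, 100.0 / 2)  # lambda very large: x = 0
```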

As mentioned already, when the L2 norm is used, we can end up with a closed-form solution. So the solution to this problem, if you recall, is simply x = (AᵀA + λCᵀC)⁻¹Aᵀb.
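This closed form is a one-line computation; the matrices A, C, the vector b, and lambda below are illustrative assumptions, with C taken as the identity (standard ridge regression).

```python
import numpy as np

# Closed-form solution of the L2-regularized (Tikhonov) problem
#   min_x ||A x - b||_2^2 + lam * ||C x||_2^2,
# namely  x = (A^T A + lam * C^T C)^{-1} A^T b.
rng = np.random.default_rng(2)
A = rng.standard_normal((10, 4))
b = rng.standard_normal(10)
C = np.eye(4)      # identity stabilizer (ridge regression)
lam = 0.5

# Solve the normal equations rather than forming the inverse explicitly.
x = np.linalg.solve(A.T @ A + lam * C.T @ C, A.T @ b)
```

As lambda goes to zero this tends to the ordinary least squares solution, matching the earlier discussion of the small-lambda limit.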