This is called the sigmoid function, or the logistic function,

and the term logistic function,

that's what gives rise to the name logistic regression.

And by the way, the terms sigmoid function and

logistic function are basically synonyms and mean the same thing.

So the two terms are basically interchangeable, and

either term can be used to refer to this function g.

And if we take these two equations and put them together,

then here's just an alternative way of writing out the form of my hypothesis.

I'm saying that h(x) is 1 over 1 plus e to the negative theta transpose x.

And all I've done is take this variable z,

z here is a real number, and plugged in theta transpose x.

So I end up with theta transpose x in place of z there.
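To make this concrete, here is a minimal sketch of that hypothesis in Python (the parameter and feature values are made up for illustration): we define the sigmoid g(z), then plug in z = theta transpose x.

```python
import numpy as np

def sigmoid(z):
    """Logistic (sigmoid) function: g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + np.exp(-z))

def hypothesis(theta, x):
    """h(x) = g(theta^T x): the sigmoid with theta transpose x in place of z."""
    return sigmoid(np.dot(theta, x))

# Illustrative values only (not from the lecture):
theta = np.array([0.5, -1.0])
x = np.array([2.0, 1.0])

# theta^T x = 0.5*2 + (-1)*1 = 0, and g(0) = 0.5
print(hypothesis(theta, x))  # -> 0.5
```

Because g always returns a number between 0 and 1, h(x) computed this way is likewise between 0 and 1.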

Lastly, let me show you what the sigmoid function looks like.

We're gonna plot it on this figure here.

The sigmoid function, g(z), also called the logistic function, it looks like this.

It starts off near 0 and then it rises until it crosses 0.5

at the origin, and then it flattens out again like so.

So that's what the sigmoid function looks like.

And you notice that the sigmoid function asymptotes at one and

asymptotes at zero; here the horizontal axis is z.

As z goes to minus infinity, g(z) approaches zero.

And as z approaches infinity, g(z) approaches one.

And so because g(z) outputs values between zero and

one, we also have that h(x) must be between zero and one.

Finally, given this hypothesis representation, what we need to do,

as before, is fit the parameters theta to our data.

So given a training set, we need to pick a value for

the parameters theta, and this hypothesis will then let us make predictions.

We'll talk about a learning algorithm later for fitting the parameters theta,

but first let's talk a bit about the interpretation of this model.