Let's look at what this little single-neuron network will compute.

Just to remind you, the sigmoid activation function g(z) looks like this.

It starts from 0, rises smoothly, crosses 0.5, and

then asymptotes to 1. To give you some landmarks,

if the horizontal axis value z is equal to 4.6 then

the sigmoid function is equal to 0.99.

This is very close to 1, and kind of symmetrically,

if z is -4.6, then the sigmoid function there is 0.01, which is very close to 0.
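These landmark values are easy to check numerically. Here is a minimal sketch of the sigmoid function in Python (the function name is just for illustration):

```python
import math

def sigmoid(z):
    """Sigmoid activation g(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

# Landmarks from the plot: g(4.6) is close to 1, g(-4.6) close to 0.
print(round(sigmoid(4.6), 2))   # 0.99
print(round(sigmoid(-4.6), 2))  # 0.01
```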

Let's look at the four possible input values for x1 and x2 and

look at what the hypothesis will output in each case.

If x1 and x2 are both equal to 0,

then, if you look at this,

the hypothesis evaluates to g of -30.

So, this is very far to the left of this diagram, so it will be very close to 0.

If x1 equals 0 and x2 equals 1, then this formula here evaluates to g,

that is, the sigmoid function applied to -10, and again that's

to the far left of this plot, and so that's again very close to 0.
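All four cases can be tabulated in code. This sketch assumes the network's weights are a bias of -30 and a weight of 20 on each input, which is consistent with the g(-30) and g(-10) values above; under that assumption the hypothesis is g(-30 + 20*x1 + 20*x2).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def hypothesis(x1, x2):
    # Assumed weights: bias -30, and 20 for each input.
    return sigmoid(-30 + 20 * x1 + 20 * x2)

# Evaluate the hypothesis for the four possible binary inputs.
for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x1, x2, round(hypothesis(x1, x2), 2))
```

With these weights only the (1, 1) input pushes the argument of the sigmoid to the far right, so the output is close to 1 in that case and close to 0 in the other three.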