Let's think about why that is. This probability distribution is

constructed by multiplying all four of these factors, and if you think about

what's going on here, you see that B really, really likes to agree with C, so

these guys are really closely tied together.

And A and D similarly like to agree, so these are also really, really closely
tied together. C and D, on the other hand, strongly like to disagree: they like
to have opposite values.

Now, all three of these factors are actually stronger than phi 1, that is, the
differences between the values they assign to different assignments are bigger.

So where is the cycle going to break? You can't have D agreeing with A, A
agreeing with B, B agreeing with C, and C disagreeing with D. It just doesn't
work.

And so somewhere this loop has to be broken, and the place where it gets broken
is between A and B, because that is the weakest factor.

So the probability over A and B is actually some kind of complicated aggregate
of all the different factors that are used to compose the Markov network.
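To make this concrete, here is a small sketch of the cycle. The factor values below are made up for illustration (the lecture's actual tables are on the slide, not in the transcript): phi AB weakly prefers agreement, while the three strong factors force B = C, C ≠ D, and D = A around the loop. Summing out C and D shows that the resulting marginal P(A, B) actually prefers A and B to disagree, even though phi AB prefers agreement.

```python
import itertools

# Hypothetical factor tables over binary variables; values are illustrative.
phi_AB = {(0, 0): 30, (0, 1): 5, (1, 0): 1, (1, 1): 10}    # weak: prefers A = B
phi_BC = {(0, 0): 100, (0, 1): 1, (1, 0): 1, (1, 1): 100}  # strong: prefers B = C
phi_CD = {(0, 0): 1, (0, 1): 100, (1, 0): 100, (1, 1): 1}  # strong: prefers C != D
phi_DA = {(0, 0): 100, (0, 1): 1, (1, 0): 1, (1, 1): 100}  # strong: prefers D = A

# Unnormalized measure: the product of all four factors.
joint = {}
for a, b, c, d in itertools.product([0, 1], repeat=4):
    joint[(a, b, c, d)] = (phi_AB[(a, b)] * phi_BC[(b, c)]
                           * phi_CD[(c, d)] * phi_DA[(d, a)])
Z = sum(joint.values())

# Marginal P(A, B): sum out C and D, then normalize.
p_AB = {}
for a, b in itertools.product([0, 1], repeat=2):
    p_AB[(a, b)] = sum(joint[(a, b, c, d)]
                       for c in (0, 1) for d in (0, 1)) / Z

# phi_AB normalized on its own, for comparison.
phi_AB_norm = {k: v / sum(phi_AB.values()) for k, v in phi_AB.items()}

# The two disagree badly: P(A, B) aggregates all four factors on the
# cycle, so most of its mass sits on A != B despite phi_AB's preference.
for k in sorted(p_AB):
    print(k, round(p_AB[k], 3), round(phi_AB_norm[k], 3))
```

Running this, the marginal puts most of its probability on the A ≠ B assignments, which is exactly the "cycle breaks at the weakest factor" effect described above.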

And this is actually an important point, because it is going to come back and

haunt us in later parts of the course. There isn't a natural mapping between the

probability distribution and factors that are used to compose it.

You can't look at the probability distribution and say, aha, this piece of it
is what came from this particular factor.

This is in direct contrast to a Bayesian network, where the factors were all
conditional probabilities and you could just look at the distribution and
compute them. Here you can't do that, and that often turns out to affect things
like how we can learn these factors from data, because you can't just extract
them directly from the probability distribution.

So, with that intuition, we can now go ahead and define a pairwise Markov
network. And I'm defining it explicitly because pairwise Markov networks are a
sufficiently commonly used subclass of general Markov networks that it's worth
giving them their own place. So a pairwise Markov network.

is an undirected graph whose nodes are the random variables X1 up to Xn. Each
edge connects some Xi to some Xj, and each edge is associated with a factor,
also known as an edge potential, phi ij of Xi, Xj. That's a pairwise Markov
network.
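That definition can be sketched directly in code. This is a minimal illustration, assuming binary variables; the variable names and factor values below are made up, not taken from the lecture's slide:

```python
import itertools

# Nodes of the undirected graph: the random variables.
variables = ["X1", "X2", "X3"]

# Each edge (Xi, Xj) carries a factor, the edge potential phi_ij(xi, xj).
edge_potentials = {
    ("X1", "X2"): {(0, 0): 2.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0},
    ("X2", "X3"): {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 3.0, (1, 1): 1.0},
}

def unnormalized_measure(assignment):
    """Product over all edges of phi_ij evaluated at the assignment."""
    p = 1.0
    for (u, v), phi in edge_potentials.items():
        p *= phi[(assignment[u], assignment[v])]
    return p

# The distribution itself is obtained by normalizing this product
# over every joint assignment (the partition function Z).
Z = sum(
    unnormalized_measure(dict(zip(variables, vals)))
    for vals in itertools.product([0, 1], repeat=len(variables))
)
```

Note that only pairs of variables appear in any factor; that restriction is what makes the network pairwise.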

And here's an example of a slightly larger Markov network. This is a Markov
network in the form of a grid, and this is the kind of network that's used, for
example, when we're doing various operations on images, because then the
variables correspond to pixels. And this is the Markov network that corresponds
to image segmentation when we're using superpixels, in which case it's no
longer a regular grid.
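The grid structure is easy to construct programmatically. A small sketch, with illustrative dimensions: one variable per pixel, and an edge (each of which would carry an edge potential) between every pair of horizontally or vertically adjacent pixels.

```python
# Illustrative image dimensions.
height, width = 3, 4

# One node (random variable) per pixel.
nodes = [(i, j) for i in range(height) for j in range(width)]

# Edges link each pixel to its right and bottom neighbors.
edges = []
for i in range(height):
    for j in range(width):
        if j + 1 < width:            # horizontal neighbor
            edges.append(((i, j), (i, j + 1)))
        if i + 1 < height:           # vertical neighbor
            edges.append(((i, j), (i + 1, j)))

# An H x W grid has H*(W-1) horizontal plus (H-1)*W vertical edges:
# here 3*3 + 2*4 = 17 edges over 12 nodes.
print(len(nodes), len(edges))
```

In the superpixel case mentioned above, the nodes would instead be irregular regions and the edges would follow region adjacency, so the graph is no longer a regular grid, but the pairwise structure is the same.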