We're now going to look at a different category of games, called Bayesian games, sometimes also called games of incomplete information, not to be confused with games of imperfect information. So far what we have seen are games in which all agents know what the basic setting is. That is, they know who the players are, they know the actions available to the players, and they know the payoffs associated with each strategy profile or each action profile, depending on what everybody does.

This is true in all games, including games of imperfect information. Those are games in which agents don't know exactly which state they are in; nonetheless, they know what would happen given the strategies of all the agents. So we're going to relax that. We're going to assume that the setting isn't necessarily common knowledge. Now, in principle you can imagine relaxing the various assumptions: you don't know the number of players, or perhaps how many actions are available to them. But in some informal sense, all of those forms of uncertainty can be reduced to one type of uncertainty, namely uncertainty about the payoffs in the game. And so we will assume that agents have perfect common knowledge of everything except what the payoffs of the game are, and furthermore that there is some prior belief about those payoffs that is common to all the agents. The agents simply receive different signals that lead to different posteriors based on that common prior.

This may sound very vague, so let me make it precise. Let me first give the formal definition, and then an example that will make everything clear. A Bayesian game is defined, first of all, by a set of games that are identical except in their payoffs. So let's go over the formal definition. We have a tuple that defines the game: a set of agents N, and a set G of regular games; think of these as normal-form games, for example. Each game is played by the N agents, and they all have the same strategy space. That is, any two games in the set have the same strategy space; as I said, the payoffs will in general be different. We have a prior, that is, a distribution over that set of games, and it is a common prior. Nature will decide which game is actually played based on this prior. And then there are private signals, defined by a partition structure: for each agent, we define some equivalence relation on the games, and the agent will be told which equivalence class they are in. Based on that, they'll need to play the game.

Now, this is a mouthful, I know, but hopefully the following example will make it clear. Let's assume that we have four possible games, and the games are familiar: we have Matching Pennies, we have Prisoner's Dilemma, we have the game of pure Coordination, and we have Battle of the Sexes, each defined simply by its payoffs. Now, nature is going to decide which of those games is actually being played, based on the probabilities listed here: we have a probability of 0.3 here, 0.1 here, 0.2 here, and 0.4 here. Once nature makes its choice, the agents will play. But the question is, what will they know? They will know the prior, but they will know something in addition, and what they will know is defined by this partition. So here we have the two agents playing, and for each of these agents an equivalence relation is defined.

So, for example, think about the row player. For the row player there are two equivalence classes, denoted by the bold partition.
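The tuple just described, agents, a set of payoff-distinct games, a common prior, and one information partition per agent, can be written down concretely. Below is a minimal sketch of this example's data; the game names and player labels are hypothetical shorthand, and the two partitions correspond to the classes described for the row and column players.

```python
# The four candidate games (names are illustrative shorthand).
GAMES = ["MatchingPennies", "PrisonersDilemma", "Coordination", "BattleOfTheSexes"]

# Nature's common prior over which game is actually played.
PRIOR = {
    "MatchingPennies": 0.3,
    "PrisonersDilemma": 0.1,
    "Coordination": 0.2,
    "BattleOfTheSexes": 0.4,
}

# Each agent's information partition: the agent is told only which
# equivalence class the realized game belongs to, not the game itself.
PARTITIONS = {
    "row": [{"MatchingPennies", "PrisonersDilemma"},
            {"Coordination", "BattleOfTheSexes"}],
    "col": [{"MatchingPennies", "Coordination"},
            {"PrisonersDilemma", "BattleOfTheSexes"}],
}
```

Note that each partition covers all four games and its classes are disjoint, which is exactly what makes it an equivalence relation on the set of games.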

And I'll make it green now: this is the equivalence relation defined for the row agent. So, for example, suppose that nature decided to play, in fact, Matching Pennies. The row agent will know that he's either in this game or in this game; he'll know that he's not in any of the other games. So that will be his private signal, and he'll now have a posterior belief.

What will he believe? Well, he will believe that with probability 0.75 he is playing this game and with probability 0.25 this one. And why is that? Because 0.75 to 0.25 preserves the ratio of 3 to 1 between the prior probabilities of these two games. What will the column player believe? Well, let's pick a different color for her; she has a different equivalence relation, this one. And now, if nature again chose Matching Pennies, what will she know? Well, she'll know that she is either in this game or in this game, and in this case she will need to update her prior to reflect this information. The posterior for the column agent will be that she is playing this game with probability 0.6 and this one with probability 0.4, again maintaining the ratio between the priors of these two games.

And they will know more, intuitively. When the row agent, for example, knows that she's somewhere in this class, she will not know exactly what information the column player has, but she knows what the possible information might be. She knows that either she is in this game, in which case this would be the information the column player has, or she is in this game, in which case the row player knows that the column player knows that she's somewhere here. And so it's a complicated story, because you can keep going: they have beliefs about what the other player believes about what they know, and so on and so forth. But this is the structure of Bayesian games, and based on it you can start modeling what the agents will do. Since this is complicated, though, there's an alternative perspective on Bayesian games that is different, but in some sense easier to work with.
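The posterior updates in the example above amount to conditioning the common prior on the agent's equivalence class: zero out the games outside the class, then renormalize. A small sketch, with hypothetical short game names, using the same prior and partitions as in the example:

```python
# Common prior over the four games (MP = Matching Pennies, PD = Prisoner's
# Dilemma, Coord = Coordination, BoS = Battle of the Sexes).
PRIOR = {"MP": 0.3, "PD": 0.1, "Coord": 0.2, "BoS": 0.4}

# Information partitions for the two players, as in the example.
ROW_CLASSES = [{"MP", "PD"}, {"Coord", "BoS"}]
COL_CLASSES = [{"MP", "Coord"}, {"PD", "BoS"}]

def posterior(prior, classes, realized_game):
    """Condition the prior on the equivalence class containing the realized game."""
    cell = next(c for c in classes if realized_game in c)
    total = sum(prior[g] for g in cell)  # prior mass of the observed class
    return {g: prior[g] / total for g in cell}

# If nature chose Matching Pennies:
print(posterior(PRIOR, ROW_CLASSES, "MP"))  # row: MP ~ 0.75, PD ~ 0.25 (3:1 ratio kept)
print(posterior(PRIOR, COL_CLASSES, "MP"))  # col: MP ~ 0.6, Coord ~ 0.4
```

The division by the class's total prior mass is what "maintains the ratios" between the games inside the class, as noted in the lecture.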