Okay folks, so we're into the last part of the course now, and we'll be talking about games on networks. In particular, we're still interested in understanding networks and behavior, and now we're trying to bring strategic interaction into play, where people's decisions depend on what other people are doing. So the idea is that essentially there are decisions to be made, and it's not just a simple diffusion or contagion process, and it's not updating beliefs. It's that people care about what other individuals are doing. So there are complementarities. I want to buy a certain program only if other people are using that same program. The way in which I write articles depends on what my coauthors are doing. Or I want to learn a certain language only if other people are also speaking that language. So there are going to be interdependencies between what individuals do. And there are also going to be situations where I can free ride. If somebody else buys a new book, I can borrow it from them, and maybe I don't buy it myself. So who I know that's actually bought a book affects whether I buy the book, both positively and negatively. So there are strategic interdependencies, and that's the idea of games. When people think of games, we're not talking about Monopoly or chess, checkers, et cetera. We're thinking about a situation where there are interactions, and what a given individual is going to do depends on what other individuals are doing. So there is some game aspect to it in that sense, but we're using game theory as a tool to try and understand exactly how behavior relates to network structure. Okay, so what we're going to do is work with some basic definitions, and I won't presume that you're familiar with game theory beforehand. We'll work through the basic definitions, which will be pretty self-contained in terms of the network setting. 
Then we'll work through some examples, and afterwards we'll begin to do a more formal and more extensive analysis of how these things work. Okay, so the idea here is that there are going to be different individuals, they're on a network, they're each making decisions, and you care about the actions of your neighbors. The early literature on this came out of computer science, and what it was really interested in was how complex the computation of equilibria was in these settings in worst-case games. That is, how hard would it be for a computer to actually find an equilibrium of one of these games, in the case where nature was making it as hard as possible for you to find an equilibrium? What we're going to focus on is a second branch of this literature, which, instead of being interested in the worst-case computational issues, is interested in applying games on networks to actually understand how networks influence human behavior. And one thing that's nice is that a lot of the interactions we tend to have between individuals will have more structure, and so the games will be nice ones. They won't be the worst-case games that are computationally complex; they'll be ones where we can actually say something meaningful about the structure. So what we're going to start with is a canonical special case. It's a very simple version of the game, but one that is going to be fairly widely applicable. We're looking at a situation where person i is going to take an action; let's let that be x_i. And we'll start with the case where it's just a binary action, either zero or one. So I either buy this book or I don't buy the book. I invest in a new technology or I don't. 
I learn a language or I don't learn a language. I go to a movie or I don't go to the movie. And the payoff is going to depend on how many neighbors choose each action: how many neighbors choose action 0, how many choose action 1, and how many neighbors I have. So that's what my payoff is going to depend on. Okay? So we've got each person choosing an action in {0,1}. And we're going to consider a situation where person i's payoff depends on their own action. It's also going to depend on the number of neighbors of i that choose 1, and it'll depend on my degree, how many neighbors I have. If I have 100 neighbors, it might be different than if I have three neighbors and two of them are choosing action 1. Two out of three is different from two out of 100, so I might care differently depending on how many neighbors I have. Okay? So the simplifying assumptions here are these: we've got just the 0,1 actions, so we either take an action or we don't. I only care about the number of friends taking the action, not their identities. So I don't have best friends and less-best friends; I treat friends equally in terms of who's taking the action. And the payoff also just depends on my degree, how many friends I have, and I don't have a different preference than somebody else. We can enrich these models later to allow for people to have different preferences and to weight things differently. But for now, let's think of a world where everybody treats their friends equally, and it only matters how many friends they have, not who their friends are. Okay. So let's look at an example of a simple game of complements: I'm willing to choose a new technology if and only if at least t neighbors do. 
So for example, suppose I'm learning to play bridge, a card game. I have to have at least three friends who play bridge before I'm going to want to learn to play it. Right, so my payoff to playing action 0, if I don't learn it, is just 0. And one example of a payoff from playing action 1 would be minus this threshold t plus how many friends play it. So if the threshold is 3, I get minus 3 plus the number of my friends who play it. For instance, if exactly three of my friends play it, then I get a payoff of zero. If four of my friends play it, I get a payoff of one. If five of my friends play it, I get a payoff of two, and so forth. So this would be a very simple example where I'm willing to choose action 1 if and only if at least t of my neighbors do. And you could write down all kinds of different payoff functions; this is just one example. So let's look at a network now, where we've got a bunch of different people and a person is willing to take action 1 if and only if at least two neighbors do, so t is 2. Okay? This is a game where, once at least two of my friends have bought this new technology, I'm willing to buy it too; otherwise I don't. Okay. So what do we know, first of all? Well, if we look at this network, all these blue people are going to take action 0 because they only have one friend. Actually, sorry, this person has two friends; that one shouldn't be coded as a 0. So these three individuals only have one friend, so they definitely have to take action 0; there's no way they're going to have at least two neighbors taking the action. But we can ask: what about this player? Well, their action is going to depend on what their other friends do. And one possibility is that we set, for instance, these three individuals all to playing action 1. Right? 
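The threshold payoff just described can be sketched in a few lines of code. This is a minimal illustration, not anything from the lecture slides; the function name and the default threshold t = 3 are just the bridge example's numbers.

```python
# Threshold game of complements: action 0 pays 0; action 1 pays
# minus the threshold t plus the number of neighbors playing 1.

def payoff(action, num_neighbors_playing_1, t=3):
    """Payoff to one player, given how many neighbors chose action 1."""
    if action == 0:
        return 0
    return -t + num_neighbors_playing_1

# With t = 3: three adopting friends give payoff 0, four give 1, five give 2.
print(payoff(1, 3))  # 0
print(payoff(1, 4))  # 1
print(payoff(1, 5))  # 2
```

Note that at exactly t adopting neighbors the player is indifferent between the two actions (both pay 0), which is why "at least t neighbors" is the cutoff for being willing to adopt.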
So if these two individuals are doing it, then this person is willing to as well; they're all willing to, because each of them now has at least two friends doing it. One possibility, then, is to stay where we were before, where nobody takes the action because nobody else does, and so the technology never gets off the ground. So if it's a technology where people need other people to have adopted it before they do, there's a possibility of it never getting seeded; it never gets off the ground. Another possibility is that, yes, these three people all adopt it, because they each have two friends who do, and so that's also an equilibrium. Okay. Now, if these are the only people adopting, then nobody else actually wants to do it, because all the other individuals still have at most one friend who did it. So nobody else is above their threshold. And indeed, it's still an equilibrium for these three people to do it and nobody else, right? Nobody else wants to take the action, because none of the other people have two neighbors who do. Okay. So that's one type of game. Let's take a look at a game that has the opposite feature. This was one where, if more of my friends take the action, then I'm more likely to want to take the action. Compatible technologies will have that kind of feature, but not always. Let's think of the example where, if one of my friends buys the book, I don't buy the book, because now I can borrow it from them. Okay. So I'm willing to buy the book if and only if none of my neighbors do. So, for instance, if I don't buy the book, what's my payoff? If one of my neighbors buys the book, that is, if the number of neighbors who bought the book is positive, I can borrow it from them, and I get a payoff of 1. If none of my neighbors bought the book, I can't borrow it. 
I get a payoff of zero: I didn't buy it and I can't borrow it. Now, instead, I could buy it myself. And if I buy the book myself, what do I end up with? I end up with a payoff of one minus c, where c is the cost of the book. Right? So in terms of my payoffs here, my optimal outcome would be for one of my friends to buy it, and for me not to buy it and to borrow it from them. That gives me a payoff of one, my best possible payoff. My worst payoff is nobody buys it and I don't buy it. So if none of my friends buy it, then I'd actually be willing to buy it myself, as long as c is less than 1. That means it wouldn't be an equilibrium to have a situation where none of my friends buy it and I don't buy it either: if they don't buy it, I buy it, but I won't buy it if one of my friends does. So if we look at that example, this is known as what's called a best-shot public goods game. What matters to any individual is the max of the actions in their neighborhood, and so it's called a best-shot public goods game. An agent is willing to take action 1 if and only if no neighbors do. So here would be an equilibrium of that game: this person takes action 1, and none of their neighbors do; this person takes action 1 because no neighbors do; and so forth. Right? That's an equilibrium of this game. Okay? That's a different game, and it's going to have different-shaped equilibria from what we had before. Here, now, we have these people taking action 1. There are multiple equilibria in these games; there can be different combinations of actions that are equilibria, and we'll take a look at that in more detail. So the next thing we're going to do is put a little more structure on these games and try to understand a little bit more about what we can say about the equilibria. 
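The best-shot game can be checked the same way. Here's a minimal sketch on a hypothetical line network of four players (again not the lecture's figure), assuming 0 &lt; c &lt; 1: buying gives 1 - c, not buying gives 1 if some neighbor bought and 0 otherwise. In equilibrium a player buys exactly when no neighbor does, so the set of buyers is spread out across the network with nobody left uncovered.

```python
# Best-shot public goods game on a hypothetical line network 0-1-2-3,
# with book cost 0 < c < 1.
c = 0.5
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}

def payoff(action, some_neighbor_bought):
    if action == 1:
        return 1 - c                      # bought the book myself
    return 1 if some_neighbor_bought else 0   # borrow it, or go without

def is_equilibrium(x):
    """True if no player strictly prefers to switch their action."""
    for i, nbrs in neighbors.items():
        covered = any(x[j] for j in nbrs)     # does some neighbor own it?
        if payoff(x[i], covered) < payoff(1 - x[i], covered):
            return False
    return True

print(is_equilibrium([1, 0, 1, 0]))  # True: buyers 0 and 2 cover everyone
print(is_equilibrium([0, 1, 0, 1]))  # True: buyers 1 and 3 cover everyone
print(is_equilibrium([0, 0, 0, 0]))  # False: each player would rather buy
```

Notice that, unlike the complements game, the all-zero profile is never an equilibrium here, and the equilibria that do exist have buyers forming a maximal independent set: no two buyers are neighbors, and every non-buyer has a buyer next door.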
And eventually, tie the equilibrium structure back to the network structure and see what we can say that's meaningful about how these games behave.