So I really want to build on that last piece with something that's a bit of an aside. But it's only an aside because it's so important: it doesn't just apply to this particular segment, it applies throughout the people analytics portion of your course. And that is measuring outcomes. In any of the people analytics modules you've been listening to and learning from, you will have heard a lot about different kinds of outcomes that we care about. We were just talking about outcomes, which is why I'm connecting it here, but again, it applies across the modules. The main message can be summarized in a nutshell: garbage in, garbage out. The idea is that if you don't have good measures, you can plug whatever you like into your statistical program, and it will generate lots and lots of lovely results. But those results will mean nothing if the original measures were bad. If you put garbage in, in the form of poor measures, whatever results you get are going to be meaningless, and may actually be harmful if you try to act on them. So it becomes very, very important, in any form of analytics, and particularly the people analytics we're talking about today, to be able to measure your variables really well. We're focusing here on outcome variables, although I've already talked about the network variables we might care about and how to measure them well. Let's think about outcome variables, because this applies across the field of people analytics.

So: how do collaboration patterns matter for important outcomes? That's the question we were just discussing. We have our five building blocks in terms of networks, and we talked a lot in segment three about how to map those building blocks effectively. Now we're going to focus on how to connect those to individual outcomes, and as we were just discussing, we'll focus particularly on performance. We've already established that we need to map network attributes, network features, to performance; what I want to emphasize is the importance of measuring performance well, and what that takes.

So let's think about measuring performance. The question is: what is a strong measure of performance? I've picked performance, but this could be many other things. It could be satisfaction, it could be commitment, it could be turnover intention. What's a strong measure of anything? Here we'll use performance. There are several key criteria, whenever you're trying to measure anything, for getting away from the garbage in, garbage out scenario and having good measures that can give you good results. The first is that the measure has to be at the right level of analysis. If I'm trying to measure individual performance, I have to make sure that what I'm measuring is actually an individual performance measure, not something that was generated by the whole unit or the whole group, where the individual didn't really have much control over it. It needs to be at the right level of analysis if I'm trying to measure individual performance; the short sketch below makes that concrete.
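To make the level-of-analysis point concrete, here is a minimal sketch in Python. The column names and numbers are invented for illustration, not taken from any real dataset. The point it shows: a unit-level number assigned to every member of a team is constant within the team, so it carries no information about individual differences in performance.

```python
import pandas as pd

# Toy data (hypothetical): four employees on two teams.
df = pd.DataFrame({
    "employee": ["A", "B", "C", "D"],
    "team": ["T1", "T1", "T2", "T2"],
    "team_sales": [500, 500, 420, 420],   # generated by the whole team
    "own_deals_closed": [12, 3, 7, 9],    # attributable to the individual
})

# A unit-level measure takes a single value within each team ...
print(df.groupby("team")["team_sales"].nunique())   # 1 unique value per team

# ... so it has zero within-team variance and cannot distinguish
# strong performers from weak ones on the same team.
print(df.groupby("team")["team_sales"].var())       # 0.0 for every team

# An individual-level measure does vary across people on the same team.
print(df.groupby("team")["own_deals_closed"].var())
```

A check like this is a quick way to catch the problem in practice: if a candidate outcome has zero variance within every unit, it is a unit-level measure, whatever its label says.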
The second and third criteria are reliability and validity. If you've taken any measurement courses, or been involved in this field at all, you'll have heard of these. Reliability means: are your assessments, your measures, consistent? That might mean consistent over time, and it might mean consistent across raters, across the people taking the measures. Validity means: are the assessments accurate? Are your measures measuring what they're actually supposed to measure?

For reliability, in the case of performance: if I happen to measure this person's performance in the morning, am I going to get the same result if I measure their performance in the late afternoon? If not, it's not a very reliable measure of their performance. Take typing speed: if their typing speed in the morning is different from their typing speed in the afternoon, it's not a very reliable measure. We need measures that are consistent over time and across raters. For validity: is the measure actually capturing their performance? If you're measuring typing speed, is that an aspect of performance that's actually important and relevant to the task? Or are you really trying to measure something about how good they are at writing, in which case typing speed is not a good measure of that at all?

When we talk about reliability and validity, we often use a bullseye motif, and it's helpful for understanding the difference between them. Something that's reliable is consistent over time and across raters: you're hitting the same spot on the target over and over again. That makes it reliable, but it doesn't necessarily make it valid. For something to be valid, you need to be hitting the bullseye, or at least be evenly distributed around the bullseye, so there's no bias in the way you're hitting the target. A measure that's valid but not reliable is evenly distributed around the bullseye, but you're hitting different points all the time. Neither reliable nor valid: you're scattered and off toward one part of the target. Both reliable and valid: you're consistently hitting the bullseye. It's just a little device we sometimes use to keep the difference between reliability and validity separate in our minds; the small simulation below makes the same point with numbers.
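Here is one way to make the bullseye picture concrete with a small simulation, in the spirit of the motif rather than as a formal psychometric procedure; all numbers are invented. Repeated measurements of a known true score are drawn with some systematic bias and some random noise: low scatter around the measure's own average corresponds to reliability, and a small gap between that average and the true score corresponds to validity.

```python
import numpy as np

rng = np.random.default_rng(42)
TRUE_SCORE = 100.0  # the "bullseye": the performance we actually want to capture

def simulate(bias, noise_sd, n=1000):
    """Draw repeated measurements with a systematic bias and random noise."""
    measurements = TRUE_SCORE + bias + rng.normal(0.0, noise_sd, size=n)
    spread = measurements.std()                      # low spread -> reliable
    offset = abs(measurements.mean() - TRUE_SCORE)   # low offset -> valid
    return spread, offset

# The four corners of the bullseye picture: (bias, noise_sd)
cases = {
    "reliable & valid":    (0.0, 1.0),  # tight cluster on the bullseye
    "reliable, not valid": (8.0, 1.0),  # tight cluster, wrong spot
    "valid, not reliable": (0.0, 8.0),  # centered on target, but scattered
    "neither":             (8.0, 8.0),  # scattered and off-target
}

for name, (bias, noise_sd) in cases.items():
    spread, offset = simulate(bias, noise_sd)
    print(f"{name:22s} spread={spread:5.2f}  offset={offset:5.2f}")
```

The "reliable, not valid" row is the typing-speed trap from a moment ago: you can measure something very consistently and still be consistently measuring the wrong thing.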
Coming back to the criteria for a strong measure of performance: it has to be at the right level of analysis, it has to be reliable, and it has to be valid. The next three criteria: it has to be comparable. If you're measuring performance across lots of people, or across lots of units or teams, you need a measure that's comparable for all of them; you can't measure everybody on different criteria. It has to be comprehensive: you need to have that measure for everybody whose performance you're trying to assess. And it usually has to be cost-effective. You could have the best performance measure in the world, one that involves really following people's performance over time, monitoring it, observing it, and it could be a wonderfully reliable, valid measure, but if it's very expensive to collect, it's just not cost-effective. So we also have to come up with measures that are relatively cost-effective.

And then finally, causality. If you want your performance measure to serve as an outcome variable, you have to be able to make a plausible claim, a plausible argument, for why it actually is an outcome, in other words, for why something else causes it. I'm not going to go into that in more detail here; it's a very big and very important topic. If you want to claim that this kind of network configuration, or this kind of career path, for example, affects performance, you need to be able to make causal claims. The topic of causal claims is covered in much more depth by Matthew Bidwell in his module, where he'll talk much more about causality. So that's all I'm going to say about it here, other than that it's a really big and important issue.

Okay, so we have these criteria for what makes a strong measure of performance, and we're asking which network attributes predict an individual outcome like performance. There are lots of different performance measures we could choose between, and we want to use these criteria to help us decide which are good. Is sales per quarter a good measure of performance? It may not be at the right level of analysis if it's the whole unit or the whole team that produces the sales and you're trying to measure individual outcomes. Cost savings, same thing: maybe the cost savings result from decisions taken somewhere else in the organization, so they're not really a good measure of individual performance. Self-reported ratings: in surveys we sometimes have to rely on people rating their own performance, but we know there are a lot of problems with self-reported ratings; they don't tend to be very reliable or valid. Manager-reported ratings might be better, but there can be problems with bias there too; they may not be very cost-effective to collect, and they may not be very comparable across managers (the sketch below shows a couple of quick checks you could run). Measures like bonuses often capture something quite different, the size of the bonus pool, for example, rather than performance itself, so there's a validity problem there.

So any measure of performance you can get is going to have some problems, but understanding what those problems are, and applying these criteria for what makes a strong measure of performance, or of any outcome you care about, is critical to avoiding that garbage in, garbage out trap. If the measure is poor in the first place, you can't trust the results; it's not worth it. You've got to have good measures going in. And that brings us back to the role of people analytics: people analytics is a data-driven approach to managing people at work, and, this applies across all the modules here, collecting and analyzing high-quality data is absolutely critical. If you don't have good data, none of the rest of it is going to matter at all.
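As promised above, here is a minimal sketch of two quick checks you might run on manager-reported ratings before trusting them; the ratings are hypothetical, invented purely for illustration. The first is an inter-rater correlation as a rough reliability check; the second compares rating averages as a rough check on leniency bias, that is, comparability across managers.

```python
import numpy as np

# Hypothetical ratings of the same ten employees by two managers (1-5 scale).
manager_a = np.array([4, 3, 5, 2, 4, 3, 5, 4, 2, 3])
manager_b = np.array([5, 4, 5, 3, 5, 4, 5, 5, 3, 4])

# Rough reliability check: do the two raters rank people similarly?
inter_rater_r = np.corrcoef(manager_a, manager_b)[0, 1]
print(f"inter-rater correlation: {inter_rater_r:.2f}")

# Rough comparability check: is one manager systematically more lenient?
print(f"mean difference (B - A): {(manager_b - manager_a).mean():+.2f}")
```

A high correlation combined with a large mean gap would suggest the two managers agree on who is better but use the rating scale differently, which you would want to adjust for before comparing their reports side by side.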
So we've now had a chance to think about how to evaluate collaboration networks: how we can compare collaboration attributes and network features across people, and how we can map and connect those to outcomes we care about, like individual performance. In our last segment, we're going to think about how we can intervene to make changes in collaboration networks based on the kind of data we've collected and analyzed.