So the fourth and final issue we're going to talk about
on performance evaluation is process versus outcome.
And the first point we want to add here
is that firms, and individuals, ought to consider
a broader set of objectives than they typically do.
So organizations usually care not just about what exactly happens, but
how it happens.
How a person goes about their job.
So for example, perhaps most importantly, along the way to creating some outcome,
what impact did that employee have on other employees?
What impact did they have on others?
Can you fold that into the performance evaluation as well?
This just highlights that we
care about more things than just what is traditionally measured, and
so one prescription that comes from it is to measure more things.
An example of why we do this is research by Bond, Carlson, and
Keeney, who found that people consider too few objectives.
And this is in negotiation settings, this is in decision making settings, that
if you ask people what they're trying to accomplish, they'll list a few objectives.
But then if you ask a large group and
share all the possible answers with people, they'll go back and
say, yeah, I forgot that one, and I'll add that.
And in the end, on their own, people list
only about half of all the objectives they ultimately recognize as relevant.
So, left to our own devices, we're a little too casual,
a little too informal about focusing on a few narrow issues instead of a broad set.
Now, this can lead firms to focus too narrowly.
An example of a firm that recognized this, Dell Computers in the early 2000s,
famously hard-charging, famously results-oriented,
changed their performance evaluations.
They changed from 100% results based to 50% results based,
what an employee accomplished, and 50% how he or she accomplished it.
So they're judging not only what the person does, but
their impact on other people.
Because they had too much experience with managers who were running over people
to hit the numbers to get the bonus, and in fact
the firm cared about things other than just the numbers that were being measured.
In general, the more uncertainty there is in an environment,
and the less control an employee has over the exact outcomes,
the more a firm should emphasize process over outcome.
So the more noise, the less control the employee has,
the more they should be evaluated on process, and not on outcome.
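One way to make that rule concrete, using a standard reliability idea rather than anything prescribed in the lecture, is to weight the outcome measure by its signal-to-noise ratio: the share of outcome variance the employee actually controls. The numbers below are invented for illustration.

```python
def outcome_weight(signal_var, noise_var):
    # Reliability of the outcome measure: the fraction of its variance
    # driven by things the employee controls, versus luck and noise.
    # A sensible (illustrative) rule: weight outcomes by this fraction
    # and put the remaining weight on process measures.
    return signal_var / (signal_var + noise_var)

# Low-noise job: outcomes closely track effort, so weight outcomes heavily.
print(outcome_weight(signal_var=4.0, noise_var=1.0))   # 0.8 on outcome

# High-noise job: outcomes are mostly luck, so shift weight to process.
print(outcome_weight(signal_var=4.0, noise_var=16.0))  # 0.2 on outcome
```

The variances here are hypothetical; in practice you would estimate them from how persistent an employee's outcomes are from one period to the next.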
One way you can go about this is to use analytics to figure out
which processes tend to produce the desired outcomes.
And what you're looking for
here is what's the more fundamental driver of the outcome.
It could be that you're measuring only the last step in the process.
And in fact, there are some important intermediate steps
that are more fundamental and they might provide additional performance measures.
So, for example, the sports analytics world gives us an illustration of this.
In hockey, for a long time, teams were evaluated, and,
in fact, players were evaluated based on goals.
And if they were trying to figure out whether a team was really good or poor,
they would look at the number of goals it had been scoring.
If they wanted to evaluate whether a player was strong or weak,
they would look at his contribution to goals scored while he was on the ice.
And this is fine and it's related and it's important, but can you do better?
It turns out there aren't many goals scored in hockey.
And sometimes they are scored because the puck hits the post and
goes in, when others aren't scored because it hits the post and bounces out.
They tip off of people's skates, all kinds of crazy things happen.
There's a lot of noise between what a player controls and what actually happens.
And whenever there's that noise,
you want to be careful about how much weight you put on it.
It's not giving a very reliable signal for the true effort or
the true ability underneath it.
So what do they do?
They determine through analytics that a better predictor was not goals but shots.
That if you looked at how many shots, what they call shots on goal, shots directed
basically at the goal, at the net, that was a better predictor.
That was a more reliable measure, and
one way to think about it is that it's more persistent.
A player, or a team, that looks good
on shots in one period is more likely to look good on shots in the next period
than a player who looks good on goals is to look good on goals in the next period.
It's a more persistent and fundamental measure.
Well, you can take that even further: they subsequently realized
that what really matters is possession, not even shots.
There's, again, so much noise in shots that
the best measure they can find for a team or a player is contribution to possession.
The teams that keep the puck,
the teams that have the puck, are the ones that are most likely to shoot.
And the ones who are more likely to shoot are the most likely to score goals.
But, they had to figure that out going backwards, looking for the more and
more fundamental measure, and you can think of these as process measures.
They're getting away from the noisy, rare outcome measure of a goal
to the more fundamental, more reliable measure, process measure, possession.
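The persistence argument can be illustrated with a small simulation (all numbers here are invented, not from any hockey dataset): a latent ability drives shot volume, and only a small, noisy fraction of shots become goals. Measured across two periods, the frequent process measure (shots) correlates with itself much more strongly than the rare outcome measure (goals).

```python
import numpy as np

rng = np.random.default_rng(0)

n_players = 500
ability = rng.normal(1.0, 0.2, n_players)  # latent skill, unobserved

def simulate_period(shot_rate=150, goal_prob=0.08):
    # Shots are frequent and driven by ability; each shot becomes
    # a goal with a small probability, adding a second layer of noise.
    shots = rng.poisson(shot_rate * ability)
    goals = rng.binomial(shots, goal_prob)
    return shots, goals

shots1, goals1 = simulate_period()
shots2, goals2 = simulate_period()

def persistence(x1, x2):
    # Period-to-period correlation across players: how well does this
    # period's number predict the same player's number next period?
    return np.corrcoef(x1, x2)[0, 1]

print(f"shot persistence: {persistence(shots1, shots2):.2f}")
print(f"goal persistence: {persistence(goals1, goals2):.2f}")
```

Running this, shots come out far more persistent than goals, which is the statistical sense in which a process measure can be a more reliable signal of underlying ability than the outcome it feeds.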
What about non-hockey applications, you might reasonably ask?
Well, it's not hard to come up with them.
I'd push you to think about what that means for your own organization,
but one conversation I've had along these lines is with sales organizations.
Can they come up with more fundamental measures than the traditional dollars
booked?
Can you, for example, consider the process that leads to that dollars booked?
Is it the number of bids that a salesperson gets her organization into? Or
maybe, before that, is it the relationship building that
the salesperson does in order to get the bids, in order to book the dollars?
Or maybe it's even earlier in the process than that:
the number of contacts generated. We don't know exactly
which of these would be most persistent; it will vary, of course, by organization and
by industry. But the idea is to take data to the problem
and determine where in the process you might start adding assessments,
to get away from these relatively rare, noisy outcome measures.
At the very least,
to supplement them with more fundamental drivers earlier in the process.
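Here is a minimal sketch of taking data to that problem for the hypothetical sales funnel above (the stage names, conversion rates, and deal sizes are all invented): generate contacts, bids, and dollars booked from a latent effort level, then check which of this period's measures best predicts next period's bookings.

```python
import numpy as np

rng = np.random.default_rng(1)

n_reps = 400
effort = rng.normal(1.0, 0.3, n_reps)  # latent skill/effort, unobserved

def simulate_period():
    # Hypothetical funnel: contacts -> bids -> wins -> dollars booked,
    # with noise added at every stage (all parameters invented).
    contacts = rng.poisson(80 * effort)
    bids = rng.binomial(contacts, 0.25)
    wins = rng.binomial(bids, 0.3)
    avg_deal = rng.normal(50_000, 15_000, n_reps).clip(min=0)
    return contacts, bids, wins * avg_deal

contacts, bids, dollars = simulate_period()
_, _, dollars_next = simulate_period()  # next period's bookings, same reps

for name, measure in [("dollars booked", dollars),
                      ("bids entered", bids),
                      ("contacts made", contacts)]:
    r = np.corrcoef(measure, dollars_next)[0, 1]
    print(f"{name:15s} -> next-period dollars: r = {r:.2f}")
```

In this simulated world the earlier, higher-volume process measures predict future bookings better than bookings themselves; with real sales data the ordering could differ, which is exactly why you run the analysis.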
I want to end this section with a quote from Shane Battier,
another sports example, another Michael Lewis example.
This was an article written in the New York Times Magazine