Now we've talked about our assessment, we've talked about problem identification, we've talked about project design, and we've talked a bit about how we implement projects with goals, objectives, and indicators. Now we're going to move into how we monitor things, how we evaluate what we've actually achieved, and, really importantly, how we learn from what we've done. So going to our cycle again, we can look at our monitoring activities. We often use the terms monitoring and evaluation together, or M&E, but they are really not the same thing. Monitoring is a continuous measurement of project performance. It has a parallel in human physiology: if you're sitting in a chair, you know that your legs are crossed without ever having to look to see whether the left leg is on top of the right, because you have a system in your body that tells you continuously where the different parts of your body are. We can apply that idea to our projects. We should have a monitoring system in place that tells us at any given time what we're doing, what we've achieved, what we have yet to achieve, where we're having difficulty, where we're spending our money appropriately, and where we're overspending or underspending. This is the continuing monitoring process, and we'll diagram it in some detail in a minute. Evaluation is something different. It is a snapshot, a cross-sectional glimpse of where things are at a particular time. Commonly we will do evaluations at about a year, or earlier if we're in a specific urgent situation. Then we might want to do one at the end of a project, which will tell us what we've actually achieved, what results this activity produced, and what we got for the money, the human resources, the sweat, and the worry that we put into the project. What did we actually achieve? That's the evaluation. Now we have a new area that you may have heard about called implementation science. This looks at what's actually happening when we're implementing a project: where have we made the right decisions, where have we made poor decisions, and how can we use a scientific approach to improve the way we're implementing a particular project? Then there's something the Red Cross and other groups often use called real-time evaluations. In these real-time evaluations, the trigger is a certain size of project. In the Ebola outbreak, the Federation of Red Cross and Red Crescent Societies undertook an evaluation when their expenditures reached SFr 100 million; that was their criterion for saying, now it's time to do an evaluation. I was actually responsible for leading that real-time evaluation. At that point, we looked at the three West African Ebola countries to see how the project was functioning and what had been achieved by that time. So that was a snapshot at the SFr 100 million mark. Monitoring is key to keeping our project on track. The indicators are developed through a logframe. We haven't talked about logframes, but if you're seriously interested in project design, take some time to look up logframes and how they help us build our monitoring process. With the indicators we've developed, we're going to see how well we're doing: are we achieving things on track?
Often, the monitoring gets shortchanged, and as a consequence the project is really blind, not really knowing what it's achieving. Perhaps it hasn't had the number of people necessary to produce the information. But the purpose of our monitoring is to create information so we can understand where we need to adjust our project as we continue along. What's being done on time? What's lagging behind? What needs are being fully met? What needs more resources or fewer resources? We have a term we often use called the burn rate. How rapidly are we burning through our money? Are we expending our money in the way we anticipated? Are we achieving the different results we wanted at the various times, so that we end the project period with our financial resources fully expended, not left with a deficit or with an excess of resources? I might say that there's a general dissatisfaction with monitoring, partly because of the way monitoring is traditionally set up: people are measuring the wrong things. They may tell us what's happened, but they don't tell us where we're falling behind or where we're getting into difficulty. They don't provide this information in a timely manner so we can make the changes that are necessary if we're not doing things as well as we'd anticipated. Sometimes monitoring isn't done in a participatory manner, so these are secret bits of data that are not being shared. We're not involving the right people. Maybe we're focusing on the perspective of the donors, and not on the perspective of the beneficiaries, the people who are receiving services. They should participate more. We're constantly looking for new approaches. So if you look at monitoring in another year, or five years from now, there will be some new approaches, because generally we're not satisfied with how well we monitor things and we're looking for something that can be a bit more useful in our project implementation. Now, here's a fairly complex diagram that's going to get more complex in the next few slides. We can talk about monitoring in terms of inputs. Where are the data? What kind of information needs to go into the monitoring system? If we're doing a food program, we may want to do market surveys to be sure that our food is not suppressing prices or suppressing farmers' growing of specific crops. We may want to look at records from the clinics: are the clinics meeting the needs of the population? What diseases are recurring? Do we have programs in place that adequately address them? Community surveys are important: are people satisfied? Do people identify needs that we're not addressing? We have inputs from warehouse reports. If we're moving a lot of commodities, are there leaks in our supply chain? Are things being stolen? Are they becoming outdated? Are they being destroyed while they're being stored? Then of course we want to look at our financial expenditures. How much are we burning? How much are we actually using in the implementation of our activities? From these inputs, we'll have information that feeds into processes: monthly financial statements.
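To make the burn-rate idea concrete, here is a minimal sketch, not from the lecture itself, of how a project team might compare actual spending against a simple straight-line plan; the budget figures and the 10 percent tolerance are illustrative assumptions only.

```python
# Minimal burn-rate check: compare cumulative spending to a straight-line plan.
# All figures and the tolerance threshold are illustrative assumptions.

def burn_rate_status(total_budget, monthly_spend, project_months, tolerance=0.10):
    """Return (spent, planned-to-date, status) given monthly expenditure reports."""
    months_elapsed = len(monthly_spend)
    spent = sum(monthly_spend)
    planned = total_budget * months_elapsed / project_months  # straight-line plan
    ratio = spent / planned if planned else 0.0
    if ratio > 1 + tolerance:
        status = "overspending"
    elif ratio < 1 - tolerance:
        status = "underspending"
    else:
        status = "on track"
    return spent, planned, status

# Example: a hypothetical 12-month, 600,000 project after 4 months of reports.
spent, planned, status = burn_rate_status(600_000, [40_000, 55_000, 48_000, 52_000], 12)
print(f"Spent {spent:,} of a planned {planned:,.0f} to date -> {status}")
```

A real monitoring system would of course use the project's own spending plan rather than a straight line, but the principle is the same: the monthly figures should tell you quickly whether you are likely to end the project over- or under-expended.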
If you're a manager, the monthly financial statement is as important as the doctor knowing what your heart rate is, because you need to know where we're in trouble with finances, where we're not, and what's really going well. If we're in an insecure environment, we want to know about our security reports. Are there some areas where we're putting staff, or even beneficiaries, in danger? We will want service statistics: who's doing what, and how much is being done? How many immunizations are being given? How many deliveries are carried out? How many antenatal visits? Then for our commodities, food and non-food items, we want to know how much is actually being distributed. With this information, we can predict what our needs are going to be in the future. Then we want to look at the quality of care; this is something new that we have put into place in the last few years. In the past, we just wanted to be sure that everybody had access to care. Now we want to know how good this care is and whether we should be paying attention to the way it's being provided. So those are inputs and processes. Now let's move along and talk about the outcomes. What do we do with all this? Hopefully, not just hopefully but as part of the plan, we should improve our decision-making based on the data and information we've collected through our monitoring program. We can assess our financial status, and we can identify emerging threats. These might be emerging threats from diseases, or they may be emerging threats from the environment we're working in. We want to see how well we're doing toward achieving our objectives, and then we want to know the status of the beneficiaries. Has their nutritional status improved? Are there changes in their fertility rates that we should be aware of? Are there a lot of movements of populations in and out? These are things we need to know from our monitoring system. Then we look at evaluation. Evaluations can be done in a couple of ways, and we have various types. There's a formative evaluation. This is often done at the beginning of an activity, and it tells us what's actually going on: what's under the surface, what needs more study, so that we can address specific issues as they emerge, because we have the background information we require. A summative evaluation is often done at the end, and it tells us what the project actually achieved; this is the results section. What's been achieved for the resources expended, and should the project be closed at this point because it has achieved about all it's going to achieve, or should it change direction or be expanded in some way? We find that out in the summative evaluation. Then we have process evaluations. These are really critical when we're actually doing things, because we want to know: are we doing things in the right way? Are there unnecessary steps, or are there weak steps? Are the inputs appropriate? Are things done in the way we originally designed them? So process evaluations are important. Outcome evaluations look at our achievement: what was intended and what did we actually achieve? We can also look at the financial component of this. For a case of tuberculosis, how much did it take to treat that case? If we're looking at deaths prevented, how much did it cost to prevent a death from malaria, for instance? Then the impact is the long-term, sustainable results that have been achieved.
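The cost-per-outcome questions above are simple arithmetic once the monitoring data are in hand. Here is a minimal sketch of that calculation; the TB and malaria figures are invented for illustration and are not real programme data.

```python
# Cost per outcome: total cost attributed to an activity divided by the number
# of outcomes it produced. All figures below are hypothetical.

def cost_per_outcome(total_cost, outcomes_achieved):
    """Cost of producing one unit of a given outcome (e.g. one TB case treated)."""
    if outcomes_achieved == 0:
        raise ValueError("No outcomes achieved; cost per outcome is undefined.")
    return total_cost / outcomes_achieved

# Hypothetical figures: a TB component spending 150,000 to treat 500 cases,
# and a malaria component spending 200,000 estimated to avert 80 deaths.
print(f"Cost per TB case treated:       {cost_per_outcome(150_000, 500):,.2f}")
print(f"Cost per malaria death averted: {cost_per_outcome(200_000, 80):,.2f}")
```

The hard part in practice is not the division but deciding which costs to attribute to which outcomes and how reliably the outcomes themselves were counted.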
So there are multiple ways to look at our evaluation, and they sit at various points on the planning cycle. Formative evaluations we may do at the beginning, when we want to understand the issues better. Process evaluations we do when our activities are in place: are we doing things in the right way? Are there particular steps that are not functioning well? Outcome evaluations look at what we have achieved. Impact evaluations look at the long term. Summative evaluations look, at the end of the day, at what we really achieved, where we are, and how we want this project to be remembered. Now, in order to do a good evaluation, you have to depend on good objectives and the right indicators. This goes back to when we talked about objectives and how important it was to get them right. Otherwise, when we come to the evaluation, it's going to be really hard to evaluate much, because we don't know what benchmarks, criteria, or standards we designed the program against. The initial assessment data are there to help us make comparisons. If we start off with a certain level of malnutrition, at the end we can ask, with the recently measured levels of malnutrition, have we really achieved what we should have for the resources we put in? How often do we want to do an evaluation? It depends on the length of the project. If it's a long project, we commonly want to do an early evaluation to see how things are going, and this is more likely to be a process evaluation. The nature of evaluations, and when they're done, also depends on what you're actually undertaking, because in some areas you may not be able to see much change for a year or so, while in other areas you may see change fairly quickly. Then at the end we have to think about reassessment. What has the project achieved? This is why we need that good baseline assessment: we need to know, what is the malnutrition rate now? What's the immunization rate? What's the number of family planning acceptors compared with what it was in the beginning? It may be that the population has changed. So we're reassessing things because in another phase, if there is another phase, we may have to redesign things, since people are different and their needs are different now compared to what they were. Security situations change as well. I've been working for many years in Afghanistan, and what we can do in Afghanistan now is not at all what we were doing 10 years ago. What are the consequences? There may be unintended consequences; in fact, there are always unintended consequences. Some of these unintended consequences are harmful, so we need to know that, and we need to recognize where we may have contributed to bad outcomes. But there are some consequences that are really good, ones we never envisioned in the beginning; these are accidental achievements. These are things we may want to incorporate into our project design going forward, if we understand what's necessary to produce these bonus consequences of our project. So if there is another phase, do we want to do more of the same? Do we want to change the focus? Has the context changed? Can we use things more effectively? Often, the second time we undertake something, we can do things more effectively because we've learned how to be more efficient in the process.
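The baseline-versus-reassessment comparison described above can be laid out very simply once the indicators exist. Here is a minimal sketch of that comparison; the indicator names, rates, and targets are illustrative assumptions, not figures from any real project.

```python
# Compare baseline and reassessment values against project targets.
# Indicator names, values, and targets are illustrative only.

indicators = [
    # (name, baseline, reassessment, target, lower_is_better)
    ("global acute malnutrition (%)", 14.0, 9.5, 10.0, True),
    ("measles immunization coverage (%)", 62.0, 83.0, 80.0, False),
]

for name, baseline, endline, target, lower_is_better in indicators:
    met = endline <= target if lower_is_better else endline >= target
    print(f"{name}: baseline {baseline}, reassessment {endline}, target {target} "
          f"-> {'target met' if met else 'target not met'}")
```

The point is not the code but the discipline: without the baseline value, the reassessment number on its own tells you very little about what the project achieved.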
Sometimes it's time to say, all right, it's time to close out: we hand over any residual activities, we close out the phase completely, and we say this was a good learning experience, but we've now completed our work in Mozambique or wherever we were working. Now let's talk about learning. Learning should flow from the evaluation. This is something that, generally, we haven't done very well, so this is a place to admit where our failures have been. When we carry out an evaluation, sometimes it just sits on the shelf and that's the end of it. There should be organizational learning. How does an organization seriously internalize what it has learned? Does it put that into changes in its protocols, its policies, its training programs, and so forth? Then there's donor learning as well. How have donors learned to use their money more effectively? This is something I think the Gates Foundation has really contributed a lot to in health programming, in the area of donor learning and how to make programs more effective, and this has had a good knock-on effect on other donors and other programs, showing how to achieve the best results in an effective manner. There should also be learning by the stakeholders. Who are the stakeholders? They may be other organizations, certainly governments, and maybe donors. Now we're also looking at another area called the community of practice. This means the practitioners in this particular part of dealing with humanitarian crises: how do they learn from the experiences of other organizations? We're getting better in the area of communication, so some of this learning can be communicated more effectively within this community of practice, because many practitioners providing humanitarian assistance move from agency to agency, and they can carry some of this information across from one agency to another. Then there's learning by the community. How has the capacity of the community been built? How has participation been increased? Then there's the area of empowerment: if people are displaced, when they go home are they better able to carry out activities in their community? Do they have a better understanding of reproductive health needs? A better understanding of how to construct shelter? There was one instance where a population was displaced for a long period of time and Doctors Without Borders started a nursing school. When people went home, many of them had been trained as nurses, and because they had used the curriculum of the country from which they had come, they met the basic requirements for licensing and practice in that country. Finally, all projects come to an end. So how do we close out a project? Closing out is not easy, especially if it's a long-established project that's been there for many years. Here's a photo from the close-out of a project we did in Afghanistan; we had been running that project for about nine years, so closing it out was a difficult task. But if we think about this in advance, we can close it out in a phased and consistent manner, and there won't be as many difficulties as there would be if we hadn't closed things out appropriately or planned for it. Now, to close out a project, the process has two parts. One part is the contractual agreements with the donor: if you promised to do things, have you really done all those things?
If you promised to dispose of your excess or used equipment at the end of the project, have you done that as specified? Then there are the administrative parts, the various project requirements related to operations. Have those been done? Have the bank accounts been closed? Has everybody been paid? And so forth. On the contractual side of things, have the deliverables all been delivered? If you promised to deliver a certain immunization coverage rate, have you achieved that? That's why we do end-of-project surveys. But we should also remember to celebrate our successes. This is not just ending up with a dirty office that needs to have the junk cleared out; we've achieved a lot during this period, and we need to celebrate it and make other people aware of what we've achieved. Were the services that we promised actually set up? Do we have the monitoring data to support that? If we've created a lot of data, who are the inheritors of these data? Is it other NGOs that are continuing the project? Is it the government? Is it the UN that holds these data sets? Can they be accessed as required? There's a technical report, and this has to be written and accepted by the donor, and sometimes the government as well, at the end of a project. Then there are administrative obligations: contractual and financial commitments, staff pay and bonuses; in some countries you have to give a two- or three-month bonus at the end of employment. Have these all been done? Have the records been transferred? Because a lot of records are generated, and sometimes an organization will get a letter saying, "This person worked for you five years ago and now we want to employ them; when they were working for you in Northern Iraq, what kind of employee were they?" So where do we find those data, and how do we store them so they can be found easily? Then there are government obligations: tax obligations, registration obligations. Do you want to close out your registration? This needs to be done as well. Then there are assets that have to be disposed of, and usually there are donor requirements for these assets. Have you followed those and handled the assets in an appropriate manner? If you brought a vehicle into the country without paying duty on it, and now the project is over, then if you sell it somebody has to pay the duty that wasn't paid in the beginning. If you haven't thought about that, it may make the closing out a bit more difficult. There should also be some provision for unforeseen contingencies or problems. Maybe it turns out that you never paid the last two months of the electricity bill, or maybe a record for servicing your vehicle got lost and now the garage has come to you and wants to be paid. You need to be sure that there are adequate resources available to settle all these accounts, and that there's somebody left in the country who can speak on behalf of the project or the donor, so they can tell people what's been done and what final obligations need to be settled. The records of a program need to be preserved. Increasingly we're thinking about how to preserve these, whether it's in the cloud or on paper at headquarters. It's important to have something to go back to, to sort out problems and questions in the future.
So, in summary, project design and implementation in emergencies is quite similar to what we might do for other health projects, with some differences. The time available for design is really compressed; you don't have the nine months you might have with some other type of project to design the detailed implementation plan, you have to do it really quickly. We're also a bit different because objectives may shift during the life of the project. If an objective is no longer relevant or has been overtaken by reality, then we have to find some way to adjust it. The duration of a project is generally shorter than for other types of projects, particularly developmental projects, which may go on for five years and may have a follow-up phase. If the project is extended past its first cycle, the budget is likely to be smaller. So if you've been doing an activity for the first six or twelve months, and the donor really likes what you've achieved and wants to fund you to continue, beware: the donor may put less money into it the second time around. They expect that you will have figured out ways to become more efficient in providing these services. The final point is that standard management methods are built around the planning cycle, and these standard or usual approaches may have to be adapted to the changing environment you're working in. The circumstances, the environment, the workforce may be different from where you worked before, so the approaches required may be different than they were before, or may have to be adapted or changed from the standard ways your organization has been doing things in other countries.