Hello. What we want to discuss, as somewhat of a wrap-up of Course 1 before we go on to discuss various configurations and platforms for real-time Linux in the fourth module, is rate monotonic theory: its advantages and its pitfalls. I call them pitfalls rather than disadvantages because many of the disadvantages have been overcome with workarounds or extensions to the theory. These are things to watch out for, along with the solutions that have been developed for problems discovered since the 1973 paper by Liu and Layland. We'll also talk a little bit about what we're going to cover in more depth, with respect to real-time theory and rate monotonic analysis, in Course 2 of this series: things that we really couldn't complete in one course.

The goal of this first course is to get you to the point of becoming a beginning real-time practitioner. If you take all the courses in the series, then you should really be more of an advanced real-time practitioner. The second course completes the rate monotonic state of practice, the necessary knowledge; this first course, however, should get you to that beginning practitioner level.

One of the questions you might have is: what do I do when I get stuck? Hopefully by this point, rate monotonic policy, rate monotonic analysis, and rate monotonic theory make sense to you at a basic, beginner level. But when I do get stuck, where do I turn for help? The good news is that because rate monotonic theory has been around for over 50 years, there are quite a few organizations that can provide help. Here are some of my favorites. IEEE actually has three different conferences on the topic: the Real-Time Systems Symposium, the Real-Time and Embedded Technology and Applications Symposium, and the Real-Time Computing Systems and Applications conference. These are three conferences where current research is presented: extensions to rate monotonic theory, the application of the theory to real-time embedded systems, alternative theories to rate monotonic, and so forth. There is also an IEEE technical committee on real-time systems, which helps bridge all that new research into practice and use by industry. These are all things you can get involved with. Rate monotonic theory has been around for over 50 years, and a lot of its founders are still active, to some degree at least, in these conferences and on these technical committees. It's well established, but research remains active.

For software on microprocessor platforms, and the various platforms you might use in addition to a proprietary RTOS or a cyclic executive on bare metal (which we'll discuss as we wrap up this course in the fourth module), you have quite a bit of information on using Linux. That's the approach we've decided to take in this course, based mostly on low cost, wide availability, and ease of use with the Raspberry Pi. But it is actually a viable approach for real-time systems, especially soft real-time. It's debatable whether it's appropriate for hard real-time, but there are some hard real-time systems that use it, mostly larger-scale systems like telecom networks and systems on ships and marine vessels; we'll talk about that in the series if you stick with us for all the courses. Linux is used, and its use in real-time systems is increasing. There's a Real-Time Linux organization that's part of the Linux Foundation, which is a reliable resource for anything related to Linux.
There's the Zephyr project, a free, open-source real-time operating system similar in nature to FreeRTOS or RTEMS: a smaller nanokernel/microkernel that is not derived from Linux and has been developed new from the ground up, but is designed to be developed for using a Linux cross-compile, cross-debug host environment. There are also two distributions of Linux better suited to telecom or transportation uses, namely Carrier Grade Linux and Automotive Grade Linux.

With Linux, what you have to do is simplify, configure, strip down, and patch it until you get it to the point where it provides predictable response. We use vanilla Linux, with all the features you can find in the standard distribution for the Raspberry Pi, for expediency and logistics. I should mention there is also eCos, which I'll talk a little bit about in module 4; there has been an announcement that there will be an eCos real-time distribution for the Raspberry Pi in the future. I'm crossing my fingers: it's not available yet, but hopefully it will be for this course in the future. That aside, we talked about how you can patch and configure Linux.

What you learn here is how to do everything you need to do for rate monotonic service implementation in user space. We cover, in the third course, some Linux kernel methods that you need to know. Another option is to build services in kernel space. However, I would caution you that if you spend all your development time in kernel space, you might want to consider just using an RTOS, since developing for an RTOS is essentially like working only in kernel space. You always have these options: cyclic executive, RTOS, Linux kernel space, Linux user space, and no software at all. There is always another option, which we also talk about as we wrap up this course.
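To make the user-space option concrete, here is a minimal sketch of setting up a single rate monotonic service thread with the POSIX real-time extensions, assuming a Linux target like the Raspberry Pi; the priority value and the empty service body are illustrative placeholders, not code from the course exercises.

```c
/* Minimal sketch: one user-space RM service thread under SCHED_FIFO.
 * Needs root (or CAP_SYS_NICE) for the real-time policy to be granted. */
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

static void *service(void *arg)
{
    (void)arg;
    /* A real service would block on a timer or semaphore each period
     * and do its work here. */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    struct sched_param param;
    pthread_t thread;
    int rc;

    pthread_attr_init(&attr);
    /* Use our explicit policy rather than inheriting the parent's. */
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    /* SCHED_FIFO gives fixed preemptive priorities - what RM policy needs. */
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);
    /* RM policy orders priority by request rate; illustratively, one
     * below the maximum for the highest-frequency service. */
    param.sched_priority = sched_get_priority_max(SCHED_FIFO) - 1;
    pthread_attr_setschedparam(&attr, &param);

    rc = pthread_create(&thread, &attr, service, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));
    else
        pthread_join(thread, NULL);
    return 0;
}
```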
Along with different open-source software, you also have standards like ARINC 653 and RTCA, both of which are standards for the use of rate monotonic analysis and theory in avionics and flight control systems, for things like civil aviation: things we trust our lives to on a daily basis. The bottom line is that rate monotonic is the state of practice. There are plenty of resources: technical societies and committees; industry support like Carrier Grade and Automotive Grade Linux; standards like ARINC and RTCA. You have the Linux Foundation, and you have the Software Engineering Institute, which supported the publication of A Practitioner's Handbook for Real-Time Analysis: Guide to Rate Monotonic Analysis for Real-Time Systems. This is an excellent reference. It's a little pricey to get new (I believe it's out of print; I've seen it cost as much as $800, and people are willing to pay for it when they need it), but you can usually find a used edition, although I've seen pretty high prices there at times too. It gives you methods and workarounds, and helps you use rate monotonic analysis and theory given all the nuances of your application domain. There are many publications and textbooks, including my textbook, and online education and training like you're taking now. You've got a lot to leverage with rate monotonic.

Let's now talk about some of the disadvantages and even potential pitfalls of using this theory. Many of those come from the original assumptions and constraints listed in Liu and Layland's 1973 paper. Derivation of the rate monotonic least upper bound (LUB) was a great step forward, but it is inexact: passing the rate monotonic LUB is sufficient but not necessary for feasibility. We'll talk briefly more about that in this current segment. This has been solved: subsequently, Lehoczky, Sha, and Ding came out with a paper with methods of worst-case analysis, which are used in Cheddar. You've already used Cheddar, so you can do worst-case analysis with that tool; in Course 2, we'll describe the algorithms used to implement it. You've also done worst-case analysis by hand, and you'll get more practice if you stick with this through the whole series.

Next, the T = D limitation. This was just to simplify the model in the original Liu and Layland paper, and it's solved, to a degree, by deadline monotonic theory. Deadline monotonic is now considered part of rate monotonic, just a special case, by the rate monotonic practitioner's guide. So we can have deadlines that go beyond the period, which means we can have multiple requests for the same service active at the same time. We can also have deadlines less than the period, which means we have some extra slack time before we get a second request for our service, which can be useful. So that's been solved, and we'll talk about it in the second half of this set of notes.

Rate monotonic can fail in cases where dynamic priority succeeds; we'll see that, and end on it, in this particular segment. But there's a trade-off: rate monotonic has simpler failure modes and a simpler scheduler implementation compared to dynamic priorities, which we'll dig into in much more depth in Course 2.

The theory does not include secondary, tertiary, etc., resource issues. The most famous is the use of shared memory along with the processor. This particular problem with the secondary resource of shared memory has mostly been solved by priority inheritance or priority ceiling protocols, covered in detail in Course 2.

It does not encode the importance of services. This is solved fairly easily with something called period transform. Rate monotonic policy says the highest frequency gets the highest priority. If there is an overrun, everything at a lower frequency, plus the overrunning service, will start missing deadlines, and everything at a higher frequency will continue to make its deadlines. So it's a very deterministic failure mode, and that's nice. The question is: are the highest-frequency services also the most important to the stability, health, or safety of the system? The answer might be no. What do you do then? You can essentially pretend that a service has a higher request frequency than it actually has. That's called period transform, and it elevates the service's priority artificially. Does this break the theory? No, it just creates slack time. It's more of a workaround than a solution, as the sketch below shows.
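To make the period transform arithmetic concrete, here is a small sketch with hypothetical numbers; the period T, execution time C, and transform factor k are assumptions chosen only for illustration.

```c
/* Sketch of period transform: a service with true period T and execution
 * time C is scheduled as if it had period T/k with a budget of C/k per
 * sub-period. RM priority rises; total utilization does not change. */
#include <stdio.h>

int main(void)
{
    double T = 100.0;  /* true request period in ms (hypothetical) */
    double C = 20.0;   /* worst-case execution time in ms (hypothetical) */
    int    k = 2;      /* transform factor */

    double T_x = T / k;  /* now competes for priority at a 50 ms rate... */
    double C_x = C / k;  /* ...but runs only 10 ms in each sub-period */

    printf("U before = %.2f, U after = %.2f\n", C / T, C_x / T_x);
    /* Both print 0.20: only the priority ordering shifts. */
    return 0;
}
```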
Rate monotonic is best supported by a simple cyclic executive or an RTOS rather than a general-purpose OS. This is still true today, but becoming less true. The solution is extensions: the POSIX real-time extensions to more general OSes like Linux, FreeBSD (elements of which are found in macOS), Oracle Solaris, and so forth. Many more general operating systems now support the POSIX real-time extensions. That, along with proper configuration, tailoring, and simplification of your distribution, can lead to much more deterministic behavior; if not fully deterministic, then at least predictable response suitable for soft real-time, if not hard real-time.

Some people might say there's a lack of broad understanding of rate monotonic. Well, you have courses like this, many more textbooks today than there were 50 years ago, and the practitioner's guide. You also have Course 2 in this series, which provides a full derivation of the rate monotonic least upper bound and much more practice applying rate monotonic theory. So there are plenty of resources.

Worst-case execution time variability. This is something that's come about as microprocessors have advanced, using hardware architectural methods to improve average-case performance at the cost of more extreme worst-case and best-case scenarios: more variability. What's the solution? ARM, in particular, now provides the R-Series microprocessors, designed for real-time platforms and uses, compared to the ARM A-Series and M-Series. The A-Series is more suitable for things like tablets, smartphones, and general-purpose embedded systems, whereas the R-Series has specific features for real-time, which we'll discuss if you stick with us through the whole series: for example, replacing multi-level tiered cache with tightly coupled, single-cycle-access on-chip memory, among other features. Through experience and time, practitioners now know which microprocessor features are desirable and which ones actually cause more problems even though they increase average performance, and the R-Series addresses that. An example of an evaluation board and system where you can find the R-Series is the TI Hercules board, which runs TI-RTOS and is targeted toward mission-critical systems with hard real-time requirements.

Accuracy, precision, and deterministic timing limitations are a general issue with software implementations of services. We've discussed that there are many advantages to software implementations: flexibility, field upgradability, and the wider range of engineers who can work on the systems, software engineers as well as hardware engineers. But the solution has been around for a long time: if software services aren't going to meet accuracy, precision, or deterministic timing requirements and constraints, you can use a co-processor, an FPGA, an ASIC, or a GP-GPU. MPEG encoders and decoders, for example, are available as ASICs, or as reusable cells for system-on-chip designs, so you can encode or decode MPEG with a hardware state machine. That's one of the reasons MPEG can work in real time even on general-purpose computers: the service is provided by a hardware state machine rather than a software service.

Does not scale well. Real-time really got its start with AMP, asymmetric multiprocessing. AMP is still possible with multi-core SoCs as well as multiple chips on a board. That's how real-time systems were scaled in the early days: multiple instances of cyclic executives that passed messages to synchronize and share data. But as you've seen in this course, we can emulate AMP with thread affinity and run on an SoC. We get our cake and eat it too: we get SMP features, but by pinning services to cores and essentially disabling load balancing, we keep the predictability needed to use rate monotonic analysis.
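Here is a minimal sketch of that AMP emulation, using the Linux-specific pthread_setaffinity_np() call; the choice of core 3 is an illustrative assumption (for example, a core set aside for real-time services on a four-core Raspberry Pi).

```c
/* Sketch of AMP emulation on an SMP SoC: pin a thread to one core so the
 * load balancer cannot migrate it, preserving the single-CPU assumptions
 * that rate monotonic analysis relies on. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static int pin_to_core(pthread_t thread, int core)
{
    cpu_set_t cpuset;

    CPU_ZERO(&cpuset);
    CPU_SET(core, &cpuset);  /* restrict the thread to a single core */
    return pthread_setaffinity_np(thread, sizeof(cpu_set_t), &cpuset);
}

int main(void)
{
    /* Pin the calling thread to core 3 (an assumed reserved core). */
    if (pin_to_core(pthread_self(), 3) != 0)
        fprintf(stderr, "pthread_setaffinity_np failed\n");
    else
        printf("pinned to core 3\n");
    return 0;
}
```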
There is SMP and VM research going on in both industry and academia on how to provide predictable or deterministic response in real-time systems with things like load balancing and virtual machine interfaces. I wouldn't jump into that as a practitioner at this point; I would stick with more well-proven methods like AMP, either on discrete single-core processors or on an SoC using the AMP emulation we've learned in this course. But I think the future is bright for SMP and VMs, given the effort being put into them. So while there are a lot of potential pitfalls, they've largely been overcome or have workarounds. The reality is that rate monotonic is what's used; it is the state of practice.

Let's address another issue: the rate monotonic LUB is not exact, it's only sufficient. What does that mean? The rate monotonic LUB is sufficient because if a scenario (a service set you want to schedule) passes, it is in fact feasible; we can say that definitively. But if the scenario fails the LUB, it might still be feasible, so the test is not exact. Rate monotonic worst-case analysis, theory that followed about a decade later, is exact, and we know how to do it; we have tools that can do it.

What is the LUB good for? It's a simple back-of-the-envelope calculation that's order N: if you have 100 services, you need 100 rows in a spreadsheet to figure out whether the set will work. You sum up the 100 C_i/T_i utilization terms and compare the total to m(2^(1/m) - 1), and that's all you need to do. So it's great for early modeling and analysis. Deadline monotonic came along (we'll talk about it in the second part of this segment) with a better test that's order N squared, but still only sufficient; it's interesting from a historical and conceptual perspective, but really is a special case of rate monotonic. The necessary and sufficient exact tests presented by Lehoczky, Sha, and Ding (worst-case analysis, in other words) are order N cubed, which is still pretty reasonable. If I have 100 services, 100 cubed is on the order of a million operations, and 100 services is quite a few. One of the more well-known real-time systems I worked on was the Spitzer Space Telescope, and we had 23 services; 23 cubed is very reasonable. You can definitely run this, maybe not on a calculator, but easily on any general-purpose computer; simple tools like Cheddar run it, no problem. It's not overly complex; it's polynomial bounded. There are two algorithms for this, the scheduling point test and the completion test, and both automate the worst-case analysis you now know how to do by hand. We cover them in detail in Course 2, but you already have a good jump on understanding them from the practice in Course 1.

Remember, the LUB is sufficient and therefore will never incorrectly pass an infeasible service set; there's a little safety margin built in. The flip side is that some service sets that are feasible under rate monotonic policy (harmonic sets can even reach 100 percent utilization) still fail the LUB, which we've also seen looking at timing diagrams using worst-case analysis. So it will fail some feasible sets, but it errs on the side of safety, which is not a bad thing in engineering. A necessary and sufficient test is exact, and we have one.
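Here is a minimal sketch of that back-of-the-envelope LUB check; the three-service set in main() is hypothetical, chosen only to show the calculation. Remember the result is one-sided: passing guarantees feasibility, while failing is inconclusive and calls for worst-case analysis.

```c
/* Order-N RM LUB test: sum C_i/T_i and compare to m*(2^(1/m) - 1).
 * Compile with the math library, e.g.: gcc lub.c -lm */
#include <math.h>
#include <stdio.h>

static int rm_lub_passes(const double *C, const double *T, int m)
{
    double U = 0.0;
    double bound = m * (pow(2.0, 1.0 / m) - 1.0);

    for (int i = 0; i < m; i++)
        U += C[i] / T[i];  /* total requested utilization */

    printf("U = %.3f, LUB = %.3f\n", U, bound);
    return U <= bound;
}

int main(void)
{
    double C[] = {1.0, 1.0, 2.0};   /* worst-case execution times */
    double T[] = {3.0, 5.0, 15.0};  /* request periods */

    puts(rm_lub_passes(C, T, 3) ? "feasible by LUB"
                                : "inconclusive - run an exact test");
    return 0;
}
```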
So I think we've explained that long enough; if you've forgotten what necessary and sufficient mean, there's a little primer from Wikipedia on the formal, logical, and mathematical definitions. It just means we have an exact test.

Another observation, the last point we'll make in this segment, is that rate monotonic can handle only a subset of the scenarios that the dynamic priority policies, EDF and LLF, can. That makes sense: they're more complex and more adaptable, so they can schedule scenarios that fixed-priority rate monotonic cannot. However, this Venn diagram isn't drawn well to scale: rate monotonic covers a large portion of the scenarios that EDF and LLF also cover, and you'll often find the result is the same whether you use dynamic or fixed priorities. But there are of course some scenarios that EDF can schedule that RM can't, and that LLF can schedule that EDF can't.

Why is that? EDF is what Liu and Layland called deadline-driven scheduling using dynamic priorities. Dynamic priorities are adaptive: they encode what's called urgency. The urgency encoded by EDF is the time-to-deadline. It basically says that as you get closer to your deadline, your deadline is more urgent, so your priority should go up. What does that mean in practice? It means you've got to adjust priorities every time there's a change to the ready queue or a service completes, any time there's a change in the scheduler's overall state. That's costly and requires near-constant analysis: the time-to-deadline of each service changes as services enter and exit the ready queue or complete execution. LLF is the same sort of idea, but more complex: its urgency is time-to-deadline minus time-to-complete, which is why it's called least laxity first. How worried should I be? How urgent is my priority, in terms of how long until my deadline as well as how much work I have left to do? It's very much like human reasoning about schedules, and we'll cover it in depth in Course 2. The problems with dynamic priority are that it's harder to debug, it's harder to implement, and its failure mode is less obvious than rate monotonic's. Remember, with rate monotonic, whoever is overrunning fails, all services at lower fixed priority also start to fail, and everything at higher priority, higher frequency, continues to succeed: a simple failure mode. And in harmonic cases, you get great feasibility out of rate monotonic.

So that's another observation. Rate monotonic is most common for hard real-time, mission-critical systems, while EDF is most commonly used as an alternative for soft real-time, scalable systems, for example digital media. Generally, EDF is well accepted for soft real-time. Some researchers, like Giorgio Buttazzo, take the position that we should move toward EDF and away from rate monotonic, but this isn't universally accepted by practitioners; it's more of a research position. So at this point, my advice is to stick with rate monotonic. Knowing the dynamic priority policies, which we cover in Course 2, is quite useful, though, if your applications are more like Netflix or Hulu, where nobody dies and no property is lost if a deadline is missed; rather, there's a loss of quality of service that is tolerable as long as it doesn't happen too often, doesn't lead to some sort of cascading failure, doesn't persist too long, and you can recover from it. EDF then allows you to scale and get more efficient utilization of your processors.
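Before we look at an example, here is a minimal sketch of the two urgency measures just described; the structure fields and sample numbers are illustrative assumptions, and in a real scheduler these values would be recomputed on every ready-queue change.

```c
/* Sketch of dynamic-priority urgency: EDF uses time-to-deadline, LLF uses
 * laxity, i.e. time-to-deadline minus remaining computation time. */
#include <stdio.h>

struct service {
    double time_to_deadline;  /* absolute deadline minus current time */
    double time_to_complete;  /* remaining worst-case execution time  */
};

/* EDF: smaller time-to-deadline means higher priority. */
static double edf_urgency(const struct service *s)
{
    return s->time_to_deadline;
}

/* LLF: smaller laxity means less room to defer, so higher priority. */
static double llf_laxity(const struct service *s)
{
    return s->time_to_deadline - s->time_to_complete;
}

int main(void)
{
    struct service s1 = {10.0, 2.0};  /* laxity 8 */
    struct service s2 = { 8.0, 7.0};  /* laxity 1 */

    /* Here s2 has both the earlier deadline and the smaller laxity, so
     * both policies would run s2 first; the policies can disagree when
     * deadlines and remaining work pull in opposite directions. */
    printf("EDF urgency: s1=%.1f s2=%.1f\n", edf_urgency(&s1), edf_urgency(&s2));
    printf("LLF laxity : s1=%.1f s2=%.1f\n", llf_laxity(&s1), llf_laxity(&s2));
    return 0;
}
```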
So let's see that, and then we'll wrap this up. EDF and LLF can have priority ties, which is interesting. They can succeed where rate monotonic fails, but they will certainly also succeed where rate monotonic succeeds. In scheduling example number nine, which you can find in the resources, you can see that they come up with different schedules than rate monotonic, and potentially different from each other; in fact, there are even two scenarios for least laxity, arising when we have a tie on the urgency values. We have to compute these urgency values every time we advance one unit of time (remember, all of these timing diagrams are done in common units and whole numbers), so we essentially have to track these changes as we make progress and services enter and exit the ready queue. We get different schedules, but in many cases there is overlap where EDF works, LLF works, and rate monotonic works. We have to hunt hard to find cases where RM fails and EDF succeeds, and hunt even harder to find cases where rate monotonic and EDF fail but LLF is the only one that succeeds.

So, in summary: rate monotonic is most often used for hard real-time, and EDF for soft real-time, because EDF is easier to implement than LLF. The methods for dynamic priority computation and scheduling analysis will be covered in Course 2. This is just a little bit of a teaser for why you might want to take Course 2 and advance your knowledge further, from beginning practitioner toward intermediate; if you stick through the whole series, I would consider you an advanced practitioner in real-time embedded systems. On that note, we'll end here, and I'll follow with a second discussion that goes into a little more detail on issues and workarounds related to rate monotonic theory. Thank you very much.