[MUSIC] In this lesson we will start to look at fully dynamic schedulers. Dynamic schedulers in real-time systems can adapt to the CPU needs of tasks during run time. For example, when a task comes closer to its deadline, its priority can dynamically increase so that the task does not miss the deadline. The upside of having this kind of scheduler is that it is possible to pack more tasks into a tighter timeline compared to, for example, fixed-priority schedulers such as the rate-monotonic one. Priority-driven scheduling algorithms differ from each other mostly in how priorities are assigned to the tasks, and here we want to look at algorithms which automatically update the priorities of the jobs during run time. Priorities of dynamic schedulers are reevaluated at each decision point, or scheduling point. Such a point can be a timer interrupt, the release time of a job, the deadline of a job, and so on. Priorities are then reassigned based on the scheduling algorithm, which looks at certain characteristics of the system, such as the time left until the deadline, the period, the slack time left, or the release time. Many approaches exist, and we will cover some of the most common ones in this course. Increased flexibility and schedulability are the reasons for choosing dynamic schedulers, yet the complexity of the reevaluation is their main drawback. As all tasks must be evaluated at each scheduling point, the overhead of doing so can start to influence the timeline, especially when using more and more complex algorithms. Another downside can be a non-optimal schedule compared to a static schedule. Consider a set of jobs J1 and J2 with deadlines of two and one, respectively. J1 is released at time zero with an execution time of one. Assume the release time of J2 is x, so it is unknown, but we do know that the execution time of J2 is 1 − x. We also consider a dynamic scheduler that does not have the complete task set at hand, and suppose the unknown variable x is always between zero and one.
And let us see how a non-preemptive dynamic scheduler handles this situation. Assume that x is larger than zero. This means that the release time of J2 is larger than zero, and the execution time of J2 is smaller than one although larger than zero. At time zero, only J1 has been released, so J1 will be scheduled. At time x, which is somewhere between zero and one, J2 is released, but it cannot execute because J1 is executing and the scheduler is non-preemptive. J1 finishes at time one, and then J2 starts. Now the deadline of J2 is equal to one, so independent of the value of x, J2 will always miss its deadline. This problem can only be solved by a scheduler capable of postponing the execution of J1 to the latest possible time, allowing nothing to execute for a short while when the system starts. Despite this kind of drawback, dynamic schedulers have a lot of upsides. We will now look at the first dynamic scheduling algorithm, called LST, or Least Slack Time scheduling. This scheduler looks at the slack time of each job and selects the one with the least slack. Since the slack of the jobs can change as time proceeds, the system must be reevaluated at each time step. We assume that we are using a tick scheduling approach where the scheduling points are placed every one millisecond. First we need to define how to calculate the slack of a job. We define the slack of a job i as s_i = d_i − t − e_i′, where e_i′ is the remaining execution time of the job. If the job has not executed at all, then e_i′ is equal to e_i. So consider the following setup: we have jobs J1, J2, and J3 with the parameters according to this table, and we have the following scheduling dependencies as well. The task is then to schedule the system in the time interval zero to eight, and the question is: will all the jobs meet their deadlines?
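The slack formula just defined is straightforward to express in code. Here is a minimal sketch in Python (the function name and signature are my own, not from the lecture):

```python
def slack(deadline, now, remaining_execution):
    """Slack of a job at time `now`: d_i - t - e_i', where e_i' is the
    execution time still left (equal to e_i if the job has not run yet)."""
    return deadline - now - remaining_execution

# For example, a job with deadline 6, at time 2, with one unit of execution left:
print(slack(6, 2, 1))  # 3
```

The scheduler simply recomputes this value for every ready job at each tick and picks the smallest.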
To answer this, we start off by marking all the deadlines of the jobs, so that we have a reference to calculate the slack from. Now we start at time zero, and at this time only J1 has been released, so we have no other choice but to schedule J1. J1 then executes until time two, at which point J3 is also released. At time two we have two jobs to choose between, J1 and J3, so we calculate the slack of both. The slack of J1 is its deadline, which is six, minus the current time, which is two, minus the execution time left of J1, which is one. The slack of J1 is then, of course, three. We calculate the same thing for J3: we have a deadline of seven, current time two, and execution time left of two, because this job has not executed at all yet. In this case we get the same slack for both jobs, so our secondary rule tells us to select the job with the higher index. J3 is therefore the first to be scheduled after J1. At time three we evaluate the slack again for J1 and J3. The procedure is exactly the same as in the previous step, but e′ is now one for J3, because it has executed one millisecond since the last scheduling point. J1 is therefore selected, because it has less slack. We then proceed to time four. Here J1 has just completed, so this job no longer needs scheduling. J2 has not yet been released, so J3 is our only option left, and hence we schedule J3. Finally, at time five, both J1 and J3 have completed, and at this time J2 has been released, so the only logical option left is to schedule J2. J2 then runs until completion at time seven. From this schedule we can see that all the jobs meet their deadlines, and the schedule is therefore feasible. In this lesson we have looked at the first type of fully dynamic scheduler, called the Least Slack Time scheduler. The nature of dynamic schedulers is that priorities continuously change, so that the task needing priority the most will get it.
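The whole walkthrough can be checked end to end with a small tick-driven LST simulation. This is a sketch, not the lecture's own code: the job parameters are read off the walkthrough (J1 released at 0 with execution time 3 and deadline 6, J3 released at 2 with execution time 2 and deadline 7, J2 released at 5 with execution time 2), and J2's deadline is assumed to be 8, the end of the scheduling interval, since the table itself is not shown here.

```python
# Tick-driven Least Slack Time (LST) simulation of the three-job example.
# Parameters (release, execution, deadline) as inferred from the walkthrough;
# J2's deadline of 8 is an assumption (end of the scheduling interval).
jobs = {
    "J1": {"r": 0, "e": 3, "d": 6},
    "J2": {"r": 5, "e": 2, "d": 8},
    "J3": {"r": 2, "e": 2, "d": 7},
}
remaining = {name: p["e"] for name, p in jobs.items()}
finish = {}
schedule = []  # which job runs in each 1 ms tick

for t in range(8):
    ready = [n for n, p in jobs.items() if p["r"] <= t and remaining[n] > 0]
    if not ready:
        schedule.append(None)
        continue
    # Least slack (d - t - e') wins; on a tie, pick the higher index,
    # matching the secondary rule from the walkthrough.
    running = min(ready, key=lambda n: (jobs[n]["d"] - t - remaining[n],
                                        -int(n[1:])))
    schedule.append(running)
    remaining[running] -= 1
    if remaining[running] == 0:
        finish[running] = t + 1

print(schedule)  # ['J1', 'J1', 'J3', 'J1', 'J3', 'J2', 'J2', None]
print(finish)    # {'J1': 4, 'J3': 5, 'J2': 7}
```

Running this reproduces the schedule from the walkthrough, and each finish time is at or before the corresponding deadline, confirming the schedule is feasible.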
The result of this is that the scheduler can provide a flexible system with high utilization, meaning that it can do better than, for example, monotonic schedulers.