So the next concept that I want to introduce you to is the Improvement Kata, a really helpful DevOps principle. After watching this video, you'll have a good introduction for doing your own further research. You'll be able to define the Improvement Kata, discuss its origins, and describe the steps to using it successfully. Let's begin. The Improvement Kata originally comes from the Toyota Production System and is really a process that allows you to continuously improve. The first step in the Improvement Kata is to understand the direction. Essentially, the idea is that you must understand the vision and direction, where you want to go on a project. Then you need to analyze where you are, to understand your current condition. Next, you establish the next target condition, and then you plan, do, check, and adjust toward that target condition. For example, say your vision is zero defects, and your current condition is that every release has 20 defects. You might set a target of ten defects for the next release, because you're probably not going to get to zero right away. Once you pick ten as your target and start iterating toward it, you'll be on your way to your ultimate goal. For example, maybe you add some new automated test scripts or other code coverage that helps you iterate toward that target condition. Then you determine a checkpoint and adjust your parameters as needed, all in an effort to reach that target condition. Let me share another example from my time at Nordstrom. After seeing success with our value stream mapping and application of the Improvement Kata with our customer mobile team, where release frequency went from twice a year to on-demand, our business stakeholders ultimately decided that monthly updates were the right frequency. We then made a decision to set a target for the remaining customer-facing engineering teams.
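The defect example can be sketched as a small loop. This is just an illustration of the Kata's plan-do-check-adjust rhythm; the function name, the lambda "experiments," and the numbers they subtract are all made up for this sketch, not part of any real tool.

```python
# A minimal sketch of the Improvement Kata loop, using the defect
# example: vision of zero defects, current condition of 20 per release,
# next target condition of 10. All names and numbers are illustrative.

def improvement_kata(current, next_target, experiments):
    """Run experiments (Do) and check the condition after each (Check),
    stopping once the next target condition is reached."""
    condition = current
    for experiment in experiments:
        condition = experiment(condition)  # Do: run the experiment
        print(f"Check: {condition} defects (target {next_target})")
        if condition <= next_target:       # Adjust: target met, stop here
            break
    return condition

result = improvement_kata(
    current=20,
    next_target=10,
    experiments=[
        lambda defects: defects - 6,  # e.g. new automated test scripts
        lambda defects: defects - 5,  # e.g. broader code coverage
    ],
)
```

Once this target condition is met, you'd set the next one and repeat, working toward the long-term vision.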
This included our Web Team, our Personalization Team, and our Loyalty Team: reduce their cycle time by 20%, with the vision of getting to on-demand releases. The next step was for each team to grasp their current condition, which meant understanding their current cycle time. Some teams' first countermeasure was to measure the current state of cycle time; to do that, we conducted value stream mapping workshops. The web team had a cycle time of five weeks and set a target of getting to four weeks. Once they had the current and target conditions set, they could then pick experiments to iterate toward that target. One experiment they conducted was to remove an approval process that happened late in the lifecycle, during what we called our hardening phase. The hardening phase was intended to be a two-week period for stabilization, and all testing was supposed to be done by then. However, we often had dev teams submitting exception requests because they hadn't had the opportunity to finish testing. The approval process added no value and was masking the true issue: that we really needed to move quality to the left. Even worse, the approval would often get delayed waiting on a VP, like me, to approve it, and the team would pause all efforts until the approval was received. So in choosing to remove that step, the team ended up saving a whole day in the process. The team's next focus became the hardening exceptions. Since quality work wasn't complete before entering the hardening phase, we'd end up with teams asking for exceptions. What we really wanted was to shift quality left, automate more of our tests, and reduce the number of exceptions entering the hardening phase. This was also expected to reduce rework and complexity in the deployment. After implementing the automation, we shaved another two days off the cycle time.
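The web team's numbers above can be sanity-checked with a quick calculation. The helper function name is made up for illustration; the figures (five weeks, 20%, one day saved, then two more) come from the story above.

```python
# Illustrative arithmetic for the web team's cycle-time target.
# (The function name is hypothetical, not from any real tooling.)

def target_cycle_time(current_weeks, reduction_pct):
    """Cycle-time target after reducing the current cycle by a percentage."""
    return current_weeks * (1 - reduction_pct / 100)

target = target_cycle_time(5, 20)
print(target)  # five weeks minus 20% -> 4.0 weeks

# Days saved so far by the experiments described above:
saved = 1 + 2  # removing the approval step, then the test automation
print(saved)
```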
Then the team focused on the deployment process, automating tasks; combined with the quality improvements, the release went from taking multiple days to one day. Together, these experiments helped them reach the goal of four weeks. Throughout the process, we also had weekly check-ins and set appropriate PDCA cycles, which stands for Plan, Do, Check, Adjust, for each experiment. For example, the automation effort had a longer timeline, so we couldn't check on progress weekly until some of the work was done; we set that check frequency at three weeks. One thing I learned as a senior leader is that people often think they know what problems they need to solve, and they jump to conclusions about how to solve those problems. The Improvement Kata requires leaders to set the direction, and it gives teams the opportunity to thoroughly understand what target they're trying to achieve and to really problem-solve and experiment toward it in a methodical way.
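The per-experiment check cadence described above can be sketched as a tiny schedule, echoing the weekly cadence for quick experiments versus three weeks for the longer automation work. The experiment names, the start date, and the dictionary layout are all illustrative, not from any real system we used.

```python
from datetime import date, timedelta

# Hypothetical PDCA check frequencies per experiment: quick experiments
# get weekly check-ins; the longer automation effort gets three weeks.
check_frequency_weeks = {
    "remove approval step": 1,
    "test automation": 3,
}

start = date(2024, 1, 1)  # arbitrary example start date
next_checks = {
    name: start + timedelta(weeks=weeks)
    for name, weeks in check_frequency_weeks.items()
}
for name, when in next_checks.items():
    print(f"{name}: first check-in on {when.isoformat()}")
```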