Lesson 2. Virtual Machines, Docker, and Kubernetes.

We have talked a lot about virtualization and virtual machines. Virtual machines make much better use of the existing hardware than the scenario discussed previously, the bare metal hosting option. A software layer called the hypervisor manages access to the physical hardware for each of a number of virtual machines, and these virtual machines can support many users. Virtual machines allow many instances of an operating system and its software to run on the same hardware, which here we are calling bare metal. Each virtual instance can therefore use the hardware more efficiently, so the hardware supports not just one user or a small set of users, but a large number of operating systems, instances, and users. Again, this is all managed by a special software product called a hypervisor.

As stated earlier, virtualization and the cloud allow the enterprise to manage hardware in much the same way it manages software. Administrators can template virtual machines, meaning they take a copy of a machine and from that template make other identical copies. Multiple copies of a virtual image can be made, each with its own IP address and host ID, so they can stand up, run independently, and be accessed over the network independently. Independent operating systems need a network, and the cloud provides virtual network adapters that allow the virtual images to communicate just as bare metal machines would communicate over a network. Virtual machines also provide a stronger testing strategy. As we discussed before, we can make copies of a live environment, and application tests, usually acceptance tests and security tests, can be run on a copy of the live environment, which makes for better testing results and higher quality.

Virtual machines are not the only hosting scenario; there are others, such as Docker and Kubernetes. Docker is an open source product that can be downloaded, and Docker documentation is available. Docker uses containers as its hosting and deployment strategy. Docker does not provide a full virtual machine, but it still uses virtualization. Docker is a platform-as-a-service product, and it provides operating system environments called containers. Our application is actually installed in these containers. Docker containers are completely independent from each other, and each container is bundled with its own software, binaries, libraries, and other items. Docker containers cannot run on their own: all Docker containers share a single operating system kernel and are managed by the Docker Engine, so they depend on that single kernel. Docker containers are much simpler than a full virtual machine, and they provide predictability in how the application will be installed, run, and executed, because when the system is fully brought together the containers create a common, predictable runtime environment for the application.

In the diagram, the application is installed in these containers. All of the containers run on the Docker Engine, which runs on the host operating system, and the host operating system runs on the hardware of the Docker host.
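As a concrete aside before we compare containers with virtual machines, here is a minimal sketch of working with containers through the Docker SDK for Python (the docker package). It assumes a local Docker Engine is running and that the SDK is installed with pip install docker; the image and container names are illustrative choices, not something defined in the lesson.

```python
# Minimal sketch, assuming a running local Docker Engine and the
# Docker SDK for Python ("pip install docker"). Image and names are examples.
import docker

client = docker.from_env()  # connect to the local Docker Engine

# Run a throwaway container. The image bundles its own software, binaries,
# and libraries, so the command behaves the same way on any Docker host.
output = client.containers.run(
    image="python:3.12-slim",        # example image, not prescribed by the lesson
    command=["python", "--version"],
    remove=True,                     # delete the container once it exits
)
print(output.decode().strip())

# Containers are isolated from each other but share the host's kernel,
# so several can run side by side on the same Docker Engine.
for name in ("app1", "app2", "app3"):
    client.containers.run(
        image="python:3.12-slim",
        command=["python", "-c", f"print('hello from {name}')"],
        name=name,
        remove=True,
    )
```

Each call produces a fresh container from the same image, which is the predictability the lecture describes: the bundled runtime environment is identical every time the image is run.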
Each of these containers holds one application, or possibly the same application, but when they are all brought together they provide predictability in how the application will run. Applications can also be installed on virtual machines. Here we can see App 1, App 2, and App 3 each running on its own virtual machine, and these virtual machines can drift slightly: once they are up and running they become completely independent computers, and it can sometimes be hard to keep them all synchronized. The hypervisor manages these three virtual machines on the physical server. This is one way to run, but there is a lot of complexity, and you could say a lot of inefficiency, because we are running the three applications on a very complex virtual infrastructure.

With containers, there is one operating system and a Docker Engine, and Apps 1, 2, and 3 are installed in containers. All of these containers use the Docker Engine, which provides the virtual hosting environment that guarantees these applications run the same way every single time. These images can also connect to the Internet, for dev apps, for a database, and for a Docker registry. Once a Docker deployment pattern is established, it can be repeated and automated; Docker comes with deployment strategies and patterns that can be easily automated and documented.

Kubernetes is also a platform-as-a-service, container orchestration product, and it also offers infrastructure as a service. Kubernetes automates application deployment, scaling, and maintenance. As we saw a little earlier, scaling an app with Docker can involve quite a bit of work or rework on the environment, whereas Kubernetes provides that scaling and infrastructure as a service as part of the product. Kubernetes also supports Docker containers and is much more complex than Docker. Kubernetes is built on layers or blocks, referred to as primitives, and the blocks are built up as the application scales up. The Kubernetes architecture is based on a structure called the pod, and a pod houses containers, similar to Docker. Pods make up Kubernetes clusters. As we said before, a cluster consists of independent pods running effectively as one. A pod is independent and has its own IP address.

A Kubernetes service is a set of pods that work together within the Kubernetes cluster. Looking at the diagram, we have one pod with containers A and B, running various applications at its own network address. Pods two and three in this case have the same containers, C, D, and E, and they are essentially acting as one even though they have separate network addresses. In this hosting scenario we can see how we can deploy to these various pods, and we can also scale up as needed because of the way Kubernetes takes this infrastructure-as-a-service approach.
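To make the pod and scaling discussion a little more concrete, here is a minimal sketch using the official Kubernetes Python client (pip install kubernetes). It assumes a reachable cluster and a local kubeconfig file; the default namespace and the deployment name app1 are hypothetical stand-ins, not objects defined in the lesson.

```python
# Minimal sketch, assuming a reachable cluster, a local kubeconfig, and
# the official Kubernetes Python client ("pip install kubernetes").
from kubernetes import client, config

config.load_kube_config()           # read cluster credentials from kubeconfig

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Each pod is independent and has its own IP address inside the cluster.
for pod in core.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.pod_ip)

# Scaling is part of the platform: ask for more replicas and Kubernetes
# schedules additional pods, instead of us reworking the environment by hand.
apps.patch_namespaced_deployment_scale(
    name="app1",                    # hypothetical deployment name
    namespace="default",
    body={"spec": {"replicas": 3}},
)
```

The scale call is where the infrastructure-as-a-service point shows up: the desired replica count is declared, and the cluster itself creates or removes pods to match it.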