Hi everyone, I'm Jeremy Gibson Bond, and welcome back to the Unity Certified Programmer exam preparation course. This video is going to be a little bit different: we're just going to have a conversation about some best practices for developing virtual reality. VR is a big part of where the games industry and other industries are going, and there are lots of different devices out there and lots of different ways to do VR. So what we're going to cover here are general best practices that hold true for pretty much all of these different forms of VR.

Virtual reality is probably the most well-known branch of what Unity calls XR, or extended reality. XR covers things like virtual reality, augmented reality, CAVEs (where an environment is projected around someone in a physical space), and even projection mapping, where you project a new surface onto an existing building or something of the sort.

There are two things I'm here to talk about in this video. The first is nausea reduction, and the second is performance optimization.

Let's start by talking about nausea reduction. It turns out that the nausea, or simulation sickness, you sometimes feel in VR is caused by the same thing that causes seasickness on a boat, and it's the same response that lets your body know when it's been poisoned. When your proprioception, your body's sense of itself in space, and signals like the ones from your inner ear disagree with what your eyes are telling you about movement, you get simulation sickness or seasickness. It makes you feel sick to your stomach because that same mismatch happens when you've been poisoned. Your body essentially says, "Whoa, whoa, whoa, the inner ear and proprioception aren't matching up with my eyes; I've definitely been poisoned, I need to make you sick to your stomach." So one of our major goals in developing VR is to avoid triggering that response. Let's talk about different ways to do that.

The number one thing you must do to avoid simulation sickness is to never have a discrepancy between how the player is actually moving their head and how the camera moves in the game. You can have a scaled-up version of this: if the player moves two centimeters and the head in the game moves four centimeters, that's actually okay. But any movement the player didn't make themselves, like the head bob you sometimes see in a first-person shooter, will absolutely make somebody sick. That said, simulation sickness follows a bell curve: some things will make most people sick, some people will never get sick, and some people will almost always get sick. The newer headsets we have, things like the HTC Vive, can run at around 90 frames per second across both eyes, and that higher frame rate really reduces the chance of the average person getting sick. One of the major ways you reduce simulation sickness is by keeping your performance up and keeping your frame rate up, but we'll talk about that more in the performance optimization section.

Another aspect that's really important to think about is vection. Vection is the term for the feeling of motion you get when things pass you by.
Vection works really well for conveying movement when those passing things are at a distance, but if they're too close, it can make you feel disoriented and actually throw you off balance as a player. This was a problem that Ubisoft solved in Eagle Flight by blacking out that part of the screen: if you flew really, really close to a wall, your vision next to the wall would go black, so you didn't see the wall passing by really quickly. You may have actually felt this kind of vection-induced disorientation or unease yourself: if you're sitting in a car and right next to you is a really big bus, and the bus starts moving, you might feel like you're moving even though you're not, because so much of your peripheral vision is in motion. That's the concept of vection. Like I said, vection is really good for giving the feeling of movement, but in VR it can be a real problem. One of the main ways to avoid it is to put the player in a cockpit, a frame of reference around them that doesn't move, while things move past outside the windows. That works totally fine, and it's another great way to avoid the disorientation you can get from vection.

Another thing when dealing with the camera: make your near clipping plane as close to the player as possible. The near and far clipping planes on a camera define the full range of distances at which things can be rendered. If I pick up my mug and bring it too close to my face, and the near clipping plane is out here, then the mug will actually lose polygons and disappear as it gets close to my eyes. So you really want the near clipping plane as close as possible, so the player never sees an object vanish as they bring it toward them.

Another thing to think about is how to handle your UI, your heads-up display or graphical information display. If it's too close to the player, or if it's in screen space, it's going to be really disorienting and hard for the player to look at, so you want to put it in world space. For example, if I had a UI on the screen in this room right now, that would feel really natural, because I can move around and it stays in its world-space coordinates. You can also build the UI into a cockpit around the player, like I mentioned before with vection; a cockpit with screens you can read works pretty well too. In general, I find it's best to keep UI at about arm's distance, or at least at a distance that's comfortable to, say, read a book at. If it's too close, like inside a helmet, it gets claustrophobic and can actually be difficult for the player to focus on. (There's a little code sketch of the near-plane and world-space UI setup just below.)

Speaking of focus, be careful with post-processing effects. Any effect that reproduces something your eye naturally does on its own will be very uncomfortable for the player if you do it for them with a post-processing effect in VR. That means avoiding things like depth of field, where you throw the foreground or the background out of focus; that really doesn't look good in VR. You should also avoid any kind of motion blur: the player's eyes are already moving around inside the headset, and there's already a phenomenon called judder happening in VR, so adding extra motion blur on top of that is problematic.
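Just to make those two camera points concrete, here's a minimal sketch. The class name, the serialized fields, and the 0.75-meter "arm's length" value are all my own illustrative choices, not anything from Unity; the only real APIs used are Camera.nearClipPlane, Canvas.renderMode, and basic transform math.

```csharp
using UnityEngine;

// Illustrative helper: pulls the near clipping plane in close and parks a
// world-space UI canvas at roughly arm's length in front of the player.
public class VRComfortSetup : MonoBehaviour
{
    [SerializeField] Camera vrCamera;          // the head-tracked camera
    [SerializeField] Canvas hudCanvas;         // a Canvas meant to live in world space
    [SerializeField] float uiDistance = 0.75f; // ~arm's length, in meters (my own guess)

    void Start()
    {
        // Bring the near plane in so hand-held objects don't clip and vanish
        // as the player brings them toward their face.
        vrCamera.nearClipPlane = 0.01f;

        // Put the HUD in world space at a comfortable reading distance,
        // oriented so it reads correctly from the player's position.
        hudCanvas.renderMode = RenderMode.WorldSpace;
        hudCanvas.transform.position =
            vrCamera.transform.position + vrCamera.transform.forward * uiDistance;
        hudCanvas.transform.rotation =
            Quaternion.LookRotation(hudCanvas.transform.position - vrCamera.transform.position);
    }
}
```

You'd still size and scale the canvas itself in the editor; the point here is simply that the UI lives in world coordinates at a readable distance rather than being glued to the player's eyes in screen space.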
So, let's move on to performance optimization. One of the first questions you might have is: why is it so much more important in VR than in regular game development? First of all, VR is rendered for two eyes at the same time. It may really be a single screen, but there are two viewpoints and two cameras to render, so you're doing roughly double the work. Second, for modern high-end headsets you're talking about rendering each eye at 90 frames per second, which is much faster than the roughly 30 frames per second that is the minimum acceptable frame rate for most games.

When you're optimizing for VR, all of the standard Unity optimization techniques still apply. These are things like understanding performance, optimizing shader load time, and optimizing memory, and there are Unity guides on all three of those topics.

One of the most important things you can do for VR is to minimize what's called motion-to-photon latency: the delay between when I move and when I see, say, my hand move in VR. You want to get that latency down to 20 milliseconds or less, which is a very short amount of time. To help with this, tracking and positional information is read once before Update (or FixedUpdate) runs, and then read a second time as part of onBeforeRender. You can register a callback for this through Application.onBeforeRender, and it fires immediately before rendering happens, so the position of the trackers is re-read right before the frame is drawn, keeping that motion-to-photon latency as low as possible. (There's a little code sketch of this callback just below.)

Now, I mentioned that we're rendering two viewpoints, one for each eye. Unity now has what's called single-pass rendering. Single-pass rendering first calculates everything that will be the same for both viewpoints, and then, rather than rendering the whole left eye and then the whole right eye, it renders into a double-wide render target and draws into the left side and the right side together: it renders one model first on the left and then on the right, then the next model on the left and then on the right, and so on. This doesn't reduce GPU time very much, but CPU time goes down a lot, because the CPU has to do much less switching and sending of commands to the graphics card; it batches it all together. There's also something called single-pass instancing, which gets you a slight further improvement over regular single pass. It pushes some of that switching onto the graphics card, so the GPU is able to draw a model to both eyes in a single draw call instead of two. Again, this doesn't really reduce GPU time much, but it shaves the CPU time down slightly more.
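Here's the little sketch I mentioned of the onBeforeRender callback. The class and field names are just for illustration, and I'm using the older UnityEngine.XR.InputTracking API to read the head pose; a newer project might go through InputDevices or the XR Interaction Toolkit instead, but the callback registration is the same idea.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch: re-sample tracking data immediately before rendering
// so the pose used for this frame is as fresh as possible.
public class LateTrackingReader : MonoBehaviour
{
    // Other scripts could read these if they need the late-sampled pose.
    public Vector3 LatestHeadPosition;
    public Quaternion LatestHeadRotation;

    void OnEnable()  { Application.onBeforeRender += ReadTrackingLate; }
    void OnDisable() { Application.onBeforeRender -= ReadTrackingLate; }

    void ReadTrackingLate()
    {
        // Called right before rendering, after Update/FixedUpdate, which is
        // what keeps motion-to-photon latency down.
        LatestHeadPosition = InputTracking.GetLocalPosition(XRNode.Head);
        LatestHeadRotation = InputTracking.GetLocalRotation(XRNode.Head);
    }
}
```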
For the most part, if you want it to look nice in VR but still have really good performance, you want to avoid real-time lighting. The way we do that is by rendering non-directional lightmaps for all the static objects and then using light probes to do the lighting on dynamic objects. There's also an option called Shadowmask. Shadowmask settings actually show up in several different places throughout Unity, but they all have to do with this idea of rendering real-time shadows and specular highlights on dynamic objects even though you're in a scene that uses static lightmaps.

Now, let's say you want to do some post-processing. We talked before about how post-processing effects that mimic things your eye does on its own are a problem in VR, but there are other things you might want to do with post-processing, like color grading or anti-aliasing. Those are okay, but realize that as full-screen effects they now cover double the screen area, so they're twice the work and a real performance hit. If you're going to do post-processing, I highly recommend looking at Unity's post-processing stack, because it has a bunch of optimizations built in to reduce that performance hit as much as possible.

Speaking of post-processing, anti-aliasing is something that can really help you in VR. It looks great and smooths things out nicely, and it works really well with forward rendering using standard MSAA, multi-sample anti-aliasing, which samples at the sub-pixel level and smooths out the edges. If you're doing deferred rendering, you can't use MSAA, but there are two alternatives, both of which can be done with Unity's post-processing stack. The first is FXAA, which stands for Fast Approximate Anti-Aliasing. This is a pixel-level pass that looks for jagged edges and blurs them, so it's very, very fast, but it can make your image slightly blurrier than you want it to be. The second is TAA, or Temporal Anti-Aliasing, which anti-aliases by looking at the image over time. It's very high quality and looks really nice, but it's also a pretty intense GPU hit, so only implement it for a project targeting a high-end PC. You don't want to try to use TAA on a cell phone project; that would be terrible.

You definitely want to optimize any shaders you're using, of course, but you also want to optimize the loading of those shaders. In general, Unity doesn't load a shader and get it ready until you start using it, so any time a new shader comes into your scene you can get a little hiccup in performance, which is particularly noticeable in VR. A great way to avoid this is to build a shader variant collection and then warm up those shaders ahead of time, for example while the scene is loading. That makes sure the shader is ready to go before the first time you actually use it to draw something on screen.
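Just as an illustration of that warm-up step, here's a minimal sketch. It assumes you've already built a ShaderVariantCollection asset (the name "preloadShaders" is mine) and assigned it in the Inspector; the warm-up call itself is the standard ShaderVariantCollection.WarmUp API.

```csharp
using UnityEngine;

// Illustrative sketch: compile the shader variants in a collection up front,
// while a loading screen is showing, instead of on first use.
public class ShaderWarmup : MonoBehaviour
{
    [SerializeField] ShaderVariantCollection preloadShaders;

    void Start()
    {
        // Warming up here avoids the hitch you'd otherwise get the first time
        // a new shader is used to draw something in VR.
        if (preloadShaders != null && !preloadShaders.isWarmedUp)
        {
            preloadShaders.WarmUp();
        }
    }
}
```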
In terms of rendering things a little bit faster, there are a couple of XRSettings values you might want to set to either improve performance or improve the image at the expense of performance. The first is the eye texture resolution scale, XRSettings.eyeTextureResolutionScale. This sets the scale of the eye texture you're drawing into, relative to the actual resolution of the player's headset; a value of one is a one-to-one match between the pixels you're drawing and the pixels that actually show up in the headset. If you go above one, you get a better-looking image but worse performance; if you go below one, you get a slightly worse image but better performance. If you're going into a scene that's particularly intense, or you've found that the player's computer isn't quite up to the spec you designed for, decreasing that value renders to a smaller texture and can improve your performance. One thing you want to avoid, though, is changing this value while the game is running and the player is actually doing something, because on some platforms you can get a hiccup that drops a frame when you switch the eye texture resolution. So make sure you change it at moments where not much is happening, like while you're loading a scene. On some platforms it's fine; you really just need to test it.

There's also another XRSettings value called renderViewportScale. Render viewport scale is a number that goes from one down to much smaller values and says how much of that eye texture you actually want to draw into. Rather than reallocating a chunk of memory, which can cause a hiccup, it just uses a little bit less of the texture that's already allocated. You can change this without causing a performance hit. It does have some issues on certain platforms, but it works on a lot of them, and it's a way to dynamically adjust how much you're rendering each frame to account for lag or other issues. (There's a rough code sketch of that kind of dynamic scaling just below.)

Now let's talk about mobile VR. Mobile VR is probably the most ubiquitous and widely distributed form of VR, but it's also the biggest performance challenge. You are not going to get the same performance out of a phone as you would out of a high-end PC or a high-end console, so you need to tone everything down: decrease the amount of geometry in your models, decrease the texture sizes, and avoid things like the Standard shader, which requires a lot of performance. You really want to watch things like the frame rate and cut things back even further if you detect that the frame rate is low because someone is on an older device.

Another issue you'll encounter on a mobile device much more than on a PC is thermal throttling. Thermal throttling occurs when the processor starts getting really hot and, to avoid overheating, cuts back on its own performance. Happily, if you're worried about this on Android, there's something called Sustained Performance Mode. What it does is reduce the performance of your Android device slightly, but in exchange it guarantees the device won't reach the point where it has to thermally throttle. So rather than having really high performance and then tanking because of thermal throttling, you get slightly lower performance that stays nice and consistent.

The last thing I'll say here is: just be aware that every platform is different. There are so many platforms out there, and they all have their own special features, issues, and opportunities that you need to be aware of.
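Here's the rough sketch of dynamic scaling I mentioned. The real API is XRSettings.renderViewportScale; everything else, the class name, the 90 fps target, the 0.6 floor, and the step sizes, are illustrative numbers I've picked for the example, not tuned values.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Illustrative sketch: shrink the portion of the eye texture we render into
// when frames run long, and grow it back when there's headroom.
public class DynamicViewportScaler : MonoBehaviour
{
    const float targetFrameTime = 1f / 90f; // aiming for 90 fps (assumption)
    const float minScale = 0.6f;            // don't drop below 60% (assumption)

    void Update()
    {
        float scale = XRSettings.renderViewportScale;

        if (Time.unscaledDeltaTime > targetFrameTime * 1.1f)
            scale = Mathf.Max(minScale, scale - 0.05f); // we're lagging: render less
        else
            scale = Mathf.Min(1f, scale + 0.01f);       // headroom: creep back up

        XRSettings.renderViewportScale = scale;
    }
}
```

Because this only changes how much of the already-allocated eye texture is used, it avoids the reallocation hiccup you'd risk by changing eyeTextureResolutionScale mid-game, though as mentioned above you should still verify the behavior on each platform you ship on.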
One thing that is handled differently across all these platforms, but is really kind of cool, is called asynchronous reprojection. Asynchronous reprojection kicks in when the headset is ready to show a new frame to the player but the computer hasn't quite finished rendering it yet. What happens is that the old frame is reused, but the viewpoint is shifted artificially a little bit to match any movement the player's head has made. It's a neat trick, and it's done slightly differently across the different platforms, so it's definitely something to look into for whichever platforms you're targeting.

So, that's it for our discussion of best practices for VR. I've only touched on the very top level of these topics. There are lots of resources on the Unity website to help you learn more about this and more of their ideas about what the best practices are, so I really encourage you to dig into those if you're interested in making a VR project. Thanks for watching this video. I hope you enjoyed it, and I'll see you in the next one. Take care.