So with our Python implementation and the analytical solutions, we've seen how the error behaves in our finite difference calculations. We've seen strange phenomena, like a Gaussian pulse disintegrating into a longer wave train, what we call dispersion, and we also saw the solution exploding. Now, how can we explain that from a mathematical point of view? The nice thing about the finite difference method is that it's actually possible to predict these things mathematically, and that's based on the so-called von Neumann analysis. John von Neumann was a brilliant physicist and mathematician, one of the founding fathers of quantum mechanics, game theory, and also scientific computation and computer theory. He was of Hungarian origin, and his analysis of numerical approximations is fundamental. It's quite a bit of maths, but at this point I really think we should go through the derivation once, because it leads to one of the most fundamental results that will be relevant for all the other numerical methods we will encounter later. In the von Neumann analysis we will make use of plane waves in complex notation, so I hope you have some familiarity with complex numbers, but let's start simple and look at the Euler formula, which relates the exponential function to the trigonometric functions cosine and sine: e^(ix) = cos(x) + i sin(x), and equivalently, for negative x, e^(-ix) = cos(-x) + i sin(-x), which, because of the symmetry behavior of the cosine, is equal to cos(x) - i sin(x). This has important consequences, and we will make use of it later: if you add up e^(ix) + e^(-ix), that is, as is easy to see, equal to 2 cos(x). So the real cosine can be written as cos(x) = (e^(ix) + e^(-ix)) / 2.
You can do the same for the sine by taking the difference: e^(ix) - e^(-ix) = 2i sin(x), and by that you have a definition of the sine function, sin(x) = (e^(ix) - e^(-ix)) / (2i). Before we dive into the discrete version of the acoustic wave equation and its numerical approximation, let's have a look again at the acoustic wave equation in continuous form: the second derivative in time of p equals c squared times the second derivative with respect to space of p, that is ∂²p/∂t² = c² ∂²p/∂x², where we omit the source term. Now, let's recall the definition of plane waves: p(x, t) = e^(i(kx - ωt)), with k the wave number and ω the temporal frequency. With this trial solution we can enter the wave equation, and it's very easy to calculate the second derivative with respect to time, which turns out to be -ω² e^(i(kx - ωt)). In space it's a similarly easy calculation: the second derivative with respect to x of p is -k² e^(i(kx - ωt)). Now if we put these analytical derivatives into the wave equation, we end up with -ω² e^(i(kx - ωt)) = -c² k² e^(i(kx - ωt)), and because the exponential term is never zero, we can cancel it out and we're left with c = ω/k. That is the very well-known result relating the temporal frequency to the wave number, or in other words the wavelength to the period; it's the dispersion relation, and it gives us the condition under which waves will propagate. But what happens if we're not in the continuous world but in our discrete world, with our finite difference approximation to the acoustic wave equation?
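These identities are easy to check numerically. The following is a small sketch using NumPy; the test angle and the values of c and k are arbitrary, chosen only for illustration:

```python
import numpy as np

x = 0.7  # arbitrary test angle
# Euler's formula: e^{ix} = cos(x) + i*sin(x)
assert np.isclose(np.exp(1j * x), np.cos(x) + 1j * np.sin(x))
# cos(x) = (e^{ix} + e^{-ix}) / 2
assert np.isclose(np.cos(x), (np.exp(1j * x) + np.exp(-1j * x)) / 2)
# sin(x) = (e^{ix} - e^{-ix}) / (2i)
assert np.isclose(np.sin(x), (np.exp(1j * x) - np.exp(-1j * x)) / (2j))

# Plane wave p = e^{i(kx - wt)}: the second derivatives bring down -w^2
# and -k^2, so the wave equation reduces to the dispersion relation c = w/k.
c, k = 2000.0, 0.01   # velocity (m/s) and wave number (1/m), illustrative
w = c * k             # temporal frequency from the dispersion relation
assert np.isclose(c, w / k)
```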
Now let's recall our discretization of space and time: we defined x = j dx, where j is an integer and dx is our space increment, and equivalently t = n dt, where n is an integer and dt is our time increment. Now it's very simple, we can introduce this definition into our plane wave definition, but now p is not a function of x and t; instead it carries a lower index j and an upper index n: p_j^n = e^(i(k j dx - ω n dt)), and this is our discrete definition of plane waves. Remember that in our finite difference approximation to the wave equation we have terms like p(x + dx) and p(x - dx). How do we formulate those in this plane wave notation? Well, it's actually very simple. For example, p_{j+1}^n, which is the point x + dx, becomes e^(i(k(j+1)dx - ω n dt)). From this exponential term we can, by a simple manipulation, extract the factor e^(ik dx) multiplying the basic discrete plane wave, the discrete version of e^(i(kx - ωt)). The same works for the point j - 1 in an equivalent form: p_{j-1}^n = e^(-ik dx) e^(i(k j dx - ω n dt)). Of course, it applies equally well to discrete time: for example p_j^{n+1}, which corresponds to t + dt, equals e^(-iω dt) multiplying the standard discrete plane wave. So now we are ready to enter these new expressions into the finite difference approximation of the acoustic wave equation. As you see here, on the left-hand side we replace the finite difference term with the exponential terms, and after extracting the basic discrete plane wave we end up with e^(i(k j dx - ω n dt)) (e^(iω dt) - 2 + e^(-iω dt)) / dt² on the left-hand side.
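The shift property used here is straightforward to verify numerically: moving by one grid point in space or one step in time only multiplies the discrete plane wave by a phase factor. A small sketch, with arbitrary illustrative parameter values:

```python
import numpy as np

k, w = 0.01, 20.0      # wave number (1/m) and frequency (1/s), illustrative
dx, dt = 100.0, 0.001  # grid increments
j, n = 5, 7            # arbitrary grid indices

def p(j, n):
    """Discrete plane wave p_j^n = e^{i(k*j*dx - w*n*dt)}."""
    return np.exp(1j * (k * j * dx - w * n * dt))

# Shifting by one grid point or one time step multiplies by a phase factor:
assert np.isclose(p(j + 1, n), np.exp(1j * k * dx) * p(j, n))
assert np.isclose(p(j - 1, n), np.exp(-1j * k * dx) * p(j, n))
assert np.isclose(p(j, n + 1), np.exp(-1j * w * dt) * p(j, n))
```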
On the right-hand side, we have the squared velocity and the equivalent second-derivative term based on the plane wave solution, again multiplying the fundamental discrete plane wave, without the source term: c² e^(i(k j dx - ω n dt)) (e^(ik dx) - 2 + e^(-ik dx)) / dx². So let's continue modifying that equation. We first divide both sides by the plane wave term e^(i(k j dx - ω n dt)) and multiply by dt²; then we're left with e^(iω dt) + e^(-iω dt) - 2 = (c² dt²/dx²)(e^(ik dx) + e^(-ik dx) - 2). Of course, we need to get rid of these imaginary units, and now we go back and use the definition of the cosine, cos(x) = (e^(ix) + e^(-ix)) / 2. As you can easily see, if we inject that definition into the equation, we are left on the left-hand side with 2 cos(ω dt) - 2, and on the right-hand side with (c² dt²/dx²)(2 cos(k dx) - 2). We can divide the whole equation by two, and we're left with the corresponding term. To continue, we need another trigonometric relation, given here: sin(x/2) = ±√((1 - cos(x))/2), or in other words 2 sin²(x/2) = 1 - cos(x). You can already see that this will dramatically simplify the equation we've derived: we end up with sin²(ω dt/2) = (c² dt²/dx²) sin²(k dx/2). We can take the square root of this equation, and we end up with a relation between two sine functions, sin(ω dt/2) = (c dt/dx) sin(k dx/2), and this simple equation leads to one of the most fundamental results of numerical analysis. So, we see that we're relating two sine functions, and in the middle there is the term c dt/dx.
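The trigonometric steps in this derivation can be checked numerically. In the sketch below (arbitrary illustrative parameters), ω is chosen from the final relation, and we confirm it also satisfies the intermediate cosine form:

```python
import numpy as np

c, dx = 2000.0, 100.0   # velocity (m/s) and grid spacing (m), illustrative
dt = 0.5 * dx / c       # time step such that c*dt/dx = 0.5
k = np.pi / (10 * dx)   # a wave number well below Nyquist

# Solve the discrete dispersion relation for omega:
# sin(w*dt/2) = (c*dt/dx) * sin(k*dx/2)
w = (2 / dt) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2))

# Identity behind the derivation: e^{ix} + e^{-ix} - 2 = -4 sin^2(x/2)
x = w * dt
assert np.isclose(np.exp(1j * x) + np.exp(-1j * x) - 2, -4 * np.sin(x / 2) ** 2)

# The intermediate form 2cos(w*dt) - 2 = (c*dt/dx)^2 * (2cos(k*dx) - 2) holds:
lhs = 2 * np.cos(w * dt) - 2
rhs = (c * dt / dx) ** 2 * (2 * np.cos(k * dx) - 2)
assert np.isclose(lhs, rhs)
```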
So, in order for this equation to have real solutions, c dt/dx has to be smaller than or equal to one. That condition is the famous Courant-Friedrichs-Lewy criterion, often referred to as the CFL criterion of numerical analysis. Basically, it's a ratio of two velocities: the physical velocity c in the medium where we're propagating waves, and the grid velocity dx/dt, the grid increment divided by the time increment. We cannot overstate this result: c dt/dx ≤ 1 in our case, or in more general terms c dt/dx ≤ ε, where ε is something around one; it can be smaller or larger, depending on the dimensionality of the problem and also on the specific numerical method. Basically, because our space increment dx is usually imposed by the physical problem we want to solve, the time step required for a stable solution is derived from the Courant criterion. But another important message is that a stable solution does not mean it's accurate. As we've seen, we had stable solutions and still found numerical dispersion. So, that's a very important point: to know whether your solution is accurate, you have to either compare with analytical solutions or do convergence tests, and that's something we will encounter later. So, can we make use of this plane wave analysis to understand what we observed in our Python implementation of the finite difference approximation to the acoustic wave equation? We saw dispersion, which is not predicted by the analytical solution. So, let's have a look again at the final result, where we had the two sine terms on both sides of the equation connected by c dt/dx. Let's extract ω on the left-hand side, making use of the inverse sine function: ω = (2/dt) sin⁻¹((c dt/dx) sin(k dx/2)). That gives us the temporal frequency as a function of the physical velocity and our discretization increments dt and dx.
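In practice this is often wrapped in a small helper that derives a stable time step from the grid spacing and the maximum velocity in the model. A minimal sketch; the helper name, the parameter values, and the default eps = 1.0 (the one-dimensional limit from the derivation above) are all illustrative assumptions:

```python
def stable_dt(dx, c_max, eps=1.0):
    """Largest time step satisfying the CFL criterion c*dt/dx <= eps."""
    return eps * dx / c_max

# Example: dx is imposed by the physical problem, c_max by the velocity model
dx, c_max = 100.0, 2000.0           # m and m/s, illustrative values
dt = stable_dt(dx, c_max, eps=0.5)  # a safety factor below the limit
assert c_max * dt / dx <= 1.0       # the stability condition holds
```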
So, if we divide both sides of that equation by the wave number k, we now have a phase velocity c(k) = ω/k. But as you can easily see, this time the phase velocity actually depends on the discretization scheme. There is one special situation: if c dt/dx is exactly equal to one, you can easily see that under that condition you exactly recover the analytical solution. But that's a somewhat academic case, because of course we're doing numerical solutions precisely because we want space-dependent velocities, a complex model. So in reality that special situation will not really help us. Now, let's try to understand what this equation really means, and look at its consequences through the sampling theorem, which lets us express the wave speed as a function of our discretization scheme. What's the minimal wavelength we can describe with a regular grid of spacing dx in one dimension? That's the Nyquist wavelength, 2 dx, and in terms of wave number, since k = 2π/λ, the Nyquist wavelength corresponds to k = π/dx. So we can go back to the equation we just derived, the phase velocity as a function of dx and dt, and come up with a graph that shows the phase velocity as a function of the number of grid points per wavelength, and that's shown here. The true physical velocity is two kilometers per second. If we start on the left-hand side with very few grid points per wavelength, 2, 3, 4, 5, we see that we do not recover the proper physical velocity: the velocity is smaller. If we use more and more points, we converge to the true physical velocity, and that's actually what we saw in our simulations.
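Solving the relation for ω and dividing by k gives the numerical phase velocity, and evaluating it for different numbers of grid points per wavelength reproduces the behavior just described. A sketch with c = 2 km/s as in the figure; the grid spacing and the CFL number of 0.5 are illustrative choices:

```python
import numpy as np

c, dx = 2000.0, 100.0  # true velocity (m/s) and grid spacing (m)
dt = 0.5 * dx / c      # CFL number c*dt/dx = 0.5

def phase_velocity(n_per_lam):
    """Numerical phase velocity for n_per_lam grid points per wavelength."""
    k = 2 * np.pi / (n_per_lam * dx)  # k = 2*pi / lambda, lambda = n*dx
    w = (2 / dt) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2))
    return w / k

# With few points per wavelength the wave travels too slowly ...
assert phase_velocity(4) < c
# ... and with many points per wavelength we approach the true velocity.
assert abs(phase_velocity(100) - c) < 1.0
```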
We injected our Gaussian shape, that waveform disintegrated, and we were left with a tail, energy that arrived later, and we now know why: we had energy that was traveling at smaller velocities. Actually, for all numerical solutions, in particular for wave propagation problems but for many other problems too, this kind of numerical dispersion has to be avoided by all means. In other words, we have to find an appropriate discretization scheme that leads to an accurate representation of the velocity in our physical model, whether that's a model of the atmosphere or a model of the Earth. That's a very important aspect to consider for all numerical simulations. So, let's look at another aspect of this dispersion analysis. Remember, in the original introduction of the definition of the finite difference, if the space increment dx converges to zero, we recover the analytical result for the derivative. Now, we can pose the same question for our analysis of the finite difference approximation for plane waves: what happens if dt and dx go to zero? Do we actually recover the analytical result? For that, we make use of a very simple relationship for the sine: if x is small, sin(x) ≈ x. So, as you can see here, if dt and dx become smaller and smaller, this approximation can be applied, and you see what happens: all these terms cancel each other out in the equation, and we actually recover c = ω/k, the true physical velocity. That shows that our numerical scheme is convergent, meaning that if our increments become smaller and smaller, we converge to the analytical solution of our problem, and that's a very, very important result.
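This convergence argument can also be checked numerically: shrinking dx, with dt tied to it through a fixed CFL number, drives the numerical phase velocity toward the true one. A sketch with illustrative parameter values (true velocity 2 km/s, wavelength 1 km, CFL number 0.5):

```python
import numpy as np

c = 2000.0              # true velocity (m/s)
k = 2 * np.pi / 1000.0  # fixed physical wave number (wavelength 1 km)

def c_num(dx):
    """Numerical phase velocity for grid spacing dx at CFL number 0.5."""
    dt = 0.5 * dx / c
    return (2 / (k * dt)) * np.arcsin((c * dt / dx) * np.sin(k * dx / 2))

errors = [abs(c_num(dx) - c) for dx in (100.0, 10.0, 1.0)]
# The error shrinks as dx (and with it dt) goes to zero:
assert errors[0] > errors[1] > errors[2]
```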