Time series come in all shapes and sizes, but there are a number of very common patterns, so it's useful to recognize them when you see them. For the next few minutes we'll take a look at some examples. The first is trend, where a time series has a specific direction that it's moving in. As you can see from the Moore's Law example we showed earlier, this one has an upward trend. Another concept is seasonality, which is seen when patterns repeat at predictable intervals. For example, take a look at this chart showing active users at a website for software developers. It follows a very distinct pattern of regular dips. Can you guess what they are? Well, what if I told you that it's up for five units and then down for two? Then you could tell that it very clearly dips on the weekends, when fewer people are working, and thus it shows seasonality. Other seasonal series could be shopping sites that peak on weekends, or sports sites that peak at various times throughout the year, like the draft, opening day, the All-Star game, the playoffs, and maybe the championship game.

Of course, some time series can have a combination of both trend and seasonality, as this chart shows: there's an overall upward trend, but there are local peaks and troughs. But of course, there are also some that are probably not predictable at all, just a complete set of random values producing what's typically called white noise. There's not a whole lot you can do with this type of data.

But then consider this time series. There's no trend and there's no seasonality. The spikes appear at random timestamps; you can't predict when the next one will happen or how strong it will be. But clearly, the entire series isn't random. Between the spikes there's a very deterministic type of decay. We can see here that the value at each time step is 99 percent of the value at the previous time step, plus an occasional spike. This is an autocorrelated time series: namely, it correlates with a delayed copy of itself, often called a lag. In this example you can see a strong autocorrelation at lag one. A time series like this is often described as having memory, as each step depends on previous ones. The spikes, which are unpredictable, are often called innovations; in other words, they cannot be predicted based on past values. Another example is here, where there are multiple autocorrelations, in this case at lags one and 50. The lag-one autocorrelation gives these very quick, short-term exponential decays, and the lag-50 autocorrelation gives the small bump that follows each spike.

Time series you'll encounter in real life will probably have a bit of each of these features: trend, seasonality, autocorrelation, and noise. As we've learned, a machine-learning model is designed to spot patterns, and when we spot patterns we can make predictions. For the most part this can also work with time series, except for the noise, which is unpredictable. But we should recognize that this assumes that patterns that existed in the past will continue into the future.
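To make these patterns concrete, here's a minimal sketch of how series like the ones in this video could be synthesized with NumPy. The function names, the seasonal shape, and the parameter values are illustrative assumptions of mine, not necessarily what the course workbook uses.

```python
import numpy as np

def trend(time, slope=0.0):
    # A straight line: the series moves steadily in one direction.
    return slope * time

def seasonality(time, period, amplitude=1.0):
    # A pattern that repeats at predictable intervals, e.g. weekly dips.
    season_time = (time % period) / period
    pattern = np.where(season_time < 0.4,
                       np.cos(season_time * 2 * np.pi),
                       1 / np.exp(3 * season_time))
    return amplitude * pattern

def white_noise(time, noise_level=1.0, seed=None):
    # Pure random values: nothing here to learn or predict.
    rng = np.random.default_rng(seed)
    return rng.standard_normal(len(time)) * noise_level

def autocorrelated(time, decay=0.99, spike_prob=0.02,
                   spike_size=10.0, seed=None):
    # Each step is 99 percent of the previous step, plus an
    # occasional unpredictable spike (an "innovation").
    rng = np.random.default_rng(seed)
    series = np.zeros(len(time))
    for t in range(1, len(time)):
        series[t] = decay * series[t - 1]
        if rng.random() < spike_prob:
            series[t] += spike_size * rng.random()
    return series

# A series combining trend, seasonality, and noise, like the charts above.
time = np.arange(365)
series = (trend(time, slope=0.05)
          + seasonality(time, period=7, amplitude=5)
          + white_noise(time, noise_level=0.5, seed=42))
```

Plotting `series`, or `autocorrelated(time, seed=42)`, should reproduce the kinds of shapes shown in the charts: a rising weekly pattern in the first case, and random spikes with a deterministic 0.99 decay between them in the second.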
Of course, real-life time series are not always that simple; their behavior can change drastically over time. For example, this time series had a positive trend and clear seasonality up to time step 200, but then something happened to change its behavior completely. If this were a stock price, then maybe there was a big financial crisis or a big scandal, or perhaps a disruptive technological breakthrough caused a massive change. After that, the time series started to trend downward without any clear seasonality. We'll typically call this a non-stationary time series.

To predict on this, we could just train on a limited period of time, for example taking just the last 100 steps here. You'll probably get better performance than if you had trained on the entire time series. But that breaks the mold for typical machine learning, where we always assume that more data is better. For time series forecasting, it really depends on the time series. If it's stationary, meaning its behavior does not change over time, then great: the more data you have, the better. But if it's not stationary, then the optimal time window that you should use for training will vary. Ideally, we would like to be able to take the whole series into account and generate a prediction for what might happen next. As you can see, this isn't always as simple as you might think, given a drastic change like the one we see here.

So that's some of what you're going to be looking at in this course. But let's start by going through a workbook that generates sequences like those you saw in this video. After that, we'll try to predict some of these synthesized sequences as practice, before we later move on to real-world data.
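As a preview of that idea, here's a sketch that reuses the `trend`, `seasonality`, and `white_noise` helpers from above to build a series whose behavior changes at step 200, and then keeps only the last 100 steps for training. The split point, slopes, and window size are illustrative assumptions, not values fixed by the course.

```python
import numpy as np

time = np.arange(400)

# Upward trend with seasonality up to step 200, then a downward
# trend with no seasonality: the behavior changes completely.
series = np.concatenate([
    trend(time[:200], slope=0.1) + seasonality(time[:200], period=20, amplitude=5),
    20 + trend(time[200:] - 200, slope=-0.2),
])
series += white_noise(time, noise_level=1.0, seed=42)

# Rather than training on all 400 steps, keep just the last 100,
# where the new behavior holds.
window = 100
train_time = time[-window:]
train_series = series[-window:]
```

A model fit on `train_series` only sees the post-change behavior, which is the point of the windowing trick: on a non-stationary series, recent data can matter more than more data.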