Recall that machine learning uses standard algorithms to analyze data, derive predictive insights, and make repeated decisions. This definition applies to the regression and classification types of machine learning problems. Let's break down each of these components one by one.

The first component is algorithms. What do I mean when I say machine learning uses standard algorithms? Normally, when we think of computers, we think of programs that do different things. For example, the software you use to file your taxes is very different from the software you use to get directions home. Machine learning is different: you use the same software, or the same set of functions, to solve seemingly different problems. In this case, you use the same algorithm to estimate the taxes you owe or to estimate the time it will take you to get home.

You're probably wondering how this is possible. How can one algorithm answer two different questions? This is where the second component of our machine learning definition comes in: machine learning uses standard algorithms plus data to do its job. The algorithm uses a set of mathematical functions to model the data. These preset mathematical functions enable the machine to learn the distribution, behavior, and patterns in the data. We train the algorithm to predict future outcomes by giving it lots and lots of domain-specific examples. An algorithm that has been trained with data is called a trained model. I'll mention now that you might hear me use the word "model" instead of "standard algorithm"; both refer to a preset combination of mathematical functions that learns from a pool of data.

Let's go back to our two examples. If you train a standard algorithm with data from previous tax years, it becomes a model for estimating future tax payments.
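To make the "same algorithm, different data" idea concrete, here is a minimal sketch in Python. One generic fitting routine (closed-form least-squares linear regression) is trained on two unrelated datasets and produces two different models. All the numbers are made-up illustrative values, not real tax or travel data.

```python
def fit_line(xs, ys):
    """Least-squares fit of y = slope * x + intercept; returns (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def predict(model, x):
    slope, intercept = model
    return slope * x + intercept

# Dataset 1 (hypothetical): income in $k -> tax owed in $k
tax_model = fit_line([50, 75, 100], [10, 15, 20])

# Dataset 2 (hypothetical): distance in km -> travel time in minutes
trip_model = fit_line([5, 10, 20], [12, 22, 42])

print(predict(tax_model, 80))    # estimated tax for $80k income -> 16.0
print(predict(trip_model, 15))   # estimated minutes for a 15 km trip -> 32.0
```

Notice that `fit_line` knows nothing about taxes or travel; the two models differ only because the training data differs.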
Similarly, if you train the algorithm with examples of travel journeys, it becomes a model for predicting travel time between any two or more points. Even though predicting how long it will take to get home and estimating how much tax you owe are different problems, the same algorithm works for both. The algorithm exists independently of the use case.

Now, the key to a successful ML model is lots of examples. For regression and classification problems specifically, you need lots of labeled data. Let me explain. Suppose I have several photos of fruits and vegetables. The photos are the input data, and the labels are either "fruit" or "vegetable." If I want to use machine learning to automatically label a new photo as fruit or vegetable, I'll need to train my model using photos that have already been labeled.

All right, that was a simple classification problem. Let's look at a more realistic one. Imagine you work for a manufacturing company, and you want to train a machine learning model to detect defective parts before they are assembled into cars. You'll need a large dataset of historical examples: parts in good condition, and parts that are fractured or were rejected. The more examples you feed the algorithm, the more accurate the trained model becomes. Once the model has been trained, it can predict whether a new car part is in good condition.

The accuracy of a machine learning model's predictions depends on the volume and quality of the data you feed it. You'll probably hear me say this multiple times throughout this course: no data, no model. In the next video, I'll explain why data quality is particularly important for a successful ML model.
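The defective-part example above can be sketched as a tiny classifier trained on labeled data. This is only an illustration, not the production approach a real manufacturer would use: the feature names (`surface_crack_mm`, `density_ratio`) and every value are hypothetical, and a simple 1-nearest-neighbor rule stands in for a real model.

```python
# Each labeled example is ((surface_crack_mm, density_ratio), label).
# These training examples are the "labeled data" the transcript describes.
labeled_parts = [
    ((0.0, 1.00), "good"),
    ((0.1, 0.99), "good"),
    ((2.5, 0.80), "defective"),
    ((3.0, 0.75), "defective"),
]

def classify(features):
    """1-nearest-neighbor: give a new part the label of its closest labeled example."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(labeled_parts, key=lambda example: sq_dist(example[0], features))
    return label

print(classify((0.2, 0.98)))  # a part resembling the good examples -> good
print(classify((2.8, 0.78)))  # a part resembling the fractured examples -> defective
```

With only four examples the classifier is fragile; feeding it many more labeled parts is exactly the "more examples, more accuracy" point made above.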