Machine Learning with Swift

Evaluating accuracy

The score function computes the model's accuracy on the given data. Let's calculate the accuracy of our model on the training set:

In []: 
tree_model.score(X_train, y_train) 
Out[]: 
1.0 

Wow, it looks like our model is 100% accurate. Isn't that a great result? Let's not rush: we should check our model on held-out data. Evaluation on the test set is the gold standard of success in machine learning:

In []: 
tree_model.score(X_test, y_test) 
Out[]: 
0.87666666666666671 

That's worse. What just happened? Here, for the first time, we have run into the problem of overfitting: the model tries to fit itself to every quirk in the data. Our model adjusted itself to the training data so much that it lost the ability to generalize to previously unseen data. Since any real-world data contains both signal and noise, we want our models to fit the signal and ignore the noise component. Overfitting is the most common problem in machine learning. It typically arises when datasets are too small or models are too flexible. The opposite situation is called underfitting—when the model is not flexible enough to fit the complex data well:
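The book's examples use scikit-learn, but the effect is easy to reproduce with a minimal sketch in plain Python. The toy dataset and the memorizing "model" below are illustrative assumptions, not the book's data: a model with enough capacity to memorize every training point scores perfectly on the training set and near chance on held-out data:

```python
import random

random.seed(42)

# Toy binary problem: the true rule is x > 0.5, corrupted by 10% label noise.
def make_data(n):
    xs = [random.random() for _ in range(n)]
    ys = [(x > 0.5) != (random.random() < 0.1) for x in xs]
    return xs, ys

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# A maximum-capacity "model": memorize every training point exactly,
# and fall back to the majority class for anything unseen.
memory = dict(zip(X_train, y_train))
majority = max(set(y_train), key=y_train.count)

def predict(x):
    return memory.get(x, majority)

def accuracy(xs, ys):
    return sum(predict(x) == y for x, y in zip(xs, ys)) / len(ys)

print(accuracy(X_train, y_train))  # 1.0: every training point is memorized
print(accuracy(X_test, y_test))    # much lower: unseen points never match
```

The memorizer fits the noise along with the signal, which is exactly why its perfect training score tells us nothing about its performance on new data.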

Figure 2.6: Underfitting (left column) versus good fit (central column) versus overfitting (right column). The top row shows a classification problem, the bottom row a regression problem.

The overfitting problem is familiar to anyone who has looked at an item in an online store and was then presented with targeted advertisements for that same item all over the internet. The item is most likely no longer relevant, but the machine learning algorithm has already overfitted to the limited dataset, and now you have trinket rabbits (or whatever you looked at in the e-store) on every page you open.

In any case, we must fight overfitting somehow. So, what can we do? The simplest solution is to make the model simpler and less flexible (or, in machine learning terms, to reduce the model's capacity).
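A minimal sketch of capacity reduction, on an assumed noisy toy dataset (not the book's data): a decision "stump" with a single split point is too simple to memorize individual points, so it is forced to recover the underlying rule instead:

```python
import random

random.seed(42)

# Toy binary problem: the true rule is x > 0.5, corrupted by 10% label noise.
def make_data(n):
    xs = [random.random() for _ in range(n)]
    ys = [(x > 0.5) != (random.random() < 0.1) for x in xs]
    return xs, ys

X_train, y_train = make_data(200)
X_test, y_test = make_data(200)

# Low-capacity model: try a few candidate thresholds, keep the split
# (and the majority label on each side) that best fits the training set.
def fit_stump(xs, ys):
    best = None
    for t in [i / 10 for i in range(1, 10)]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        pred_l = left.count(True) >= len(left) / 2
        pred_r = right.count(True) >= len(right) / 2
        acc = sum((pred_r if x > t else pred_l) == y
                  for x, y in zip(xs, ys)) / len(ys)
        if best is None or acc > best[0]:
            best = (acc, t, pred_l, pred_r)
    _, t, pred_l, pred_r = best
    return lambda x: pred_r if x > t else pred_l

stump = fit_stump(X_train, y_train)
test_acc = sum(stump(x) == y for x, y in zip(X_test, y_test)) / len(y_test)
print(test_acc)  # limited mainly by the 10% label noise, not by memorization
```

Because the stump can only express "one threshold, one label per side," the noisy labels cannot pull it far from the true rule; its test accuracy is close to its training accuracy, which is the signature of a well-matched model capacity.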