Algoritmo Lab Forum
How do I know if my model is Overfitting or Underfitting?
If the model performs well on the training dataset but poorly on the test set, it is an indication of overfitting: the model is unable to generalise to unseen data.
If the model performs poorly on both the training and the test set, it is underfitting: the model fails to capture the underlying patterns in the data and therefore cannot generalise to unseen data either.
In simple words, the inability to capture the true relationship in the training data is known as bias. High bias leads to underfitting. If your model is underfitting, there is little point in evaluating it on unseen data until training performance improves.
After you have built your model, you will want to test it on unseen data. If your model fits the training data well but does not deliver the expected results on unseen data, it is said to have high variance. High variance leads to overfitting.
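A small illustration of the bias/variance distinction described above, assuming a noisy sine curve as the "true relationship" (this dataset and the polynomial degrees are illustrative choices, not from the original post). A degree-1 polynomial is too rigid (high bias, underfits), while a degree-15 polynomial is flexible enough to chase the noise (high variance, overfits):

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)
# "true relationship" is sin(x); observations carry Gaussian noise
x_train = np.sort(rng.uniform(0, 2 * np.pi, 30))
y_train = np.sin(x_train) + rng.normal(0, 0.2, 30)
x_test = np.sort(rng.uniform(0, 2 * np.pi, 30))
y_test = np.sin(x_test) + rng.normal(0, 0.2, 30)

def train_test_mse(degree):
    # fit a polynomial of the given degree on the training data only
    p = Polynomial.fit(x_train, y_train, degree)
    mse_train = np.mean((p(x_train) - y_train) ** 2)
    mse_test = np.mean((p(x_test) - y_test) ** 2)
    return mse_train, mse_test

# degree 1: too rigid -> high bias, poor on both sets (underfitting)
# degree 15: too flexible -> high variance, fits training noise (overfitting)
for degree in (1, 15):
    tr, te = train_test_mse(degree)
    print(f"degree {degree:2d}: train MSE = {tr:.3f}, test MSE = {te:.3f}")
```

The rigid model has a large training error (bias shows up on the training set itself), while the flexible model drives training error close to the noise floor.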
So to answer your question: train your model on the training subset, then apply it to that same training data to measure training performance. If the model performs poorly even here, it is underfitting.
However, if training performance is good, apply the model to your test data. If you notice a large gap between training and test performance, your model is likely overfitting.
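The check above can be sketched as follows. This is a minimal example, assuming scikit-learn, a synthetic dataset from `make_classification`, and an unpruned decision tree (a model family that tends to overfit); the 0.10 gap threshold is an illustrative rule of thumb, not a standard value:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# synthetic stand-in for your dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# an unpruned tree can memorise the training set, so it tends to overfit
model = DecisionTreeClassifier(random_state=42)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # step 1: performance on train data
test_acc = model.score(X_test, y_test)     # step 2: performance on test data
print(f"train accuracy: {train_acc:.3f}, test accuracy: {test_acc:.3f}")

if train_acc < 0.7:
    print("Low train accuracy -> likely underfitting (high bias)")
elif train_acc - test_acc > 0.10:
    print("Large train/test gap -> likely overfitting (high variance)")
else:
    print("Model generalises reasonably well")
```

Swapping in a regularised or depth-limited model (e.g. `DecisionTreeClassifier(max_depth=5)`) typically shrinks the train/test gap, which is how you would act on an overfitting diagnosis.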