Dan | Machine Learning Engineer

@DanKornas

7 Tweets 31 reads Jan 05, 2023
Day 58 of #60daysOfMachineLearning
🔷 Accuracy, Overfitting, Underfitting 🔷
In machine learning, accuracy measures how well the model makes predictions on new examples. It is usually reported as the percentage of predictions that are correct.
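As a quick sketch, accuracy is just the fraction of predictions that match the true labels (the labels and predictions below are made-up examples):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that exactly match the true labels
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 4 of 5 correct -> 0.8
```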
Overfitting occurs when the model fits the training data too closely, memorizing its noise and quirks rather than the underlying patterns, so it fails to generalize to new examples. This means that the model will have high accuracy on the training data, but low accuracy on the validation and test data.
Underfitting occurs when the model is not able to learn the underlying patterns in the training data. This means that the model will have low accuracy on the training data, as well as low accuracy on the validation and test data.
To prevent overfitting, it is important to use a sufficient amount of training data and to use regularization techniques, such as early stopping or weight decay.
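A minimal sketch of the early-stopping idea mentioned above: keep training while validation accuracy improves, and stop once it has not improved for a set number of epochs (the `patience` name and the toy accuracy curve are illustrative assumptions, not from any specific library):

```python
def train_with_early_stopping(train_step, val_accuracy, max_epochs=100, patience=5):
    # Stop when validation accuracy hasn't improved for `patience` epochs
    best_acc, best_epoch = 0.0, 0
    for epoch in range(max_epochs):
        train_step(epoch)              # one pass over the training data
        acc = val_accuracy(epoch)      # evaluate on held-out validation data
        if acc > best_acc:
            best_acc, best_epoch = acc, epoch
        elif epoch - best_epoch >= patience:
            break                      # validation stopped improving: halt
    return best_epoch, best_acc

# Toy validation curve that peaks at epoch 2, then degrades (overfitting sets in)
accs = [0.5, 0.6, 0.7, 0.65, 0.64, 0.63, 0.62]
best_epoch, best_acc = train_with_early_stopping(
    lambda e: None, lambda e: accs[e], max_epochs=7, patience=3
)
print(best_epoch, best_acc)  # 2 0.7
```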
To prevent underfitting, it is important to use a model with enough capacity (e.g., a deep neural network with enough layers and units) and to tune the hyperparameters of the model to find the best fit for the data.
It is important to monitor the accuracy on the validation and test sets during the training process to ensure that the model is not overfitting or underfitting.
If the accuracy on the validation set is significantly lower than the accuracy on the training set, it is a sign of overfitting. If accuracy is low on both the training and validation sets, it is a sign of underfitting.
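That diagnostic rule can be written down directly. The thresholds below (a 10-point train/val gap, a 70% accuracy floor) are illustrative choices, not standard values; what counts as "low" depends on the task:

```python
def diagnose(train_acc, val_acc, gap=0.10, floor=0.70):
    # Heuristic check based on the rule above; thresholds are illustrative
    if train_acc < floor and val_acc < floor:
        return "underfitting"   # low on both training and validation sets
    if train_acc - val_acc > gap:
        return "overfitting"    # validation accuracy lags training accuracy
    return "ok"

print(diagnose(0.99, 0.72))  # large train/val gap -> overfitting
print(diagnose(0.55, 0.53))  # low on both -> underfitting
print(diagnose(0.90, 0.88))  # ok
```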
