Confusion Matrix in Machine Learning
A confusion matrix is a table that is used to evaluate the performance of a classification model. It is a summary of the model's predictions compared to the true values. The confusion matrix is usually plotted as a heatmap, where the rows represent the true classes and the columns represent the predicted classes.
The confusion matrix is typically used to compute several evaluation metrics, such as accuracy, precision, recall, and F1 score.
Here is an example of a confusion matrix for a binary classification problem:
| | Predicted Positive | Predicted Negative |
|---|---|---|
| Actual Positive | TP | FN |
| Actual Negative | FP | TN |
- True Positive (TP): The model correctly predicted that the instance belongs to the positive class.
- False Positive (FP): The model incorrectly predicted that the instance belongs to the positive class, but it actually belongs to the negative class.
- True Negative (TN): The model correctly predicted that the instance belongs to the negative class.
- False Negative (FN): The model incorrectly predicted that the instance belongs to the negative class, but it actually belongs to the positive class.
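To make these four counts concrete, here is a minimal sketch in plain Python. The labels are made up purely for illustration, not taken from any real model:

```python
# Illustrative binary labels: 1 = positive class, 0 = negative class
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 1, 0, 1, 0]

# Tally each cell of the confusion matrix
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)  # true negatives
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives

print(tp, fp, tn, fn)  # 3, 1, 3, 1 for the labels above
```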
Using these definitions, several evaluation metrics can be computed:
- Accuracy: The proportion of correct predictions made by the model. It is computed as (TP + TN) / (TP + TN + FP + FN).
- Precision: The proportion of positive predictions that are actually correct. It is computed as TP / (TP + FP).
- Recall: The proportion of actual positive instances that are correctly predicted. It is computed as TP / (TP + FN).
- F1 Score: The harmonic mean of precision and recall. It is computed as 2 * (precision * recall) / (precision + recall).
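Continuing the sketch above, these formulas map directly to a few lines of Python. The counts (TP = 3, FP = 1, TN = 3, FN = 1) come from the illustrative labels, not from a real model:

```python
tp, fp, tn, fn = 3, 1, 3, 1  # counts from the sketch above (illustrative only)

accuracy = (tp + tn) / (tp + tn + fp + fn)             # (3 + 3) / 8 = 0.75
precision = tp / (tp + fp)                             # 3 / 4 = 0.75
recall = tp / (tp + fn)                                # 3 / 4 = 0.75
f1 = 2 * (precision * recall) / (precision + recall)   # 0.75

print(accuracy, precision, recall, f1)
```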
In TensorFlow, you can compute the confusion matrix with the tf.math.confusion_matrix function and the evaluation metrics with the classes in the tf.keras.metrics module.
Here is an example of how to compute the confusion matrix and evaluation metrics in TensorFlow:
```python
import tensorflow as tf

# Illustrative binary labels and predictions; replace with your model's outputs
y_true = tf.constant([0, 1, 1, 0, 1, 0, 1, 1])
y_pred = tf.constant([0, 1, 0, 0, 1, 1, 1, 1])

# Compute the confusion matrix (rows = true classes, columns = predicted classes)
cm = tf.math.confusion_matrix(y_true, y_pred)
print(cm)

# Compute the evaluation metrics
accuracy = tf.keras.metrics.Accuracy()
precision = tf.keras.metrics.Precision()
recall = tf.keras.metrics.Recall()

# Update the evaluation metrics with the current predictions
accuracy.update_state(y_true, y_pred)
precision.update_state(y_true, y_pred)
recall.update_state(y_true, y_pred)

# F1 score as the harmonic mean of precision and recall
# (tf.keras.metrics.F1Score also exists in newer versions, but expects 2-D per-class inputs)
p = precision.result()
r = recall.result()
f1 = 2 * (p * r) / (p + r)

# Print the evaluation metrics
print(f'Accuracy: {accuracy.result().numpy():.4f}')
print(f'Precision: {p.numpy():.4f}')
print(f'Recall: {r.numpy():.4f}')
print(f'F1 Score: {f1.numpy():.4f}')
```
In this example, y_true and y_pred are the true labels and the predicted labels, respectively.
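As noted at the top, the confusion matrix is often displayed as a heatmap. Here is a minimal plotting sketch, assuming the cm tensor from the TensorFlow example above and that matplotlib is installed; the class names are chosen only for illustration:

```python
import matplotlib.pyplot as plt

# Plot the confusion matrix as a heatmap: rows = true classes, columns = predicted classes
classes = ['Negative', 'Positive']  # illustrative class names
fig, ax = plt.subplots()
im = ax.imshow(cm.numpy(), cmap='Blues')
ax.set_xticks(range(len(classes)))
ax.set_xticklabels(classes)
ax.set_yticks(range(len(classes)))
ax.set_yticklabels(classes)
ax.set_xlabel('Predicted class')
ax.set_ylabel('True class')

# Annotate each cell with its count
for i in range(len(classes)):
    for j in range(len(classes)):
        ax.text(j, i, int(cm[i, j]), ha='center', va='center')

fig.colorbar(im)
plt.show()
```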