How to calculate TPR and FPR in Python without using sklearn?
To calculate true positive rate (TPR) and false positive rate (FPR) in Python, you can use the following steps:
1. First, you will need a set of predictions and a set of ground truth labels. Let's say you have two lists, `predictions` and `labels`, where `predictions` contains your model's predicted scores and `labels` contains the ground truth labels.
2. You will also need to decide on a threshold value for classifying a prediction as positive. This threshold can be any value between 0 and 1, depending on the desired sensitivity and specificity of your model. Let's say you choose a threshold of 0.5.
3. Next, iterate through the predictions and labels and count the true positives (TP), false positives (FP), true negatives (TN), and false negatives (FN):
   - True positive: the prediction is positive (above the threshold) and the ground truth label is positive.
   - False positive: the prediction is positive but the ground truth label is negative.
   - True negative: the prediction is negative (at or below the threshold) and the ground truth label is negative.
   - False negative: the prediction is negative but the ground truth label is positive.
4. Once you have counted TP, FP, TN, and FN, you can calculate the TPR and FPR as follows:
TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
Note that TPR is undefined when there are no positive labels (TP + FN = 0), and FPR is undefined when there are no negative labels (FP + TN = 0), so guard against division by zero on real data.
Here is some example code that shows how to implement this in Python:

```python
predictions = [0.9, 0.3, 0.8, 0.1, 0.2]
labels = [1, 0, 1, 0, 0]
threshold = 0.5

TP = FP = TN = FN = 0

for i in range(len(predictions)):
    if predictions[i] > threshold:
        # Prediction is positive
        if labels[i] == 1:
            TP += 1  # true positive
        else:
            FP += 1  # false positive
    else:
        # Prediction is negative
        if labels[i] == 0:
            TN += 1  # true negative
        else:
            FN += 1  # false negative

TPR = TP / (TP + FN)
FPR = FP / (FP + TN)
print("TPR:", TPR)  # TPR: 1.0
print("FPR:", FPR)  # FPR: 0.0
```
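If NumPy is available, the same counting can be done with boolean arrays instead of an explicit loop, which is faster for large arrays. A minimal sketch (the helper name `tpr_fpr` is just an illustration, and the zero-denominator fallback of `0.0` is an assumption you may want to change):

```python
import numpy as np

def tpr_fpr(predictions, labels, threshold=0.5):
    """Compute TPR and FPR at a single threshold using NumPy boolean arrays.

    Note: the 0.0 fallback for an empty class is an arbitrary choice;
    strictly, the rate is undefined in that case.
    """
    preds = np.asarray(predictions) > threshold   # boolean predicted class
    labels = np.asarray(labels).astype(bool)      # boolean ground truth

    tp = np.sum(preds & labels)    # positive prediction, positive label
    fp = np.sum(preds & ~labels)   # positive prediction, negative label
    tn = np.sum(~preds & ~labels)  # negative prediction, negative label
    fn = np.sum(~preds & labels)   # negative prediction, positive label

    tpr = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    fpr = fp / (fp + tn) if (fp + tn) > 0 else 0.0
    return tpr, fpr

print(tpr_fpr([0.9, 0.3, 0.8, 0.1, 0.2], [1, 0, 1, 0, 0]))  # (1.0, 0.0)
```

Evaluating this helper at a sweep of thresholds gives you the points of a ROC curve, which is what `sklearn.metrics.roc_curve` computes internally.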