Evaluation metrics in PyTorch

While working on a transfer learning project with PyTorch, I ran into the problems described below.

import numpy as np
import torch
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics import confusion_matrix

def train(epochs):
    print('Starting training..')
    for e in range(0, epochs):
        print('=' * 20)
        print(f'Starting epoch {e + 1}/{epochs}')
        print('=' * 20)
        conf_matrix = None
        train_loss = 0.
        val_loss = 0.
        train_acc = 0

        resnet18.train()  # set model to training phase

        for train_step, (images, labels) in enumerate(dl_train):

            optimizer.zero_grad()
            outputs = resnet18(images)
            _, tpreds = torch.max(outputs, 1)
            loss = loss_fn(outputs, labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()
            train_acc += sum((tpreds == labels).numpy())

            # run a validation pass every 20 training steps
            if train_step % 20 == 0:
                print('Evaluating at step', train_step)

                accuracy = 0
                best_acc = 0

                resnet18.eval()  # set model to eval phase
                test_preds = np.zeros((90,))
                gt = np.zeros((90,))

                for val_step, (images, labels) in enumerate(dl_test):
                    outputs = resnet18(images)
                    loss = loss_fn(outputs, labels)
                    val_loss += loss.item()
                    _, preds = torch.max(outputs, 1)
                    accuracy += sum((preds == labels).numpy())

                    # store predictions and ground truth for the confusion matrix
                    test_preds[val_step * batch_size:(val_step + 1) * batch_size] = preds
                    gt[val_step * batch_size:(val_step + 1) * batch_size] = labels

                val_loss /= (val_step + 1)
                accuracy = accuracy / len(test_dataset)

                print(f'Validation Loss: {val_loss:.4f}, Accuracy: {accuracy:.4f}')
                if accuracy > best_acc:
                    best_acc = accuracy
                show_preds()

                resnet18.train()

        train_loss /= (train_step + 1)

    plt.plot(e, train_loss, 'g', label='Training loss')
    plt.plot(e, val_loss, 'b', label='Validation loss')
    plt.title('Training and Validation loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()

    train_acc = train_acc / len(train_dataset)
    print("Training accuracy:", round(train_acc, 2))
    print('The best accuracy observed was:', round(best_acc, 2))
    print('Performance condition satisfied, stopping..')
    print(f'Training Loss: {train_loss:.4f}')
    print('Training complete..')

    conf_matrix = confusion_matrix(gt, test_preds)
    print(conf_matrix)
    plt.figure(figsize=(7, 7))
    sns.heatmap(conf_matrix, annot=True, fmt=".3f", linewidths=.5,
                xticklabels=class_names, yticklabels=class_names)
    plt.ylabel('y')
    plt.xlabel('x')

Using the above code to train my model, I ran into several problems (my guess at the fixes is sketched after the list):

  1. The accuracy reported as the "best" training and test accuracy is simply the accuracy of the last epoch, instead of the highest accuracy achieved across all epochs.
  2. The confusion matrix is computed from the final-epoch model rather than from the model that achieved the highest accuracy.
  3. The training loss and validation loss vs. epochs graph is rendered as a blank plot.
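From what I have read, I think the problems come down to three things: best_acc is re-initialised to 0 inside the training loop, nothing keeps a copy of the weights that produced the best accuracy, and plt.plot is given two scalars (e and a single loss value) instead of lists, which is why the graph comes out blank. Below is a rough sketch of how I imagine the loop should be restructured; model, train_loader, val_loader and criterion are just stand-ins for my resnet18, dl_train, dl_test and loss_fn, and I am not sure this is correct:

import copy
import torch

def train(model, train_loader, val_loader, criterion, optimizer, epochs):
    best_acc = 0.0                       # initialised once, outside the epoch loop
    best_weights = copy.deepcopy(model.state_dict())
    train_losses, val_losses = [], []    # one entry per epoch, for plotting

    for epoch in range(epochs):
        # --- training phase ---
        model.train()
        running_loss = 0.0
        for images, labels in train_loader:
            optimizer.zero_grad()
            outputs = model(images)
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()
            running_loss += loss.item()
        train_losses.append(running_loss / len(train_loader))

        # --- validation phase ---
        model.eval()
        running_loss, correct, total = 0.0, 0, 0
        with torch.no_grad():
            for images, labels in val_loader:
                outputs = model(images)
                running_loss += criterion(outputs, labels).item()
                _, preds = torch.max(outputs, 1)
                correct += (preds == labels).sum().item()
                total += labels.size(0)
        val_losses.append(running_loss / len(val_loader))
        val_acc = correct / total

        # keep the best accuracy and the weights that produced it
        if val_acc > best_acc:
            best_acc = val_acc
            best_weights = copy.deepcopy(model.state_dict())

    # restore the best weights before computing the confusion matrix
    model.load_state_dict(best_weights)
    return train_losses, val_losses, best_acc

With the returned lists, something like plt.plot(range(1, epochs + 1), train_losses, 'g', label='Training loss') should then draw an actual curve, because it receives one point per epoch rather than a single scalar.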

I would also like to plot training accuracy and test accuracy against epochs; I have put a rough idea of what I mean below, and it would be great if you could tell me whether that is the right approach.
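My assumption is that it works the same way as the losses: collect one training-accuracy and one validation-accuracy value per epoch in lists, then plot the lists. plot_accuracy below is just a hypothetical helper name, and train_accs and val_accs would be filled inside the epoch loop:

import matplotlib.pyplot as plt

def plot_accuracy(train_accs, val_accs):
    # one value per epoch in each list
    epochs_range = range(1, len(train_accs) + 1)
    plt.plot(epochs_range, train_accs, 'g', label='Training accuracy')
    plt.plot(epochs_range, val_accs, 'b', label='Validation accuracy')
    plt.title('Training and Validation accuracy')
    plt.xlabel('Epochs')
    plt.ylabel('Accuracy')
    plt.legend()
    plt.show()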



Read more here: https://stackoverflow.com/questions/64414722/evaluation-metrics-in-pytorch

Content Attribution

This content was originally published by Revanth Krishna at Recent Questions - Stack Overflow, and is syndicated here via their RSS feed. You can read the original post over there.
