hello 👋
I'm experimenting with a regression problem in which I'd like to predict 6 floats given an image. I have a working pipeline using a ResNet backbone with a linear layer and a leaky activation as the output head. The loss is computed as RMSE.
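For reference, the working pipeline is roughly along these lines (a sketch, not the exact code; `resnet50` and the head layout are placeholders):

```python
import torch
import torch.nn as nn
from torchvision import models

# Rough sketch of the working ResNet pipeline (resnet50 and the head layout are
# placeholders): pretrained backbone, classifier replaced by a linear layer with
# a leaky activation that outputs the 6 floats.
class ResNetRegressor(nn.Module):
    def __init__(self, num_outputs: int = 6):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        in_features = backbone.fc.in_features
        backbone.fc = nn.Identity()  # drop the ImageNet classification head
        self.backbone = backbone
        self.head = nn.Sequential(nn.Linear(in_features, num_outputs), nn.LeakyReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

def rmse_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # RMSE used for training
    return torch.sqrt(nn.functional.mse_loss(pred, target))
```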
Now I'd like to see how vision transformers perform on the same task. The approach I'm following is to load a pretrained ViT model, which is then used in the forward pass to produce the 6 outputs (roughly along the lines of the sketch below).
Loss is still computed as RMSE.
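A minimal sketch of what I mean (the checkpoint name and the [CLS]-token pooling here are placeholders rather than my exact choices):

```python
import torch
import torch.nn as nn
from transformers import ViTModel

# Illustrative only: pretrained ViT backbone from transformers with a linear
# regression head on top of the [CLS] token embedding.
class ViTRegressor(nn.Module):
    def __init__(self, num_outputs: int = 6,
                 checkpoint: str = "google/vit-base-patch16-224-in21k"):
        super().__init__()
        self.vit = ViTModel.from_pretrained(checkpoint)
        self.head = nn.Linear(self.vit.config.hidden_size, num_outputs)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        outputs = self.vit(pixel_values=pixel_values)
        cls_embedding = outputs.last_hidden_state[:, 0]  # [CLS] token embedding
        return self.head(cls_embedding)
```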
I'm using pytorch-lightning's learning-rate finder, which suggests an optimal learning rate of 2e-5. During training the loss doesn't change, so training is not converging.
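Roughly, the training and LR-finder setup looks like this (sketch only, assuming the Lightning 2.x `Tuner` API; `ViTRegressor` is the illustrative module above and `train_loader` stands in for my actual DataLoader):

```python
import torch
import lightning.pytorch as pl
from lightning.pytorch.tuner import Tuner

# Sketch of the Lightning wrapper around the ViT regressor, trained with RMSE.
class LitRegressor(pl.LightningModule):
    def __init__(self, lr: float = 2e-5):
        super().__init__()
        self.model = ViTRegressor()
        self.lr = lr

    def training_step(self, batch, batch_idx):
        images, targets = batch
        preds = self.model(images)
        loss = torch.sqrt(torch.nn.functional.mse_loss(preds, targets))  # RMSE
        self.log("train_loss", loss)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=self.lr)

# trainer = pl.Trainer(max_epochs=1)
# lr_finder = Tuner(trainer).lr_find(LitRegressor(), train_dataloaders=train_loader)
# print(lr_finder.suggestion())  # comes out around 2e-5 for me
```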
Is this the right way of setting up the regression problem?
Many thanks!