I am trying to port the Keras implementation of concrete dropout in https://github.com/yaringal/ConcreteDropout/blob/master/concrete-dropout-keras.ipynb to TensorFlow 2. This is mostly straightforward, as TF 2 has most of the Keras API built into it. However, the custom losses added by the concrete dropout layers are cleared before fitting.
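For context, this is roughly the layer in question. It is a trimmed-down sketch adapted from the notebook, not the full port: the concrete dropout noise in call() is left out since only the loss registration matters for this issue, and p_logit uses a simplified constant initializer. It is written against the TF 2.0 tf.keras API.

import tensorflow as tf

class ConcreteDropout(tf.keras.layers.Wrapper):
    """Trimmed sketch: learns a dropout probability via p_logit and
    registers the corresponding regularisation term with add_loss."""

    def __init__(self, layer, weight_regularizer=1e-6, dropout_regularizer=1e-5, **kwargs):
        super().__init__(layer, **kwargs)
        self.weight_regularizer = weight_regularizer
        self.dropout_regularizer = dropout_regularizer

    def build(self, input_shape=None):
        # build the wrapped layer first so its kernel exists
        if not self.layer.built:
            self.layer.build(input_shape)
            self.layer.built = True
        super().build(input_shape)

        # trainable logit of the dropout probability -> 'concrete_dropout/p_logit:0'
        self.p_logit = self.add_weight(
            name='p_logit', shape=(1,),
            initializer=tf.keras.initializers.Constant(-2.0),
            trainable=True)
        p = tf.nn.sigmoid(self.p_logit[0])

        # weight-norm term plus dropout-entropy term of the concrete dropout regularizer
        input_dim = int(input_shape[-1])
        weight = self.layer.kernel
        kernel_reg = self.weight_regularizer * tf.reduce_sum(tf.square(weight)) / (1. - p)
        dropout_reg = p * tf.math.log(p) + (1. - p) * tf.math.log(1. - p)
        dropout_reg = self.dropout_regularizer * input_dim * dropout_reg
        regularizer = tf.reduce_sum(kernel_reg + dropout_reg)

        # this is the loss that later disappears from model.losses
        self.layer.add_loss(regularizer)

    def call(self, inputs):
        # the concrete dropout noise itself is omitted in this sketch
        return self.layer(inputs)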
After the model is defined, and before compiling it, I can see that the losses for each concrete dropout layer have been added to the model losses by the line

self.layer.add_loss(regularizer)

which runs when the layers are built. After compilation, however,

model.losses

becomes an empty list, and the assertion

assert len(model.losses) == 5

fails. If I ignore the assertion, the fact that the layer losses are being neglected shows up when training the model as the warning

WARNING:tensorflow:Gradients do not exist for variables ['concrete_dropout/p_logit:0', 'concrete_dropout_1/p_logit:0', 'concrete_dropout_2/p_logit:0', 'concrete_dropout_3/p_logit:0', 'concrete_dropout_4/p_logit:0'] when minimizing the loss.

After digging into the compilation code at https://github.com/tensorflow/tensorflow/blob/r2.0/tensorflow/python/keras/engine/training.py#L184, I believe the problematic lines are the ones at that location, which discard the losses that were collected when the layers were built.
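A minimal reproduction along these lines, using the ConcreteDropout sketch above with made-up layer sizes and dummy data; the counts in the comments are the behaviour described above, on TF 2.0:

import numpy as np
import tensorflow as tf

# toy model with five ConcreteDropout-wrapped Dense layers
# (sizes here are arbitrary, not the notebook's architecture)
inputs = tf.keras.Input(shape=(1,))
x = inputs
for _ in range(5):
    x = ConcreteDropout(tf.keras.layers.Dense(32, activation='relu'))(x)
outputs = tf.keras.layers.Dense(1)(x)
model = tf.keras.Model(inputs, outputs)

print(len(model.losses))   # 5 before compiling: one regularizer per wrapped layer

model.compile(optimizer='adam', loss='mse')

print(len(model.losses))   # 0 after compiling: the added losses are gone

# fitting then triggers the "Gradients do not exist for variables
# ['concrete_dropout/p_logit:0', ...]" warning, since the regularizers
# involving p_logit are no longer part of the total loss
X = np.random.randn(64, 1).astype('float32')
y = np.random.randn(64, 1).astype('float32')
model.fit(X, y, epochs=1, verbose=0)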
Does anyone know why this is being done, and how to add custom losses to TensorFlow 2 models in an analogous way?
I have also posted this as a question on Stack Overflow: https://stackoverflow.com/questions/61164373/loss-added-to-custom-layer-in-tensorflow-2-is-cleared-when-compiling