Constant validation loss and increasing validation accuracy

by Mr.Sh4nnon   Last Updated May 15, 2019 16:19

I am training a fully convolutional network. The training loss is decreasing, while the validation loss stays mostly where it is, with some variance from epoch to epoch.

I thought it might be overfitting, but the validation accuracy is increasing with each epoch. Is this legit? How would something like this happen? Introducing L2 regularization helped in the beginning: the validation loss settled at a lower level, but it still stays more or less constant. Large L2 values worsened the training loss, the validation loss, and the validation accuracy, so I kept it at around 1e-5. A sketch of how I attach it is below.
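For reference, a minimal sketch of how L2 regularization of strength 1e-5 might be attached to a convolutional layer in Keras; the filter count and kernel size here are placeholder values, not my actual architecture:

    from tensorflow.keras import layers, regularizers

    # Hypothetical conv layer; only the 1e-5 L2 strength is the value I use.
    conv = layers.Conv2D(
        filters=64,
        kernel_size=3,
        padding="same",
        activation="relu",
        kernel_regularizer=regularizers.l2(1e-5),  # penalizes large kernel weights
    )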

My loss function is categorical cross-entropy on one-hot encoded labels, and the accuracy is just Keras' standard "accuracy" metric.
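One thing I noticed while thinking about this: accuracy only looks at the argmax of the predictions, while cross-entropy looks at the predicted probability itself, so the two can move independently. A small self-contained sketch with made-up probabilities (not my model's outputs) shows how the mean validation loss can stay exactly constant while accuracy rises:

    import numpy as np

    def cross_entropy(p_true_class):
        # Categorical cross-entropy reduces to -log(p) of the true class
        # when the labels are one-hot.
        return -np.log(p_true_class)

    # Two hypothetical validation samples, both with true class 0.
    # "Epoch 1": one confident correct prediction, one wrong one.
    epoch1 = np.array([[0.9, 0.1],   # argmax 0 -> correct, low loss
                       [0.4, 0.6]])  # argmax 1 -> wrong, high loss
    # "Epoch 2": both predictions barely correct, with low confidence.
    epoch2 = np.array([[0.6, 0.4],
                       [0.6, 0.4]])

    for name, preds in [("epoch 1", epoch1), ("epoch 2", epoch2)]:
        loss = cross_entropy(preds[:, 0]).mean()      # true class is 0
        acc = (preds.argmax(axis=1) == 0).mean()      # Keras-style accuracy
        print(f"{name}: mean loss = {loss:.3f}, accuracy = {acc:.0%}")

    # epoch 1: mean loss = 0.511, accuracy = 50%
    # epoch 2: mean loss = 0.511, accuracy = 100%

Here the borderline sample flips to correct (accuracy goes up) while the confident prediction gets less confident (its loss goes up by the same amount), leaving the mean loss unchanged.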


