Description

We introduce a temperature into the exponential function and replace the softmax output layer of neural nets with a high-temperature generalization. Similarly, the logarithm in the log loss used for training is replaced by a low-temperature logarithm. By tuning the two temperatures we create loss functions that are already non-convex in the single-layer case. When we replace the last layer of a neural net with our two-temperature generalization of logistic regression, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large data sets. Our methodology is based on Bregman divergences and the related matching loss for any increasing transfer function. The new approach is superior to a related two-temperature method based on the Tsallis divergence.
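The sketch below illustrates the two-temperature construction in NumPy: a tempered logarithm log_t, a tempered exponential exp_t, a heavy-tailed "tempered softmax" that replaces the usual softmax output layer, and the resulting two-temperature logistic loss. The log_t / exp_t formulas are the standard tempered definitions; the fixed-point normalization, function names, and constants are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np


def log_t(x, t):
    """Tempered logarithm; recovers log(x) as t -> 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)


def exp_t(x, t):
    """Tempered exponential; recovers exp(x) as t -> 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))


def tempered_softmax(activations, t, num_iters=20):
    """Tempered softmax exp_t(a_i - lambda(a)); the normalization lambda(a)
    is found by a fixed-point iteration so the outputs sum to one
    (for t > 1 there is no closed form)."""
    mu = np.max(activations)
    shifted = activations - mu
    normalized = shifted
    for _ in range(num_iters):
        partition = np.sum(exp_t(normalized, t))
        normalized = shifted * partition ** (1.0 - t)
    partition = np.sum(exp_t(normalized, t))
    lam = -log_t(1.0 / partition, t) + mu
    return exp_t(activations - lam, t)


def bi_tempered_loss(activations, labels, t1, t2, eps=1e-10):
    """Two-temperature logistic loss: tempered softmax with temperature t2
    for the probabilities, tempered log with temperature t1 in the loss."""
    probs = tempered_softmax(activations, t2)
    loss = (labels * (log_t(labels + eps, t1) - log_t(probs + eps, t1))
            - (labels ** (2.0 - t1) - probs ** (2.0 - t1)) / (2.0 - t1))
    return np.sum(loss)


# Example (hypothetical values): t1 = t2 = 1 recovers ordinary softmax
# cross entropy; t1 < 1 bounds the loss, t2 > 1 gives heavy output tails.
a = np.array([2.0, 0.5, -1.0])
y = np.array([1.0, 0.0, 0.0])
print(bi_tempered_loss(a, y, t1=0.8, t2=1.2))
```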

Joint work with Ehsan Amid, Rohan Anil and Tomer Koren.
