Abstract
We investigate dropout for a single neuron, where inputs are dropped independently at random with probability one half. When the loss is linear, we can prove very good properties for the dropout perturbation: optimal regret in the worst case without having to tune any parameters, and, simultaneously, optimal regret in the i.i.d. case when there is a gap between the best and second-best feature. A minimal sketch of the algorithm appears below.
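The following is a minimal sketch of follow-the-leader with dropout perturbations in an expert-style setting with linear losses, matching the dropout rate of one half from the abstract. The function name, interface, and the particular decision rule (resampling a fresh dropout mask over past losses each round and following the perturbed leader) are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ftl_dropout(losses, rng=None):
    """Follow the leader under dropout perturbations (illustrative sketch).

    losses: (T, n) array, where losses[t, i] is the linear loss of
    feature/expert i at round t.  At each round, every past loss is
    dropped independently with probability 1/2, and the leader on the
    perturbed history is played.  No tuning parameter is required.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, n = losses.shape
    total, plays = 0.0, []
    for t in range(T):
        # Fresh dropout mask: each past loss kept independently w.p. 1/2
        # (an assumed resampling scheme for this sketch).
        mask = rng.integers(0, 2, size=(t, n))
        perturbed = (mask * losses[:t]).sum(axis=0)
        i = int(np.argmin(perturbed))  # follow the perturbed leader
        plays.append(i)
        total += losses[t, i]
    return plays, total
```

Note that, unlike perturbation schemes whose noise scale must be tuned to the horizon, the dropout mask has no free parameter, which is the tuning-free property the abstract highlights.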
We give high-level intuitions for the new proof techniques and discuss a number of competitor algorithms, some of which require tuning.
Joint work with Tim Van Erven and Wojciech Kotlowski.