Abstract
I will talk about our recent results on the asymptotic study of gradient flow trajectories and their implicit optimization bias when minimizing the exponential loss over certain ``diagonal linear networks''. This is the simplest model displaying a transition between ``kernel'' and non-kernel (``rich'' or ``active'') regimes, and we will discuss how the transition is controlled by the relationship between the initialization scale and how accurately we minimize the training loss. Our results indicate that some limit behaviors of gradient descent only kick in at ridiculous training accuracies (well beyond $10^{-100}$). Moreover, the implicit bias at reasonable initialization scales and training accuracies is more complex and not captured by these limits.
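
To make the setting concrete, here is a minimal sketch (my own illustration, not the authors' code) of gradient descent on the exponential loss for a two-layer diagonal linear network, where the effective linear predictor is $w = u \odot u - v \odot v$. The parameter `alpha` plays the role of the initialization scale and `tol` the training accuracy (how far the loss is driven down) discussed in the abstract; the function name and all hyperparameter values are hypothetical.

```python
import numpy as np

def train_diagonal_linear_net(X, y, alpha=0.1, lr=1e-2, tol=1e-6, max_steps=200_000):
    """Gradient descent on the exponential loss over a diagonal linear network."""
    n, d = X.shape
    u = np.full(d, alpha)              # initialization at scale alpha
    v = np.full(d, alpha)
    for _ in range(max_steps):
        w = u * u - v * v              # effective linear predictor
        margins = y * (X @ w)
        losses = np.exp(-margins)      # exponential loss per example
        if losses.sum() < tol:         # stop once the target training accuracy is reached
            break
        # chain rule: dL/dw = -X^T (y * exp(-margins)); dL/du = 2u * dL/dw; dL/dv = -2v * dL/dw
        grad_w = -(X.T @ (y * losses))
        u -= lr * 2 * u * grad_w
        v -= lr * (-2 * v * grad_w)
    return u * u - v * v

# Example usage on a tiny separable dataset
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))
y = np.sign(X @ np.array([1.0, -2.0, 0.0, 0.0, 0.5]))
w_hat = train_diagonal_linear_net(X, y)
```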