Andrew Gordon Wilson | Polylogues
In this episode of our Polylogues web series, Science Communicator in Residence Anil Ananthaswamy talks with machine learning researcher Andrew Gordon Wilson (NYU). The two discuss Wilson’s work connecting transformers, Kolmogorov complexity, the no-free-lunch theorem, and universal learners. Wilson explains why transformers, which have largely supplanted other deep neural network architectures such as convolutional neural networks and multilayer perceptrons, are the closest thing we have to universal learners. Yet they are still missing some key ingredients needed to reach artificial general intelligence (AGI).