Explanations are the fuel of progress: the fundamental linguistic tool through which humans have gained ever greater control over their future throughout history. How can these simple strings of symbols be so powerful? And what is the secret behind their discovery?
This talk aims to shed new light on these enigmas by attacking them as a single computational problem: how can we build a machine capable of making scientific discoveries?
To tackle this question, we'll introduce the concept of Explanatory Learning (EL)—a machine-digestible formalization of the scientific process.
Unlike traditional AI approaches that rely on human-coded interpreters, such as Program Synthesis, EL is premised on the idea that a true artificial scientist can emerge only when a machine interprets symbols autonomously. This shift in perspective offers a fresh outlook on a machine's ability to understand and use language, which we will examine through the unexpected findings of our core experiment: the creation of a successful artificial scientist in Odeen, a simple simulated universe full of phenomena to explain.