The Science of Cause and Effect: From Deep Learning to Deep Understanding
What does it mean to “understand” a phenomenon, a domain, or a situation? Machine learning systems have long been labeled “opaque,” “black boxes,” or plain “dumb” for not “understanding” the purpose or the implications of their predictions. But what does it take to qualify as an “understander,” and what computational capacities are needed to meet these requirements?
This month, in a talk in our Theoretically Speaking public lecture series, Judea Pearl (UCLA) proposed a formal definition of “understanding” as the capacity to answer questions of three types: predictions, actions, and imagination. He described a computational model, a language, and a calculus that facilitate reasoning at these three levels, and demonstrated how features normally associated with understanding follow from this model. These include generating explanations, generalizing across domains, integrating data from multiple sources, recovering from missing data, and more. Pearl concluded by describing future horizons, including automated scientific exploration, personalized decision-making, and social intelligence.
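Pearl's three levels correspond to what he elsewhere calls the ladder of causation: seeing (association), doing (intervention), and imagining (counterfactuals). As a rough illustration, not drawn from the lecture itself, here is a minimal sketch of a toy structural causal model in Python; the variables Z, X, Y, the probabilities, and the helpers `sample_exogenous` and `mechanism` are all assumptions invented for this example. Because the confounder Z influences both X and Y, merely observing X = 1 and forcibly setting X = 1 yield different answers, and a counterfactual query replays a single unit's own noise under a hypothetical action.

```python
import random

# A toy structural causal model (SCM); names and numbers are illustrative
# assumptions, not from Pearl's lecture.
#   Z := 1 with prob 0.5                    (a background condition, confounder)
#   X := 1 with prob 0.8 if Z=1 else 0.2    (Z influences the treatment)
#   Y := 1 with prob 0.1 + 0.5*X + 0.3*Z    (both X and Z influence the outcome)

def sample_exogenous(rng):
    """Draw the exogenous noise that completely determines one unit."""
    return {"u_z": rng.random(), "u_x": rng.random(), "u_y": rng.random()}

def mechanism(u, do_x=None):
    """Evaluate the structural equations; `do_x` forces X, severing its causes."""
    z = int(u["u_z"] < 0.5)
    x = int(u["u_x"] < (0.8 if z else 0.2)) if do_x is None else do_x
    y = int(u["u_y"] < 0.1 + 0.5 * x + 0.3 * z)
    return x, y

rng = random.Random(0)
units = [sample_exogenous(rng) for _ in range(200_000)]

# Level 1 -- prediction (seeing): P(Y=1 | X=1), read off the observed joint.
obs = [mechanism(u) for u in units]
seen = [y for x, y in obs if x == 1]
print("P(Y=1 | X=1)     =", sum(seen) / len(seen))    # ~0.84 (confounded)

# Level 2 -- action (doing): P(Y=1 | do(X=1)), force X on every unit.
done = [mechanism(u, do_x=1)[1] for u in units]
print("P(Y=1 | do(X=1)) =", sum(done) / len(done))    # ~0.75 (causal effect)

# Level 3 -- imagination: for one unit that got X=0 and Y=0, replay its
# own noise under the hypothetical action do(X=1).
u = next(u for u in units if mechanism(u) == (0, 0))
print("Counterfactual Y for that unit, had X been 1:", mechanism(u, do_x=1)[1])
```

The level-3 query follows Pearl's abduction-action-prediction recipe: infer the unit's exogenous noise from what was observed, modify the model with the action, and recompute the outcome under the modified model.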