Natural language parsers are widely used in applications such as question answering, information extraction, and machine translation. In this talk, I will describe recent and ongoing work in the UW NLP group on learning CCG parsers that build relatively rich representations of the meaning of input texts. I will cover recent work on interactive approaches to improving both data collection and parsing algorithms. On the data side, we have introduced methods for gathering semantic supervision from non-expert annotators at very large scale, and for using such supervision interactively to label as little data as possible while still learning high-quality models. For inference, we have introduced new neural A* parsing algorithms that achieve state-of-the-art runtimes and accuracies while also providing formal guarantees of optimality, by learning to interactively focus on promising parts of the parsing search space. Finally, I will sketch some of our future directions, in which we aim to extend these ideas to build parsers that work well for any domain and any language, with as little engineering effort as possible.
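
The optimality guarantee mentioned above comes from the classic property of A* search: with an admissible heuristic (one that never overestimates the remaining cost), the first goal state popped from the priority queue is provably optimal. The following is a minimal, generic sketch of that idea, not the neural parsing algorithm from the talk; the toy graph and heuristic values are invented for illustration.

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Generic A* search returning the least-cost path from start to goal.

    With an admissible heuristic, the first time the goal is popped from
    the frontier its cost is guaranteed to be optimal -- the same formal
    property the abstract's parsers rely on, with the heuristic learned
    by a neural model instead of hand-specified.
    """
    # Frontier entries are (f = g + h, g, node, path-so-far).
    frontier = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if g > best_g.get(node, float("inf")):
            continue  # stale queue entry; a cheaper route was found later
        for nxt, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + heuristic(nxt), g2, nxt, path + [nxt]))
    return None

# Toy weighted graph standing in for a parse search space (hypothetical data).
graph = {
    "S": [("A", 1), ("B", 4)],
    "A": [("B", 1), ("G", 5)],
    "B": [("G", 1)],
    "G": [],
}
h = {"S": 2, "A": 2, "B": 1, "G": 0}  # admissible: never overestimates

cost, path = a_star("S", "G", lambda n: graph[n], lambda n: h[n])
# A* expands only the promising states and still returns the optimal
# path S -> A -> B -> G with total cost 3.
```

In the parsing setting, states are partial parses, edge costs come from model scores, and the heuristic estimates the best achievable score for the unparsed remainder of the sentence; learning a tight admissible heuristic is what makes the search both fast and exact.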