About the Course
Despite the recent successes of deep neural networks in fields such as image recognition, machine translation, and gameplay, big challenges remain in applying deep learning techniques to applications that require symbolic reasoning: theorem proving, compiler optimization, software verification and synthesis, and solving NP-complete problems. This seminar aims to make progress in this exciting and timely field.
The course will be research-oriented and hands-on: we will read and discuss original papers from the forefront of AI research (many from 2018), and participants will propose and complete a course project.
Organization
Like any good ML approach, the course is roughly split into two halves, training and inference, and students will also get a chance to act as startup founders and venture capitalists.
After a few introductory lectures, the course will proceed as follows:
The first half ("training") focuses on catching up with the state of the art: in every meeting, one student will present a paper, which the class will then discuss. Two other students will serve as facilitators and guide the discussion. By the end of each week, every student will submit a 4-sentence summary of each paper discussed that week.
The second half ("inference") focuses on original research: by the end of week 5, each student will submit a project proposal (1 page), which they will pitch to the class and then, perhaps adjusted based on feedback, implement during the rest of the term. Deliverables include progress updates, a final presentation, a working implementation with reproducible experiments, and a final report (4 pages).
To make matters interesting, projects will include a peer-evaluation component. Each student receives a fund of 30 million fictitious dollars, which they "invest" in other students' project proposals. There are three investment rounds, series A, B, and C: at the time of the proposal, mid-project, and after the final hand-in. Each student's grade will depend to a small degree on (a) how well their "portfolio" of investments in other projects did in the final IPO (where the post-IPO "market value" is determined by the project grades assigned by the instructor); and (b) how much funding (i.e., confidence) their own project attracts from other students.
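To give a feel for the mechanics, here is a minimal sketch of how a portfolio payoff could be computed. The proportional-payoff rule, the project names, and the grade scale are all illustrative assumptions, not the official course formula.

```python
# Hypothetical sketch of the peer-investment scoring: an investor's
# portfolio pays out in proportion to the grades the funded projects earn.
# All names and the payoff rule below are illustrative assumptions.

def portfolio_value(investments, grades):
    """investments: {project: dollars invested}; grades: {project: 0..100}.
    Each investment is scaled by grade/100, so a perfectly graded project
    returns its funding in full and a failed one returns nothing."""
    return sum(amount * grades[project] / 100
               for project, amount in investments.items())

FUND = 30_000_000  # each student's fictitious budget

alice = {"proj_sat_solver": 20_000_000, "proj_theorem_prover": 10_000_000}
grades = {"proj_sat_solver": 90, "proj_theorem_prover": 60}

assert sum(alice.values()) <= FUND  # cannot invest more than the fund
print(portfolio_value(alice, grades))  # 24,000,000 under this toy rule
```

Under this toy rule, spreading the fund across several promising proposals hedges against any single project receiving a low grade.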
Learning Goals
The goal of the class is to kick off a set of research projects on the boundary of deep learning and symbolic reasoning. For students who do well, the expected outcome is a successful research trajectory in this area. Ideally, the class project will lay the foundation for a future conference submission.
Besides the technical material, the course will be a great training ground for presenting, writing, reviewing, and other important research skills. For example, paper summaries can serve as a blueprint for future related-work sections, and reviewing peer proposals and judging their chances of success are important skills for any academic.
Policies
Grading
 60% Project
 30% Paper presentations, summaries, and participation in class
 10% Peer evaluation
Course Policies
Prerequisites
 Students are expected to have a background in either deep learning or symbolic reasoning.
 Since the set of researchers fluent in both domains is tiny, everybody is expected to fill the respective gaps in their knowledge through significant background reading during the semester.
Schedule
Date | Topic | Notes
M 08/20 (W1) | Introduction & logistics |
W 08/22 | |
F 08/24 | Automatic reasoning and SMT (guest lecture: Prof. Roopsha Samanta) |
M 08/27 (W2) | Interactive theorem proving (guest lecture: Prof. Ben Delaware) |
W 08/29 | Deep Learning (guest lecture: Prof. Bruno Ribeiro) |
F 08/31 | Deep Learning on graphs (guest lecture: Prof. Jennifer Neville) |
M 09/03 (W3) | No class (Labor Day) |
W 09/05 | |
F 09/07 | |
M 09/10 (W4) | |
W 09/12 | |
F 09/14 | |
M 09/17 (W5) | |
W 09/19 | |
F 09/21 | |
M 09/24 (W6) | |
W 09/26 | |
F 09/28 | |
M 10/01 (W7) | |
W 10/03 | |
F 10/05 | |
M 10/08 (W8) | No class (October Break) |
W 10/10 | |
F 10/12 | |
M 10/15 (W9) | |
W 10/17 | |
F 10/19 | |
M 10/22 (W10) | |
W 10/24 | |
F 10/26 | |
M 10/29 (W11) | |
W 10/31 | |
F 11/02 | |
M 11/05 (W12) | |
W 11/07 | |
F 11/09 | |
M 11/12 (W13) | |
W 11/14 | |
F 11/16 | |
M 11/19 (W14) | |
W 11/21 | No class (Thanksgiving break) |
F 11/23 | No class |
M 11/26 (W15) | |
W 11/28 | |
F 11/30 | |
M 12/03 (W16) | |
W 12/05 | |
F 12/07 | |
M 12/10 (W17) | No class |
W 12/12 | No class |
F 12/14 | No class | Final project reports due
Topics/Reading
Below is a tentative and non-exhaustive list of papers that will be presented and discussed in class.

Neural Arithmetic Logic Units
Andrew Trask, Felix Hill, Scott Reed, Jack Rae, Chris Dyer, Phil Blunsom. 2018 
Learning Explanatory Rules from Noisy Data
Richard Evans, Edward Grefenstette. JAIR 2018 
Towards Neural Theorem Proving at Scale
Pasquale Minervini, Matko Bosnjak, Tim Rocktäschel, Sebastian Riedel. NAMPI 2018 
Synthetic Datasets for Neural Program Synthesis
Richard Shin, Neel Kant, Kavi Gupta, Christopher Bender, Brandon Trabucco, Rishabh Singh, Dawn Song. NAMPI 2018 
Leveraging Grammar and Reinforcement Learning for Neural Program Synthesis
Rudy Bunel, Matthew Hausknecht, Jacob Devlin, Rishabh Singh, Pushmeet Kohli. ICLR 2018 
Towards Mixed Optimization for Reinforcement Learning with Program Synthesis
Surya Bhupatiraju, Kumar Krishna Agrawal, Rishabh Singh. ICML 2018 
Programmatically Interpretable Reinforcement Learning
Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri. ICML 2018 
GamePad: A Learning Environment for Theorem Proving
Daniel Huang, Prafulla Dhariwal, Dawn Song, Ilya Sutskever. 2018 
OpenAI Five
OpenAI 2018 
Meta-Gradient Reinforcement Learning
Zhongwen Xu, Hado van Hasselt, David Silver. 2018 
Learning to search with MCTSnets
Arthur Guez, Théophane Weber, Ioannis Antonoglou, Karen Simonyan, Oriol Vinyals, Daan Wierstra, Rémi Munos, David Silver. 2018 
Training Neural Machines with Partial Traces
Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev. 2018 
Neural Sketch Learning for Conditional Program Generation
Vijayaraghavan Murali, Letao Qi, Swarat Chaudhuri, Chris Jermaine. 2018 
The Three Pillars of Machine Programming
Justin Gottschlich, Armando Solar-Lezama, Nesime Tatbul, Michael Carbin, Martin Rinard, Regina Barzilay, Saman Amarasinghe, Joshua B Tenenbaum, Tim Mattson. 2018 
Learning Heuristics for Automated Reasoning through Deep Reinforcement Learning
Gil Lederman, Markus N. Rabe, Sanjit A. Seshia. 2018 
From Gameplay to Symbolic Reasoning: Learning SAT Solver Heuristics in the Style of Alpha(Go) Zero
Fei Wang, Tiark Rompf. 2018 
Learning a SAT Solver from Single-Bit Supervision
Daniel Selsam, Matthew Lamm, Benedikt Bunz, Percy Liang, Leonardo de Moura, David L. Dill. 2018 
Fast Numerical Program Analysis with Reinforcement Learning
CAV 2018 
Memory Augmented Policy Optimization for Program Synthesis with Generalization
Chen Liang, Mohammad Norouzi, Jonathan Berant, Quoc Le, Ni Lao. 2018 
Reinforcement Learning of Theorem Proving
Cezary Kaliszyk, Josef Urban, Henryk Michalewski, Mirek Olšák. 2018 
Evolving simple programs for playing Atari games
Dennis G Wilson, Sylvain Cussat-Blanc, Hervé Luga, Julian F Miller. 2018 
Synthesizing Programs for Images using Reinforced Adversarial Learning
Yaroslav Ganin, Tejas Kulkarni, Igor Babuschkin, SM Eslami, Oriol Vinyals. 2018 
Tree-to-tree Neural Networks for Program Translation
Xinyun Chen, Chang Liu, Dawn Song. 2018 
Human-level performance in first-person multiplayer games with population-based deep reinforcement learning
Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, Nicolas Sonnerat, Tim Green, Louise Deason, Joel Z Leibo, David Silver, Demis Hassabis, Koray Kavukcuoglu, Thore Graepel. 2018 
Unsupervised Predictive Memory in a Goal-Directed Agent
Greg Wayne, Chia-Chun Hung, David Amos, Mehdi Mirza, Arun Ahuja, Agnieszka Grabska-Barwinska, Jack Rae, Piotr Mirowski, Joel Z Leibo, Adam Santoro, Mevlana Gemici, Malcolm Reynolds, Tim Harley, Josh Abramson, Shakir Mohamed, Danilo Rezende, David Saxton, Adam Cain, Chloe Hillier, David Silver, Koray Kavukcuoglu, Matt Botvinick, Demis Hassabis, Timothy Lillicrap. 2018 
Relational Deep Reinforcement Learning
Vinicius Zambaldi, David Raposo, Adam Santoro, Victor Bapst, Yujia Li, Igor Babuschkin, Karl Tuyls, David Reichert, Timothy Lillicrap, Edward Lockhart, Murray Shanahan, Victoria Langston, Razvan Pascanu, Matthew Botvinick, Oriol Vinyals, Peter Battaglia. 2018 
Relational inductive biases, deep learning, and graph networks
Peter W. Battaglia, Jessica B. Hamrick, Victor Bapst, Alvaro Sanchez-Gonzalez, Vinicius Zambaldi, Mateusz Malinowski, Andrea Tacchetti, David Raposo, Adam Santoro, Ryan Faulkner, Caglar Gulcehre, Francis Song, Andrew Ballard, Justin Gilmer, George Dahl, Ashish Vaswani, Kelsey Allen, Charles Nash, Victoria Langston, Chris Dyer, Nicolas Heess, Daan Wierstra, Pushmeet Kohli, Matt Botvinick, Oriol Vinyals, Yujia Li, Razvan Pascanu. 2018 
Device Placement Optimization with Reinforcement Learning
Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Mohammad Norouzi, Samy Bengio, Jeff Dean. 2017 
Neural Program Meta-Induction
Jacob Devlin, Rudy Bunel, Rishabh Singh, Matthew Hausknecht, Pushmeet Kohli. NIPS 2017 
End-to-end differentiable proving
Tim Rocktäschel, Sebastian Riedel. NIPS 2017 
pix2code: Generating Code from a Graphical User Interface Screenshot
Tony Beltramelli. 2017 
A Syntactic Neural Model for General-Purpose Code Generation
Pengcheng Yin, Graham Neubig. 2017 
A simple neural network module for relational reasoning
Adam Santoro, David Raposo, David G.T. Barrett, Mateusz Malinowski, Razvan Pascanu, Peter Battaglia, Timothy Lillicrap. 2017 
Learning to Infer Graphics Programs from Hand-Drawn Images
Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, Joshua B. Tenenbaum. 2017 
Deep Network Guided Proof Search
Sarah Loos, Geoffrey Irving, Christian Szegedy, Cezary Kaliszyk. 2017 
Mastering the game of Go without human knowledge
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis. Nature 2017 
Mastering the game of Go with deep neural networks and tree search
David Silver, Julian Schrittwieser, Karen Simonyan, Ioannis Antonoglou, Aja Huang, Arthur Guez, Thomas Hubert, Lucas Baker, Matthew Lai, Adrian Bolton, Yutian Chen, Timothy Lillicrap, Fan Hui, Laurent Sifre, George van den Driessche, Thore Graepel, Demis Hassabis. Nature 2016 
Towards Deep Symbolic Reinforcement Learning
Marta Garnelo, Kai Arulkumaran, Murray Shanahan. 2016 
Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio. 2016
Resources
Reviews, Papers, and Talks
 Communication in Computer Science
Olivier Danvy, class at Yale-NUS College, Singapore
Presenting

How To Give Strong Technical Presentations
(older version with notes)
Markus Püschel 
How to Give a Great Research Talk
Simon Peyton Jones
Writing

How to Write Papers So People Can Read Them
Derek Dreyer 
How to Write a Great Research Paper
(video)
Simon Peyton Jones
Proposals

How to Write a Great Research Proposal
Simon Peyton Jones and Alan Bundy 
The Heilmeier Catechism
George H. Heilmeier
Experiments

SIGPLAN empirical evaluation checklist
Steve Blackburn, Matthias Hauswirth, Emery Berger, Michael Hicks
Deep Learning (General)

Deep Learning World
Collected resources 
Where should I start learning AI?
Parnian Barekatin, Quora answer 
Deep Learning
Book by Ian Goodfellow, Yoshua Bengio, Aaron Courville 
The Deep Learning Revolution
Christopher Manning, Russ Salakhutdinov 
OpenAI Meta-Learning and Self-Play
Ilya Sutskever 
Deep Learning for Coders
Course at fast.ai 
DeepLearning.ai
Andrew Ng, specialization on Coursera 
Reinforcement learning
David Silver, class at UCL
ML & Symbolic Reasoning

Machine Learning for Theorem Proving
Class at University of Innsbruck 
Neural Abstract Machines & Program Induction
Workshop at ICML/NIPS (v1 2016) 
NeuralSymbolic Learning and Reasoning
Workshop series 
Program Synthesis in 2017-18
Alex Polozov