About the Course

Despite the recent successes of deep neural networks in fields such as image recognition, machine translation, and gameplay, major challenges remain in applying deep learning techniques to applications that require symbolic reasoning: theorem proving, compiler optimization, software verification and synthesis, and solving NP-complete problems. This seminar aims to make progress in this exciting and timely field.

The course will be research-oriented and hands-on: we will read and discuss original papers from the forefront of AI research (many from 2018), and participants will propose and complete a course project.


Like any good ML approach, the course is roughly split into two halves, training and inference; students will also get a chance to act as startup founders and venture capitalists.

After a few introductory lectures, the course will proceed as follows:

The first half ("training") focuses on catching up with the state of the art: in every meeting, one student will present a paper, which will be discussed by the class. Two other students will serve as facilitators and guide the discussion. By the end of each week, every student will submit a 4-sentence summary of each of the papers discussed during that week.

The second half ("inference") focuses on original research: by the end of week 5, each student will submit a project proposal (1 page), which they will pitch to the class and, possibly adjusted based on feedback, implement during the rest of the term. Deliverables include progress updates, a final presentation, a working implementation with reproducible experiments, and a final report (4 pages).

To make matters interesting, projects will include a peer-evaluation component. Each student receives a fund of 30 million fictitious dollars, which they "invest" in other students' project proposals. There are three investment rounds (series A, B, and C): at the time of the proposal, mid-project, and after the final hand-in. Each student's grade will depend to a small degree on (a) how well their "portfolio" of investments in other projects performs in the final IPO (where the post-IPO "market value" is determined by the project grades assigned by the instructor); and (b) how much funding (i.e., confidence) their own project attracts from other students.
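As a rough illustration, the portfolio payoff at IPO time could be computed as follows. This is a minimal sketch: the proportional payoff rule, the project names, and all numbers are hypothetical, not the official grading formula.

```python
# Hypothetical sketch of the peer-investment scheme described above; the
# proportional payoff rule and all names/numbers are assumptions, not the
# official grading formula.

FUND = 30_000_000  # fictitious dollars per student

def portfolio_value(investments, grades):
    """investments: {project: amount invested}; grades: {project: grade in [0, 1]}.

    Assume the post-IPO "market value" scales each investment by the
    instructor-assigned grade of that project.
    """
    return sum(amount * grades[project] for project, amount in investments.items())

# Example: a student splits their entire fund across two peers' projects.
investments = {"proj_A": 20_000_000, "proj_B": 10_000_000}  # hypothetical
grades = {"proj_A": 0.9, "proj_B": 0.6}                     # hypothetical
assert sum(investments.values()) == FUND
print(portfolio_value(investments, grades))  # → 24000000.0
```

Under this assumed rule, backing strong projects early is rewarded exactly as component (a) above suggests.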

Learning Goals

The goal of the class is to kick off a set of research projects on the boundary of deep learning and symbolic reasoning. For students who do well in class, the expected outcome is a successful research trajectory in this area. Ideally, the class project will lay the foundation for a future conference submission.

Besides the technical material, the course will be a great training ground for presenting, writing, reviewing, and other important research skills. For example, paper summaries can serve as a blueprint for future related-work sections, and reviewing peer proposals and judging their chances of success are important skills for any academic.



Grading

  • 60% project
  • 30% paper presentations, summaries, and class participation
  • 10% peer evaluation
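The weighting above amounts to a simple weighted sum; a minimal sketch, with hypothetical component scores:

```python
# Hypothetical final-grade computation using the weights listed above;
# the 0-100 component scores are made up for illustration.
WEIGHTS = {"project": 0.60, "presentations": 0.30, "peer_evaluation": 0.10}

def final_grade(scores):
    """scores: per-component scores on a 0-100 scale (hypothetical)."""
    return sum(WEIGHTS[c] * scores[c] for c in WEIGHTS)

print(final_grade({"project": 90, "presentations": 80, "peer_evaluation": 100}))  # → 88.0
```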

Course Policies


  • Students are expected to have a background in either deep learning or symbolic reasoning.
  • Since the set of researchers fluent in both domains is tiny, everybody is expected to fill the respective gaps in their knowledge through significant background reading during the semester.


M 08/20 (W1) Introduction & logistics (Prof. Tiark Rompf)
W 08/22 Theorem proving, basics of reinforcement learning (Prof. Tiark Rompf)
F 08/24 Automatic reasoning and SMT (guest lecture: Prof. Roopsha Samanta)
M 08/27 (W2) Interactive theorem proving (guest lecture: Prof. Ben Delaware)
W 08/29 Deep Learning (guest lecture: Prof. Bruno Ribeiro)
F 08/31 Deep Learning on graphs (guest lecture: Prof. Jennifer Neville)
M 09/03 (W3) No class (Labor Day)
W 09/05 Using Recurrent Neural Networks for Decompilation
by James Decker; facilitators: Guannan Wei, Xilun Wu
F 09/07 Mastering the game of Go with deep neural networks and tree search
Mastering the game of Go without human knowledge
by Fei Wang
M 09/10 (W4) OpenAI Five
by Fei Wang
W 09/12 Neural Random-Access Machines
by Daniel Zheng; facilitators: Raphael Arkady Meyer, Xilun Wu
F 09/14 Hybrid computing using a neural network with dynamic external memory
by Xilun Wu; facilitators: Daniel Zheng, James Decker
M 09/17 (W5) Fast Numerical Program Analysis with Reinforcement Learning
by Guannan Wei; facilitators: Daniel Zheng, Susheel Suresh
W 09/19 Neural Arithmetic Logic Units
by Raphael Arkady Meyer; facilitators: Siyu Jiang, Guannan Wei
F 09/21 Learning To Represent Programs With Graphs
by Susheel Suresh; facilitators: Shih-Feng Yang, Siyu Jiang
M 09/24 (W6) Towards Mixed Optimization for Reinforcement Learning with Program Synthesis
by Siyu Jiang; facilitators: Raphael Arkady Meyer, Shih-Feng Yang
W 09/26 DeepCoder: Learning to Write Programs
by Pradeep Kumar Srinivasan; facilitators: Pradeep Periasamy, Melanie Lotz
F 09/28 Neural sketch learning for conditional program generation
by Melanie Lotz; facilitators: Zhanfu Yang, Pradeep Kumar Srinivasan
M 10/01 (W7) proposal pitching
W 10/03 proposal pitching
F 10/05 proposal pitching
M 10/08 (W8) No class (October Break)
W 10/10 proposal pitching
F 10/12 proposal pitching
M 10/15 (W9) proposal pitching
W 10/17 Tree-to-tree Neural Networks for Program Translation
by Shih-Feng Yang; facilitators: Susheel Suresh, James Decker
F 10/19 GamePad: A Learning Environment for Theorem Proving
by Zhanfu Yang; facilitators: Pradeep Periasamy, Pradeep Kumar Srinivasan
M 10/22 (W10) pix2code: Generating Code from a Graphical User Interface Screenshot
by Pradeep Periasamy; facilitators: Melanie Lotz, Zhanfu Yang
W 10/24
F 10/26
M 10/29 (W11)
W 10/31
F 11/02
M 11/05 (W12)
W 11/07
F 11/09
M 11/12 (W13)
W 11/14
F 11/16
M 11/19 (W14)
W 11/21 No class (Thanksgiving break)
F 11/23 No class
M 11/26 (W15)
W 11/28
F 11/30
M 12/03 (W16)
W 12/05
F 12/07
M 12/10 (W17) No class
W 12/12 No class
F 12/14 No class, final project reports due


Below is a tentative and non-exhaustive list of papers that will be presented and discussed in class.


Reviews, Papers, and Talks






Deep Learning (General)

ML & Symbolic Reasoning