Hall A, Long Beach Convention Center

Workshop Abstract

The past five years have seen a huge increase in the capabilities of deep neural networks. Maintaining this rate of progress, however, faces steep challenges and awaits fundamental insights. As our models become more complex and venture into areas such as unsupervised learning and reinforcement learning, designing improvements becomes more laborious, and success can be brittle and hard to transfer to new settings.

This workshop seeks to highlight recent work that uses theory as well as systematic experiments to isolate the fundamental questions that need to be addressed in deep learning. Such efforts have helped flesh out core questions on topics such as generalization, adversarial robustness, large-batch training, generative adversarial networks, and optimization, and they point toward elements of a theory of deep learning that is expected to emerge in the future.

The workshop aims to strengthen this confluence of theory and practice by highlighting influential work that uses these methods, promising open directions, and core fundamental problems. There will be an emphasis on discussion, via panels and round tables, to identify future research directions that are both promising and tractable.

Schedule


Time                Event
8:35 - 8:45 am      Opening Remarks
8:45 - 9:15 am      Yoshua Bengio: Generalization, Memorization and SGD
9:15 - 9:45 am      Ian Goodfellow: Bridging Theory and Practice of GANs
9:45 - 10:00 am     Spotlights 1
10:00 - 10:30 am    Peter Bartlett: Generalization in Deep Networks
10:30 - 11:00 am    Coffee
11:00 - 11:30 am    Doina Precup: Experimental Design in Deep Reinforcement Learning
11:30 - 11:45 am    Spotlights 2
11:45 am - 1:30 pm  Lunch + first poster session
1:30 - 2:00 pm      Percy Liang: Fighting Black Boxes, Adversaries, and Bugs in Deep Learning
2:00 - 3:00 pm      Contributed talks
3:00 - 4:00 pm      Coffee + second poster session
4:00 - 4:30 pm      Sham Kakade: Towards Bridging Theory and Practice in Deep RL
4:30 - 5:30 pm      Panel: Russ Salakhutdinov, Percy Liang, Peter Bartlett, Sham Kakade

Spotlights 1 (9:45 - 10:00 am)

  1. Generalization in deep nets: the role of distance from initialization
    Vaishnavh Nagarajan and Zico Kolter
  2. Entropy-SG(L)D optimizes the prior of a (valid) PAC-Bayes bound
    Gintare Karolina Dziugaite and Daniel Roy
  3. Large Batch Training of DNNs with Layer-wise Adaptive Rate Scaling
    Boris Ginsburg, Yang You, and Igor Gitman

Spotlights 2 (11:30 - 11:45 am)

  1. Measuring robustness of NNs via Minimal Adversarial Examples
    Sumanth Dathathri, Stephan Zheng, Sicun Gao, and Richard M. Murray
  2. A classification based perspective on GAN-distributions
    Shibani Santurkar, Ludwig Schmidt, and Aleksander Madry
  3. Learning one hidden layer neural nets with landscape design
    Rong Ge, Jason Lee, and Tengyu Ma

Contributed talks (2:00 - 3:00 pm)

  1. Don’t Decay the Learning Rate, Increase the Batch Size
    Samuel L. Smith, Pieter-Jan Kindermans, and Quoc V. Le
  2. Meta-Learning and Universality: Deep Representations and Gradient Descent Can Approximate Any Learning Algorithm
    Chelsea Finn and Sergey Levine
  3. Hyperparameter Optimization: A Spectral Approach
    Elad Hazan, Adam Klivans, and Yang Yuan
  4. Learning Implicit Generative Models with Method of Learned Moments
    Suman Ravuri, Shakir Mohamed, Mihaela Rosca, and Oriol Vinyals

Call for Papers and Submission Instructions

We invite researchers to submit anonymous extended abstracts of up to 4 pages (excluding references). No specific formatting is required: authors may use the NIPS style file or any other style, as long as it uses a standard font size (11pt) and margins (1in).

Submit at https://easychair.org/conferences/?conf=dltp2017.

Important Dates


Please email nips2017deeplearning@gmail.com with any questions.