Workshop Abstract

The past five years have seen a huge increase in the capabilities of deep neural networks. Maintaining this rate of progress, however, faces steep challenges and awaits fundamental insights. As our models become more complex and venture into areas such as unsupervised learning or reinforcement learning, designing improvements becomes more laborious, and success can be brittle and hard to transfer to new settings.

This workshop seeks to highlight recent work that uses theory as well as systematic experiments to isolate the fundamental questions that need to be addressed in deep learning. Such studies have helped flesh out core questions on topics such as generalization, adversarial robustness, large-batch training, generative adversarial networks, and optimization, and they point toward elements of a theory of deep learning that is expected to emerge in the future.

The workshop aims to strengthen this confluence of theory and practice by highlighting influential work with these methods, open directions for future work, and core fundamental problems. There will be an emphasis on discussion, via panels and round tables, to identify promising and tractable future research directions.

Confirmed Speakers

Call for Papers and Submission Instructions

We invite researchers to submit anonymous extended abstracts of up to 4 pages (excluding references). No specific formatting is required. Authors may use the NIPS style file, or any other style as long as it uses a standard font size (11pt) and margins (1in).

Submit at https://easychair.org/conferences/?conf=dltp2017.

Important Dates

Schedule

TBD

Organizers

Please email nips2017deeplearning@gmail.com with any questions.