Deep Generative Models and Downstream Applications Workshop - 14 December 2021
Sponsors
We have two paper awards worth £500, sponsored by Microsoft and Boltzbit.
The Cambridge ELLIS unit will cover workshop registration fees for up to 10 under-represented participants.
Workshop
Schedule
2:00 p.m. - 2:10 p.m. (GMT) 9:00 a.m. - 9:10 a.m. (EST) | Opening remarks |
2:10 p.m. - 2:30 p.m. (GMT) 9:10 a.m. - 9:30 a.m. (EST) | Invited talk #1: Aapo Hyvärinen |
2:30 p.m. - 2:50 p.m. (GMT) 9:30 a.m. - 9:50 a.m. (EST) | Invited talk #2: Finale Doshi-Velez |
2:50 p.m. - 3:10 p.m. (GMT) 9:50 a.m. - 10:10 a.m. (EST) | Invited Talk #3: Rianne van den Berg |
3:10 p.m. - 3:20 p.m. (GMT) 10:10 a.m. - 10:20 a.m. (EST) | Particle Dynamics for Learning EBMs |
3:20 p.m. - 3:30 p.m. (GMT) 10:20 a.m. - 10:30 a.m. (EST) | VAEs meet Diffusion Models: Efficient and High-Fidelity Generation |
3:30 p.m. - 3:35 p.m. (GMT) 10:30 a.m. - 10:35 a.m. (EST) | Contributed poster talk #1-2 Q&A |
3:35 p.m. - 4:00 p.m. (GMT) 10:35 a.m. - 11:00 a.m. (EST) | Break #1 |
4:00 p.m. - 4:20 p.m. (GMT) 11:00 a.m. - 11:20 a.m. (EST) | Invited talk #4: Chris Williams |
4:20 p.m. - 4:40 p.m. (GMT) 11:20 a.m. - 11:40 a.m. (EST) | Invited talk #5: Mihaela van der Schaar |
4:40 p.m. - 5:00 p.m. (GMT) 11:40 a.m. - 12:00 p.m. (EST) | Invited Talk #6: Luisa Zintgraf |
5:00 p.m. - 5:10 p.m. (GMT) 12:00 p.m. - 12:10 p.m. (EST) | Your Dataset is a Multiset and You Should Compress it Like One |
5:10 p.m. - 5:20 p.m. (GMT) 12:10 p.m. - 12:20 p.m. (EST) | Contributed poster talk #3 Q&A + Best paper awards |
5:20 p.m. - 6:00 p.m. (GMT) 12:20 p.m. - 1:00 p.m. (EST) | Break #2 |
6:00 p.m. - 7:00 p.m. (GMT) 1:00 p.m. - 2:00 p.m. (EST) | Poster session #1 - Gathertown |
7:00 p.m. - 7:30 p.m. (GMT) 2:00 p.m. - 2:30 p.m. (EST) | Panel Discussion |
7:30 p.m. - 7:50 p.m. (GMT) 2:30 p.m. - 2:50 p.m. (EST) | Invited Talk #7: Romain Lopez |
7:50 p.m. - 8:10 p.m. (GMT) 2:50 p.m. - 3:10 p.m. (EST) | Break #3 |
8:10 p.m. - 8:30 p.m. (GMT) 3:10 p.m. - 3:30 p.m. (EST) | Invited talk #8: Alex Anderson |
8:30 p.m. - 8:40 p.m. (GMT) 3:30 p.m. - 3:40 p.m. (EST) | AGE: Enhancing the Convergence on GANs using Alternating extra-gradient with Gradient Extrapolation |
8:40 p.m. - 8:50 p.m. (GMT) 3:40 p.m. - 3:50 p.m. (EST) | Sample-Efficient Generation of Novel Photo-acid Generator Molecules using a Deep Generative Model |
8:50 p.m. - 8:55 p.m. (GMT) 3:50 p.m. - 3:55 p.m. (EST) | Contributed poster talk #5-6 Q&A |
8:55 p.m. - 9:15 p.m. (GMT) 3:55 p.m. - 4:15 p.m. (EST) | Invited talk #9: Zhifeng Kong |
9:15 p.m. - 9:35 p.m. (GMT) 4:15 p.m. - 4:35 p.m. (EST) | Invited talk #10: Johannes Ballé |
9:35 p.m. - 9:45 p.m. (GMT) 4:35 p.m. - 4:45 p.m. (EST) | Bayesian Image Reconstruction using Deep Generative Models |
9:45 p.m. - 9:55 p.m. (GMT) 4:45 p.m. - 4:55 p.m. (EST) | Grapher: Multi-Stage Knowledge Graph Construction using Pretrained Language Models |
9:55 p.m. - 10:00 p.m. (GMT) 4:55 p.m. - 5:00 p.m. (EST) | Contributed poster talk #7-8 Q&A |
10:00 p.m. - 11:00 p.m. (GMT) 5:00 p.m. - 6:00 p.m. (EST) | Poster session #2 - Gathertown |
Invited Speakers
List of Keynote Abstracts
- Johannes Ballé (Google Research)
- Mihaela van der Schaar (University of Cambridge)
- Alex Anderson (WaveOne)
- Aapo Hyvärinen (University of Helsinki)
- Romain Lopez (UC Berkeley)
- Luisa Zintgraf (University of Oxford)
- Chris Williams (University of Edinburgh)
- Rianne van den Berg (Microsoft)
- Zhifeng Kong (University of California)
- Finale Doshi-Velez (Harvard University)
Accepted Papers
Authors | Title | Acceptance |
---|---|---|
Igor Melnyk, Pierre Dognin, Payel Das | Grapher: Multi-Stage Knowledge Graph Construction using Pretrained Language Models | Oral |
Jan Zuiderveld, Marco Federici, Erik J Bekkers | Towards Lightweight Controllable Audio Synthesis with Conditional Implicit Neural Representations | Oral |
Daniel Severo, James Townsend, Ashish J Khisti, Alireza Makhzani, Karen Ullrich | Your Dataset is a Multiset and You Should Compress it Like One | Oral |
Samuel C Hoffman, Vijil Chenthamarakshan, Dmitry Zubarev, Daniel P Sanders, Payel Das | Sample-Efficient Generation of Novel Photo-acid Generator Molecules using a Deep Generative Model | Oral |
Kushagra Pandey, Avideep Mukherjee, Piyush Rai, Abhishek Kumar | VAEs meet Diffusion Models: Efficient and High-Fidelity Generation | Oral |
Huan He, Shifan Zhao, Yuanzhe Xi, Joyce Ho | AGE: Enhancing the Convergence on GANs using Alternating extra-gradient with Gradient Extrapolation | Oral |
Razvan Marinescu, Daniel Moyer, Polina Golland | Bayesian Image Reconstruction using Deep Generative Models | Oral |
Kirill Neklyudov, Priyank Jaini, Max Welling | Particle Dynamics for Learning EBMs | Oral |
Junwen Bai, Shufeng Kong, Carla P Gomes | Gaussian Mixture Variational Autoencoder with Contrastive Learning for Multi-Label Classification | Poster |
Donghun Lee, Ingook Jang, Seonghyun Kim, Chanwon Park, Junhee Park | Stochastic Video Prediction with Perceptual Loss | Poster |
Luis Armando Pérez Rey, Dmitri Jarnikov, Mike Holenderski | Content-Based Image Retrieval from Weakly-Supervised Disentangled Representations | Poster |
Jose Ignacio Delgado-Centeno, Paula Harder, Ben Moseley, Valentin Bickel, Siddha Ganju, Miguel Olivares-Mendez, Alfredo Kalaitzis | Single Image Super-Resolution with Uncertainty Estimation for Lunar Satellite Images | Poster |
V Manushree, Sameer Saxena, Parna Chowdhury, Manisimha Varma Manthena, Harsh Rathod, Ankita Ghosh, Sahil Khose | XCI-Sketch: Extraction of Color Information from Images for Generation of Colored Outlines and Sketches | Poster |
Ekansh Verma, Souradip Chakraborty | Uncertainty-aware Labelled Augmentations for High Dimensional Latent Space Bayesian Optimization | Poster |
Ramon Winterhalder, Marco Bellagente, Benjamin Nachman | Latent Space Refinement for Deep Generative Models | Poster |
Sheikh Shams Azam, Taejin Kim, Seyyedali Hosseinalipour, Carlee Joe-Wong, Saurabh Bagchi, Christopher Brinton | A Generalized and Distributable Generative Model for Private Representation Learning | Poster |
Quang H. Le, Kamal Youcef-Toumi, Dzmitry Tsetserukou, Ali Jahanian | Instance Semantic Segmentation Benefits from Generative Adversarial Networks | Poster |
Fouad Oubari, Antoine de Mathelin, Rodrigue Décatoire, Mathilde Mougeot | A Binded VAE for Inorganic Material Generation | Poster |
Ho-Sang Chan, Siu-Hei Cheung, Victoria Ashley Villar, Shirley Ho | Searching for the Weirdest Stars: A Convolutional Autoencoder-Based Pipeline For Detecting Anomalous Periodic Variable Stars | Poster |
Andrea Karlova, Wim Dehaen, Andrei Penciu | How to Reward Your Drug Agent? | Poster |
Yordan Yordanov, Vid Kocijan, Thomas Lukasiewicz, Oana-Maria Camburu | Few-Shot Out-of-Domain Transfer of Natural Language Explanations | Poster |
Alban Petit, Caio Filippo Corro | Preventing posterior collapse in variational autoencoders for text generation via decoder regularization | Poster |
Jongmin Yu, Hyeontaek Oh, Minkyung Kim, Junsik Kim | Normality-Calibrated Autoencoder for Unsupervised Anomaly Detection on Data Contamination | Poster |
Pierre Wüthrich, Jun Jin Choong, Shinya Yuki | An Interpretability-augmented Genetic Expert for Deep Molecular Optimization | Poster |
Fares Meghdouri, Thomas Schmied, Thomas Gärtner, Tanja Zseby | Controllable Network Data Balancing With GANs | Poster |
Tal Daniel, Thanard Kurutach, Aviv Tamar | Deep Variational Semi-Supervised Novelty Detection | Poster |
Anthony L. Caterini, Gabriel Loaiza-Ganem | Entropic Issues in Likelihood-Based OOD Detection | Poster |
Cristian Ignacio Challu, Peihong Jiang, Ying Nian Wu, Laurent Callot | Deep Generative Model with Hierarchical Latent Factors for Time Series Anomaly Detection | Poster |
Sarah Lewis, Tatiana Matejovicova, Yingzhen Li, Angus Lamb, Yordan Zaykov, Miltiadis Allamanis, Cheng Zhang | Accurate Imputation and Efficient Data Acquisition with Transformer-based VAEs | Poster |
Jonathan Ho, Tim Salimans | Classifier-Free Diffusion Guidance | Poster |
Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin | Improving Model Compatibility of Generative Adversarial Networks by Boundary Calibration | Poster |
Ali Nihat Uzunalioglu, Tameem Adel, Jakub Mikolaj Tomczak | Semi-supervised Multiple Instance Learning using Variational Auto-Encoders | Poster |
Tobias Weber, Michael Ingrisch, Bernd Bischl, David Rügamer | Towards modelling hazard factors in unstructured data spaces using gradient-based latent interpolation | Poster |
Yuanqi Du, Xiaojie Guo, Hengning Cao, Yanfang Ye, Liang Zhao | Learning Disentangled Representation for Spatiotemporal Graph Generation | Poster |
Roland S. Zimmermann, Lukas Schott, Yang Song, Benjamin Adric Dunn, David A. Klindt | Score-Based Generative Classifiers | Poster |
Kin Gutierrez Olivares, Nganba Meetei, Ruijun Ma, Rohan Reddy, Mengfei Cao | Probabilistic Hierarchical Forecasting with Deep Poisson Mixtures | Poster |
Ben Barrett, Alexander Camuto, Matthew Willetts, Tom Rainforth | Certifiably Robust Variational Autoencoders | Poster |
Naoya Takeishi, Alexandros Kalousis | Variational Autoencoder with Differentiable Physics Engine for Human Gait Analysis and Synthesis | Poster |
Chitwan Saharia, William Chan, Huiwen Chang, Chris A. Lee, Jonathan Ho, Tim Salimans, David J. Fleet, Mohammad Norouzi | Palette: Image-to-Image Diffusion Models | Poster |
Jiyoung Lee, Wonjae Kim, Daehoon Gwak, Edward Choi | Conditional Generation of Periodic Signals with Fourier-Based Decoder | Poster |
Jiaxin Zhang, Kyle Saleeby, Thomas Feldhausen, Sirui Bi, Alex Plotkowski, David Womble | Self-Supervised Anomaly Detection via Neural Autoregressive Flows with Active Learning | Poster |
Howard Zhong, Guha Balakrishnan, Richard Strong Bowen, Ramin Zabih, William T. Freeman | Finding Maximally Informative Patches in Images | Poster |
Gautham Narayan Narasimhan, Kai Zhang, Ben Eisner, Xingyu Lin, David Held | Transparent Liquid Segmentation for Robotic Pouring | Poster |
Poster Session
Join the poster session via Gathertown (NeurIPS SSO login required).
About
Deep generative models (DGMs) have become an important branch of deep learning which now includes methods such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and autoregressive models. Many of these methods have been shown to achieve state-of-the-art results in the generation of synthetic data such as text, speech, images, music, and molecules. However, beyond generating synthetic data, DGMs are also relevant to practical downstream applications. A few examples are:
- Imputation and acquisition of missing data
- Anomaly detection
- Data denoising
- Compressed sensing
- Image compression
- Image super-resolution
- Molecule optimization
- Interpretation of machine learning methods
- One-shot generation of low-energy molecular structures
- Computation of free-energy differences between molecular states
- Estimating intervention effects from observational data
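To make one of these applications concrete, the sketch below scores anomalies by log-likelihood under a deliberately simple fitted Gaussian density model; DGM-based detectors apply the same principle with a learned deep density in place of the Gaussian. All data, names, and thresholds here are illustrative assumptions, not taken from any workshop paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training" data: inliers drawn from a standard normal in 2-D.
X_train = rng.normal(0.0, 1.0, size=(1000, 2))

# Fit the simplest possible generative model: a diagonal Gaussian.
mu = X_train.mean(axis=0)
var = X_train.var(axis=0) + 1e-6

def log_likelihood(x):
    """Per-sample log-density under the fitted diagonal Gaussian."""
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)

# Score a batch containing one obvious outlier.
X_test = np.array([[0.1, -0.2],   # typical inlier
                   [8.0, 8.0]])   # far from the training density
scores = log_likelihood(X_test)

# Flag points whose log-likelihood is far below the training average.
train_ll = log_likelihood(X_train)
threshold = train_ll.mean() - 3 * train_ll.std()
is_anomaly = scores < threshold
print(is_anomaly)  # expect [False  True]
```

The only model-specific ingredient is `log_likelihood`; swapping in a VAE's evidence lower bound or a normalizing flow's exact log-density yields a deep anomaly detector with the same thresholding logic.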
Despite such important application areas, advances in DGMs are typically quantified using general metrics such as test log-likelihood, conditional data reconstruction error, inception scores or visual inspection of the generated samples. These metrics are useful for justifying gains with respect to baselines when no specific application is in mind, but they may also be poor indicators of performance in practical downstream applications.
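For reference, the general metrics mentioned above are simple to compute once a model assigns log-densities to test points. The snippet below is an illustrative sketch (again using a fixed unit Gaussian as a stand-in for a deep model's `log_prob`) that converts an average test log-likelihood in nats into the bits-per-dimension figure commonly reported for image models.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 16                                  # data dimensionality
X_test = rng.normal(size=(500, D))

# Stand-in model: a unit Gaussian density (a real DGM would supply log_prob).
def log_prob(x):
    return -0.5 * np.sum(np.log(2 * np.pi) + x ** 2, axis=-1)

# Average test log-likelihood in nats, then bits per dimension.
nats = log_prob(X_test).mean()
bits_per_dim = -nats / (D * np.log(2))
print(f"avg log-likelihood: {nats:.2f} nats, {bits_per_dim:.3f} bits/dim")
# For standard-normal data under this stand-in model, roughly 2.05 bits/dim.
```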
At present, there is a gap between researchers developing new DGM-based methods and researchers applying such methods to practical problems (like the ones mentioned above). We aim to fill this gap by bringing these two communities together: connecting basic researchers working on DGMs with application areas where their methods could have significant practical impact, and connecting practitioners with the most recent methodological advances in deep generative modeling.
In a highly interactive format, we will outline the current frontiers of practical applications and methodological contributions in DGMs. We aim to use this workshop as an opportunity to establish a common language across diverse communities, to actively discuss new research problems, and to collect benchmark tasks against which novel data modeling methods can be evaluated. The program is a collection of invited talks alongside contributed posters. A panel discussion will bring together the perspectives and experiences of influential researchers and open the conversation to all participants. An expected outcome of this workshop is the interdisciplinary exchange of ideas and the initiation of collaborations.
NeurIPS 2021 and Registration
The Deep Generative Models and Downstream Applications Workshop is part of the 35th Annual Conference on Neural Information Processing Systems. Although originally planned to take place in Vancouver, NeurIPS 2021, and with it this workshop, will be held entirely online. Please use the main conference website to register for the workshop.
Call for Papers
The 2021 NeurIPS Workshop on Deep Generative Models and Downstream Applications is calling for contributions in the area of deep generative modeling, with a view to using these methods to solve real-world problems with practical impact. We invite researchers to submit methodological contributions on variational autoencoders, normalizing flows, autoregressive models, energy-based models, or GANs, as well as applications of these methods to specific real-world problems. Potential applications include but are not limited to the following areas:
- imputation and acquisition of missing data
- anomaly detection
- data denoising
- compressed sensing
- data compression
- image super-resolution
- molecule optimization
- interpretation of machine learning methods
- identifying causal structures in data
- generation of molecular structures, etc.
We invite submissions that either address new problems and provide insights or present progress on established problems. The workshop includes a poster session, which will be held online, giving the opportunity to present novel ideas and ongoing projects.
Submission Instructions
We expect most submissions to be around 4 pages in length. If your submission is longer than 4 pages, there is no need to move material to an appendix as long as the full submission is within 10 pages, not counting references. Submissions will be accepted as contributed talks or poster presentations. Extended abstracts should be submitted by Sep 17, 2021. Papers must be submitted through the OpenReview submission system. Final versions will be posted on the workshop website (they are archival but do not constitute a proceedings). Authors should use the workshop's style files.
Work that is presented at the main NeurIPS conference, or accepted for publication elsewhere, will not be accepted for presentation at the workshop.
Please submit via OpenReview.
Important Dates
- Submission deadline for workshop contributions: Sep 17, 2021, 23:59 Anywhere on Earth
- Extended submission deadline: Oct 3, 2021, 23:59 Anywhere on Earth
- Author notification: Oct 22, 2021
- Workshop: Dec 14, 2021
Organizing Committee and Contact
For questions, please contact ellis-admin@eng.cam.ac.uk
- Cheng Zhang
- Yingzhen Li
- José Miguel Hernández Lobato
- Weiwei Pan
- Yichuan Zhang
- Austin Tripp
- Oren Rippel