
NeuralGen 2019

Methods for Optimizing and Evaluating Neural Language Generation

The workshop will take place on June 6, 2019, co-located with NAACL 2019 in Minneapolis!

Email: neuralgen2019@gmail.com — Twitter: @NeuralGen & #NeuralGen2019

Overview

The goal of this workshop is to discuss new methods for language generation that address some of the recurring problems in existing language generation techniques (e.g., bland, repetitive language), as well as novel techniques for robustly evaluating and interpreting model output.

We are accepting papers in the following areas:

  • Novel architectures and new approaches to training models: beyond maximum likelihood training (e.g., risk loss, reinforcement learning objectives, variational approaches, adversarial training, pretrained discriminators, other novel loss functions); unsupervised, weakly supervised, and semi-supervised language generation; editing models; mixing neural and template-based generation; human-in-the-loop learning; beyond teacher forcing (beam search during training, non-autoregressive generation).
  • Evaluation: new automatic metrics for evaluating different characteristics of coherent language, evaluation using pretrained models, proposing better human evaluation strategies.
  • Generalization: transfer learning (unsupervised pre-training for generation, low-resource generation, domain adaptation), multi-task learning, model distillation.
  • Analysis: model analysis, interpretability and/or visualizations, error analysis of machine-generated language, analysis of evaluation metrics, benefits/drawbacks of different loss functions.

Program

Speakers

Yejin Choi University of Washington, Allen Institute for AI
Bio: Yejin Choi is an associate professor at the Paul G. Allen School of Computer Science & Engineering at the University of Washington and also a senior research manager at AI2 overseeing the project Mosaic. Her research interests include language grounding with vision, physical and social commonsense knowledge, language generation with long-term coherence, conversational AI, and AI for social good. She was a recipient of the Borg Early Career Award (BECA) in 2018, among the IEEE's AI Top 10 to Watch in 2015, a co-recipient of the Marr Prize at ICCV 2013, and a faculty advisor for the Sounding Board team that won the inaugural Alexa Prize Challenge in 2017. Her work on detecting deceptive reviews, predicting literary success, and interpreting bias and connotation has been featured by numerous media outlets including NBC News for New York, NPR Radio, New York Times, and Bloomberg Business Week. She received her Ph.D. in Computer Science from Cornell University.
Hal Daumé III University of Maryland, Microsoft Research
Bio: Hal Daumé III is a professor in Computer Science at the University of Maryland, College Park and Principal Researcher at Microsoft Research, New York City. His primary research interest is in developing new learning algorithms for prototypical problems that arise in the context of natural language processing, with a focus on interactive systems and fairness. He has received several "best of" awards, including at ACL 2018, NAACL 2016, NeurIPS 2015, CEAS 2011 and ECML 2009. He has been program chair for NAACL 2013 (and chair of its executive board), will be program chair for ICML 2020, and was an inaugural diversity and inclusion co-chair at NeurIPS 2018.
Tatsunori Hashimoto Stanford University
Bio: Tatsunori (Tatsu) Hashimoto is currently finishing a three-year post-doc in the Statistics and Computer Science departments at Stanford, supervised by Professors Percy Liang and John Duchi. Starting in 2020, he will be joining the Computer Science department at Stanford as an assistant professor. Tatsu holds a Ph.D. from MIT, where he studied connections between embeddings and random walks under Professors Tommi Jaakkola and David Gifford, and a B.S. in Statistics and Mathematics from Harvard. His work has been recognized at NeurIPS 2018 (Oral), ICML 2018 (Best Paper runner-up), and the NeurIPS 2014 Workshop on Networks (Best Student Paper).
He He New York University, Amazon Web Services
Bio: He He is a senior applied scientist at Amazon Web Services, Palo Alto. Starting Fall 2019, she will be joining New York University as an assistant professor. She received her PhD from the University of Maryland, College Park, followed by a post-doc at Stanford. She is broadly interested in machine learning and natural language processing. Her research focuses on building intelligent agents that process language in a changing environment and interact with people, recently focusing on controllable text generation and dialogue systems.
Graham Neubig Carnegie Mellon University
Bio: Graham Neubig is an assistant professor at the Language Technologies Institute of Carnegie Mellon University. His work focuses on natural language processing, specifically multi-lingual models that work in many different languages, and natural language interfaces that allow humans to communicate with computers in their own language. Much of this work relies on machine learning to create these systems from data, and he is also active in developing methods and algorithms for machine learning over natural language data. He publishes regularly in the top venues in natural language processing, machine learning, and speech, and his work occasionally wins awards such as best papers at EMNLP, EACL, and WNMT. He is also active in developing open-source software, and is the main developer of the DyNet neural network toolkit.
Alexander Rush Harvard University, Cornell Tech
Bio: Alexander "Sasha" Rush is an Associate Professor at Harvard University, where he studies natural language processing and machine learning. Sasha received his PhD from MIT supervised by Michael Collins and was a postdoc at Facebook NY under Yann LeCun. His group supports open-source development, running several projects including OpenNMT. His research has received several best paper awards at NLP conferences, an NSF Career award, and faculty awards from Google, Facebook, and others. He is currently the senior program chair of ICLR 2019.

Schedule

Thurs June 6
9:00-9:05    Opening Remarks
9:05-9:45    Invited Speaker: Graham Neubig -- What can Statistical Machine Translation teach Neural Text Generation about Optimization?
9:45-10:25   Invited Speaker: He He -- Towards Controllable Text Generation
10:25-10:45  Coffee Break
10:45-11:25  Invited Speaker: Tatsunori Hashimoto -- Defining and evaluating diversity in generation
11:25-12:05  Invited Speaker: Yejin Choi -- The Enigma of Neural Text Degeneration as the First Defense Against Neural Fake News
12:05-13:35  Lunch
13:35-14:15  Invited Speaker: Alexander Rush -- Pretraining Methods for Neural Generation
14:15-14:25  Best Paper Presentation: Bilingual-GAN: A Step Towards Parallel Text Generation
14:25-14:35  Best Paper Presentation: Designing a Symbolic Intermediate Representation for Neural Surface Realization
14:35-14:45  Remote Presentation: Jointly Measuring Diversity and Quality in Text Generation Models
14:45-16:15  Poster Session & Coffee Break
16:15-16:55  Invited Speaker: Hal Daumé III -- Out of Order! Flexible neural language generation
16:55-17:55  Panel
17:55-18:00  Closing Remarks

Accepted Papers

  • An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model
    Oluwatobi Olabiyi, Anish Khazane, Alan Salimov and Erik Mueller
  • DAL: Dual Adversarial Learning for Dialogue Generation
    Shaobo Cui, Rongzhong Lian, Di Jiang, Yuanfeng Song, Siqi Bao and Yong Jiang
  • How to Compare Summarizers without Target Length? Pitfalls, Solutions and Re-Examination of the Neural Summarization Literature
    Simeng Sun, Ori Shapira, Ido Dagan and Ani Nenkova
  • BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
    Alex Wang and Kyunghyun Cho
  • Neural Text Simplification in Low-Resource Conditions Using Weak Supervision
    Alessio Palmero Aprosio, Sara Tonelli, Marco Turchi, Matteo Negri and Mattia A. Di Gangi
  • Paraphrase Generation for Semi-Supervised Learning in NLU
    Eunah Cho, He Xie and William M. Campbell
  • Bilingual-GAN: A Step Towards Parallel Text Generation
    Ahmad Rashid, Alan Do Omri, Md Akmal Haidar, Qun Liu and Mehdi Rezagholizadeh
  • Designing a Symbolic Intermediate Representation for Neural Surface Realization
    Henry Elder, Jennifer Foster, James Barry and Alexander O’Connor
  • Neural Text Style Transfer via Denoising and Reranking
    Joseph Lee, Ziang Xie, Cindy Wang, Max Drach, Dan Jurafsky and Andrew Ng
  • Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings
    Sarik Ghazarian, Johnny Wei, Aram Galstyan and Nanyun Peng
  • Jointly Measuring Diversity and Quality in Text Generation Models
    Ehsan Montahaei, Danial Alihosseini and Mahdieh Soleymani Baghshah

Posters

  • An Adversarial Learning Framework For A Persona-Based Multi-Turn Dialogue Model
    Oluwatobi Olabiyi, Anish Khazane, Alan Salimov and Erik Mueller
  • DAL: Dual Adversarial Learning for Dialogue Generation
    Shaobo Cui, Rongzhong Lian, Di Jiang, Yuanfeng Song, Siqi Bao and Yong Jiang
  • Towards Coherent and Engaging Spoken Dialog Response Generation Using Automatic Conversation Evaluators
    Sanghyun Yi, Rahul Goel, Chandra Khatri, Tagyoung Chung, Behnam Hedayatnia, Anu Venkatesh, Raefer Gabriel and Dilek Hakkani-Tur
  • How to Compare Summarizers without Target Length? Pitfalls, Solutions and Re-Examination of the Neural Summarization Literature
    Simeng Sun, Ori Shapira, Ido Dagan and Ani Nenkova
  • BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model
    Alex Wang and Kyunghyun Cho
  • Neural Text Simplification in Low-Resource Conditions Using Weak Supervision
    Alessio Palmero Aprosio, Sara Tonelli, Marco Turchi, Matteo Negri and Mattia A. Di Gangi
  • Paraphrase Generation for Semi-Supervised Learning in NLU
    Eunah Cho, He Xie and William M. Campbell
  • Bilingual-GAN: A Step Towards Parallel Text Generation
    Ahmad Rashid, Alan Do Omri, Md Akmal Haidar, Qun Liu and Mehdi Rezagholizadeh
  • Learning Criteria and Evaluation Metrics for Textual Transfer between Non-Parallel Corpora
    Yuanzhe Pang and Kevin Gimpel
  • Dual Supervised Learning for Natural Language Understanding and Generation
    Shang-Yu Su, Chao-Wei Huang and Yun-Nung Chen
  • Designing a Symbolic Intermediate Representation for Neural Surface Realization
    Henry Elder, Jennifer Foster, James Barry and Alexander O’Connor
  • Insertion-based Decoding with Automatically Inferred Generation Order
    Jiatao Gu, Qi Liu and Kyunghyun Cho
  • Neural Text Style Transfer via Denoising and Reranking
    Joseph Lee, Ziang Xie, Cindy Wang, Max Drach, Dan Jurafsky and Andrew Ng
  • Generating Diverse Story Continuations with Controllable Semantics
    Lifu Tu, Xiaoan Ding, Dong Yu and Kevin Gimpel
  • Better Automatic Evaluation of Open-Domain Dialogue Systems with Contextualized Embeddings
    Sarik Ghazarian, Johnny Wei, Aram Galstyan and Nanyun Peng
  • Improved Zero-shot Neural Machine Translation via Ignoring Spurious Correlations
    Jiatao Gu, Yong Wang, Kyunghyun Cho and Victor O.K. Li
  • Jointly Measuring Diversity and Quality in Text Generation Models
    Ehsan Montahaei, Danial Alihosseini and Mahdieh Soleymani Baghshah

Organization

Steering Committee

Yejin Choi University of Washington
Dilek Hakkani-Tür Amazon Research
Dan Jurafsky Stanford University
Alexander Rush Harvard University

Organizing Committee

Antoine Bosselut University of Washington
Marjan Ghazvininejad Facebook AI Research
Srinivasan Iyer University of Washington
Urvashi Khandelwal Stanford University
Hannah Rashkin University of Washington
Asli Celikyilmaz Microsoft Research
Thomas Wolf HuggingFace