Third International Workshop on Symbolic-Neural Learning (SNL-2019)

July 11-12, 2019
Miraikan Hall, Miraikan 7F, Odaiba (Tokyo, Japan)

Keynote Talks:

  1. July 11 (Thursday), 13:10-14:10

    Noah Smith (University of Washington/Allen Institute for Artificial Intelligence)

    Rational Recurrences for Empirical Natural Language Processing

    Abstract:
    Despite their often-discussed advantages, deep learning methods largely disregard theories of both learning and language. This makes their prediction behavior hard to understand and explain. In this talk, I will present a path toward more understandable (but still "deep") natural language processing models, without sacrificing accuracy. Rational recurrences comprise a family of recurrent neural networks that obey a particular set of rules about how to calculate hidden states, and hence correspond to parallelized weighted finite-state pattern matching. Many recently introduced models turn out to be members of this family, and the weighted finite-state view lets us derive some new ones. I'll introduce rational RNNs and present some of the ways we have used them in NLP. My collaborators on this work include Jesse Dodge, Hao Peng, Roy Schwartz, and Sam Thomson.

    Bio:
Noah Smith is a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, as well as a Senior Research Manager at the Allen Institute for Artificial Intelligence. Previously, he was an Associate Professor of Language Technologies and Machine Learning in the School of Computer Science at Carnegie Mellon University. He received his Ph.D. in Computer Science from Johns Hopkins University in 2006 and his B.S. in Computer Science and B.A. in Linguistics from the University of Maryland in 2001. His research interests include statistical natural language processing, machine learning, and applications of natural language processing, especially to the social sciences. His book, Linguistic Structure Prediction, covers many of these topics. He has served on the editorial boards of the journals Computational Linguistics (2009-2011), Journal of Artificial Intelligence Research (2011-present), and Transactions of the Association for Computational Linguistics (2012-present), as the secretary-treasurer of SIGDAT (2012-2015 and 2018-present), and as program co-chair of ACL 2016. Alumni of his research group, Noah's ARK, are international leaders in NLP in academia and industry; in 2017 UW's Sounding Board team won the inaugural Amazon Alexa Prize. Smith's work has been recognized with a UW Innovation Award (2016-2018), a Finmeccanica career development chair at CMU (2011-2014), an NSF CAREER award (2011-2016), a Hertz Foundation graduate fellowship (2001-2006), numerous best paper nominations and awards, and coverage by NPR, BBC, CBC, the New York Times, the Washington Post, and Time.


  2. July 12 (Friday), 10:00-11:00

    Kristina Toutanova (Google)

    Learning and evaluating generalizable vector space representations of texts

    Abstract:
I will talk about our recent and forthcoming work on pre-training vector space representations of texts of multiple granularities and in different contexts. I will present evaluations on end-user tasks and an analysis of the component representations on probing tasks.
    Finally, I will motivate the need for new kinds of textual representations and ways to measure their ability to generalize. This includes work by Jacob Devlin, Ming-Wei Chang, Kenton Lee, Lajanugen Logeswaran, Ian Tenney, Dipanjan Das, and Ellie Pavlick.
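
    As a concrete illustration of the probing methodology mentioned above (a generic sketch, not the speaker's code; the encoder is a stand-in random projection so the example runs on its own), a linear probe is trained on frozen text representations, and its accuracy is read as evidence of what those representations capture:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)

      def embed(texts, dim=64):
          # Stand-in for a frozen pre-trained encoder (assumption): any
          # model mapping text to fixed-size vectors could be used here.
          return rng.normal(size=(len(texts), dim))

      texts = [f"example sentence {i}" for i in range(200)]
      labels = rng.integers(0, 2, size=200)          # toy probing labels

      X_tr, X_te, y_tr, y_te = train_test_split(embed(texts), labels,
                                                random_state=0)

      # The probe itself: a linear classifier on frozen representations.
      probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("probe accuracy:", probe.score(X_te, y_te))

    With a real encoder substituted for embed, probe accuracy above the majority-class baseline suggests the probed property is linearly recoverable from the representations.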

    Bio:
Kristina Toutanova is a research scientist on the Language team at Google Research in Seattle and an affiliate faculty member at the University of Washington. She obtained her Ph.D. from Stanford University with Christopher Manning. Prior to joining Google in 2017, she was a researcher at Microsoft Research, Redmond. Kristina focuses on modeling the structure of natural language using machine learning, most recently in the areas of representation learning, question answering, information retrieval, semantic parsing, and knowledge base completion.
    Kristina is a past co-editor-in-chief of TACL and was a program co-chair for ACL 2014.


  3. July 12 (Friday), 14:30-15:30

    Maximilian Nickel (Facebook)

    Representation Learning in Symbolic Domains

    Abstract:
    Many domains such as natural language understanding, information networks, bioinformatics, and the Web are characterized by problems involving complex relational structures and large amounts of uncertainty. Representation learning has become an invaluable approach for making statistical inferences in this setting by allowing us to learn high-quality models on a large scale. However, while complex relational data often exhibits latent hierarchical structures, current embedding methods do not account for this property. This leads not only to inefficient representations but also to a reduced interpretability of the embeddings.

In the first part of this talk, I will discuss methods for learning distributed representations of relational data such as graphs and text. I will show how these models are related to classic models of associative memory and that a simple change in the training procedure allows them to capture rule-like patterns in relational data. In the second part of the talk, I will then introduce a novel approach for learning hierarchical representations by embedding relations into hyperbolic space. I will discuss how the underlying hyperbolic geometry allows us to learn parsimonious representations that simultaneously capture hierarchy and similarity. Furthermore, I will show that hyperbolic embeddings can significantly outperform Euclidean embeddings on data with latent hierarchies, both in terms of representation capacity and in terms of generalization ability.
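
    For a concrete sense of the geometry involved, the sketch below (illustrative only; the points are made up) computes geodesic distances in the Poincaré ball model used for hyperbolic embeddings (Nickel and Kiela, 2017). Distances blow up near the boundary of the ball, which is what lets trees embed with low distortion: roots sit near the origin, leaves near the boundary.

      import numpy as np

      def poincare_distance(u, v, eps=1e-9):
          """Geodesic distance in the Poincare ball (points have norm < 1)."""
          sq = np.sum((u - v) ** 2)
          alpha = 1.0 - np.sum(u ** 2)
          beta = 1.0 - np.sum(v ** 2)
          return np.arccosh(1.0 + 2.0 * sq / max(alpha * beta, eps))

      root = np.array([0.0, 0.0])    # "root" near the origin
      child = np.array([0.5, 0.0])
      leaf = np.array([0.95, 0.0])   # "leaf" near the boundary

      print(poincare_distance(root, child))  # ~1.10
      print(poincare_distance(child, leaf))  # ~2.57, versus Euclidean 0.45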

    Bio:
Maximilian Nickel is a research scientist at Facebook AI Research in New York. Before joining FAIR, he was a postdoctoral fellow at MIT, where he worked with the Laboratory for Computational and Statistical Learning and the Center for Brains, Minds and Machines. In 2013, he received his Ph.D. summa cum laude from Ludwig Maximilian University of Munich. From 2010 to 2013, he worked as a research assistant at Siemens Corporate Technology. His research centers on geometric methods for learning and reasoning with relational knowledge representations and their applications in artificial intelligence and network science.