Rami Al-Rfou (Google Research)
Rami Al-Rfou is a senior research scientist at Google Research. Recently, he has focused on building robust self-supervised cross-lingual language models that require zero text preprocessing. The token-free techniques he introduced enable maximal sharing of representations across languages. Rami's interests extend to learning efficient representations for deep retrieval and learning from structured data such as graphs. He applies his deep learning expertise to assisted-writing products at Google, such as SmartReply and SmartCompose, where prediction quality intersects with human-computer interaction challenges to offer delightful experiences to users.
Attention & Language
Abstract: The recent introduction of attention mechanisms to deep learning brought significant gains in quality and interpretability to machine learning models. The impact on natural language processing has been transformational. With the introduction of models such as Transformer, T5, GPT-x, and BERT, machine understanding of language has accelerated. In this talk, we study the problems attention tries to address, the intuition behind self-attention mechanisms, and their connections to convolutional and graph neural networks, through the lens of language modeling. We follow up with a discussion of popular self-supervised language models and the scalability challenges facing attention-based models.
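The core operation behind the self-attention mechanisms discussed in the talk can be sketched in a few lines of NumPy. This is a minimal illustrative implementation of single-head scaled dot-product attention (as in the Transformer); the function name, variable names, and dimensions are chosen for illustration and are not taken from the talk.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention (illustrative sketch).

    X:           (seq_len, d_model) input token embeddings
    Wq, Wk, Wv:  (d_model, d_k) learned projection matrices
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Pairwise token affinities, scaled by sqrt(d_k) for stable gradients
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # Softmax over keys: each row becomes a distribution over all tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of all value vectors
    return weights @ V

# Toy example: 4 tokens with 8-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (4, 8): one contextualized vector per token
```

Because every token attends to every other token, the score matrix grows quadratically with sequence length, which is the scalability challenge the abstract alludes to.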