Tess Smidt (Berkeley Lab)
Bio
Tess Smidt is the 2018 Alvarez Postdoctoral Fellow in Computing Sciences. Her current research interests include building neural networks from first principles for rich data types (such as those found in scientific data sets) and both accelerating existing techniques and creating new capabilities for computational chemistry and materials science.
Tess earned her PhD in physics from UC Berkeley in 2018, working with Professor Jeffrey B. Neaton. As a graduate student, she used quantum-mechanical calculations to understand and systematically design the geometry and corresponding electronic properties of atomic systems.
During her PhD, Tess spent a year as an intern on Google's Accelerated Science Team, where she developed a new type of convolutional neural network, Tensor Field Networks, that naturally handles the 3D geometry and properties of physical systems. These networks are now called Euclidean Neural Networks.
As an undergraduate at MIT, Tess engineered giant neutrino detectors in Professor Janet Conrad's group and created a permanent science-art installation on MIT's campus called the Cosmic Ray Chandeliers, which illuminate upon detecting cosmic-ray muons.
Lecture
Symmetry and Equivariance in Neural Networks
Abstract: Symmetry can occur in many forms. For physical systems in 3D, we are free to choose any coordinate system, so any physical property must transform predictably under elements of Euclidean symmetry (3D rotations, translations, and inversion). For algorithms involving the nodes and edges of graphs, we have symmetry under permutation of how the nodes and edges are ordered in computer memory. Unless coded otherwise, machine-learned models make no assumptions about the symmetry of a problem and will be sensitive to, e.g., an arbitrary choice of coordinate system or the ordering of nodes and edges in an array. One of the primary motivations of explicitly treating symmetry in machine learning models is to eliminate the need for data augmentation. Another motivation is that by encoding symmetry into a method, we get the guarantee that the model will give the "same" answer for an example and a "symmetrically equivalent" example, even if the model was not explicitly trained on the "symmetrically equivalent" example. In this lecture, we will discuss several ways to make machine learning models "symmetry-aware" (e.g., in the input representation, the loss function, or the model architecture). We will focus on how to handle 3D Euclidean symmetry and permutation symmetry in neural networks, describe unintuitive and beneficial consequences of these symmetries, and discuss how to set up training tasks that are compatible with your assumptions of symmetry.
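To make the coordinate-system sensitivity mentioned in the abstract concrete, below is a minimal sketch (not taken from the lecture) using only NumPy. The function names naive_energy and invariant_energy are hypothetical: the first acts on raw coordinates and changes when the point cloud is rotated, while the second acts only on pairwise distances and is unchanged under rotation, translation, inversion, and permutation of the points.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model" weights acting directly on raw 3D coordinates.
    W = rng.normal(size=(3, 1))

    def naive_energy(points):
        # Uses raw coordinates, so the output depends on the chosen coordinate system.
        return float(np.sum(points @ W))

    def invariant_energy(points):
        # Uses only pairwise distances, which are unchanged by rotations,
        # translations, inversion, and permutation of the points.
        diffs = points[:, None, :] - points[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        return float(np.sum(dists))

    # Random 3D point cloud, a random orthogonal transform (via QR), and a permutation.
    points = rng.normal(size=(5, 3))
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    rotated = points @ Q.T
    permuted = points[::-1]

    print(naive_energy(points), naive_energy(rotated))          # differ in general
    print(invariant_energy(points), invariant_energy(rotated))  # agree
    print(invariant_energy(points), invariant_energy(permuted)) # agree

Restricting a model to invariant features is only one of the strategies the abstract alludes to; equivariant architectures such as Euclidean Neural Networks instead let intermediate features transform predictably, so directional quantities (forces, vectors, tensors) can also be predicted.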