Computer Science Thesis Oral

Location:
In Person and Virtual (ET) - Newell-Simon 3305 and Zoom

Speaker:
FILIPE DE AVILA BELBUTE PERES, Ph.D. Candidate, Computer Science Department, Carnegie Mellon University
https://filipeabperes.github.io/

Combining Deep Learning and Physics Models for Efficient and Robust Architectures

Over the last decade, deep learning has achieved success in diverse domains, becoming one of the most widely employed approaches in artificial intelligence. These successes have also motivated its application in physics domains, such as solving differential equations or predicting the motion of objects and the behavior of fluids.

The strengths of deep learning methods are their flexibility, which allows complex dynamics to be learned directly from data, and their proven track record on unstructured, high-dimensional domains (such as image and video processing). However, deep learning approaches also face challenges, such as difficulty generalizing outside the training domain, large data requirements, and costly training. Traditional physics models, on the other hand, have been developed to be universally valid within their domain of application (i.e., generalizable) and require little to no data for modeling.

In this proposal, we introduce methods for leveraging the strengths of both types of approaches by combining deep learning and physics models. This allows for the development of deep learning architectures that are more data-efficient and generalize more robustly than their standard, “physics-unaware” counterparts.

The methods presented fall under two broad categories: differentiable physics layers and physics-informed learning. Differentiable physics layers allow us to embed full physics simulators into deep learning models alongside “traditional” layers, fully constraining their outputs to match the underlying dynamics. Because these simulations are fully differentiable, we retain the ability to train the systems end-to-end. We present applications of such methods to problems in rigid body and fluid dynamics.
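
To make the layer idea concrete, here is a minimal sketch in PyTorch (an illustrative assumption; the proposal's actual rigid-body and fluid simulators are far richer). A toy differentiable spring simulator is embedded as a layer after a small network that predicts physical parameters, and gradients flow through the rollout during end-to-end training; every name below is hypothetical.

```python
import torch

def simulate(x0, v0, k, c, dt=0.01, steps=100):
    """Differentiable explicit-Euler rollout of a damped spring (unit mass).
    Because every operation is a torch op, gradients flow from the
    trajectory back to the physical parameters k (stiffness) and c (damping)."""
    xs, x, v = [], x0, v0
    for _ in range(steps):
        a = -k * x - c * v        # F = -kx - cv
        v = v + dt * a
        x = x + dt * v
        xs.append(x)
    return torch.stack(xs)

# A small network predicts physical parameters from an observation;
# the simulator layer then constrains the output trajectory to obey
# the assumed dynamics.
param_net = torch.nn.Sequential(
    torch.nn.Linear(4, 32), torch.nn.Tanh(), torch.nn.Linear(32, 2),
    torch.nn.Softplus(),      # keep k and c positive
)

obs = torch.randn(4)          # hypothetical observed features
target = simulate(torch.tensor(1.0), torch.tensor(0.0),
                  k=torch.tensor(2.0), c=torch.tensor(0.1))  # synthetic "ground truth"

opt = torch.optim.Adam(param_net.parameters(), lr=1e-2)
for step in range(200):
    k, c = param_net(obs)
    pred = simulate(torch.tensor(1.0), torch.tensor(0.0), k, c)
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()           # gradients pass through the simulator layer
    opt.step()
```

The key design point is that the simulator sits inside the computation graph like any other layer, so supervision on its outputs trains the upstream network.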

Physics-informed learning methods provide information about the underlying physics in the form of loss terms defined by the relevant differential equations, which act as regularizers pushing the model’s outputs to be physically consistent. We present methods that address common shortcomings of such approaches. First, we present a method that allows efficient learning of parameterized systems of differential equations. Then, we present a neural network architecture with sinusoidal activations that addresses the issue of spectral bias in physics-informed learning, and we show, through theoretical and empirical analyses, how to tune these activations to optimize performance when solving differential equations.
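
As a hedged illustration of this recipe (again in PyTorch, and not the thesis's exact architecture, initialization, or tuning results), the sketch below pairs a network with sinusoidal activations, whose frequency factor w0 stands in for the kind of hyperparameter the tuning analysis concerns, with a physics-informed residual loss for the toy ODE u'(t) = -u(t), u(0) = 1.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by sin(w0 * x). The factor w0 shifts the
    frequencies the network represents easily, which is one handle on
    spectral bias. (SIREN-style networks typically also use a specialized
    weight initialization, omitted in this sketch.)"""
    def __init__(self, in_f, out_f, w0=6.0):
        super().__init__()
        self.linear = nn.Linear(in_f, out_f)
        self.w0 = w0

    def forward(self, x):
        return torch.sin(self.w0 * self.linear(x))

net = nn.Sequential(SineLayer(1, 64), SineLayer(64, 64), nn.Linear(64, 1))

def residual_loss(net, t, lam=1.0):
    """Physics-informed loss for u'(t) = -lam * u(t): the differential
    equation enters as a penalty on the residual, acting as a regularizer
    that pushes outputs toward physical consistency."""
    t = t.requires_grad_(True)
    u = net(t)
    du_dt, = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)
    return ((du_dt + lam * u) ** 2).mean()

opt = torch.optim.Adam(net.parameters(), lr=1e-4)
t_col = torch.rand(256, 1)    # collocation points in [0, 1]
t0 = torch.zeros(1, 1)
for step in range(2000):
    # PDE/ODE residual at collocation points plus the initial condition u(0) = 1
    loss = residual_loss(net, t_col) + ((net(t0) - 1.0) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

No trajectory data is required here: the supervision comes entirely from the equation's residual and the initial condition, which is what makes the approach data-efficient.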

Thesis Committee:

J. Zico Kolter (Chair)

Zachary Manchester

Katerina Fragkiadaki

Venkat Viswanathan

Fei Sha (Google Research)

In Person and Zoom Participation. See announcement.

