LPTMS Seminar: Marcelo Guzmán (UPenn)

When

12/11/2024
11:00 - 12:00

Event type

Séminaire du LPTMS

Learning functionality under physical constraints: how physics shapes the way machines learn


Marcelo Guzmán (University of Pennsylvania)


From biological systems to neuromorphic computing, learning is fundamentally constrained by physics. These constraints, ranging from optimization principles (e.g., energy minimization) to conservation laws and stochastic dynamics in the presence of noise, shape learning dynamics and learned functions in ways absent from artificial neural networks (ANNs). In this two-part talk, I explore how physical constraints influence learning by examining two paradigmatic physical learning models: tunable mechanical networks and self-learning resistor networks.

First, I will show that learning in these physical networks is a dual optimization problem. In resistor networks, for example, a cost is minimized with respect to the conductances while the dissipated power, the physical constraint, is minimized with respect to the voltages. This additional minimization couples cost and power, enabling inference of key network components through simple physical measurements. I will demonstrate that the high-curvature directions around the cost minima, which highlight the key functional components, are captured by the network’s physical susceptibilities. These susceptibilities, encoded in the softest modes of the power, are measurable and provide clear insight into the network’s functionality, suggesting an interpretability advantage over ANNs and a new framework for studying biological systems whose cost is unknown.
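To make the dual structure concrete, here is a minimal sketch in Python/JAX, assuming a toy four-node network, a made-up 0.3 V target, and illustrative step sizes (none of this is the speaker’s code): the inner level solves the physics by minimizing the dissipated power over the free node voltages, and the outer level trains the conductances by gradient descent on a cost, differentiating through the physics.

```python
import jax
import jax.numpy as jnp

# Hypothetical toy network, not the speaker's setup: 4 nodes, 5 edges.
# Node 0 is held at 1 V, node 3 is grounded; nodes 1 and 2 are free.
edges = jnp.array([[0, 1], [1, 2], [2, 3], [0, 2], [1, 3]])
free, fixed = jnp.array([1, 2]), jnp.array([0, 3])
v_fixed = jnp.array([1.0, 0.0])

def power(v_free, k):
    # Dissipated power P = 1/2 * sum_e k_e (V_i - V_j)^2.
    v = jnp.zeros(4).at[free].set(v_free).at[fixed].set(v_fixed)
    dv = v[edges[:, 0]] - v[edges[:, 1]]
    return 0.5 * jnp.sum(k * dv ** 2)

def physics(k):
    # Physical constraint: free voltages minimize P. Since P is quadratic,
    # grad_v P = 0 is a linear (Kirchhoff) system solved exactly here.
    v0 = jnp.zeros(2)
    H = jax.hessian(power)(v0, k)   # reduced Laplacian of the network
    b = -jax.grad(power)(v0, k)     # source term from the fixed voltages
    return jnp.linalg.solve(H, b)

def cost(k):
    # Learning objective: drive the voltage at node 2 to 0.3 V (made up).
    return 0.5 * (physics(k)[1] - 0.3) ** 2

k = jnp.ones(5)  # initial conductances
for _ in range(200):
    # Outer minimization: gradient descent on the conductances,
    # differentiating through the inner physics solve.
    k = jnp.clip(k - 2.0 * jax.grad(cost)(k), 0.01)
print(float(cost(k)))  # the cost decreases toward zero
```

In this sketch, the Hessian H of the power is the object whose soft modes would play the role of the measurable susceptibilities described above.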

Next, I will focus on the local dynamics of self-learning resistor networks in laboratory settings. Learning in these systems is the outcome of the collective behavior of individual components. While these networks are energy-efficient, they are sensitive to external noise and internal biases, two further physical constraints. I will show how noise and bias affect the learning dynamics when training on two periodically alternating tasks. Under ideal conditions, periodic training converges to a solution optimal for both tasks. In the presence of noise and bias, however, learning settles into limit cycles in the space of conductances. Based on theory and experiments, I uncover a complex interplay between the geometry of the solution space (linked to task complexity), bias, and noise, revealing distinct learning phases as a function of the training period. Finally, I will show that under certain conditions, bias can improve the networks’ learning capabilities.
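The alternating-task dynamics can be caricatured in a few lines of Python; this is a sketch under strong assumptions (two toy quadratic costs with a single joint minimum, hand-picked bias, noise, and period), not the laboratory system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two illustrative quadratic task costs on a 2-D "conductance" vector k:
# task A is solved on the line k0 + k1 = 0, task B on k0 - k1 = 0,
# so the joint solution space is their intersection at the origin.
def grad_task(k, task):
    if task == 0:
        return (k[0] + k[1]) * np.array([1.0, 1.0])   # grad of 1/2 (k0+k1)^2
    return (k[0] - k[1]) * np.array([1.0, -1.0])      # grad of 1/2 (k0-k1)^2

eta, T = 0.1, 50                # learning rate; steps spent on each task
bias = np.array([0.03, 0.01])   # constant update bias (imperfect components)
sigma = 0.01                    # amplitude of the external white noise
k = np.array([1.0, -0.5])
traj = []
for step in range(200 * T):
    task = (step // T) % 2      # tasks alternate with period 2T
    k = k - eta * grad_task(k, task) + bias + sigma * rng.normal(size=2)
    traj.append(k.copy())
late = np.asarray(traj)[-10 * T:]

# With bias = 0 and sigma = 0 the iterates converge to the joint solution;
# with them switched on, k orbits a limit cycle of period 2T instead: each
# task's unconstrained direction drifts under the bias until the other task
# returns and pulls it back.
print("mean:", late.mean(axis=0), "peak-to-peak:", np.ptp(late, axis=0))
```

In this caricature the cycle amplitude grows with the number of steps T spent on each task, since the unconstrained direction drifts for longer before being corrected, loosely echoing the period-dependent learning phases mentioned above.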
