The focus of our group is to understand how biological and artificial brains perform neural computations. To do so, we bring together Physics and Mathematics to uncover the principles behind the emergence of patterns of neural activity and the computations these patterns support. Our goal is to advance our understanding of how artificial and biological neural networks organize their activity to support computation, function, and behaviour, and we pursue it along several complementary lines of research.
Linking network connectivity to resulting dynamics
The pattern of connections in a network plays a central role in its resulting dynamics, yet obtaining a rigorous mathematical description of this link remains challenging. To address this, we focus on oscillator networks, for which we have recently provided a precise mathematical link between connectivity and dynamics. We are particularly interested in predicting transient and emergent dynamics as a function of network connectivity, and in developing methods to control them. With this, we hope to uncover the network mechanisms behind pattern formation and then apply this mathematical framework to study biological and artificial brains.
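As a minimal illustration of how a coupling matrix shapes collective dynamics, the toy model below simulates a Kuramoto-type oscillator network. All choices here (network size, sparsity, coupling strength, frequency spread) are hypothetical assumptions for the sketch, not the lab's published framework.

```python
import numpy as np

# Sketch: Kuramoto-type phase oscillators, where the coupling matrix A
# encodes connectivity and each phase evolves under
#   dtheta_i/dt = omega_i + sum_j A_ij * sin(theta_j - theta_i).

rng = np.random.default_rng(0)
N = 50
omega = rng.normal(0.0, 0.1, N)            # intrinsic frequencies (assumed spread)
A = (rng.random((N, N)) < 0.2) * 0.5 / N   # sparse random coupling (hypothetical choice)

def simulate(theta0, A, omega, dt=0.01, steps=5000):
    """Forward-Euler integration of the Kuramoto dynamics."""
    theta = theta0.copy()
    for _ in range(steps):
        # pairwise phase differences: diff[i, j] = theta_j - theta_i
        diff = theta[None, :] - theta[:, None]
        theta += dt * (omega + (A * np.sin(diff)).sum(axis=1))
    return theta

theta_final = simulate(rng.uniform(0, 2 * np.pi, N), A, omega)

# The order parameter r in [0, 1] quantifies how synchronized the network is;
# changing A changes r, making the connectivity-dynamics link explicit.
r = np.abs(np.exp(1j * theta_final).mean())
```

Rewiring or reweighting `A` and re-measuring `r` shows directly how connectivity shapes the emergent collective state.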
Emergence of spatiotemporal dynamics in brain networks
What drives the emergence of organized neural activity in the brain? Understanding the link between structural connectivity, neural dynamics, and brain function remains a key challenge in neuroscience. In our lab, we study how the brain's physical networks, combined with the intrinsic delays in their interactions, shape the resulting neural dynamics and how these dynamics support brain function. To do so, we build on our recent analytical approach to networks with delayed interactions. Our ultimate goal is twofold: to understand how the structural connections in the brain support behaviour and function, and to create personalized brain models for individual subjects and patients, opening potential applications to clinical studies and a path towards studying neurological health.
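To make the role of interaction delays concrete, here is a minimal sketch of a hypothetical pair of delay-coupled phase oscillators; the parameters and the constant-history assumption are illustrative choices, not the lab's analytical approach.

```python
import numpy as np

# Sketch: two phase oscillators coupled with a transmission delay tau,
#   dtheta_i/dt = omega + k * sin(theta_j(t - tau) - theta_i(t)),
# integrated with forward Euler and a history buffer.

def simulate_delayed(omega=1.0, k=0.5, tau=0.3, dt=0.001, T=20.0):
    steps = int(T / dt)
    d = int(tau / dt)                 # delay expressed in time steps
    theta = np.zeros((steps + 1, 2))
    theta[0] = [0.0, 1.0]             # hypothetical initial phases
    for t in range(steps):
        # before t = tau, use the earliest available state as history
        delayed = theta[max(t - d, 0)]
        theta[t + 1, 0] = theta[t, 0] + dt * (omega + k * np.sin(delayed[1] - theta[t, 0]))
        theta[t + 1, 1] = theta[t, 1] + dt * (omega + k * np.sin(delayed[0] - theta[t, 1]))
    return theta

theta = simulate_delayed()
# The phase relation the pair settles into depends on the delay tau.
phase_lag = (theta[-1, 0] - theta[-1, 1]) % (2 * np.pi)
```

Varying `tau` and re-reading `phase_lag` illustrates how delays alone can reshape the spatiotemporal pattern a network converges to.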
Neural computations in recurrent networks
We are interested in the role of spatiotemporal dynamics in recurrent neural networks and in how these dynamics support the computations such networks perform. Recent results suggest that simple principles from Mathematics and Physics can be used to build neural networks with a high degree of explainability, in which the entire computation can be expressed as a single mathematical equation. Our main goal is to advance our understanding of how these systems operate and to "open the black box" of neural networks. This, in turn, could not only reduce the substantial cost of training these systems, but also offer a generalized perspective on computation in artificial and biological neural networks.
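As a toy illustration of what "the entire computation in a single equation" can mean, the sketch below uses a hypothetical linear recurrent network, where the step-by-step dynamics collapse into one closed-form expression. This is an assumed simplification for illustration, not the lab's specific model.

```python
import numpy as np

# Sketch: a linear recurrent network h[t+1] = W @ h[t].
# Its full computation over T steps is the single equation h[T] = W^T @ h[0],
# making the "black box" fully transparent.

rng = np.random.default_rng(1)
N = 10
W = rng.normal(0, 1 / np.sqrt(N), (N, N))  # random recurrent weights (assumed scaling)
h0 = rng.normal(0, 1, N)                   # initial network state

# step-by-step simulation of the recurrent dynamics
h = h0.copy()
for _ in range(20):
    h = W @ h

# closed-form "white box" expression of the same computation
h_closed = np.linalg.matrix_power(W, 20) @ h0

assert np.allclose(h, h_closed)
```

The closed form is what makes such a network explainable: every output can be traced back to the weights and the input through one explicit equation, with no iterative simulation required.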