Understanding deep learning through over-parameterized neural networks


Speaker:    Wei Huang, RIKEN AIP

Time:         10:30-11:30, Oct. 28

Location:   Tencent Meeting

    Meeting ID:  234-209-496

     Link:  https://meeting.tencent.com/dm/Qpu9elXja8Fl

Host:          Ye Shi

 

Abstract:

The learning dynamics of neural networks trained by gradient descent are captured by the so-called neural tangent kernel (NTK) in the infinite-width limit. The NTK has become a powerful tool for understanding the optimization and generalization of over-parameterized networks. This talk will introduce the foundations of the NTK, together with its applications to deep graph networks and active learning.
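
For background: the NTK referred to above is the kernel introduced by Jacot et al. (2018). The sketch below states its standard definition and the resulting kernel dynamics; the notation is ours and is not taken from the talk itself.

% Empirical NTK of a scalar-output network f(x; theta) with parameters theta_t:
\Theta_t(x, x') = \big\langle \nabla_\theta f(x; \theta_t),\ \nabla_\theta f(x'; \theta_t) \big\rangle

% Under gradient flow on the squared loss
% L(\theta) = \tfrac{1}{2} \sum_{i=1}^{n} \big( f(x_i; \theta) - y_i \big)^2,
% the network's predictions evolve as kernel dynamics over the training set:
\frac{\mathrm{d}}{\mathrm{d}t}\, f_t(x) = -\sum_{i=1}^{n} \Theta_t(x, x_i)\, \big( f_t(x_i) - y_i \big)

% In the infinite-width limit, \Theta_t stays at its initialization value \Theta_0
% throughout training, so learning reduces to kernel regression with the fixed
% kernel \Theta_0; this is the sense in which the NTK "captures" the dynamics.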

 

Bio:

Wei Huang received his Ph.D. in computer science from the University of Technology Sydney in 2021. He is currently a postdoctoral researcher at RIKEN AIP, Japan, working with Prof. Taiji Suzuki. His research has been published in top conferences including NeurIPS, ICLR, and IJCAI. His research interests include deep learning theory, graph neural networks, and contrastive learning.