CMX Lunch Seminar
Graph neural networks (GNNs) have become powerful tools for processing graph-based data in a variety of domains. A desirable property of a GNN is transferability: a network trained on one graph can be applied to a different graph without retraining, while retaining its accuracy. A recent approach to analyzing the transferability of GNNs uses graphons, symmetric measurable functions that arise as limits of sequences of large dense graphs. In this talk, I will introduce the basic notions of graphons and GNNs, and present recent rigorous analytical results on transferability that combine the two concepts. In particular, I will show that these results cover transferability between deterministic weighted graphs as well as simple random graphs, and that they avoid the curse-of-dimensionality issues that arise in other GNN analyses. The proposed GNN architectures offer a practical way to handle graph data of varying sizes while maintaining performance guarantees without extensive retraining.
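To make the graphon picture concrete, the following is a minimal illustrative sketch (not taken from the talk): it samples a simple random graph from an assumed toy graphon W(x, y) = xy and applies one size-normalized message-passing step, the kind of operation whose behavior stabilizes as the sampled graphs grow. The function names and the choice of W are hypothetical, for illustration only.

```python
import numpy as np

# Illustrative sketch, not the speaker's method: sample an n-node simple
# random graph from a toy graphon W(x, y) = x * y, then apply one linear
# "message-passing" step A x / n. Normalizing by the graph size n is what
# lets the same filter be reused across graphs of different sizes.

def sample_graph(W, n, rng):
    """Sample an n-node simple random graph from graphon W:
    latent u_i ~ Uniform[0, 1]; edge (i, j) present w.p. W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])           # pairwise edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                       # upper triangle, no self-loops
    return A + A.T, u                       # symmetric adjacency matrix

def gnn_layer(A, x):
    """One size-normalized aggregation step: (A x) / n."""
    return A @ x / A.shape[0]

W = lambda x, y: x * y                      # assumed toy graphon
rng = np.random.default_rng(0)
A, u = sample_graph(W, 500, rng)
y = gnn_layer(A, u)                         # node signal = latent positions
# For this W, (A u / n)_i concentrates near u_i * E[u^2] = u_i / 3 as n grows,
# which is the transferability phenomenon in miniature: the layer's output
# depends on the underlying graphon, not on the particular sampled graph.
```

Rerunning the last four lines with a different n (or a fresh random seed) yields nearly the same output values at comparable latent positions, which is the sense in which a filter trained on one sampled graph transfers to another.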