Uniform convergence rates for Lipschitz learning on graphs

Bungert L, Calder J, Roith T (2022)


Publication Type: Journal article

Publication year: 2022

Journal: IMA Journal of Numerical Analysis

DOI: 10.1093/imanum/drac048

Abstract

Lipschitz learning is a graph-based semi-supervised learning method in which labels are extended from a labeled to an unlabeled data set by solving the infinity Laplace equation on a weighted graph. In this work we prove uniform convergence rates for solutions of the graph infinity Laplace equation as the number of vertices grows to infinity. Their continuum limits are absolutely minimizing Lipschitz extensions (AMLEs) with respect to the geodesic metric of the domain from which the graph vertices are sampled. We work under very general assumptions on the graph weights, the set of labeled vertices and the continuum domain. Our main contribution is that we obtain quantitative convergence rates even for very sparsely connected graphs, as they typically appear in applications such as semi-supervised learning. In particular, our framework allows graph bandwidths down to the connectivity radius. To prove this, we first establish a quantitative convergence statement for graph distance functions to geodesic distance functions in the continuum. Using the 'comparison with distance functions' principle, we then transfer these convergence statements to infinity harmonic functions and AMLEs.
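
For context, the graph infinity Laplace equation referred to in the abstract requires, at every unlabeled vertex x,

    max_{y~x} w_xy (u(y) - u(x)) + min_{y~x} w_xy (u(y) - u(x)) = 0,

with u fixed to the given label values g on the labeled vertices. The sketch below solves this equation with a Jacobi-type sweep that updates each unlabeled vertex to the unique root of the equation above, found by bisection. This is a minimal illustration under stated assumptions (dense symmetric weight matrix, connected graph), not the authors' implementation; the function name and iteration parameters are hypothetical.

```python
import numpy as np

def lipschitz_learning(W, labeled_idx, g, tol=1e-8, max_iter=10_000):
    """Jacobi-type iteration for the graph infinity Laplace equation.

    A minimal sketch (not the authors' code). Assumes:
      * W is a dense symmetric (n, n) weight matrix with W[i, j] > 0
        iff vertices i and j are connected,
      * the graph is connected, so every unlabeled vertex has neighbors.

    W           : (n, n) ndarray of nonnegative edge weights
    labeled_idx : indices of the labeled vertices
    g           : label values, g[k] belongs to vertex labeled_idx[k]
    """
    n = W.shape[0]
    labeled = np.zeros(n, dtype=bool)
    labeled[labeled_idx] = True
    u = np.zeros(n)
    u[labeled_idx] = g
    nbrs = [np.nonzero(W[i])[0] for i in range(n)]  # neighbor lists

    for _ in range(max_iter):
        u_new = u.copy()
        for i in range(n):
            if labeled[i]:
                continue  # labels stay fixed (Dirichlet condition)
            w, v = W[i, nbrs[i]], u[nbrs[i]]
            # Solve F(t) = max_j w_j (v_j - t) + min_j w_j (v_j - t) = 0.
            # F is strictly decreasing with F(min v) >= 0 >= F(max v),
            # so bisection on [min v, max v] finds the unique root.
            lo, hi = v.min(), v.max()
            for _ in range(50):
                t = 0.5 * (lo + hi)
                d = w * (v - t)
                if d.max() + d.min() > 0.0:
                    lo = t
                else:
                    hi = t
            u_new[i] = 0.5 * (lo + hi)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new
        u = u_new
    return u
```

On a uniformly weighted graph the inner bisection collapses to the midpoint update u(x) <- (max_{y~x} u(y) + min_{y~x} u(y)) / 2, the classical iteration for discretely infinity harmonic functions.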

How to cite

APA:

Bungert, L., Calder, J., & Roith, T. (2022). Uniform convergence rates for Lipschitz learning on graphs. IMA Journal of Numerical Analysis. https://dx.doi.org/10.1093/imanum/drac048

MLA:

Bungert, Leon, Jeff Calder, and Tim Roith. "Uniform convergence rates for Lipschitz learning on graphs." IMA Journal of Numerical Analysis (2022).
