LA Probability Forum
The Mondrian process in machine learning is a recursive random partition of space with axis-aligned cuts, used to build random forests and approximations of the Laplace kernel. The construction allows for efficient online algorithms, but the restriction to axis-aligned cuts fails to capture dependencies between features. By viewing the Mondrian process as a special case of the stable under iteration (STIT) tessellation process from stochastic geometry, we resolve open questions about generalizing the cut directions. We use the theory of stationary random tessellations to show that STIT random forests achieve minimax rates for estimating Lipschitz and C^2 functions, and that STIT random features approximate a large class of stationary kernels. This work opens many new questions at the intersection of stochastic geometry and machine learning. Based on joint work with Ngoc Mai Tran.
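
For readers unfamiliar with the construction, the following minimal Python sketch (illustrative only, not code from the talk; the name sample_mondrian is hypothetical) implements the standard Mondrian sampler on a box: cuts arrive at exponential waiting times with rate equal to the box's linear dimension, the cut axis is chosen with probability proportional to side length, and both halves are recursed on with the remaining budget.

    import random

    def sample_mondrian(box, budget):
        # box: list of (low, high) intervals, one per feature; budget: lifetime.
        lengths = [hi - lo for lo, hi in box]
        # The next cut arrives after an Exp(linear dimension) waiting time.
        wait = random.expovariate(sum(lengths))
        if wait > budget:
            return {"leaf": True, "box": box}  # no cut before the budget runs out
        # Cut axis chosen with probability proportional to side length,
        # cut location uniform along that side: the axis-aligned rule.
        d = random.choices(range(len(box)), weights=lengths)[0]
        lo, hi = box[d]
        cut = random.uniform(lo, hi)
        left = [iv if i != d else (lo, cut) for i, iv in enumerate(box)]
        right = [iv if i != d else (cut, hi) for i, iv in enumerate(box)]
        return {"leaf": False, "dim": d, "cut": cut,
                "children": (sample_mondrian(left, budget - wait),
                             sample_mondrian(right, budget - wait))}

    tree = sample_mondrian([(0.0, 1.0), (0.0, 1.0)], budget=3.0)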
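
The Laplace kernel connection mentioned above rests on the fact that two points x and y fall in the same cell of a Mondrian with lifetime lambda with probability exp(-lambda * ||x - y||_1). A hedged Monte Carlo sketch of this random-feature view, reusing sample_mondrian from above (same_cell and mondrian_kernel are again hypothetical names):

    import math

    def same_cell(tree, x, y):
        # Follow the cuts; x and y share a cell iff no cut separates them.
        while not tree["leaf"]:
            side_x = x[tree["dim"]] <= tree["cut"]
            side_y = y[tree["dim"]] <= tree["cut"]
            if side_x != side_y:
                return False
            tree = tree["children"][0 if side_x else 1]
        return True

    def mondrian_kernel(x, y, budget, n_trees=2000):
        # Fraction of independent Mondrian partitions in which x and y
        # share a cell; its expectation is exp(-budget * ||x - y||_1).
        box = [(0.0, 1.0)] * len(x)
        return sum(same_cell(sample_mondrian(box, budget), x, y)
                   for _ in range(n_trees)) / n_trees

    x, y = (0.2, 0.4), (0.5, 0.4)
    print(mondrian_kernel(x, y, budget=2.0))  # roughly exp(-2.0 * 0.3)
    print(math.exp(-2.0 * 0.3))               # 0.5488...

Replacing the axis-aligned cut rule with cuts in general directions is exactly the step from the Mondrian to a STIT tessellation, which is what lets the resulting random features reach stationary kernels beyond the Laplace kernel.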