Special CMX Seminar
Recent years have witnessed the growing importance of non-convex methods in many industrial and practical problems. Many low-rank problems can be solved by reformulating them as non-convex optimization problems. Surprisingly, these optimization problems usually have no spurious local minima, and all saddle points are strict under the Euclidean parameterization. Although dimension-free polynomial convergence can be guaranteed for many such problems, numerical experiments demonstrate much better performance than the current theory predicts.

In contrast to previous non-convex methods (which carry weaker theoretical convergence guarantees) and convex relaxations (which carry heavier computational cost), this talk presents a new global non-convex optimization framework for solving a general inverse problem for low-rank matrices in the Riemannian manifold setting. Given some random measurements of a low-rank matrix, how does minimizing a least-squares loss via Riemannian gradient descent (in a computationally light version) with random initialization behave? We give a rigorous mathematical analysis of both the asymptotic convergence behavior and the fast convergence rate, under isometry or weaker-isometry conditions. More specifically, in the isometry case, the manifold of rank-r matrices (r << n) consists of 2^r branches. We will show that a random initialization falls into an intrinsic branch with probability 1, and that O(log n + log(1/epsilon)) iterations suffice to produce an epsilon-accurate solution. Similar results also hold for low-rank matrix recovery from random measurements under mild conditions (the weaker-isometry case). Potential applications include, but are not limited to, low-rank matrix problems such as matrix sensing, matrix completion, low-rank Hankel matrix recovery, phase retrieval, robust PCA, and non-convex flows.
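To make the setting concrete, here is a minimal sketch of the matrix-sensing instance described above: gradient descent on a least-squares loss over random Gaussian measurements, with each iterate retracted to the rank-r manifold by a truncated SVD. This is an illustrative toy, not the speaker's exact algorithm; the Gaussian sensing model, the SVD-based retraction, and all parameter values (n, r, m, the step size) are assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400                      # matrix size, rank, number of measurements

# Ground-truth rank-r matrix M and random Gaussian sensing matrices A_k
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
b = np.einsum('kij,ij->k', A, M)          # measurements b_k = <A_k, M>

def svd_retract(X, r):
    """Retract X onto the rank-r manifold via truncated SVD."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

X = svd_retract(rng.standard_normal((n, n)), r)   # random initialization
step = 0.5
for _ in range(300):
    resid = np.einsum('kij,ij->k', A, X) - b      # residuals <A_k, X> - b_k
    grad = np.einsum('k,kij->ij', resid, A)       # Euclidean gradient of the loss
    X = svd_retract(X - step * grad, r)           # gradient step + retraction

print(np.linalg.norm(X - M) / np.linalg.norm(M))  # relative error; small on success
```

The truncated SVD here plays the role of the projection-based retraction onto the fixed-rank manifold; with enough measurements the restricted isometry of the Gaussian sensing operator makes the Euclidean gradient point roughly toward the ground truth, which is why the simple iteration recovers M.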