Computer Science Speaking Skills Talk

Friday, May 10, 2019 - 2:00pm to 3:00pm

McWilliams Classroom 4303, Gates Hillman Centers
Approximating Operator Norms via Generalized Krivine Rounding

We consider the (l_p, l_r)-Grothendieck problem, which seeks to maximize the bilinear form y^T A x for an input matrix A in R^{m x n} over vectors x in R^n and y in R^m with ||x||_p = ||y||_r = 1. The problem is equivalent to computing the p -> r^* operator norm of A, where l_{r^*} is the norm dual to l_r. The case p = r = infinity corresponds to the classical Grothendieck problem.
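To see the equivalence concretely: for a fixed x, the inner maximization over unit-l_r vectors y is attained at the Hölder dual of z = Ax, giving max_{||y||_r = 1} y^T A x = ||Ax||_{r^*}. A small numerical sketch of this duality (illustrative only; the matrix and vectors are arbitrary test data, not from the talk):

```python
import numpy as np

# For fixed x, max over ||y||_r = 1 of y^T (A x) equals ||A x||_{r*},
# where 1/r + 1/r* = 1, attained by the Hoelder dual of z = A x.
A = np.array([[1.0, -2.0], [3.0, 0.5], [0.0, 4.0]])  # arbitrary example matrix
x = np.array([0.6, 0.8])                             # arbitrary example vector
r = 3.0
r_dual = r / (r - 1.0)                               # r* = 3/2

z = A @ x
y = np.sign(z) * np.abs(z) ** (r_dual - 1.0)         # Hoelder-dual direction
y /= np.linalg.norm(y, ord=r)                        # normalize so ||y||_r = 1

# y achieves the maximum: y^T z = ||z||_{r*}
assert abs(y @ z - np.linalg.norm(z, ord=r_dual)) < 1e-9
```

With r = 2 this reduces to the familiar fact that y = z/||z||_2 maximizes the inner product over the Euclidean unit ball.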

Our main result is an algorithm for arbitrary p, r >= 2 with approximation ratio (1 + epsilon_0)/(sinh^{-1}(1) gamma_{p^*} gamma_{r^*}) for some fixed epsilon_0 <= 0.00863, where gamma_t denotes the t-th norm of the standard Gaussian. Comparing this with Krivine's approximation ratio of (pi/2)/sinh^{-1}(1) for the original Grothendieck problem, our guarantee is off from the best known hardness factor of (gamma_{p^*} gamma_{r^*})^{-1} for the problem by a factor similar to Krivine's defect, up to the constant (1 + epsilon_0).
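The quantity gamma_t has a standard closed form via the absolute moments of N(0,1), namely E|g|^t = 2^{t/2} Gamma((t+1)/2)/sqrt(pi). A short sketch computing it (the helper name is ours, not from the talk):

```python
import math

def gaussian_norm(t):
    """t-th norm of a standard Gaussian g ~ N(0,1): (E|g|^t)^(1/t).

    Uses the closed form E|g|^t = 2^(t/2) * Gamma((t+1)/2) / sqrt(pi).
    """
    moment = 2.0 ** (t / 2.0) * math.gamma((t + 1.0) / 2.0) / math.sqrt(math.pi)
    return moment ** (1.0 / t)

# Sanity checks: gamma_2 = 1 (second moment of N(0,1)),
# gamma_1 = sqrt(2/pi) (mean absolute value).
assert abs(gaussian_norm(2.0) - 1.0) < 1e-12
assert abs(gaussian_norm(1.0) - math.sqrt(2.0 / math.pi)) < 1e-12
```

Note that as p, r -> infinity we have p^*, r^* -> 1 and gamma_1^{-2} = pi/2, so the hardness factor (gamma_{p^*} gamma_{r^*})^{-1} recovers the pi/2 constant in Krivine's ratio.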

Our approximation follows by bounding the value of the natural vector relaxation for the problem, which is convex when p, r >= 2. We give a generalization of random hyperplane rounding that uses Hölder duals of Gaussian projections rather than taking the sign. We relate the performance of this rounding to certain hypergeometric functions, which prescribe necessary transformations to the vector solution before the rounding is applied. Unlike Krivine's rounding, where the relevant hypergeometric function was arcsin, we must study a family of hypergeometric functions. The bulk of our technical work then involves methods from complex analysis to gain detailed information about the Taylor series coefficients of the inverses of these hypergeometric functions, which in turn dictate our approximation factor.
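The core rounding step can be sketched as follows. This is a minimal, hypothetical illustration assuming the relaxation yields unit vectors u_i, v_j (stored as matrix columns); it omits the pre-rounding transformations mentioned above, and the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def holder_dual(z, s):
    """Unit l_s vector maximizing <y, z>: the Hoelder-dual map of z."""
    s_dual = s / (s - 1.0)                       # dual exponent s*
    y = np.sign(z) * np.abs(z) ** (s_dual - 1.0)
    return y / np.linalg.norm(y, ord=s)

def generalized_round(U, V, p, r):
    """One round of generalized hyperplane rounding (sketch).

    U, V: columns are the vectors u_i, v_j from the vector relaxation.
    Each vector is projected onto a random Gaussian direction g; the
    Hoelder-dual map then replaces the sign function used by Krivine.
    """
    g = rng.standard_normal(U.shape[0])
    x = holder_dual(U.T @ g, p)                  # candidate with ||x||_p = 1
    y = holder_dual(V.T @ g, r)                  # candidate with ||y||_r = 1
    return x, y
```

For p = r = infinity the dual exponent tends to 1 and the Hölder-dual map degenerates to the sign function, recovering classical random hyperplane rounding.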

Presented in Partial Fulfillment of the CSD Speaking Skills Requirement.

For More Information, Contact: Speaking Skills