ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR16-117 | 1st August 2016 19:02

From Weak to Strong LP Gaps for all CSPs





Revision #1
Authors: Mrinalkanti Ghosh, Madhur Tulsiani
Accepted on: 1st August 2016 19:02
Downloads: 828
Keywords: 


Abstract:

We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels of the Sherali-Adams hierarchy on instances of size $n$.
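For orientation, here is a standard way to write these relaxations (our paraphrase of the usual definitions, not notation taken from the paper): for an instance with constraints $C_1, \ldots, C_m$ over variables $x_1, \ldots, x_n$ taking values in an alphabet $[q]$, the basic LP maintains a distribution $\mu_i$ over assignments to the variables of each constraint $C_i$, together with singleton marginals $\theta_j$, and enforces only local consistency:
$$
\max \ \frac{1}{m}\sum_{i=1}^{m} \Pr_{\alpha \sim \mu_i}\left[ C_i(\alpha) = 1 \right]
\quad \text{subject to} \quad
\Pr_{\alpha \sim \mu_i}\left[ \alpha(x_j) = a \right] = \theta_j(a) \ \ \text{for all } C_i \ni x_j,\ a \in [q].
$$
The level-$t$ Sherali-Adams relaxation strengthens this by requiring consistent local distributions over every set of at most $t$ variables, so the basic LP sits at level roughly the maximum constraint arity, while the result above says that going up to $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels does not yield a better approximation for any CSP.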

It was proved by Chan et al. [FOCS 2013] that any polynomial-size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result implies that any polynomial-size LP extended formulation is no stronger than the basic LP.

Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log \log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels.
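To recall the notion being used (a standard formulation, stated here for convenience rather than quoted from the paper): a predicate $P : [q]^k \to \{0,1\}$ has random-assignment threshold $\rho(P) = \Pr_{x \sim [q]^k}[P(x) = 1]$, and the level-$t$ Sherali-Adams relaxation is approximation resistant for $P$ if, for every $\varepsilon > 0$, there are instances $\Phi$ of Max-$P$ with
$$
\mathrm{SA}_t(\Phi) \ge 1 - \varepsilon
\qquad \text{while} \qquad
\mathrm{OPT}(\Phi) \le \rho(P) + \varepsilon,
$$
so even after $t$ levels the relaxation certifies nothing beyond what a uniformly random assignment already achieves. Strong approximation resistance further requires that in the soundness case no assignment attains value bounded away from $\rho(P)$ in either direction.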



Changes to previous version:

Updated discussion on related works.


Paper:

TR16-117 | 31st July 2016 19:00

From Weak to Strong LP Gaps for all CSPs





TR16-117
Authors: Mrinalkanti Ghosh, Madhur Tulsiani
Publication: 31st July 2016 23:06
Downloads: 971
Keywords: 


Abstract:

We study the approximability of constraint satisfaction problems (CSPs) by linear programming (LP) relaxations. We show that for every CSP, the approximation obtained by a basic LP relaxation is no weaker than the approximation obtained using relaxations given by $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels of the Sherali-Adams hierarchy on instances of size $n$.

It was proved by Chan et al. [FOCS 2013] that any polynomial-size LP extended formulation is no stronger than relaxations obtained by a super-constant number of levels of the Sherali-Adams hierarchy. Combining this with our result implies that any polynomial-size LP extended formulation is no stronger than the basic LP.

Using our techniques, we also simplify and strengthen the result by Khot et al. [STOC 2014] on (strong) approximation resistance for LPs. They provided a necessary and sufficient condition under which $\Omega(\log \log n)$ levels of the Sherali-Adams hierarchy cannot achieve an approximation better than a random assignment. We simplify their proof and strengthen the bound to $\Omega\left(\frac{\log n}{\log \log n}\right)$ levels.


