Electronic Colloquium on Computational Complexity



Paper:

TR10-022 | 23rd February 2010 19:53

Lower Bounds and Hardness Amplification for Learning Shallow Monotone Formulas

Authors: Vitaly Feldman, Homin Lee, Rocco Servedio
Publication: 23rd February 2010 20:32
Downloads: 3331


Abstract:

Much work has been done on learning various classes of "simple" monotone functions under the uniform distribution. In this paper we give the first unconditional lower bounds for learning problems of this sort, by showing that, in the well-studied Statistical Query model, polynomial-time algorithms cannot learn constant-depth monotone Boolean formulas under the uniform distribution.
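
For context: in the Statistical Query model of Kearns, the learner never sees labeled examples directly. A standard formulation, stated here for the uniform distribution and with illustrative notation, gives the learner access to an oracle $\mathrm{STAT}(f, \tau)$ for the target $f: \{0,1\}^n \to \{-1,1\}$. On a query function $\phi: \{0,1\}^n \times \{-1,1\} \to [-1,1]$ with tolerance $\tau$, the oracle returns an arbitrary value $v$ satisfying
$$\bigl| v - \mathbf{E}_{x \sim U}[\phi(x, f(x))] \bigr| \le \tau.$$
A polynomial-time SQ algorithm makes $\mathrm{poly}(n)$ queries, each with tolerance $\tau \ge 1/\mathrm{poly}(n)$.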

Using a recent characterization of Strong Statistical Query learnability due to Feldman, we first show that depth-3 monotone formulas of size $n^{o(1)}$ cannot be learned to accuracy $1 - 1/(\log n)^{\Omega(1)}$ by any polynomial-time Statistical Query algorithm. We then build on this result to show that depth-4 monotone formulas of size $n^{o(1)}$ cannot be learned even to accuracy $\frac{1}{2} + o(1)$ in polynomial time. This improved hardness is achieved via a general technique that we introduce for amplifying the hardness of "mildly hard" learning problems in either the PAC or the Statistical Query framework. This hardness amplification for learning builds on ideas from O'Donnell's work on hardness amplification for approximating functions by small circuits, and may be of independent interest.
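
To give a schematic sense of the amplification (an illustrative sketch; the exact construction and parameters are in the paper), the generic template combines many independent copies of a mildly hard function $f$ through a combining function $C$ applied on disjoint blocks of variables:
$$f'\bigl(x^{(1)}, \dots, x^{(k)}\bigr) = C\bigl(f(x^{(1)}), \dots, f(x^{(k)})\bigr).$$
If every efficient learner fails to approximate $f$ to accuracy $1 - \delta$, O'Donnell-style analyses show that the advantage of any efficient learner for $f'$ degrades toward $\frac{1}{2}$ at a rate governed by the noise stability of $C$. For the application here, $C$ must presumably be computed by a small monotone formula so that $f'$ remains a monotone formula with the depth increasing by only one level; this is consistent with the move from depth-3 to depth-4 hardness.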



ISSN 1433-8092