Electronic Colloquium on Computational Complexity (ECCC)


TR06-061 | 5th May 2006 00:00

Hardness of Learning Halfspaces with Noise

Authors: Venkatesan Guruswami, Prasad Raghavendra
Publication: 5th May 2006 01:22


Abstract:

Learning an unknown halfspace (also called a perceptron) from
labeled examples is one of the classic problems in machine learning.
In the noise-free case, when a halfspace consistent with all the
training examples exists, the problem can be solved in polynomial
time using linear programming. However, under the promise that a
halfspace consistent with a fraction (1-\eps) of the examples
exists (for some small constant \eps > 0), it was not known how to
efficiently find a halfspace that is correct on even 51% of the
examples. Nor was a hardness result known that ruled out getting
agreement on more than 99.9% of the examples.
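
In the noise-free case the linear program is standard: find weights w and a
threshold \theta such that every labeled example lies on the correct side of
the hyperplane. The sketch below, in Python with scipy.optimize.linprog, is
illustrative only; the function name, the unit-margin normalization, and the
\{-1,+1\} label encoding are assumptions of the sketch, not details taken
from the paper.

    # Minimal sketch: find a halfspace consistent with *all* labeled examples
    # (the noise-free case) by solving a linear feasibility program.
    # Assumes X is an (n, d) array of examples and y an array of labels in {-1, +1}.
    import numpy as np
    from scipy.optimize import linprog

    def consistent_halfspace(X, y):
        """Return (w, theta) with y_i * (w . x_i - theta) >= 1 for all i, or None."""
        n, d = X.shape
        # Variables z = [w_1, ..., w_d, theta], all free.
        # y_i * (w . x_i - theta) >= 1, rewritten for linprog's A_ub z <= b_ub form:
        #   (-y_i * x_i) . w + y_i * theta <= -1
        A_ub = np.hstack([-y[:, None] * X, y[:, None]])
        b_ub = -np.ones(n)
        c = np.zeros(d + 1)  # pure feasibility: any consistent halfspace will do
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * (d + 1), method="highs")
        if not res.success:
            return None      # no halfspace explains all the examples
        return res.x[:d], res.x[d]

    # Usage on a small linearly separable sample.
    X = np.array([[1.0, 2.0], [2.0, 3.0], [-1.0, -1.0], [-2.0, 0.0]])
    y = np.array([1, 1, -1, -1])
    found = consistent_halfspace(X, y)
    if found is not None:
        w, theta = found
        print("weights:", w, "threshold:", theta)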

In this work, we close this gap in our understanding, and prove that
even a tiny amount of worst-case noise makes the problem of
learning halfspaces intractable in a strong sense. Specifically,
for arbitrary \eps, \delta > 0, we prove that given a set of
example-label pairs from the hypercube, a fraction (1-\eps) of
which can be explained by a halfspace, it is NP-hard to find a
halfspace that correctly labels a fraction (1/2+\delta) of the
examples.
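
Written out in the abstract's notation (taking the hypercube to be \{0,1\}^d
and labels in \{-1,+1\} for concreteness), the main theorem can be restated
as follows:

  For every constant \eps, \delta > 0, given example-label pairs
  (x_1, \ell_1), \ldots, (x_n, \ell_n) \in \{0,1\}^d \times \{-1,+1\} with the
  promise that some halfspace h(x) = \mathrm{sgn}(\langle w, x \rangle - \theta)
  satisfies h(x_i) = \ell_i for at least (1-\eps) n of the pairs, it is
  NP-hard to find a halfspace h' with h'(x_i) = \ell_i for at least
  (1/2+\delta) n of the pairs.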

The hardness result is tight since it is trivial to get agreement on
1/2 the examples. In learning theory parlance, we prove that
weak proper agnostic learning of halfspaces is hard. This settles a
question raised by Blum et al. in their work on
learning halfspaces in the presence of random classification
noise, and in some more recent works as well.
Along the way, we also obtain a strong hardness result for another
basic computational problem: solving a linear system over the
rationals.


