ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Paper:

TR04-075 | 21st July 2004 00:00

Some dichotomy theorems for neural learning problems





Author: Michael Schmitt
Publication: 6th September 2004 15:52


Abstract:

The computational complexity of learning from binary examples is investigated for linear threshold neurons. We introduce combinatorial measures that give rise to classes of infinitely many learning problems with restricted samples, and we analyze how the complexity of these problems depends on the values of the measures. The results are established as dichotomy theorems showing that each problem is either NP-complete or solvable in polynomial time. In particular, we consider consistency and maximum consistency problems for neurons with binary weights, and maximum consistency problems for neurons with arbitrary weights. For each problem class we determine the dividing line between the NP-complete and the polynomial-time solvable problems. Moreover, all efficiently solvable problems are shown to have constructive algorithms that require no more than linear time on a random access machine model. Similar dichotomies are exhibited for neurons with bounded threshold. The results demonstrate, on the one hand, that the consideration of sample constraints can lead to the discovery of new efficient algorithms for non-trivial learning problems; on the other hand, hard learning problems may remain intractable even for severely restricted samples.
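
For concreteness, the following is a minimal brute-force sketch of the binary-weight consistency problem the abstract refers to, under assumed conventions not fixed by the abstract itself: weights drawn from {0,1}, an integer threshold t, and output 1 iff w·x >= t. The exponential enumeration over weight vectors reflects why the general problem can be NP-complete; the paper's tractable cases instead admit linear-time algorithms.

```python
from itertools import product

def consistent(sample, n):
    """Decide whether some linear threshold neuron with binary
    weights w in {0,1}^n and integer threshold t classifies every
    example in `sample` correctly (output 1 iff w.x >= t).
    Brute force: exponential in n."""
    for w in product((0, 1), repeat=n):
        # Activations of positive and negative examples under w.
        pos = [sum(wi * xi for wi, xi in zip(w, x)) for x, y in sample if y == 1]
        neg = [sum(wi * xi for wi, xi in zip(w, x)) for x, y in sample if y == 0]
        # An integer threshold t exists iff every positive activation
        # strictly exceeds every negative one (take t = max(neg) + 1).
        if (not pos or not neg) or min(pos) > max(neg):
            return True
    return False

# Example: AND on two inputs is realizable with w = (1, 1), t = 2.
sample = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
print(consistent(sample, 2))  # True
```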



ISSN 1433-8092