ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Revision(s):

Revision #1 to TR06-066 | 10th September 2008 00:00

Attribute-Efficient and Non-adaptive Learning of Parities and DNF Expressions





Revision #1
Authors: Vitaly Feldman
Accepted on: 10th September 2008 00:00
Downloads: 3091


Abstract:

We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over $\{0,1\}^n$. We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from a low number of errors, a long-standing open problem in coding theory. This is the first evidence that attribute-efficient learning of a natural PAC-learnable concept class can be computationally hard.
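
A minimal illustrative sketch (not from the paper) of the linear-algebra view behind this equivalence: uniform examples labeled by a $k$-sparse parity form a random linear system over GF(2), and the naive learner searches all $k$-subsets of attributes. The function names and parameters below are hypothetical.

    import itertools
    import random

    def sample_sparse_parity_examples(n, relevant, m, seed=0):
        # Draw m uniform examples over {0,1}^n labeled by the parity of the 'relevant' coordinates.
        rng = random.Random(seed)
        X = [[rng.randint(0, 1) for _ in range(n)] for _ in range(m)]
        y = [sum(x[i] for i in relevant) % 2 for x in X]
        return X, y

    def brute_force_sparse_parity(X, y, n, k):
        # Naive n^k-time baseline: try every k-subset of attributes and return one
        # consistent with all labeled examples. An attribute-efficient learner must
        # get by with only O(k log n) examples; doing this efficiently is the regime
        # the paper relates to decoding high-rate random linear codes.
        for subset in itertools.combinations(range(n), k):
            if all(sum(x[i] for i in subset) % 2 == yi for x, yi in zip(X, y)):
                return subset
        return None

    X, y = sample_sparse_parity_examples(n=30, relevant=(3, 7, 19), m=40)
    print(brute_force_sparse_parity(X, y, n=30, k=3))  # (3, 7, 19) with overwhelming probability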

An algorithm is said to use membership queries (MQs) non-adaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple non-adaptive parity learning algorithm and a modification of Levin's algorithm for locating a weakly-correlated parity due to Bshouty et al. (1999), we give the first non-adaptive and attribute-efficient algorithm for learning DNF with respect to the uniform distribution. Our algorithm runs in time $\tilde{O}(ns^4/\epsilon)$ and uses $\tilde{O}(s^4\cdot \log^2{n}/\epsilon)$ non-adaptive MQs, where $s$ is the number of terms in the shortest DNF representation of the target concept. The algorithm improves on the previous best algorithm for learning DNF, due to Bshouty et al. (1999), and can also be easily modified to tolerate random persistent classification noise in MQs.
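
To illustrate only the non-adaptive use of MQs (a toy sketch, not the paper's attribute-efficient algorithm): for an exact parity the query set can be fixed before any answer is seen, independently of the target. The names below are hypothetical, and this version uses $n+1$ MQs rather than the polylogarithmic-in-$n$ query bound achieved in the paper.

    def learn_parity_nonadaptive(oracle, n):
        # Recover (a, b) with f(x) = (<a, x> + b) mod 2 from non-adaptive MQs:
        # the queries {0, e_1, ..., e_n} are chosen before any answer is seen.
        zero = [0] * n
        basis = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
        queries = [zero] + basis
        answers = [oracle(q) for q in queries]
        b = answers[0]                        # f(0) = b
        a = [ans ^ b for ans in answers[1:]]  # f(e_i) = a_i XOR b
        return a, b

    # Usage with a hypothetical target parity on coordinates {1, 4}:
    target_a = [0, 1, 0, 0, 1, 0]
    f = lambda x: sum(ai * xi for ai, xi in zip(target_a, x)) % 2
    print(learn_parity_nonadaptive(f, 6))  # ([0, 1, 0, 0, 1, 0], 0)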


Paper:

TR06-066 | 5th May 2006 00:00

On Attribute Efficient and Non-adaptive Learning of Parities and DNF Expressions





TR06-066
Authors: Vitaly Feldman
Publication: 28th May 2006 15:36
Downloads: 3285


Abstract:

We consider the problems of attribute-efficient PAC learning of two well-studied concept classes: parity functions and DNF expressions over $\{0,1\}^n$. We show that attribute-efficient learning of parities with respect to the uniform distribution is equivalent to decoding high-rate random linear codes from a low number of errors, a long-standing open problem in coding theory.

An algorithm is said to use membership queries (MQs) non-adaptively if the points at which the algorithm asks MQs do not depend on the target concept. Using a simple non-adaptive parity learning algorithm and a modification of Levin's algorithm for locating a weakly-correlated parity due to Bshouty et al., we give the first non-adaptive and attribute-efficient algorithm for learning DNF with respect to the uniform distribution. Our algorithm runs in time $\tilde{O}(ns^4/\epsilon)$ and uses $\tilde{O}(s^4/\epsilon)$ non-adaptive MQs, where $s$ is the number of terms in the shortest DNF representation of the target concept. The algorithm improves on the previous best algorithm for learning DNF, due to Bshouty et al., and can also be easily modified to tolerate random classification noise in MQs.



ISSN 1433-8092