ECCC
Electronic Colloquium on Computational Complexity

Under the auspices of the Computational Complexity Foundation (CCF)





Paper:

TR12-107 | 30th August 2012 20:28

Massive Online Teaching to Bounded Learners





Authors: Brendan Juba, Ryan Williams
Publication: 30th August 2012 21:34
Downloads: 3302


Abstract:

We consider a model of teaching in which the learners are consistent and have bounded state, but are otherwise arbitrary. The teacher is non-interactive and ``massively open'': the teacher broadcasts a sequence of examples of an arbitrary target concept, intended for every possible on-line learning algorithm to learn from. We focus on the problem of designing interesting teachers: sequences of examples that allow all capable and consistent learners to efficiently learn concepts, regardless of the underlying algorithm used by the learner. We use two measures of efficiency: the number of mistakes made by the worst-case learner, and the maximum length of the example sequence needed for the worst-case learner. Our results are summarized as follows:
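
To make the model concrete, the following is a minimal illustrative sketch in Python; the names BoundedLearner and broadcast are ours rather than the paper's, and the prediction and update rules are deliberately left abstract, since the results quantify over all consistent state-bounded learners.

from typing import Iterable, Tuple

Example = Tuple[Tuple[int, ...], int]   # (n-bit input x, label c(x))


class BoundedLearner:
    """On-line learner whose entire memory is a state of at most s bits."""

    def __init__(self, s: int, state: str = ""):
        self.s = s
        self.state = state   # encoded as a '0'/'1' string of length at most s

    def predict(self, x: Tuple[int, ...]) -> int:
        """Guess the label of x from the current state (left abstract here)."""
        raise NotImplementedError

    def update(self, x: Tuple[int, ...], label: int) -> None:
        """Revise the state once the true label is revealed (left abstract).
        A consistent learner never again mislabels an example after seeing
        it with its correct label."""
        raise NotImplementedError


def broadcast(sequence: Iterable[Example], learner: BoundedLearner) -> int:
    """The non-interactive teacher: stream labeled examples and count how
    many of the learner's predictions are wrong."""
    mistakes = 0
    for x, label in sequence:
        if learner.predict(x) != label:
            mistakes += 1
        learner.update(x, label)   # the true label is revealed either way
    return mistakes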

- Given a uniform random sequence of examples of an $n$-bit concept function, learners (capable of consistently learning the concept) with $s(n)$ bits of state are guaranteed to make only $O(n \cdot s(n))$ mistakes and exactly learn the concept, with high probability. This theorem has interesting corollaries; for instance, every concept $c$ has a sequence of examples that can teach $c$ to all capable consistent on-line learners implementable with $s(n)$-size circuits, such that every learner makes only $\tilde{O}(s(n)^2)$ mistakes. That is, all resource-bounded algorithms capable of consistently learning a concept can be simultaneously taught that concept with few mistakes, on a single example sequence.

We also show how to efficiently generate such a sequence of examples on-line: using Nisan's pseudorandom generator, each example in the sequence can be generated with polynomial-time overhead per example, with an $O(n \cdot s(n))$-bit initial seed.
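
A rough sketch, under the same illustrative interface as above, of the two ways of producing such a teaching sequence: truly random examples, and per-example generation in the style of Nisan's generator. The affine hash below is a simplified stand-in for the pairwise-independent hash functions of the actual construction, and the parameter choices here carry none of the paper's guarantees.

import random


def random_teaching_sequence(concept, n: int, length: int):
    """Uniformly random labeled examples of an n-bit concept: the truly
    random teacher analyzed in the first bullet."""
    for _ in range(length):
        x = tuple(random.getrandbits(1) for _ in range(n))
        yield x, concept(x)


def toy_affine_hash(a: int, b: int, p: int, m: int):
    """h(x) = (a*x + b) mod p, truncated to m bits: a simplified stand-in
    for the pairwise-independent hash functions used in Nisan's generator."""
    return lambda x: ((a * x + b) % p) & ((1 << m) - 1)


def nisan_style_block(seed_x: int, hashes, i: int) -> int:
    """Compute the i-th output block of the recursively defined generator
    G_k(x) = G_{k-1}(x) followed by G_{k-1}(h_k(x)) without expanding the
    whole output: walk the recursion tree from the root, applying hashes[j]
    exactly when bit j of the block index i is set.  Each block, and hence
    each example, costs only len(hashes) hash evaluations."""
    x = seed_x
    for j in range(len(hashes) - 1, -1, -1):
        if (i >> j) & 1:
            x = hashes[j](x)
        # when bit j is 0 we descend into the left copy, which keeps x as is
    return x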

- To justify our use of randomness, we prove that any non-trivial derandomization of our sequences would imply circuit lower bounds. For instance, if there is a deterministic $2^{n^{O(1)}}$-time algorithm that generates a sequence of examples such that all consistent and capable polynomial-size circuit learners learn the all-zeroes concept with fewer than $2^n$ mistakes, then EXP $\not\subset$ P/poly.

- We present examples illustrating that the key differences in our model -- our focus on mistakes rather than the total number of examples, and our use of a state bound -- must be considered together to obtain our results.

- We show that for every consistent $s(n)$-state-bounded learner $\mathcal{A}$, and every $n$-bit concept that $\mathcal{A}$ is capable of learning, there is a custom ``tutoring'' sequence of only $O(n \cdot s(n))$ examples that teaches $\mathcal{A}$ the concept. That is, in principle, there are no slow learners, only bad teachers: if a state-bounded learner is capable of learning a concept at all, then it can always be taught that concept quickly via some short sequence of examples.
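
For intuition only, here is a naive black-box tutor over the earlier illustrative interface; its brute-force search over all $2^n$ inputs is not the paper's construction, and this sketch does not establish the $O(n \cdot s(n))$ bound.

from itertools import product


def tutor(concept, n: int, learner: "BoundedLearner", max_rounds: int):
    """Greedily build a custom tutoring sequence for one specific learner:
    while some input is still mispredicted, present one such counterexample
    together with its true label."""
    sequence = []
    for _ in range(max_rounds):
        counterexample = next(
            (x for x in product((0, 1), repeat=n)
             if learner.predict(x) != concept(x)),
            None)
        if counterexample is None:   # learner now agrees with the concept everywhere
            return sequence
        label = concept(counterexample)
        sequence.append((counterexample, label))
        learner.update(counterexample, label)
    return sequence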


