TR06-154 Authors: Joshua Buresh-Oppenheim, Valentine Kabanets, Rahul Santhanam

Publication: 15th December 2006 13:59

We consider the problem of amplifying uniform average-case hardness of languages in $\NP$, where hardness is with respect to $\BPP$ algorithms. We introduce the notion of \emph{monotone} error-correcting codes, and show that hardness amplification for $\NP$ is essentially equivalent to constructing efficiently \emph{locally} encodable and \emph{locally} list-decodable monotone codes. The previous hardness amplification results for $\NP$~\cite{Tre03,Tre05} focused on giving a direct construction of some \emph{locally} encodable/decodable monotone codes, running into the problem of the large amount of nonuniformity used by the decoding algorithm. In contrast, we propose an \emph{indirect} approach to constructing locally encodable/decodable monotone codes, combining the uniform Direct Product Lemma of~\cite{IJK06} with arbitrary, \emph{not necessarily locally encodable}, monotone codes. The latter codes have fewer restrictions, and so may be easier to construct.
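To make the monotonicity requirement concrete, here is a toy sketch of our own (an illustration, not a construction from the paper): each codeword bit is the Majority of a fixed, arbitrarily chosen subset of message bits. Since Majority is a monotone Boolean function, the resulting encoding map is monotone: raising message bits from 0 to 1 can never lower a codeword bit.

```python
from itertools import product

# Toy monotone encoding (illustrative only; the subsets below are an
# arbitrary choice, not taken from the paper): codeword bit j is the
# Majority of the message bits indexed by SUBSETS[j].
SUBSETS = [(0, 1, 2), (1, 2, 3), (0, 2, 3), (0, 1, 3)]

def majority(bits):
    # 1 iff a strict majority of the bits are 1
    return 1 if 2 * sum(bits) > len(bits) else 0

def encode(msg):
    return tuple(majority([msg[i] for i in s]) for s in SUBSETS)

def leq(a, b):
    # coordinatewise partial order on bit tuples
    return all(x <= y for x, y in zip(a, b))

# Exhaustively verify monotonicity over all 4-bit messages:
# x <= y coordinatewise implies encode(x) <= encode(y).
msgs = list(product([0, 1], repeat=4))
assert all(leq(encode(x), encode(y))
           for x in msgs for y in msgs if leq(x, y))
```

The exhaustive check is feasible here only because the message length is tiny; it is meant purely to demonstrate the defining property of a monotone code.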

We study what parameters are achievable by monotone codes in general, giving both negative and positive results. We present two constructions of monotone codes. Our first code is a uniquely decodable code based on the Majority function, and has an efficient decoding algorithm. Our second code is combinatorially list-decodable, but we do not have an efficient decoding algorithm for it. In conjunction with an appropriate Direct Product Lemma, our first code yields uniform hardness amplification for $\NP$ from inverse-polynomial to constant average-case hardness. Our second code, even with a brute-force decoding algorithm, yields further hardness amplification to $1/2-\log^{-\Omega(1)} n$. Together, these give an alternative proof of Trevisan's result~\cite{Tre03,Tre05}. Any non-brute-force decoding algorithm for our second code would imply improved parameters for the problem of hardness amplification in $\NP$.
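As a toy illustration of what "brute-force decoding" means here (our own sketch, using a made-up monotone encoding rather than either code from the paper): the decoder simply searches all $2^k$ messages for the one whose codeword is nearest in Hamming distance.

```python
from itertools import product

def encode(msg):
    # Toy monotone encoding (an assumption for exposition, not the
    # paper's construction): bit j is the OR of message bits j and j+1
    # (cyclically). OR is monotone, so the map is monotone.
    k = len(msg)
    return tuple(msg[j] | msg[(j + 1) % k] for j in range(k))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def brute_force_decode(word, k):
    # Exhaustive nearest-codeword search over all 2^k messages:
    # exponential time, which is exactly the cost the paper's open
    # question about non-brute-force decoding aims to avoid.
    return min(product([0, 1], repeat=k),
               key=lambda m: hamming(encode(m), word))
```

Note that this toy encoding is not injective, so the decoder is only guaranteed to return \emph{some} message whose codeword is nearest; it serves only to fix the meaning of the brute-force baseline.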