Abstract

Randomness has long been characterized in several ways. From a predictability standpoint, a sequence is random if no scheme (under various definitions of computability) can predict its next bit any better than a coin flip. From an observational standpoint, a sequence is random if it is not compressible, or in other words, if it has high Kolmogorov Complexity. Under the prediction paradigm, we combine learning algorithms from Computational Learning Theory with Resource-Bounded Measure and Dimension to show that various interesting classes are learnable and thus have p-dimension 0. We extend some of these techniques to the set of Kolmogorov random strings.
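
(For reference, and not spelled out in the abstract itself: the standard definitions these statements rest on, modulo the exact formulations used in the dissertation, are roughly the following. A martingale is a function d : {0,1}* → [0, ∞) satisfying
\[
d(w) \;=\; \frac{d(w0) + d(w1)}{2},
\]
an s-gale instead satisfies d(w) = 2^{-s}(d(w0) + d(w1)), and d succeeds on a sequence S when limsup_{N→∞} d(S↾N) = ∞. The p-dimension of a class X of sequences is
\[
\dim_p(X) \;=\; \inf\{\, s \ge 0 : \text{some polynomial-time computable } s\text{-gale succeeds on every } S \in X \,\},
\]
so dim_p(X) = 0 is a stronger smallness statement than p-measure 0. On the compressibility side, a string x is Kolmogorov random, under one common convention, when C(x) = min{ |p| : U(p) = x } ≥ |x| for a fixed universal machine U.)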

It has long been known that if NP has a sparse complete set under various types of polynomial-time reductions, and in particular if NP ⊆ P_btt(P_ctt(SPARSE)), then P = NP. Under the assumption that NP does not have measure 0, a proof that P_btt(P_ctt(SPARSE)) does have measure 0 would imply that NP has no sparse complete sets under this composed reduction. We show the stronger result that dim_p(P_{α log n-tt}(P_ctt(DENSE^c))) = 0 by combining Maass and Turán's algorithm for learning halfspaces with Hitchcock's result that the class of all sequences 2^{cn}-time reducible to concepts learnable in 2^{cn} time with a 2^{o(n)} (subexponential) mistake bound has p-dimension 0.
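
(At its core, the connection between learning and dimension used here is the standard conversion of a mistake-bounded predictor into a successful gale; the following is only a rough sketch, glossing over the time bounds and the composition with the reduction. Suppose an online learner predicts each successive bit of a sequence S and makes at most m(N) = o(N) mistakes on the first N bits, which is what a 2^{o(n)} mistake bound amounts to at the level of characteristic sequences. The martingale that always places the fraction β of its fair-odds bet on the predicted bit, for a fixed β ∈ (1/2, 1), has its capital multiplied by 2β on each correct prediction and by 2(1-β) on each mistake, so
\[
d(S \upharpoonright N) \;=\; d(\lambda)\,(2\beta)^{\,N - m(N)}\,\bigl(2(1-\beta)\bigr)^{\,m(N)} \;=\; d(\lambda)\,2^{\,(1 + \log_2 \beta)N \,+\, m(N)\log_2\frac{1-\beta}{\beta}}.
\]
For any ε > 0, taking β close enough to 1 therefore yields growth of at least 2^{(1-ε)N}, which makes the associated s-gale 2^{(s-1)N} d(S↾N) succeed for every s > ε; that is exactly what drives the p-dimension, and not merely the p-measure, to 0.)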

In a similar fashion, we show how the notion of reliable reductions of Arvind et al. yields a “one-sided” learning algorithm, i.e., an algorithm that learns a subset of the target concept. Using this, we show that P_hd(P_ctt(DENSE^c)) has p-dimension 0. We then extend this to show that P_hd(P_ctt(RC)) has p-dimension 0, where RC is the set of random strings as defined by Kolmogorov Complexity. Thus we conclude that if μ_p(NP) ≠ 0, then NP has no non-dense complete sets under the composed bounded-truth-table-conjunctive reduction or the composed Hausdorff-conjunctive reduction. Furthermore, NP cannot be characterized by reductions to random strings using only the composed Hausdorff-conjunctive reduction.
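
(For reference, the notation above is standard, modulo the exact parameters fixed in the dissertation: A ∈ P_ctt(B) means that some polynomial-time computable function maps each x to a finite query set Q(x) with x ∈ A ⇔ Q(x) ⊆ B; SPARSE is the class of sets containing at most polynomially many strings up to each length, a set D is dense when |D ∩ {0,1}^{≤n}| ≥ 2^{n^ε} for some ε > 0 and all sufficiently large n, and DENSE^c is the class of non-dense sets; and RC is, under one common convention, {x : C(x) ≥ |x|}, the set of Kolmogorov-incompressible strings.)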

The best unconditional circuit lower bound known is that the class MA_EXP does not have polynomial-size circuits, but a conditional lower bound for a lower class was given by Fortnow and Klivans, who showed that if a class C of circuits is learnable in the Exact Learning model of Angluin, then EXP^NP is not contained in C. We combine Exact Learning (with membership queries) with the betting games of Buhrman et al. to improve this result, showing under a weaker hypothesis that EXP is not contained in C. However, it seems unlikely that we can improve our results to use Valiant's PAC learning model, which would have been advantageous given the result of Bshouty et al. that circuits are learnable in ZPP^NP. The reason is that randomized betting games yield practically no greater advantage than the randomized martingales of Moser and of Regan and Sivakumar.
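
To make the martingale-versus-betting-game distinction concrete, the following minimal Python sketch contrasts the two interfaces. The names (run_martingale, run_betting_game) and the toy strategies are ours, not the dissertation's; the resource bounds, the rule that a betting game may not revisit a string, and the formal success criterion (capital tending to infinity along the characteristic sequence of the language) are all omitted. The only extra power in the second interface is that the strategy itself chooses which string to bet on next.

# Toy illustration only: hypothetical names, no resource bounds, and no formal
# success criterion. A martingale bets on the membership bits of a language in
# a fixed (length-lexicographic) order; a betting game also chooses which
# string to bet on next.

from itertools import count, product

def lex_strings():
    # yield binary strings in length-lexicographic order: "", "0", "1", "00", ...
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def run_martingale(language, strategy, rounds=20):
    # fixed betting order: the strategy only chooses a stake and a guess
    capital, order = 1.0, lex_strings()
    for _ in range(rounds):
        x = next(order)
        stake, guess = strategy(x, capital)                        # 0 <= stake <= capital
        capital += stake if (x in language) == guess else -stake   # fair odds
    return capital

def run_betting_game(language, strategy, rounds=20):
    # adaptive order: the strategy also picks the string to bet on
    # (a real betting game may not revisit a string; this toy version
    # trusts the strategy not to)
    capital, history = 1.0, []
    for _ in range(rounds):
        x, stake, guess = strategy(history, capital)
        capital += stake if (x in language) == guess else -stake   # same fair odds
        history.append((x, x in language))
    return capital

if __name__ == "__main__":
    toy_language = {"0", "00", "000", "0000"}
    never_bets = lambda x, capital: (0.0, True)                    # capital stays at 1.0
    print(run_martingale(toy_language, never_bets))
    def bet_on_zero_strings(history, capital):
        # wager half the capital that the next all-zeros string is in the set
        return "0" * (len(history) + 1), capital / 2, True
    print(run_betting_game(toy_language, bet_on_zero_strings))

Under fair odds the capital update is identical in both cases; the only new power is the adaptive choice of the next string to bet on.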

Details

Title
Randomness, learnability, and betting games
Author
Harkins, Ryan C.
Year
2012
Publisher
ProQuest Dissertations & Theses
ISBN
978-1-267-41066-5
Source type
Dissertation or Thesis
Language of publication
English
ProQuest document ID
1026776374