Course description:
This course will focus on theoretical aspects of machine learning. We
will examine questions such as: What kinds of guarantees can one prove
about learning algorithms? What are good algorithms for achieving
certain types of goals? Can we devise models that are amenable to
mathematical analysis and also make sense empirically? What can we say
about the inherent ease or difficulty of learning problems? Addressing
these questions will require pulling in notions and ideas from
statistics, complexity theory, information theory, cryptography, game theory, and
empirical machine learning research.
Littlestone and Warmuth, The Weighted Majority Algorithm. Information and Computation 108(2):212-261, 1994. (The version pointed to here is the tech report UCSC-CRL-91-28.)
Introduces the weighted majority algorithm, along with a number of
variants. Also a great paper.
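For orientation, here is a minimal sketch of the basic deterministic weighted
majority update (using penalty factor beta = 1/2, one common choice); the
interface and names below are illustrative, not taken from the paper.

    def weighted_majority(experts, rounds, beta=0.5):
        """Deterministic weighted majority vote over binary {0, 1} predictions.

        experts: list of callables, each mapping a round index t to a 0/1 prediction.
        rounds:  iterable of (t, true_label) pairs revealed online.
        beta:    multiplicative penalty applied to experts that err (0 < beta < 1).
        Returns the number of mistakes made by the combined predictor.
        """
        weights = [1.0] * len(experts)
        mistakes = 0
        for t, truth in rounds:
            preds = [expert(t) for expert in experts]
            # Predict with the weighted majority vote.
            weight_for_one = sum(w for w, p in zip(weights, preds) if p == 1)
            weight_for_zero = sum(w for w, p in zip(weights, preds) if p == 0)
            guess = 1 if weight_for_one >= weight_for_zero else 0
            if guess != truth:
                mistakes += 1
            # Penalize every expert that erred this round.
            weights = [w * beta if p != truth else w
                       for w, p in zip(weights, preds)]
        return mistakes

The standard analysis bounds the algorithm's mistakes by a constant times the
sum of the best expert's mistakes and the log of the number of experts.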
Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David
Helmbold, Robert Schapire, and Manfred Warmuth, How
to use expert advice, Journal of the ACM, 44(3):427-485, May 1997.
Yoav Freund and Robert Schapire, Adaptive game playing using
multiplicative weights, Games and Economic Behavior, 29:79-103,
1999.
Continuing the line of research of the [LW] paper, these papers give
tighter analyses of multiplicative-weights expert algorithms, provide a
game-theoretic perspective, and address a number of other issues.
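To give a sense of the guarantees this line of work establishes: with N experts
and per-round losses in [0, 1], a suitably tuned multiplicative-weights
(Hedge-style) forecaster has expected cumulative loss within roughly
sqrt(T ln N) of the best expert in hindsight. Stated loosely, up to constants
(the papers give exact constants and parameter-free variants):

    \sum_{t=1}^{T} \ell_t(\mathrm{alg}) \;-\; \min_{i \le N} \sum_{t=1}^{T} \ell_t(i)
      \;\le\; O\!\left(\sqrt{T \ln N}\right)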
Avrim Blum, On-Line
Algorithms in Machine Learning (a survey). From "Online
Algorithms: the state of the art", Fiat
and Woeginger eds., LNCS #1442, 1998.
PAC model:
David Haussler, chapter on the PAC learning model and its decision-theoretic
generalizations, with applications to neural nets. From Mathematical
Perspectives on Neural Networks, Lawrence Erlbaum Associates, 1995; contains
reprinted material from "Decision Theoretic Generalizations of the PAC Model
for Neural Net and Other Learning Applications", Information and Computation,
100:78-150, September 1992. This is a really nice survey of the PAC
model and various sample-complexity results.
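As a reminder of the flavor of the sample-complexity results surveyed there,
the textbook PAC bound for a finite hypothesis class H and a consistent learner
(a standard fact, not specific to this chapter) is that

    m \;\ge\; \frac{1}{\epsilon}\left(\ln |H| + \ln \frac{1}{\delta}\right)

examples suffice for every hypothesis in H that is consistent with the sample
to have true error at most epsilon, with probability at least 1 - delta.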
David Williamson, John Shawe-Taylor, Bernhard Schölkopf, and Alex
Smola, Sample Based Generalization Bounds. Gives tighter generalization bounds
where, instead of using "the maximum number of ways of labeling a set of 2m
points", you can use "the number of ways of labeling your actual sample".
A. Blum, C. Burch, and J. Langford, On
Learning Monotone Boolean Functions. Proceedings of the
39th Annual Symposium on Foundations of Computer Science (FOCS '98).