Generalized Linear Classifiers in NLP

For the better part of a decade, machine learning methods such as maximum entropy models and support vector machines have been central to many NLP applications, including parsing, semantic role labeling, ontology induction, machine translation, and summarization. Many of these models fall into the class of generalized linear classifiers, which define their prediction boundary as a linear combination of input features and their weights. In this course we will cover many of the important aspects of generalized linear classifiers, including training methods, minimum error versus maximum likelihood, distribution-free methods, online versus batch learning, generative versus discriminative models, structured models, and extensions beyond linear predictors through kernels. The course assumes familiarity with basic concepts from statistics, calculus, and linear algebra.
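
As a point of reference, the prediction rule these classifiers share can be written as follows (this is the standard textbook formulation; the course slides may use different notation):

\[
\hat{y} = \underset{y \in \mathcal{Y}}{\arg\max}\; \mathbf{w} \cdot \mathbf{f}(x, y)
\]

where \(\mathbf{f}(x, y)\) is a vector of features over the input \(x\) and a candidate output \(y\), \(\mathbf{w}\) is the learned weight vector, and the dot product is the linear combination referred to above.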

Date: October 22nd, 2007; Lecturer: Ryan McDonald

Outline (subject to change)
10-12 Introduction, feature representations, loss functions, perceptron, margin, SVMs, logistic regression (MaxEnt)
13-15 Kernels, structured learning including conditional random fields, applications
15-17 Leftovers, practical

Slides (pdf) [latest slides; subject to change]

Practical

Handout: an implementation of the perceptron algorithm and some extensions. Starter code is available here; data sets are included.
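
For orientation before the practical, here is a minimal sketch of a binary perceptron training loop. The sparse-dict feature representation, function names, and the ±1 label convention are illustrative assumptions, not the interface of the actual starter code.

# Minimal perceptron sketch (illustrative; not the course starter code).
# Examples are (features, label) pairs: features is a dict mapping feature
# names to values, and label is +1 or -1.

def predict(weights, features):
    # The score is the linear combination of feature values and weights.
    score = sum(weights.get(f, 0.0) * v for f, v in features.items())
    return 1 if score >= 0 else -1

def train_perceptron(examples, epochs=10):
    weights = {}
    for _ in range(epochs):
        for features, label in examples:
            if predict(weights, features) != label:
                # On a mistake, shift the weights toward the correct label.
                for f, v in features.items():
                    weights[f] = weights.get(f, 0.0) + label * v
    return weights

# Usage on a toy, linearly separable data set:
data = [({"x1": 1.0}, 1), ({"x1": -1.0}, -1)]
w = train_perceptron(data)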

You can find the solution here.

Project suggestions

Literature and resources

Back to course page