BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//talks.cam.ac.uk//v3//EN
BEGIN:VTIMEZONE
TZID:Europe/London
BEGIN:DAYLIGHT
TZOFFSETFROM:+0000
TZOFFSETTO:+0100
TZNAME:BST
DTSTART:19700329T010000
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=-1SU
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0100
TZOFFSETTO:+0000
TZNAME:GMT
DTSTART:19701025T020000
RRULE:FREQ=YEARLY;BYMONTH=10;BYDAY=-1SU
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CATEGORIES:Statistics
SUMMARY:Classification with unknown class conditional label noise on non-compact feature spaces - Henry Reeve - University of Birmingham
DTSTART;TZID=Europe/London:20191101T140000
DTEND;TZID=Europe/London:20191101T150000
UID:TALK130057@talks.cam.ac.uk
URL:http://talks.cam.ac.uk/talk/index/130057
DESCRIPTION:We consider the problem of classification in the presence of label noise. In the analysis of classification problems it is typically assumed that the training and test distributions are one and the same. In practice\, however\, it is often the case that the labels in the training data have been corrupted with some unknown probability. We shall focus on classification with class conditional label noise\, in which the labels observed by the learner have been corrupted with some unknown probability which is determined by the true class label.\n\nIn order to obtain finite sample rates\, previous approaches to classification with unknown class conditional label noise have required that the regression function attains its extrema uniformly on sets of positive measure. We consider this problem in the setting of non-compact metric spaces\, where the regression function need not attain its extrema.\n\nIn this setting we determine the minimax optimal learning rates (up to logarithmic factors). The rate displays interesting threshold behaviour: when the regression function approaches its extrema at a sufficient rate\, the optimal learning rates are of the same order as those obtained in the label-noise-free setting. If the regression function approaches its extrema more gradually\, then classification performance necessarily degrades. In addition\, we present an algorithm which attains these rates without prior knowledge of either the distributional parameters or the local density.
LOCATION:MR12
CONTACT:Dr Sergio Bacallado
END:VEVENT
END:VCALENDAR