Commit 0b63629b authored by Petteri Pulkkinen

Add reference


Signed-off-by: Petteri Pulkkinen <petteri.pulkkinen@aalto.fi>
parent 98304484
@@ -2672,14 +2672,12 @@ The approach is based on the following:
@InProceedings{Jezequel2020,
author = {J{\'e}z{\'e}quel, R{\'e}mi and Gaillard, Pierre and Rudi, Alessandro},
  booktitle = {Proceedings of Thirty Third Conference on Learning Theory},
title = {Efficient improper learning for online logistic regression},
year = {2020},
editor = {Abernethy, Jacob and Agarwal, Shivani},
month = {09--12 Jul},
pages = {2085--2108},
publisher = {PMLR},
series = {Proceedings of Machine Learning Research},
volume = {125},
abstract = {We consider the setting of online logistic regression and consider the regret with respect to the $\ell_2$-ball of radius $B$. It is known (see Hazan et al. (2014)) that any proper algorithm which has logarithmic regret in the number of samples (denoted $n$) necessarily suffers an exponential multiplicative constant in $B$. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving a logarithmic regret. Indeed, Foster et al. (2018) showed that the lower bound does not apply to improper algorithms and proposed a strategy based on exponential weights with prohibitive computational complexity. Our new algorithm based on regularized empirical risk minimization with surrogate losses satisfies a regret scaling as $O(B\log(Bn))$ with a per-round time-complexity of order $O(d^2 + \log(n))$.},
groups = {OCO applications},