Commit e0dfb48e authored by Alexander Jung

new HA2 refsol

parent b794aa9f
@@ -294,7 +294,7 @@ The dimension is $d=100^2 = 10000$. The gradient is obtained as
\nabla f (\vw) = 2 \lambda \vw+ \frac{-2}{\samplesize}\sum^{\samplesize}_{\sampleidx=1}\vx^{(\sampleidx)} (y^{(\sampleidx)} - \vw^{T} \vx^{(\sampleidx)}).
\end{equation}
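The gradient formula above can be sketched in vectorized form. This is an illustrative snippet, not the reference solution; the names \texttt{X}, \texttt{y}, \texttt{lam} (for $\lambda$) and the helper \texttt{gradient} are assumptions for the example.

```python
import numpy as np

def gradient(w, X, y, lam):
    """Gradient of f(w) = lam*||w||^2 + (1/m) * sum_i (y_i - w^T x_i)^2.

    Computes 2*lam*w - (2/m) * X^T (y - X w), term by term the formula
    in the text, with rows of X holding the feature vectors x^(i).
    """
    m = X.shape[0]
    residual = y - X @ w           # entries y^(i) - w^T x^(i)
    return 2 * lam * w - (2.0 / m) * (X.T @ residual)
```

A quick sanity check is to compare this against a finite-difference approximation of the objective.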
For the stopping criterion we might use a fixed number of iterations, which requires some understanding (``convergence analysis'') of how
fast gradient descent converges to the optimum. Another option is to monitor the relative decrease of the objective value $f(\vw)$, i.e.,
to stop iterating when $\big|\frac{f(\vw^{(k+1)}) - f(\vw^{(k)})}{f(\vw^{(k)})} \big|$ is below a suitably chosen (small) threshold.
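Both stopping criteria can be combined in one loop: iterate at most a fixed number of times, but exit early once the relative decrease of the objective falls below the threshold. The following is a minimal sketch, not the reference solution; the step size \texttt{alpha}, threshold \texttt{tol}, and iteration cap \texttt{max\_iter} are assumed hyperparameters.

```python
import numpy as np

def gradient_descent(f, grad, w0, alpha=0.01, tol=1e-6, max_iter=10000):
    """Run gradient descent from w0, stopping after max_iter steps or
    when |(f(w^(k+1)) - f(w^(k))) / f(w^(k))| < tol, as in the text."""
    w = w0
    f_old = f(w)
    for _ in range(max_iter):
        w = w - alpha * grad(w)
        f_new = f(w)
        # relative decrease of the objective value
        if abs((f_new - f_old) / f_old) < tol:
            break
        f_old = f_new
    return w
```

Note that the relative-decrease test assumes $f(\vw^{(k)}) \neq 0$; in practice one guards the division or adds a small constant to the denominator.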
\begin{figure}[h]
\begin{center}