Deep Learning Algorithm Implementations 1.0.0
C++ implementations of fundamental deep learning algorithms
RMSprop optimizer with autograd support. More...
#include <optimizers.hpp>
Public Member Functions

RMSprop (std::vector< Variable< T > * > parameters, T lr=1e-2, T alpha=0.99, T eps=1e-8, T weight_decay=0.0, T momentum=0.0)
    Constructor.
void step () override
    Perform one RMSprop step.
T get_lr () const override
    Get learning rate.
void set_lr (T lr) override
    Set learning rate.
Public Member Functions inherited from dl::optimization::AutogradOptimizer< T >

AutogradOptimizer (std::vector< Variable< T > * > parameters)
    Constructor.
virtual ~AutogradOptimizer ()=default
virtual void zero_grad ()
    Zero gradients of all parameters.
Additional Inherited Members

Inherited from dl::optimization::AutogradOptimizer< T >:
std::vector< Variable< T > * > parameters_
RMSprop optimizer with autograd support.
Maintains a moving average of squared gradients to normalize the gradient.
Paper: "Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude" (Tieleman & Hinton, 2012)
Definition at line 244 of file optimizers.hpp.
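The moving-average scheme described above can be written as follows (the standard RMSprop formulation; the exact denominator convention used by this implementation, sqrt of the average plus eps, is an assumption matching common implementations):

```latex
E[g^2]_t = \alpha \, E[g^2]_{t-1} + (1 - \alpha)\, g_t^2
\qquad
\theta_{t+1} = \theta_t - \frac{\mathrm{lr}}{\sqrt{E[g^2]_t} + \epsilon}\, g_t
```

With weight_decay > 0, an L2 term is typically folded into the gradient (g ← g + weight_decay · θ); with momentum > 0, the normalized gradient is typically accumulated in a momentum buffer before the parameter update.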
dl::optimization::RMSprop< T >::RMSprop ( std::vector< Variable< T > * > parameters,
                                          T lr = 1e-2,
                                          T alpha = 0.99,
                                          T eps = 1e-8,
                                          T weight_decay = 0.0,
                                          T momentum = 0.0 )
Constructor.
Parameters
    parameters    Parameters to optimize
    lr            Learning rate (default: 1e-2)
    alpha         Smoothing constant (default: 0.99)
    eps           Term for numerical stability (default: 1e-8)
    weight_decay  Weight decay (L2 penalty) (default: 0)
    momentum      Momentum factor (default: 0)
Definition at line 133 of file optimizers.cpp.
T dl::optimization::RMSprop< T >::get_lr ( ) const
inline override virtual
Get learning rate.
Implements dl::optimization::AutogradOptimizer< T >.
Definition at line 270 of file optimizers.hpp.
void dl::optimization::RMSprop< T >::set_lr ( T lr )
inline override virtual
Set learning rate.
Implements dl::optimization::AutogradOptimizer< T >.
Definition at line 275 of file optimizers.hpp.
void dl::optimization::RMSprop< T >::step ( )
override virtual
Perform one RMSprop step.
Implements dl::optimization::AutogradOptimizer< T >.
Definition at line 161 of file optimizers.cpp.