Deep Learning Algorithm Implementations 1.0.0
C++ implementations of fundamental deep learning algorithms
dl::optimization::Adam< T > Class Template Reference

Adam optimizer with autograd support.

#include <optimizers.hpp>

Inheritance diagram for dl::optimization::Adam< T >:
Collaboration diagram for dl::optimization::Adam< T >:

Public Member Functions

 Adam (std::vector< Variable< T > * > parameters, T lr=1e-3, T beta1=0.9, T beta2=0.999, T eps=1e-8, T weight_decay=0.0)
 Constructor.
 
void step () override
 Perform one Adam step.
 
T get_lr () const override
 Get learning rate.
 
void set_lr (T lr) override
 Set learning rate.
 
- Public Member Functions inherited from dl::optimization::AutogradOptimizer< T >
 AutogradOptimizer (std::vector< Variable< T > * > parameters)
 Constructor.
 
virtual ~AutogradOptimizer ()=default
 
virtual void zero_grad ()
 Zero gradients of all parameters.
 

Additional Inherited Members

- Protected Attributes inherited from dl::optimization::AutogradOptimizer< T >
std::vector< Variable< T > * > parameters_
 

Detailed Description

template<typename T>
class dl::optimization::Adam< T >

Adam optimizer with autograd support.

Adaptive learning rate optimizer that computes an individual learning rate for each parameter from running estimates of the first and second moments of the gradients.

Paper: "Adam: A Method for Stochastic Optimization" (Kingma & Ba, 2014)
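
For reference, the per-parameter update from the cited paper, with the L2 penalty folded into the gradient as the weight_decay parameter describes (a sketch in LaTeX; the exact order of operations in optimizers.cpp may differ):

\begin{aligned}
g_t &= \nabla f(\theta_{t-1}) + \lambda\,\theta_{t-1} &&\text{($\lambda$ = weight\_decay, L2 penalty)}\\
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\,g_t &&\text{(first moment estimate)}\\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\,g_t^2 &&\text{(second moment estimate)}\\
\hat m_t &= m_t/(1-\beta_1^t), \qquad \hat v_t = v_t/(1-\beta_2^t) &&\text{(bias correction)}\\
\theta_t &= \theta_{t-1} - \mathrm{lr}\cdot\hat m_t/(\sqrt{\hat v_t}+\epsilon)
\end{aligned}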

Definition at line 131 of file optimizers.hpp.

Constructor & Destructor Documentation

◆ Adam()

template<typename T >
dl::optimization::Adam< T >::Adam ( std::vector< Variable< T > * >  parameters,
T  lr = 1e-3,
T  beta1 = 0.9,
T  beta2 = 0.999,
T  eps = 1e-8,
T  weight_decay = 0.0
)

Constructor.

Parameters
parameters    Parameters to optimize
lr            Learning rate (default: 1e-3)
beta1         Coefficient for first moment estimate (default: 0.9)
beta2         Coefficient for second moment estimate (default: 0.999)
eps           Term for numerical stability (default: 1e-8)
weight_decay  Weight decay (L2 penalty) (default: 0)

Definition at line 45 of file optimizers.cpp.
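
A minimal construction sketch, assuming two Variable<float> objects named weight and bias already exist (their creation is outside this class's documentation):

#include "optimizers.hpp"

// weight and bias: pre-existing Variable<float> instances (assumed).
dl::optimization::Adam<float> opt(
    /*parameters=*/{&weight, &bias},
    /*lr=*/1e-3f,
    /*beta1=*/0.9f,
    /*beta2=*/0.999f,
    /*eps=*/1e-8f,
    /*weight_decay=*/0.0f);

Passing every argument explicitly is shown only for illustration; Adam<float> opt({&weight, &bias}) alone uses the documented defaults.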

Member Function Documentation

◆ get_lr()

template<typename T >
T dl::optimization::Adam< T >::get_lr ( ) const
inline override virtual

Get learning rate.

Implements dl::optimization::AutogradOptimizer< T >.

Definition at line 157 of file optimizers.hpp.

◆ set_lr()

template<typename T >
void dl::optimization::Adam< T >::set_lr ( T  lr )
inline override virtual

Set learning rate.

Implements dl::optimization::AutogradOptimizer< T >.

Definition at line 162 of file optimizers.hpp.
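
Together with get_lr(), this enables simple manual schedules; a sketch of exponential decay (the 0.95 factor is illustrative, not part of the class):

// At the end of each epoch, shrink the learning rate by 5%.
opt.set_lr(opt.get_lr() * static_cast<float>(0.95));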

◆ step()

template<typename T >
void dl::optimization::Adam< T >::step ( )
override virtual

Perform one Adam step.

Implements dl::optimization::AutogradOptimizer< T >.

Definition at line 71 of file optimizers.cpp.
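
A typical training iteration, assuming the loss is a Variable<float> produced by a forward pass and that Variable provides a backward() call (both are assumptions based on the class's autograd support, not documented here):

for (int epoch = 0; epoch < num_epochs; ++epoch) {
    opt.zero_grad();                        // inherited: clear all parameter gradients
    Variable<float> loss = forward(batch);  // hypothetical forward pass
    loss.backward();                        // assumed autograd call filling gradients
    opt.step();                             // documented: one Adam update over all parameters
}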


The documentation for this class was generated from the following files:
optimizers.hpp
optimizers.cpp