Deep Learning Algorithm Implementations 1.0.0
C++ implementations of fundamental deep learning algorithms
optimizers.hpp File Reference
PyTorch-like optimizers with automatic differentiation support.
#include <memory>
#include <vector>
#include <unordered_map>
#include "utils/autograd.hpp"
#include "utils/matrix.hpp"
Classes

class dl::optimization::AutogradOptimizer< T >
    Base class for autograd-compatible optimizers.
class dl::optimization::SGD< T >
    Stochastic Gradient Descent optimizer with autograd support.
class dl::optimization::Adam< T >
    Adam optimizer with autograd support.
class dl::optimization::AdamW< T >
    AdamW optimizer with autograd support.
class dl::optimization::RMSprop< T >
    RMSprop optimizer with autograd support.
class dl::optimization::LRScheduler< T >
    Learning rate scheduler base class.
class dl::optimization::StepLR< T >
    Step learning rate scheduler. Decays the learning rate by gamma every step_size epochs.
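The listing above only names the optimizer classes; their constructors and member functions are not shown on this page. The following is a minimal sketch of how an autograd-compatible optimizer might be driven in a training loop, modelled on the PyTorch zero_grad()/backward()/step() cycle that the "PyTorch-like" brief suggests. The dl::autograd::Variable type, the (parameters, learning-rate) constructor, and the zero_grad()/backward()/step() calls are assumptions, not the documented API.

    #include <vector>
    #include "utils/autograd.hpp"
    #include "optimizers.hpp"

    // forward_pass(): placeholder for the model's forward computation (hypothetical).
    dl::autograd::Variable<float> forward_pass();

    // Hypothetical training loop; all signatures below are assumptions modelled on
    // the "PyTorch-like" brief, not taken from this page.
    void train(std::vector<dl::autograd::Variable<float>*>& params, int num_batches)
    {
        dl::optimization::Adam<float> optimizer(params, /*lr=*/1e-3f);

        for (int batch = 0; batch < num_batches; ++batch) {
            optimizer.zero_grad();       // clear gradients from the previous batch
            auto loss = forward_pass();  // compute the scalar loss (placeholder)
            loss.backward();             // autograd backward pass through the graph
            optimizer.step();            // apply the Adam update to every parameter
        }
    }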
Namespaces

namespace dl
namespace dl::optimization
Typedefs

using dl::optimization::SGDD = SGD< double >
using dl::optimization::SGDF = SGD< float >
using dl::optimization::AdamD = Adam< double >
using dl::optimization::AdamF = Adam< float >
using dl::optimization::AdamWD = AdamW< double >
using dl::optimization::AdamWF = AdamW< float >
using dl::optimization::RMSpropD = RMSprop< double >
using dl::optimization::RMSpropF = RMSprop< float >
using dl::optimization::StepLRD = StepLR< double >
using dl::optimization::StepLRF = StepLR< float >
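Each typedef is simply a shorthand for the corresponding float or double instantiation, so the two declarations below name the same type; the constructor arguments reuse the assumed (parameters, learning-rate) form from the sketch above.

    dl::optimization::Adam<float> a(params, 1e-3f);   // explicit instantiation
    dl::optimization::AdamF       b(params, 1e-3f);   // identical type via the AdamF alias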
Detailed Description

PyTorch-like optimizers with automatic differentiation support.
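StepLR is described above as decaying the learning rate by gamma every step_size epochs. The sketch below shows how an epoch loop might combine an optimizer with that scheduler, assuming a PyTorch-style StepLR(optimizer, step_size, gamma) constructor and a per-epoch step() method; these signatures are assumptions, not taken from this page.

    // Hypothetical signatures modelled on torch.optim.lr_scheduler.StepLR.
    dl::optimization::SGD<float>    optimizer(params, /*lr=*/0.1f);
    dl::optimization::StepLR<float> scheduler(optimizer, /*step_size=*/10, /*gamma=*/0.5f);

    for (int epoch = 0; epoch < 100; ++epoch) {
        // ... forward/backward passes and optimizer.step() over the training batches ...
        scheduler.step();  // every 10 epochs the learning rate is multiplied by 0.5
    }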
Definition in file optimizers.hpp.