Deep Learning Algorithm Implementations 1.0.0
C++ implementations of fundamental deep learning algorithms
dl::activation::LeakyReLU Class Reference

Leaky Rectified Linear Unit (Leaky ReLU) activation function. More...

#include <functions.hpp>

Inherits dl::activation::ActivationFunction.

Public Member Functions

 LeakyReLU (double alpha=0.01)
 Constructor with configurable leak parameter.
 
double forward (double x)
 Compute Leaky ReLU forward pass.
 
double backward (double x)
 Compute Leaky ReLU derivative.
 
- Public Member Functions inherited from dl::activation::ActivationFunction
virtual ~ActivationFunction ()=default
 Virtual destructor for proper cleanup.
 

Detailed Description

Leaky Rectified Linear Unit (Leaky ReLU) activation function.

Leaky ReLU addresses the "dying ReLU" problem by allowing a small gradient when the input is negative, preventing neurons from becoming completely inactive.

Mathematical definition:

  • Forward: f(x) = max(αx, x) where α is a small positive constant
  • Derivative: f'(x) = 1 if x > 0, else α
Note
Helps prevent dead neurons compared to standard ReLU, whose gradient is exactly zero for all negative inputs.

Definition at line 170 of file functions.hpp.
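The following is a minimal sketch consistent with the interface documented above; the actual definition in functions.hpp/functions.cpp may differ in details such as how it overrides the ActivationFunction base class.

#include <algorithm>

// Sketch only: mirrors the documented interface, not the library source.
class LeakyReLUSketch {
public:
    explicit LeakyReLUSketch(double alpha = 0.01) : alpha_(alpha) {}

    // Forward: f(x) = max(alpha * x, x). For 0 < alpha < 1 this yields
    // x when x > 0 and alpha * x otherwise.
    double forward(double x) { return std::max(alpha_ * x, x); }

    // Derivative: 1 if x > 0, else alpha (including at x == 0,
    // following the convention documented here).
    double backward(double x) { return x > 0.0 ? 1.0 : alpha_; }

private:
    double alpha_;
};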

Constructor & Destructor Documentation

◆ LeakyReLU()

dl::activation::LeakyReLU::LeakyReLU(double alpha = 0.01)

Constructor with configurable leak parameter.

Parameters
alpha    Leak coefficient for negative inputs (default: 0.01)

Definition at line 63 of file functions.cpp.
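A hedged usage sketch: a smaller alpha keeps negative activations closer to zero, while a larger alpha passes more of the negative signal through. The expected values assume forward() behaves exactly as documented.

#include <functions.hpp>

int main() {
    dl::activation::LeakyReLU mild;        // default alpha = 0.01
    dl::activation::LeakyReLU steep(0.2);  // custom leak coefficient
    // Negative inputs are scaled by alpha rather than clipped to zero.
    double a = mild.forward(-1.0);   // expected: -0.01
    double b = steep.forward(-1.0);  // expected: -0.2
    (void)a; (void)b;
    return 0;
}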

Member Function Documentation

◆ backward()

double dl::activation::LeakyReLU::backward(double x)

Compute Leaky ReLU derivative.

Parameters
x    Input value
Returns
1 if x > 0, else alpha

Definition at line 73 of file functions.cpp.
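As a sanity check, the analytic derivative should agree with a central finite difference of forward() away from the kink at x == 0. This sketch assumes the documented behavior of both methods.

#include <functions.hpp>
#include <cassert>
#include <cmath>

int main() {
    dl::activation::LeakyReLU act(0.01);
    const double eps = 1e-6;
    // Compare backward(x) against (f(x+eps) - f(x-eps)) / (2*eps);
    // x == 0 is skipped because the derivative jumps there.
    const double xs[] = {-2.0, -0.5, 0.5, 2.0};
    for (double x : xs) {
        double numeric = (act.forward(x + eps) - act.forward(x - eps)) / (2.0 * eps);
        assert(std::fabs(act.backward(x) - numeric) < 1e-4);
    }
    return 0;
}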

◆ forward()

double dl::activation::LeakyReLU::forward(double x)

Compute Leaky ReLU forward pass.

Parameters
x    Input value
Returns
max(alpha * x, x)

Definition at line 67 of file functions.cpp.
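A short illustration of the piecewise behavior, assuming the default alpha = 0.01:

#include <functions.hpp>
#include <iostream>

int main() {
    dl::activation::LeakyReLU act;           // alpha = 0.01
    std::cout << act.forward(3.0)  << '\n';  // 3      (x > 0: identity)
    std::cout << act.forward(0.0)  << '\n';  // 0      (max(0, 0))
    std::cout << act.forward(-4.0) << '\n';  // -0.04  (x <= 0: alpha * x)
    return 0;
}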


The documentation for this class was generated from the following files:

  • functions.hpp
  • functions.cpp