Optimisers module for FTorch.
Type for holding a Torch optimizer.

| Type | Visibility | Attributes | Name | Initial | Description |
|---|---|---|---|---|---|
| type(c_ptr) | public | | p | = c_null_ptr | Pointer to the optimizer in memory |

Type-bound procedures:

- `final :: torch_optim_delete`
- `procedure, public :: step => torch_optim_step`
- `procedure, public :: zero_grad => torch_optim_zero_grad`
Create an Adam optimizer.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| type(torch_optim) | intent(out) | | | optim | Optimizer we are creating |
| type(torch_tensor) | intent(in) | | dimension(:) | parameters | Array of parameter tensors to optimize |
| real(kind=real64) | intent(in) | optional | | learning_rate | Learning rate (default: 0.001) |
| real(kind=real64) | intent(in) | optional | | beta_1 | Exponential decay rate for the first moment estimates (default: 0.9) |
| real(kind=real64) | intent(in) | optional | | beta_2 | Exponential decay rate for the second moment estimates (default: 0.999) |
| real(kind=real64) | intent(in) | optional | | eps | Term added to the denominator for numerical stability (default: 1.0e-8) |
| real(kind=real64) | intent(in) | optional | | weight_decay | Weight decay (L2 penalty) coefficient (default: 0.0) |
| logical | intent(in) | optional | | amsgrad | Enable the AMSGrad variant (default: .false.) |
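A minimal usage sketch of the interface above. The module and constructor names (`ftorch`, `torch_optim_adam`) are assumptions for illustration, as this page does not show them; the argument names and defaults come from the table above.

```fortran
program adam_example
  use, intrinsic :: iso_fortran_env, only : real64
  ! Module and constructor names below are assumed for illustration.
  use ftorch, only : torch_tensor, torch_optim, torch_optim_adam

  implicit none
  type(torch_optim) :: optim
  type(torch_tensor), dimension(2) :: params  ! e.g. weights and biases, created elsewhere

  ! Create an Adam optimizer over the parameter tensors, overriding two of
  ! the optional defaults listed above.
  call torch_optim_adam(optim, params, learning_rate=1.0e-3_real64, &
                        amsgrad=.true.)
end program adam_example
```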
Create an AdamW optimizer (Adam with decoupled weight decay).

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| type(torch_optim) | intent(out) | | | optim | Optimizer we are creating |
| type(torch_tensor) | intent(in) | | dimension(:) | parameters | Array of parameter tensors to optimize |
| real(kind=real64) | intent(in) | optional | | learning_rate | Learning rate (default: 0.001) |
| real(kind=real64) | intent(in) | optional | | beta_1 | Exponential decay rate for the first moment estimates (default: 0.9) |
| real(kind=real64) | intent(in) | optional | | beta_2 | Exponential decay rate for the second moment estimates (default: 0.999) |
| real(kind=real64) | intent(in) | optional | | eps | Term added to the denominator for numerical stability (default: 1.0e-8) |
| real(kind=real64) | intent(in) | optional | | weight_decay | Decoupled weight decay coefficient (default: 0.01) |
| logical | intent(in) | optional | | amsgrad | Enable the AMSGrad variant (default: .false.) |
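A usage sketch, assuming a constructor named `torch_optim_adamw` in an `ftorch` module (neither name is shown on this page). Unlike Adam, AdamW applies weight decay in decoupled form and defaults it to 0.01 rather than 0.0.

```fortran
use, intrinsic :: iso_fortran_env, only : real64
use ftorch, only : torch_tensor, torch_optim, torch_optim_adamw  ! names assumed

type(torch_optim) :: optim
type(torch_tensor), dimension(2) :: params  ! parameter tensors, created elsewhere

! Decoupled weight decay defaults to 0.01 for AdamW; shown explicitly here.
call torch_optim_adamw(optim, params, learning_rate=1.0e-3_real64, &
                       weight_decay=1.0e-2_real64)
```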
Create an SGD optimizer.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| type(torch_optim) | intent(out) | | | optim | Optimizer we are creating |
| type(torch_tensor) | intent(in) | | dimension(:) | parameters | Array of parameter tensors to optimize |
| real(kind=real64) | intent(in) | optional | | learning_rate | Learning rate (default: 0.001) |
| real(kind=real64) | intent(in) | optional | | momentum | Momentum factor (default: 0.0) |
| real(kind=real64) | intent(in) | optional | | weight_decay | Weight decay (L2 penalty) coefficient (default: 0.0) |
| real(kind=real64) | intent(in) | optional | | dampening | Dampening for momentum (default: 0.0) |
| logical | intent(in) | optional | | nesterov | Enable Nesterov momentum; only applicable when momentum is non-zero (default: .false.) |
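A usage sketch for SGD with momentum. The module and constructor names (`ftorch`, `torch_optim_sgd`) are assumptions, as this page does not show them; note that `nesterov` only takes effect when `momentum` is non-zero, per the table above.

```fortran
use, intrinsic :: iso_fortran_env, only : real64
use ftorch, only : torch_tensor, torch_optim, torch_optim_sgd  ! names assumed

type(torch_optim) :: optim
type(torch_tensor), dimension(2) :: params  ! parameter tensors, created elsewhere

! SGD with Nesterov momentum; nesterov requires a non-zero momentum value.
call torch_optim_sgd(optim, params, learning_rate=1.0e-2_real64, &
                     momentum=0.9_real64, nesterov=.true.)
```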
Deallocate a Torch optimizer.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| type(torch_optim) | intent(inout) | | | optim | Optimizer to deallocate |
Step a Torch optimizer.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| class(torch_optim) | intent(in) | | | optim | Optimizer to step |
Zero the gradients on tensors associated with a Torch optimizer.

| Type | Intent | Optional | Attributes | Name | Description |
|---|---|---|---|---|---|
| class(torch_optim) | intent(in) | | | optim | Optimizer to zero gradients for |
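The type-bound `step` and `zero_grad` procedures documented above combine into the usual training iteration, sketched below. The `ftorch` module name is an assumption, and the forward/backward pass is elided; the `final` binding (`torch_optim_delete`) frees the underlying optimizer automatically when `optim` goes out of scope.

```fortran
use, intrinsic :: iso_fortran_env, only : real64
use ftorch, only : torch_tensor, torch_optim  ! module name assumed

type(torch_optim) :: optim
integer :: epoch

! optim is created beforehand with one of the constructors documented above.
do epoch = 1, 100
  call optim%zero_grad()   ! clear gradients accumulated on the parameter tensors
  ! ... forward pass, loss computation, and backward pass (not shown) ...
  call optim%step()        ! apply one parameter update
end do
! No explicit cleanup needed: the final binding torch_optim_delete runs
! when optim goes out of scope.
```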