If you use a version of FTorch from before commit `c488f20` (January 2025) you may notice that the `device_type` argument of `torch_model_load` changed from being optional to being compulsory. This is because the optional argument defaulted to `torch_kCPU`, which is not suitable for GPU workloads. For recent versions of FTorch, simply specify the device type as the third argument. For example:

```fortran
type(torch_model) :: model
character(len=17), parameter :: filename = "my_saved_model.pt"

call torch_model_load(model, filename, torch_kCPU)
```
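For GPU workloads, the device type is passed explicitly in the same position. The following is a minimal sketch, assuming a CUDA-enabled build of FTorch and that the optional `device_index` argument is available in your version; the model filename is illustrative:

```fortran
! Load the model onto the first CUDA device.
! device_index is an optional argument; 0 selects GPU 0.
type(torch_model) :: model

call torch_model_load(model, "my_saved_model.pt", torch_kCUDA, device_index=0)
```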
If you use a version of FTorch from before commit `e92ad9e` (June 2024) you will notice that the latest API documentation does not match your code. This is because a number of breaking changes were made to the FTorch API in preparation for implementing new functionalities.

This page describes how to migrate code written before `e92ad9e` to the most recent version. If you are already using a more recent version there is no need to read this page.

We realise that this is an inconvenience for those of you who are actively using FTorch, and it is not something we did lightly. These changes were necessary to improve functionality, and we have made them in one go as we move towards a stable API and a first release in the near future. Once the first release is made, the API will be standardised and changes like this will be avoided. We hope that this is the last time we have such a shift.
The changes allow us to implement two new features.

## `torch_tensor`s are created using a subroutine call, not a function

Previously you would have created a Torch tensor and assigned some Fortran data to it as follows:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]

my_tensor = torch_tensor_from_array(fortran_data, tensor_layout, torch_kCPU)
```
Now a call is made to a subroutine with the tensor as the first argument:
```fortran
real, dimension(5), target :: fortran_data
type(torch_tensor) :: my_tensor
integer :: tensor_layout(1) = [1]

call torch_tensor_from_array(my_tensor, fortran_data, tensor_layout, torch_kCPU)
```
## `module` becomes `model`, and loading becomes a subroutine call, not a function

Previously a neural net was referred to as a '`module`' and loaded using appropriately named functions and types:
```fortran
type(torch_module) :: model

model = torch_module_load(args(1))
call torch_module_forward(model, in_tensors, out_tensors)
```
Following user feedback we now refer to a neural net, and its associated types and calls, as a '`model`'. The process of loading a net is also now a subroutine call, for consistency with the tensor creation operations:
```fortran
type(torch_model) :: model

call torch_model_load(model, 'path_to_saved_net.pt', torch_kCPU)
call torch_model_forward(model, in_tensors, out_tensors)
```
Note that the `device_type` argument has also been specified in the call to `torch_model_load`, for the reason mentioned above.
## `n_inputs` is no longer required

Previously, when you called the forward method on a net, you had to specify the number of tensors in the array of inputs:
```fortran
call torch_model_forward(model, in_tensors, n_inputs, out_tensors)
```
Now all that is supplied to the forward call is the model and the arrays of input and output tensors. No need for `n_inputs` (or `n_outputs`)!

```fortran
call torch_model_forward(model, in_tensors, out_tensors)
```
## Outputs are now an array of `torch_tensor`s

Previously you passed an array of `torch_tensor` types as inputs, and a single `torch_tensor` to the forward method:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor) :: output_tensor
...
call torch_model_forward(model, input_tensor_array, n_inputs, output_tensor)
```
Now both the inputs and the outputs need to be an array of `torch_tensor` types:
```fortran
type(torch_tensor), dimension(n_inputs) :: input_tensor_array
type(torch_tensor), dimension(n_outputs) :: output_tensor_array
...
call torch_model_forward(model, input_tensor_array, output_tensor_array)
```
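Putting the changes together, a fully migrated inference routine might look like the following sketch. The model path, array sizes, and data values here are illustrative assumptions, not part of the examples above:

```fortran
! Hypothetical end-to-end sketch of the post-e92ad9e API.
! Assumes a net saved at "my_saved_model.pt" that takes one
! 5-element input vector and produces one 5-element output vector.
real, dimension(5), target :: in_data, out_data
integer :: tensor_layout(1) = [1]
type(torch_model) :: model
type(torch_tensor), dimension(1) :: in_tensors, out_tensors

in_data = [1.0, 2.0, 3.0, 4.0, 5.0]

! All loading/creation operations are now subroutine calls,
! and the device type is specified explicitly.
call torch_model_load(model, "my_saved_model.pt", torch_kCPU)
call torch_tensor_from_array(in_tensors(1), in_data, tensor_layout, torch_kCPU)
call torch_tensor_from_array(out_tensors(1), out_data, tensor_layout, torch_kCPU)

! No n_inputs/n_outputs arguments; outputs are an array of tensors.
call torch_model_forward(model, in_tensors, out_tensors)
```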