FTorch provides a torch_tensor derived type,
which exposes the functionality of the torch::Tensor C++ class. The interface
is designed to be familiar to Fortran programmers, whilst retaining strong
similarity with torch::Tensor and the torch.Tensor Python class.
Under the hood, the torch_tensor type holds a
pointer to a torch::Tensor object in C++ (implemented using c_ptr from the
iso_c_binding intrinsic module). This allows us to avoid unnecessary data
copies between C++ and Fortran.
We provide several subroutines for constructing torch_tensor objects. These include:

- torch_tensor_empty, which allocates memory for the tensor but does not set any data values.
- torch_tensor_zeros and torch_tensor_ones, which create tensors whose entries are all zeros or ones, respectively.
- torch_tensor_from_array, which exposes an existing Fortran array as a tensor. The array must have the target property. The array will continue to be pointed to even when operations are applied to the tensor, so this subroutine can be used 'in advance' to set up an array for outputting data. torch_tensor_from_array may be called with or without the layout argument - an array which specifies the order in which indices should be looped over. The default layout, [1,2,...,n], implies that data will be read into the same indices by Torch. (See the transposing user guide page for more details.)

It is compulsory to call one of the constructors before interacting with a torch_tensor in any of the ways described in the following. Each of the constructors sets the pointer attribute of the torch_tensor; without this being set, most of the other operations are meaningless.
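As a minimal sketch of the most common constructor (variable names here are illustrative, and the optional layout argument is passed explicitly), wrapping an existing Fortran array might look like:

```fortran
program example_construct
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_delete
  implicit none

  ! The array must have the target attribute so the tensor can point to it
  real, dimension(2,3), target :: data_in
  type(torch_tensor) :: tensor

  data_in = 1.0

  ! Expose the array as a CPU tensor without copying any data; the layout
  ! [1,2] reads the indices in the same order as Fortran
  call torch_tensor_from_array(tensor, data_in, [1, 2], torch_kCPU)

  call torch_tensor_delete(tensor)
end program example_construct
```

Because tensor points at data_in, any values Torch writes into the tensor become visible in data_in, which is what makes the 'in advance' output pattern described above work.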
We provide several subroutines for interrogating torch_tensor objects. These include:

- torch_tensor_get_rank, which returns the rank (number of dimensions) of a tensor.
- torch_tensor_get_shape, which returns the shape (extent of each dimension) of a tensor.
- torch_tensor_get_dtype, which returns the data type of a tensor as one of the enum values torch_kInt8, torch_kFloat32, etc.
- torch_tensor_get_device_type, which returns the device type of a tensor as one of torch_kCPU, torch_kCUDA, torch_kXPU, etc.
- torch_tensor_get_device_index, which returns the device index of a tensor. For a CPU tensor the index is -1 (the default). For GPU devices, the index should be non-negative (defaulting to 0).

Procedures for interrogation are implemented as methods as well as stand-alone
procedures. For example, tensor%get_rank can be used in place of
torch_tensor_get_rank, omitting the first
argument (which would be the tensor itself). The naming pattern is similar for
the other methods (simply drop the preceding torch_tensor_).
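For instance, a sketch of the two equivalent call styles (assuming a tensor constructed with torch_tensor_from_array, as above):

```fortran
program example_interrogate
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_get_rank, torch_tensor_delete
  implicit none

  real, dimension(2,3), target :: data_in
  type(torch_tensor) :: tensor

  data_in = 0.0
  call torch_tensor_from_array(tensor, data_in, [1, 2], torch_kCPU)

  ! Stand-alone procedure and method forms are interchangeable;
  ! both report rank 2 for this 2x3 tensor
  print *, torch_tensor_get_rank(tensor)
  print *, tensor%get_rank()

  call torch_tensor_delete(tensor)
end program example_interrogate
```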
We provide a subroutine for deallocating the memory associated with a torch_tensor object: torch_tensor_delete. An interface torch_delete is provided such that this can also be applied to arrays of tensors.
Manually deallocating a tensor that you declared and constructed in your code is actually optional. If the tensor was declared in a subroutine then torch_tensor_delete will get called as the finalizer of torch_tensor when it goes out of scope. If the tensor was declared in a program then the finalizer won't get called, but this is not considered to be an issue, in the same way that in Fortran it doesn't matter if allocated arrays aren't deallocated at the end of the program. Note that if you are building FTorch with gfortran and the 2008 standard then you may see a warning related to finalizers - see the common warnings entry for more information.
See the Fortran-lang page on object-oriented Fortran for further details about finalization.
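A sketch of the scalar and array forms of deletion (variable names illustrative):

```fortran
program example_delete
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_delete, torch_delete
  implicit none

  real, dimension(4), target :: x, y
  type(torch_tensor) :: single
  type(torch_tensor), dimension(2) :: pair

  x = 0.0
  y = 0.0
  call torch_tensor_from_array(single, x, [1], torch_kCPU)
  call torch_tensor_from_array(pair(1), x, [1], torch_kCPU)
  call torch_tensor_from_array(pair(2), y, [1], torch_kCPU)

  ! Delete a single tensor...
  call torch_tensor_delete(single)
  ! ...or an array of tensors via the generic torch_delete interface
  call torch_delete(pair)
end program example_delete
```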
We provide the following subroutines for manipulating the data values associated with a torch_tensor object:

- torch_tensor_zero (also available as the method torch_tensor%zero), which sets all the data entries associated with a tensor to zero.

Note
For a concrete example of how to construct, interrogate, manipulate, and delete Torch tensors, see the tensor manipulation worked example.
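As a brief sketch, zeroing via the method form (equivalent to calling torch_tensor_zero directly):

```fortran
program example_zero
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_delete
  implicit none

  real, dimension(3), target :: values
  type(torch_tensor) :: tensor

  values = [1.0, 2.0, 3.0]
  call torch_tensor_from_array(tensor, values, [1], torch_kCPU)

  ! Set every entry of the tensor to zero; since the tensor points at
  ! values, the underlying array is zeroed too
  call tensor%zero()

  call torch_tensor_delete(tensor)
end program example_zero
```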
Mathematical operators involving Tensors are overloaded, so that we can compute expressions involving outputs from one or more ML models.
Whilst it's possible to import such functionality with a bare
use ftorch
statement, the best practice is to import specifically the operators that you
wish to use. Note that the assignment operator = has a slightly different
notation:
use ftorch, only: assignment(=), operator(+), operator(-), operator(*), &
operator(/), operator(**)
Particular care should be taken with the overloaded assignment operator.
Whenever you execute code involving torch_tensors on each side of an equals sign, the overloaded assignment operator will be triggered. As such, if you aren't
using the bare use ftorch import then you should ensure you specify
use ftorch, only: assignment(=) (as well as any other module members you
require).
For a straightforward assignment of two
torch_tensors a and b,
b = a
the overloaded assignment operator is called once.
For overloaded operators the situation is more complex. Consider the overloaded addition operator (the same applies for the rest). When we execute the line
c = a + b
the addition is evaluated first. It is implemented as a Fortran function and its
return value is an intermediate tensor. The setup is such that this is created
using torch_tensor_empty under the hood
(inheriting all the properties of the tensors being added [1]).
Following this, the intermediate tensor is assigned to c.
Finally, the finalizer for torch_tensor is
called for the intermediate tensor because it goes out of scope.
Similarly, if you have some function func that returns
a torch_tensor, an intermediate
torch_tensor will be created, assigned, and
destroyed because the call will have the form
a = func()
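Putting this together, a sketch of elementwise arithmetic with the recommended explicit imports, where (following the pattern of the worked examples) the result tensor is constructed over an output array in advance:

```fortran
program example_operators
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_delete, assignment(=), operator(+)
  implicit none

  real, dimension(2), target :: x, y, out_data
  type(torch_tensor) :: a, b, c

  x = [1.0, 2.0]
  y = [3.0, 4.0]
  call torch_tensor_from_array(a, x, [1], torch_kCPU)
  call torch_tensor_from_array(b, y, [1], torch_kCPU)
  call torch_tensor_from_array(c, out_data, [1], torch_kCPU)

  ! The addition creates an intermediate tensor; the overloaded assignment
  ! then assigns it to c, and the intermediate is finalized when it goes
  ! out of scope
  c = a + b

  call torch_tensor_delete(a)
  call torch_tensor_delete(b)
  call torch_tensor_delete(c)
end program example_operators
```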
We also expose operations for taking the sum or mean over the entries of a tensor, via the subroutines torch_tensor_sum and torch_tensor_mean, respectively.
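A sketch of the sum reduction follows; the argument order shown (result tensor first, then input) is assumed here to match the pattern of the other FTorch operations, and should be checked against the API documentation:

```fortran
program example_sum
  use ftorch, only: torch_tensor, torch_kCPU, torch_tensor_from_array, &
                    torch_tensor_sum, torch_tensor_delete
  implicit none

  real, dimension(4), target :: values
  type(torch_tensor) :: tensor, total

  values = [1.0, 2.0, 3.0, 4.0]
  call torch_tensor_from_array(tensor, values, [1], torch_kCPU)

  ! Reduce over all entries of the tensor
  ! (argument order assumed: result first, then input)
  call torch_tensor_sum(total, tensor)

  call torch_tensor_delete(tensor)
  call torch_tensor_delete(total)
end program example_sum
```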
Note
For a concrete example of how to compute mathematical expressions involving Torch tensors, see the autograd worked example.
[1] In most cases, these should be the same, so that the operator makes sense. In the case of the requires_grad property, the values might differ, and the result should be the logical .and. of the two values.