In order to use FTorch users will typically need to follow these steps:

1. Save a PyTorch model as TorchScript.
2. Write Fortran using the FTorch bindings to use the model from within Fortran.
3. Build and compile the code, linking against the FTorch library.

These are outlined in detail below.
The trained PyTorch model needs to be exported to
TorchScript.
This can be done from within Python using the jit.script or jit.trace functionality.
If you are not familiar with these, we provide the tool pt2ts.py as part of this
distribution, which contains an easily adaptable script to save your PyTorch model
as TorchScript.
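For example, a trained model could be scripted and saved along the following lines (a minimal sketch; the Linear model and the model.pt filename are placeholders standing in for your own trained model and path):

import torch

# Placeholder for your trained model
model = torch.nn.Linear(10, 5)
model.eval()

# Convert the model to TorchScript; jit.script handles Python control flow,
# while jit.trace records the operations performed on a dummy input
scripted_model = torch.jit.script(model)
# Alternatively: scripted_model = torch.jit.trace(model, torch.ones(10, 10))

# Save the TorchScript model, ready to be loaded from Fortran
scripted_model.save("model.pt")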
To use the trained Torch model from within Fortran we need to import the ftorch
module and use the binding routines to load the model, convert the data,
and run inference.
A very simple example is given below: this minimal snippet loads a saved Torch
model, creates an input consisting of a 10x10 matrix of ones, and runs the model
to infer the output.
This is for illustrative purposes only; we recommend working through the
examples before writing your own code, to fully explore the features.
! Import library for interfacing with PyTorch
use ftorch
implicit none
! Generate an object to hold the Torch model
type(torch_model) :: model
! Set up array of n_inputs input tensors and array of n_outputs output tensors
! Note: In this example there is only one input tensor (n_inputs = 1) and one
! output tensor (n_outputs = 1)
integer, parameter :: n_inputs = 1
integer, parameter :: n_outputs = 1
type(torch_tensor), dimension(n_inputs) :: model_input_arr
type(torch_tensor), dimension(n_outputs) :: model_output_arr
! Set up the model inputs and output as Fortran arrays
real, dimension(10,10), target :: input
real, dimension(5), target :: output
! Set up number of dimensions of input tensor and axis order
integer, parameter :: in_dims = 2
integer :: in_layout(in_dims) = [1,2]
integer, parameter :: out_dims = 1
integer :: out_layout(out_dims) = [1]
! Initialise the Torch model to be used
call torch_model_load(model, "/path/to/saved/model.pt")
! Initialise the inputs as Fortran array of ones
input = 1.0
! Wrap Fortran data as no-copy Torch Tensors
! There may well be some reshaping required depending on the
! structure of the model which is not covered here (see examples)
call torch_tensor_from_array(model_input_arr(1), input, in_layout, torch_kCPU)
call torch_tensor_from_array(model_output_arr(1), output, out_layout, torch_kCPU)
! Run the model forward method to infer the output
! Again, there may be some reshaping required depending on model design
call torch_model_forward(model, model_input_arr, model_output_arr)
! Write out the result of running the model
write(*,*) output
! Clean up
call torch_delete(model)
call torch_delete(model_input_arr)
call torch_delete(model_output_arr)
The code now needs to be compiled and linked against our installed library. Here we describe how to do this for two build systems, CMake and make.
If our project were using CMake, we would need to find the FTorch installation
and link it to the executable. This can be done by adding the following to the
CMakeLists.txt file:
find_package(FTorch)
target_link_libraries( <executable> PRIVATE FTorch::ftorch )
message(STATUS "Building with Fortran PyTorch coupling")
and using the -DCMAKE_PREFIX_PATH=</path/to/install/location>
flag when running CMake.
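For example, from a typical out-of-source build directory (the build layout here is a hypothetical illustration; adjust the path to your own project):

cmake -DCMAKE_PREFIX_PATH=</path/to/install/location> ..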
Note: If you used the CMAKE_INSTALL_PREFIX argument when building and installing
the library then you should use the same path for </path/to/install/location>.
To build with make we need to include the library when compiling and link the executable against it.
When compiling files that use FTorch, add the following compiler flag:
FCFLAGS += -I<path/to/install/location>/include/ftorch
When compiling the final executable add the following link flag:
LDFLAGS += -L<path/to/install/location>/lib -lftorch
You may also need to add the location of the .so
files to your LD_LIBRARY_PATH
unless installing in a default location:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:<path/to/install/location>/lib
Note: Depending on your system and architecture, lib may be lib64 or something similar.
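Putting these pieces together, a minimal Makefile might look like the following (a sketch only; the my_program target, the main.f90 source, and the choice of gfortran are hypothetical placeholders):

FC = gfortran
FCFLAGS += -I<path/to/install/location>/include/ftorch
LDFLAGS += -L<path/to/install/location>/lib -lftorch

# Link the final executable against FTorch
my_program: main.o
	$(FC) -o my_program main.o $(LDFLAGS)

# Compile the Fortran source that uses the ftorch module
main.o: main.f90
	$(FC) $(FCFLAGS) -c main.f90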
In order to run a model on GPU, two main changes to the above process are required:

1. When saving your TorchScript model, ensure that it is saved for the GPU (for example, by moving the model and any dummy inputs to the GPU device before scripting/tracing and saving).
2. When calling torch_tensor_from_array in Fortran, the device for the input tensor(s) should be set to torch_kCUDA, rather than torch_kCPU, as sketched below.

For more information refer to the GPU Documentation.
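For instance, the tensor-creation call from the snippet above would become the following (a minimal sketch; it assumes FTorch was built with GPU support so that torch_kCUDA is available):

call torch_tensor_from_array(model_input_arr(1), input, in_layout, torch_kCUDA)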
The repository comes with a number of documented worked examples.
These are designed to introduce users to FTorch and how to use the various features.
A subset of the examples is included as integration tests in FTorch's test suite.
This worked example provides a simple but complete demonstration of how to use the library. It uses a simple PyTorch 'net' that takes an input vector of length 5 and applies a single Linear layer to multiply it by 2. The aim is to demonstrate the most basic features of coupling before worrying about more complex issues that are covered in later examples.
This worked example provides a more realistic demonstration of how to use the library, using ResNet-18 to classify an image. As the input to this model is four-dimensional (batch size, colour, x, y), care must be taken when dealing with the data array in Python and Fortran. See when to transpose arrays for more details.
This worked example builds on the SimpleNet demo and shows how to account for the case of sending different data to multiple GPU devices.
This worked example considers a variant of the SimpleNet demo, which demonstrates how to account for multiple input tensors and multiple output tensors.
This worked example is currently under development. Eventually, it will demonstrate how to perform automatic differentiation in FTorch by leveraging PyTorch's Autograd module. Currently, it just demonstrates how to use torch_tensor_to_array.