The FTorch repository comes with a number of documented worked examples.
These are designed to introduce users to FTorch and how to use the various features.
A subset of the examples are used as integration tests as part of FTorch's test suite.
This worked example
provides a simple demonstration of how to create, manipulate,
interrogate, and destroy instances of the torch_tensor derived type. This is
one of the core derived types in the FTorch library, providing an interface to
the torch::Tensor C++ class. Like torch::Tensor, the torch_tensor derived
type is designed to have a similar API to PyTorch's torch.Tensor class.
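Because torch_tensor mirrors the torch.Tensor API, the same lifecycle can be seen from the Python side. A minimal sketch (illustrative, not taken from the example itself):

```python
import torch

# Create a tensor from data
t = torch.tensor([[1.0, 2.0], [3.0, 4.0]])

# Interrogate its properties
print(t.shape)  # torch.Size([2, 2])
print(t.dtype)  # torch.float32
print(t.ndim)   # 2

# Manipulate it: out-of-place and in-place operations
t2 = t * 2.0    # new tensor [[2., 4.], [6., 8.]]
t.add_(1.0)     # t becomes [[2., 3.], [4., 5.]]

# Destroy it: Python relies on garbage collection, whereas a
# Fortran torch_tensor instance must be destroyed explicitly
del t
```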
This worked example provides a simple but complete demonstration of how to use the library. It uses a simple PyTorch 'net' that takes an input vector of length 5 and applies a single Linear layer to multiply it by 2. The aim is to demonstrate the most basic features of coupling before worrying about more complex issues that are covered in later examples.
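The net described above might be defined in PyTorch along these lines (a sketch consistent with the description; the example's actual script may differ):

```python
import torch
import torch.nn as nn

class SimpleNet(nn.Module):
    """Takes a length-5 vector and multiplies it by 2 via a single Linear layer."""
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(5, 5, bias=False)
        with torch.no_grad():
            # Set the weights to 2*I so the layer doubles its input
            self.linear.weight.copy_(2.0 * torch.eye(5))

    def forward(self, x):
        return self.linear(x)

net = SimpleNet()
x = torch.tensor([0.0, 1.0, 2.0, 3.0, 4.0])
print(net(x).detach().tolist())  # [0.0, 2.0, 4.0, 6.0, 8.0]
```

Such a net would typically be saved with TorchScript (torch.jit.script or torch.jit.trace) before being loaded for inference from Fortran.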
This worked example provides a more realistic demonstration of how to use the library, using ResNet-18 to classify an image. As the input to this model is four-dimensional (batch size, colour, x, y), care must be taken when dealing with the data array in Python and Fortran. See when to transpose arrays for more details.
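The row-major/column-major distinction behind that advice can be illustrated in Python (an illustrative sketch, not the example's code):

```python
import numpy as np
import torch

# ResNet-18 expects NCHW input: (batch size, colour channels, height, width)
x = torch.zeros(1, 3, 224, 224)
print(x.shape)  # torch.Size([1, 3, 224, 224])

# NumPy/PyTorch default to row-major (C) order; Fortran arrays are
# column-major, so the same logical array has a different memory layout
a = np.arange(6, dtype=np.float32).reshape(2, 3)  # C order
b = np.asfortranarray(a)                          # Fortran order
assert np.array_equal(a, b)    # logically identical values
assert a.strides != b.strides  # physically different memory layouts
```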
This worked example demonstrates how to use a PyTorch model trained on 1D vectors to perform inference on batched and higher-dimensional data from Fortran. It covers unbatched, batched, and multidimensional cases.
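The shape handling can be seen on the PyTorch side, where layers such as nn.Linear act on the last dimension of their input (a sketch using a hypothetical stand-in model):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a model trained on 1D vectors of length 4
model = nn.Linear(4, 4)

x1 = torch.randn(4)        # unbatched: (4,)
xb = torch.randn(8, 4)     # batched: (batch, 4)
xm = torch.randn(2, 3, 4)  # multidimensional: extra leading dims

# nn.Linear applies to the last dimension, so all three shapes work
print(model(x1).shape)  # torch.Size([4])
print(model(xb).shape)  # torch.Size([8, 4])
print(model(xm).shape)  # torch.Size([2, 3, 4])
```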
This worked example considers a variant of the SimpleNet demo that demonstrates how to handle multiple input tensors and multiple output tensors.
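A net with multiple inputs and outputs simply takes and returns several tensors; in PyTorch this might look like (a hypothetical sketch, not the example's code):

```python
import torch
import torch.nn as nn

class MultiIONet(nn.Module):
    """Hypothetical net taking two input tensors and returning two outputs."""
    def forward(self, a, b):
        return a + b, a * b

net = MultiIONet()
a = torch.tensor([1.0, 2.0])
b = torch.tensor([3.0, 4.0])
s, p = net(a, b)
print(s.tolist(), p.tolist())  # [4.0, 6.0] [3.0, 8.0]
```

On the Fortran side each of these inputs and outputs then corresponds to its own tensor.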
This worked example demonstrates best practices for performing inference on the same network with different inputs multiple times in the same workflow.
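In PyTorch terms, the key practices are creating the model once, switching it to evaluation mode, and disabling gradient tracking for the repeated calls (an illustrative sketch):

```python
import torch
import torch.nn as nn

model = nn.Linear(5, 5)
model.eval()  # disable training-only behaviour such as dropout

inputs = [torch.randn(5) for _ in range(3)]
outputs = []
with torch.inference_mode():  # skip autograd bookkeeping during inference
    for x in inputs:          # reuse the same model for every call
        outputs.append(model(x))
```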
This worked example builds on the SimpleNet demo and shows how to account for the case of sending different data to multiple GPU devices.
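Distributing data amounts to choosing a device per worker and moving each tensor onto it. A hedged Python sketch (device_for is a hypothetical helper, and it falls back to the CPU when no GPU is present):

```python
import torch

def device_for(worker: int) -> torch.device:
    """Map a worker index to a GPU device, cycling through those available."""
    if torch.cuda.is_available():
        return torch.device(f"cuda:{worker % torch.cuda.device_count()}")
    return torch.device("cpu")  # fallback for CPU-only systems

# Different data for each device, as in the multi-GPU example
data = [torch.full((5,), float(i)) for i in range(2)]
placed = [x.to(device_for(i)) for i, x in enumerate(data)]
```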
This worked example demonstrates how to run the SimpleNet example in the context of MPI parallelism, running the net with different input arrays on each MPI rank.
These worked examples demonstrate how to perform automatic differentiation of operations available in FTorch by leveraging PyTorch's Autograd module. The first example shows how to differentiate through mathematical expressions involving Torch tensors, while the second shows how to differentiate the forward propagation of a simple neural network.
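The first case, differentiating a mathematical expression, reduces to marking a tensor with requires_grad and calling backward (an illustrative sketch):

```python
import torch

# Differentiate y = x**2 + 2x with respect to x at x = 3
x = torch.tensor(3.0, requires_grad=True)
y = x**2 + 2.0 * x
y.backward()   # populates x.grad with dy/dx
print(x.grad)  # tensor(8.) since dy/dx = 2x + 2 = 8 at x = 3
```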
These worked examples demonstrate how to make use of optimizers to compute optimization steps as part of a training process. Equivalent Python and Fortran codes demonstrate the 'training' of a tensor to map input data to target data, including a demonstration that the results are identical.
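The 'training' of a tensor described above can be sketched in PyTorch with a standard optimizer loop (illustrative; the learning rate and step count here are arbitrary choices, not the examples' values):

```python
import torch

# 'Train' a tensor w so that it matches the target data
w = torch.zeros(3, requires_grad=True)
target = torch.tensor([1.0, 2.0, 3.0])
opt = torch.optim.SGD([w], lr=0.1)

for _ in range(200):
    opt.zero_grad()                      # clear gradients from the last step
    loss = torch.sum((w - target) ** 2)  # squared-error loss
    loss.backward()                      # compute d(loss)/dw
    opt.step()                           # take one optimization step
```

After the loop, w has converged to the target values.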