Troubleshooting

If you are experiencing problems building or using FTorch, please see below for guidance on common issues.

Windows

If possible we recommend using the Windows Subsystem for Linux (WSL) to build the library. In this case the build process is the same as for a Linux environment.

If you need to build natively on Windows, please read the following information:

Visual Studio

It is possible to build using Visual Studio and the Intel Fortran compiler.
In this case you must install Visual Studio and the Intel oneAPI Base and HPC toolkits, making sure that the Intel Fortran compiler and the Visual Studio integration are selected during installation.

You will then need to load the Intel Fortran compilers using setvars.bat, which is found in the Intel compiler install directory (see the Intel docs for more details).
From CMD this can be done with:

"C:\Program Files (x86)\Intel\oneAPI\setvars.bat"

Finally, you will need to add -G "NMake Makefiles" to the cmake command in the regular install instructions.
So the basic command to build from CMD becomes:

cmake -G "NMake Makefiles" -DCMAKE_PREFIX_PATH="C:\Users\<path-to-libtorch-download>\libtorch" -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
cmake --install .

If using PowerShell, the setvars and build commands become:

cmd /k '"C:\Program Files (x86)\Intel\oneAPI\setvars.bat" && powershell'
cmake -G "NMake Makefiles" -DCMAKE_PREFIX_PATH="C:\Users\<path-to-libtorch-download>\libtorch" -DCMAKE_BUILD_TYPE=Release ..
cmake --build .
cmake --install .

MinGW

It may be tempting to build on Windows using MinGW. However, libtorch does not currently support MinGW. Instead, please build using Visual Studio and the Intel Fortran compiler (ifort) as detailed in the project README.

Apple Silicon

At the time of writing, libtorch is only officially available for x86 architectures (according to pytorch.org). However, the version of PyTorch installed via pip includes an ARM libtorch binary that works on Apple Silicon. In this situation you should therefore pip install torch and follow the guidance on locating Torch within a virtual environment (venv) for CMake.
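
A minimal sketch of this workflow is shown below (assuming a Unix-like shell; torch.utils.cmake_prefix_path is the standard way PyTorch exposes the location of its bundled libtorch, but the exact cmake invocation will depend on your setup):

# Create and activate a virtual environment, then install PyTorch
python3 -m venv venv
source venv/bin/activate
pip install torch

# Point CMake at the libtorch bundled with the pip-installed PyTorch
cmake .. -DCMAKE_PREFIX_PATH=`python -c 'import torch; print(torch.utils.cmake_prefix_path)'`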

FAQ

Why are the inputs to Torch models arrays?

The reason input and output tensors to torch_model_forward are contained in arrays is that it is possible to pass multiple input tensors to the forward() method of a Torch net, and it is possible for the net to return multiple output tensors.
The nature of Fortran means that it is not possible to pass an arbitrary number of inputs to the torch_model_forward subroutine, so instead we use a single array of input tensors, which can have an arbitrary length. Similarly, a single array of output tensors is used.
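
As a minimal sketch of this pattern (assuming a TorchScript model saved as saved_model.pt that maps one length-5 vector to another; exact FTorch argument lists may differ between versions, so check the API documentation):

program forward_example
   use ftorch
   implicit none

   real, dimension(5), target :: in_data, out_data
   integer, parameter :: layout(1) = [1]

   type(torch_model) :: model
   ! Arrays of tensors: one element each here, but a net whose forward()
   ! takes or returns several tensors would use longer arrays
   type(torch_tensor), dimension(1) :: in_tensors, out_tensors

   in_data = [1.0, 2.0, 3.0, 4.0, 5.0]

   call torch_model_load(model, "saved_model.pt", torch_kCPU)
   call torch_tensor_from_array(in_tensors(1), in_data, layout, torch_kCPU)
   call torch_tensor_from_array(out_tensors(1), out_data, layout, torch_kCPU)

   ! A single call runs the net on the whole array of inputs
   call torch_model_forward(model, in_tensors, out_tensors)

   call torch_delete(in_tensors)
   call torch_delete(out_tensors)
   call torch_delete(model)
end program forward_example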

Note that this does not refer to batching data, which should be done in the same way as in Torch: by extending the dimensionality of the input tensors.
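
For instance (a hypothetical fragment, assuming a net that expects vectors of length 5), a batch of 16 samples is passed as a single rank-2 tensor rather than as 16 entries of the input array:

use ftorch
implicit none

real, dimension(16, 5), target :: batched_data   ! batch dimension added, as in PyTorch
integer, parameter :: layout(2) = [1, 2]
type(torch_tensor), dimension(1) :: in_tensors   ! still a single entry in the array

! The batch lives inside the tensor, not in the array of tensors
call torch_tensor_from_array(in_tensors(1), batched_data, layout, torch_kCPU)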

Do I need to set torch.no_grad() or model.eval() somewhere, as in PyTorch?

By default we disable gradient calculations for tensors and models and place models in evaluation mode for efficiency. These can be adjusted using the requires_grad and is_training optional arguments in the Fortran interface. See the API procedures documentation for details.
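
For example (a hedged sketch using keyword arguments; consult the API procedures documentation for the exact interfaces, which may vary between FTorch versions):

use ftorch
implicit none

type(torch_model)  :: model
type(torch_tensor) :: tensor
real, dimension(5), target :: raw_data
integer, parameter :: layout(1) = [1]

! Re-enable gradient tracking on a tensor (disabled by default)
call torch_tensor_from_array(tensor, raw_data, layout, torch_kCPU, requires_grad=.true.)

! Load the model in training mode rather than the default evaluation mode
call torch_model_load(model, "saved_model.pt", torch_kCPU, is_training=.true.)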