If you would like to contribute to the FTorch project, or modify the code at a deeper level, please see below for guidance.
Contributions and collaborations are welcome.
For bugs, feature requests, and clear suggestions for improvement please open an issue.
If you have built something upon FTorch that would be useful to others, or can address an open issue, please fork the repository and open a pull request.
Everyone participating in the FTorch project, and in particular in the issue tracker, pull requests, and social media activity, is expected to treat other people with respect and, more generally, to follow the guidelines articulated in the Python Community Code of Conduct.
If you have Torch functionality that you wish to bring from the C++ API into the FTorch Fortran API, the steps are generally as follows:

1. Extend ctorch.cpp to create a C++ version of the function that accesses torch::<item>.
2. Add the new function's declaration to ctorch.h.
3. Extend ftorch.fypp to create a Fortran version of the function that binds to the version in ctorch.cpp.

Details of the C++ functionality available to be wrapped can be found in the LibTorch C++ API documentation.
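The steps above can be sketched as follows. This is an illustrative pattern only, not actual FTorch source: the LibTorch call is replaced by a stand-in so the sketch is self-contained, and the function name ctorch_example_sum is hypothetical.

```cpp
#include <cstdio>
#include <exception>

// Stand-in for a LibTorch routine; a real wrapper in ctorch.cpp would call
// into torch::<item> instead.
namespace stand_in {
double sum(const double* data, int n) {
  double total = 0.0;
  for (int i = 0; i < n; ++i) total += data[i];
  return total;
}
}  // namespace stand_in

// Steps 1 and 2: a C-compatible wrapper (the role ctorch.cpp plays),
// declared in a header (the role ctorch.h plays) so Fortran can bind to it.
// Exceptions are caught and reported here because they cannot propagate
// across the C interface into Fortran.
extern "C" double ctorch_example_sum(const double* data, int n) {
  try {
    return stand_in::sum(data, n);
  } catch (const std::exception& e) {
    std::fprintf(stderr, "Error in ctorch_example_sum: %s\n", e.what());
    return 0.0;
  }
}

// Step 3 (sketched as a comment since this file is C++): ftorch.fypp would
// declare a Fortran interface binding to the exported symbol, e.g.
//   function ctorch_example_sum(data, n) result(s) &
//       bind(c, name="ctorch_example_sum")
```

On the Fortran side the interface would use iso_c_binding types matching the C signature.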
As this is an open-source project we appreciate any contributions back from users who have extended the functionality. If you have done something but don't know where to start with open-source contributions, please get in touch!*
*Our preferred method of contact is via GitHub issues and discussions, but if you are unfamiliar with this you can email ICCS asking for the FTorch developers.
Most of the development tools (pre-processing, code styling) are pip-installable packages. You can install them by running
pip install -r requirements-dev.txt
inside a Python virtual environment from the base FTorch directory.
The Fortran source code for FTorch is contained in src/ftorch.f90.
However, this file should not be edited directly; instead it is generated from src/ftorch.fypp, which is set up to be run through the Fypp preprocessor.
We use this because we want to create a pleasant interface of single function calls.
The nature of Fortran means that this requires a lot of repeated combinations of
array shapes and data types under interface structures.
By using Fypp we can generate these programmatically.
Fypp is a pip-installable package that comes bundled with the developer requirements.
To generate the Fortran code run:
fypp src/ftorch.fypp src/ftorch.f90
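As a flavour of what this preprocessing gives us, here is a minimal, hypothetical Fypp fragment (the names below are illustrative, not taken from ftorch.fypp) that expands an interface over types and ranks:

```
#! Hypothetical fragment: expand an interface over types and ranks.
#:set TYPES = ['real32', 'real64']
#:for TYP in TYPES
#:for RANK in range(1, 5)
    module procedure torch_tensor_from_array_${TYP}$_${RANK}$d
#:endfor
#:endfor
```

Running fypp over this fragment would emit one module procedure line per type/rank combination (eight in this case), saving us from writing each by hand.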
Note: Generally it would be advisable to provide only the .fypp source code to reduce duplication and confusion. However, because it is a relatively small file and many of our users wish to "clone-and-go" rather than develop, we provide both. Development should only take place in ftorch.fypp, however.
When extending or modifying functionality related to the C++ header and/or source files src/ctorch.h and src/ctorch.cpp, we refer to the Torch C++ documentation, and more specifically the C++ API documentation pages on the PyTorch website, for details.
GPU device-specific code is handled in FTorch using codes defined in the root
CMakeLists.txt
file:
set(GPU_DEVICE_NONE 0)
set(GPU_DEVICE_CUDA 1)
set(GPU_DEVICE_XPU 12)
set(GPU_DEVICE_MPS 13)
These device codes are chosen to be consistent with the numbering used in
PyTorch (see
https://github.com/pytorch/pytorch/blob/main/c10/core/DeviceType.h). When a user
specifies -DGPU_DEVICE=XPU
(for example) in the FTorch CMake build, this is
mapped to the appropriate device code (in this case 12). The chosen device code, along with all of the others defined, is passed to the C++ compiler in the following step:
target_compile_definitions(
${LIB_NAME}
PRIVATE ${COMPILE_DEFS} GPU_DEVICE=${GPU_DEVICE_CODE}
GPU_DEVICE_NONE=${GPU_DEVICE_NONE} GPU_DEVICE_CUDA=${GPU_DEVICE_CUDA}
GPU_DEVICE_XPU=${GPU_DEVICE_XPU} GPU_DEVICE_MPS=${GPU_DEVICE_MPS})
The chosen device code will enable the appropriate C preprocessor conditions in the C++ source so that the code relevant to that device type becomes active.
An example illustrating why this approach was taken: if we removed the device codes and preprocessor conditions and tried to build with a CPU-only or CUDA LibTorch installation, then compile errors would arise from the use of the torch::xpu module in src/ctorch.cpp.
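A minimal sketch of how these definitions drive the preprocessor branches (illustrative only, not actual ctorch.cpp source; the values are hard-coded here so the example is self-contained, whereas in the real build they are injected by target_compile_definitions):

```cpp
#include <string>

// In the real build these come from target_compile_definitions in the
// root CMakeLists.txt; hard-coded here only for self-containment.
#define GPU_DEVICE_NONE 0
#define GPU_DEVICE_CUDA 1
#define GPU_DEVICE_XPU 12
#define GPU_DEVICE_MPS 13
#define GPU_DEVICE GPU_DEVICE_CUDA  // as if -DGPU_DEVICE=CUDA were selected

// Only the branch matching the selected device is compiled, so for example
// calls into torch::xpu never reach the compiler in a CUDA build.
std::string active_device() {
#if GPU_DEVICE == GPU_DEVICE_CUDA
  return "cuda";
#elif GPU_DEVICE == GPU_DEVICE_XPU
  return "xpu";
#elif GPU_DEVICE == GPU_DEVICE_MPS
  return "mps";
#else
  return "none";
#endif
}
```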
In order to streamline this process we provide a pre-commit hook in .githooks/pre-commit.
This will check that both the .fypp and .f90 files have been updated together before a commit can take place.
If this does not happen then the second line of defence (GitHub continuous integration)
will fail following the commit.
Use of the hook is not automatic and needs to be enabled by the developer
(after they have inspected it and are happy with its contents).
Hooks can be enabled by placing them in the .git
directory with the following commands:
cp .githooks/pre-commit .git/hooks/
chmod +x .git/hooks/pre-commit
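A hypothetical sketch of the kind of pairing check such a hook performs (the real hook is .githooks/pre-commit; the staged file list is hard-coded here so the sketch is self-contained):

```shell
# A real hook would obtain the staged files with:
#   staged=$(git diff --cached --name-only)
staged="src/ftorch.fypp
src/ftorch.f90"

fypp_staged=no; f90_staged=no
printf '%s\n' "$staged" | grep -q '^src/ftorch\.fypp$' && fypp_staged=yes
printf '%s\n' "$staged" | grep -q '^src/ftorch\.f90$' && f90_staged=yes

# Refuse the commit if only one of the paired files was updated.
if [ "$fypp_staged" != "$f90_staged" ]; then
  echo "Commit blocked: update ftorch.fypp and ftorch.f90 together" >&2
  exit 1
fi
echo "paired files staged together"
```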
FTorch source code is subject to a number of static analysis checks to ensure quality and legibility. These tools are a mixture of formatters and linters.
The tools we use are as follows on a language-by-language basis:
Instructions on installing these tools can be found in their respective documentation.
Note that all but ShellCheck may be installed with pip; check out the dev tools installation section for instructions on how to do so.
Contributors should run them over their code and ensure that it conforms before submitting a pull request. If there is a good reason to ignore a particular rule this should be justified in the pull request and ideally documented in the code.
There is a GitHub action as part of the continuous integration that will perform these checks on all opened pull requests before they are merged.
We handle torch::Error and std::exception in the C++ functions by catching and
printing to screen before exiting cleanly.

FTorch follows semantic versioning.
The project version should be updated accordingly through the PACKAGE_VERSION
in
CMakeLists.txt for each new release.
A log of notable changes to the software is kept in CHANGELOG.md.
This follows the conventions of Keep a Changelog and should
be updated by contributors and maintainers as part of a pull request when appropriate.
"Notable" includes new features, bugfixes, dependency updates etc.
"Notable" does not cover typo corrections, documentation rephrasing and restyling,
or correction of other minor infelicities that do not impact the user or developer.
New minor releases are made when deemed appropriate by maintainers by adding a tag to the commit and creating a corresponding GitHub Release. The minor number of the version should be incremented, the entry for the version finalised in the changelog, and a clean log for 'Unreleased' changes created.
New patch releases are made whenever a bugfix is merged. The patch number of the version should be incremented, a tag attached to the commit, and a note made under the current 'Unreleased' patches header in the changelog.
The API documentation for FTorch is generated using FORD. For detailed information refer to the FORD User Guide, but as a quick-start:

- !! is used to signify documentation placed after the item it documents.
- !> is used to signify documentation placed before the item it documents.

FORD is pip-installable:
pip install ford
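As a brief illustration of the FORD comment markers (hypothetical code, not taken from FTorch):

```
!> Documentation placed before the item it documents.
subroutine example(x)
  !! Documentation placed after the item it documents.
  real, intent(in) :: x
end subroutine example
```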
To generate the documentation run:
ford FTorch.md
from the root of the repository.
FTorch.md is the FORD index file; API documentation is automatically generated, and any further items are contained in pages/ as markdown files.
Documentation of the C functions in ctorch.h is provided by Doxygen.
Note that the macros for GPU devices that are passed to ftorch.F90 via the C preprocessor need to be defined in FTorch.md to match those in the CMakeLists.txt.