All notable changes to the project will be documented in this file.
The format is based on Keep a Changelog, and this project adheres to Semantic Versioning. For specific details see the FTorch online documentation.
- Added `get_stride` method in #416.
- Removed the UNIX preprocessor variable that selected the right C integer type for 64-bit integers; `int64_t` is used instead, in #416.
- Added `MULTI_GPU` option to control the build of multi-GPU integration tests in #410.
- `requires_grad` property hooked up to `torch_tensor` in #288.
- Added `torch_tensor_backward` and `torch_tensor_get_gradient` in #286.
- Added `retain_graph` argument for `torch_tensor_backward` in #342.
- Added `torch_tensor_zero` and class method alias in #338.
- Added `torch_tensor_from_array` with default layout in #348.
- Updated `torch_tensor_backward` so that it can accept a custom external gradient tensor as its second argument. This slightly breaks the current API because the original signature (with an assumed unit external gradient) now only works if the first argument is a 1-dimensional tensor with a single entry, i.e., a scalar. However, it brings FTorch's API in line with that of (Py)Torch.
- The shared library (`libftorch.so`) produced by the CMake installation now has a RUNPATH containing the path to the Torch library directory. Downstream targets linking against FTorch can now find the Torch dependency automatically and will compile successfully, in #437.
- Updated `CMakeLists.txt` files where `find_package(FTorch)` was present to use `REQUIRED` when not building tests, so that the CMake configuration process stops early for users who only wish to build examples, in #434.
- `torch_tensor` in #289.
- Used `torch_tensor_from_array` with default layout in tests and examples in #348.
- `ctorch.cpp` improved in #347.
- `torch_tensor_array_delete` was removed in favour of using the elemental `torch_tensor_delete` instead in #545. Users should be using the `torch_delete` interface and so will not be impacted; this is therefore not considered a breaking change.
- The subroutine for deleting `torch_tensor`s (`torch_tensor_delete`) was made elemental in #545. This fixes a possible issue whereby users could experience a memory leak if they did not explicitly delete arrays of tensors. Now, being elemental, the finalizer is called on both single tensors and arrays of tensors when they go out of scope. As such, `torch_tensor_array_delete` was removed (use `torch_tensor_delete` instead).
- The array arguments of `torch_tensor_from_array` now have the `pointer, contiguous` properties rather than `target`, in #530. This change technically breaks the API because it is no longer possible to pass temporary Fortran arrays as the second argument of `torch_tensor_from_array`. However, that was a bug rather than a feature, so any workflow that crashes due to this change will give the user information on how to remove the error.
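The elemental `torch_tensor_delete` change above can be sketched as follows. This is a minimal sketch only: the variable names and the exact `torch_tensor_from_array` argument list are assumptions, not the documented API — consult the FTorch online documentation for the real signatures.

```fortran
! Sketch (assumed argument lists): with torch_tensor_delete now elemental,
! a single call can delete a whole array of tensors, replacing the removed
! torch_tensor_array_delete.
program delete_sketch
  use ftorch
  implicit none
  type(torch_tensor) :: single    ! one tensor
  type(torch_tensor) :: batch(3)  ! an array of tensors
  real, target :: arr(2, 3)

  arr = 1.0
  call torch_tensor_from_array(single, arr, torch_kCPU)  ! assumed signature
  ! ... create batch(1), batch(2), batch(3) similarly ...

  call torch_tensor_delete(single)  ! scalar call, as before
  call torch_tensor_delete(batch)   ! elemental: applied to every element
end program delete_sketch
```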
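The `torch_tensor_from_array` change described above can be illustrated with a short Fortran sketch. Again this is a hedged sketch, not the documented interface: the argument order and names are assumptions, and the point is only that the data argument must now be a named array variable rather than a temporary expression.

```fortran
! Sketch (assumed argument lists): the array passed to torch_tensor_from_array
! must now be associable with a pointer, contiguous dummy argument, so
! temporary array expressions are rejected at compile time.
real, target :: x(10)
type(torch_tensor) :: t

x = 0.0
call torch_tensor_from_array(t, x, torch_kCPU)          ! fine: named array variable
! call torch_tensor_from_array(t, x + 1.0, torch_kCPU) ! no longer possible:
!                                                      ! x + 1.0 is a temporary
```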