torch.set_deterministic

torch.set_deterministic(d)

Sets whether native PyTorch operations must use deterministic algorithms. When d=True, operations that lack a deterministic implementation raise a RuntimeError when called.

Warning

This feature is in beta, so it does not yet affect every nondeterministic operation. The operations listed below are affected by this flag.

The following normally-nondeterministic operations will act deterministically when d=True:

The following normally-nondeterministic operations will throw a RuntimeError when d=True:

When the CUDA version is 10.2 or greater, a handful of CUDA operations are nondeterministic unless the environment variable CUBLAS_WORKSPACE_CONFIG=:4096:8 or CUBLAS_WORKSPACE_CONFIG=:16:8 is set. See the CUDA documentation for more details: https://docs.nvidia.com/cuda/cublas/index.html#cublasApi_reproducibility

If neither environment variable configuration is set, these operations raise a RuntimeError when called with CUDA tensors:
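As a sketch, one way to set this variable from within Python rather than the shell (using os.environ; the value must be set before the process makes its first cuBLAS call):

```python
import os

# ":4096:8" is one of the two accepted configurations;
# ":16:8" is the lower-memory alternative.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
```

Setting the variable in the launching shell (e.g. `export CUBLAS_WORKSPACE_CONFIG=:4096:8`) is equivalent and avoids any ordering concerns.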

Note that deterministic operations tend to have worse performance than nondeterministic operations.

Parameters

d (bool) – If True, force operations to use deterministic algorithms. If False, allow nondeterministic operations.
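A minimal usage sketch. Since this API is in beta, the snippet falls back to the renamed functions (torch.use_deterministic_algorithms / torch.are_deterministic_algorithms_enabled) that later PyTorch releases expose instead:

```python
import torch

# set_deterministic / is_deterministic are the beta-era names; fall back
# to the renamed equivalents if this build no longer exposes them.
set_det = getattr(torch, "set_deterministic", torch.use_deterministic_algorithms)
check = getattr(torch, "is_deterministic", torch.are_deterministic_algorithms_enabled)

set_det(True)     # ops without deterministic algorithms now raise RuntimeError
assert check()

set_det(False)    # restore the default behavior
assert not check()
```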
