requires_grad_

Documentation
class treetensor.torch.Tensor(data, *args, constraint=None, **kwargs)[source]

requires_grad_(requires_grad=True)[source]

Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor.

Examples:
>>> import torch
>>> import treetensor.torch as ttorch
>>> tt = ttorch.randn({
...     'a': (2, 3),
...     'b': {'x': (3, 4)},
... })
>>> tt.requires_grad_(True)
>>> tt
<Tensor 0x7feec3c22240>
├── a --> tensor([[ 1.4754,  1.1167,  1.5431],
│                 [-0.5816,  0.4746,  0.8392]], requires_grad=True)
└── b --> <Tensor 0x7feec3c22128>
    └── x --> tensor([[ 0.3361,  0.8194,  0.1297, -0.5547],
                      [ 0.2531, -0.0637,  0.9822,  2.1618],
                      [ 2.0140, -0.0929,  0.9304,  1.5430]], requires_grad=True)
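After requires_grad_ has been applied, the tree tensor can take part in autograd like a plain torch.Tensor. The following is a minimal sketch, assuming that sum() reduces over every leaf of the tree to a single scalar and that the grad attribute returns the gradients as a tree with the same structure; check both assumptions against the treetensor version you have installed.

>>> import treetensor.torch as ttorch
>>> tt = ttorch.randn({
...     'a': (2, 3),
...     'b': {'x': (3, 4)},
... })
>>> tt.requires_grad_(True)
>>> loss = (tt * tt).sum()  # assumed: sum() reduces across all leaves to one scalar
>>> loss.backward()
>>> tt.grad                 # assumed: gradients come back as a tree, 2 * tt per leaf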
Torch Version Related

This documentation is based on torch.Tensor.requires_grad_ in torch v1.9.0+cu102. The arrangement of its arguments depends on the version of PyTorch you have installed.

If some of the arguments listed here do not work properly, check your PyTorch version with the following command and refer to the documentation for that version.
python -c 'import torch;print(torch.__version__)'
The arguments and keyword arguments supported in torch v1.9.0+cu102 are listed below.
Description From Torch v1.9.0+cu102
class torch.Tensor

requires_grad_(requires_grad=True) → Tensor

Change if autograd should record operations on this tensor: sets this tensor's requires_grad attribute in-place. Returns this tensor.

requires_grad_()'s main use case is to tell autograd to begin recording operations on a Tensor tensor. If tensor has requires_grad=False (because it was obtained through a DataLoader, or required preprocessing or initialization), tensor.requires_grad_() makes it so that autograd will begin to record operations on tensor.

Args:
    requires_grad (bool): If autograd should record operations on this tensor. Default: True.
Example:
>>> # Let's say we want to preprocess some saved weights and use
>>> # the result as new weights.
>>> saved_weights = [0.1, 0.2, 0.3, 0.25]
>>> loaded_weights = torch.tensor(saved_weights)
>>> weights = preprocess(loaded_weights)  # some function
>>> weights
tensor([-0.5503,  0.4926, -2.1158, -0.8303])
>>> # Now, start to record operations done to weights
>>> weights.requires_grad_()
>>> out = weights.pow(2).sum()
>>> out.backward()
>>> weights.grad
tensor([-1.1007,  0.9853, -4.2316, -1.6606])
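The argument also works in the other direction: passing requires_grad=False stops autograd from recording further operations on a leaf tensor. Note that the flag can only be changed on leaf tensors; calling requires_grad_ on a non-leaf tensor raises a RuntimeError. A minimal sketch of this usage:

>>> x = torch.ones(3, requires_grad=True)
>>> x.requires_grad_(False)   # in-place: stop recording operations on x
tensor([1., 1., 1.])
>>> y = x * 2                 # y no longer tracks a computation graph
>>> y.requires_grad
False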