to¶

Documentation¶

class treetensor.torch.Tensor(data, *args, constraint=None, **kwargs)[source]¶

    to(*args, **kwargs)[source]¶

    Convert the original tree tensor to another format (dtype and/or device).
Example:
>>> import torch
>>> import treetensor.torch as ttorch
>>> ttorch.tensor({
...     'a': [[1, 11], [2, 22], [3, 33]],
...     'b': {'x': [[4, 5], [6, 7]]},
... }).to(torch.float64)
<Tensor 0x7ff363bb6518>
├── a --> tensor([[ 1., 11.],
│                 [ 2., 22.],
│                 [ 3., 33.]], dtype=torch.float64)
└── b --> <Tensor 0x7ff363bb6ef0>
    └── x --> tensor([[4., 5.],
                      [6., 7.]], dtype=torch.float64)
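The same tree-wise call also accepts the other torch.Tensor.to forms. The sketch below is an addition to the original docstring and assumes that device arguments are simply forwarded to torch.Tensor.to on every leaf, the same way the dtype form above behaves:

>>> import torch
>>> import treetensor.torch as ttorch
>>> t = ttorch.tensor({
...     'a': [1.0, 2.0],
...     'b': {'x': [3.0, 4.0]},
... })
>>> t16 = t.to(torch.float16)  # dtype conversion applied to every leaf
>>> t_cpu = t.to('cpu')        # device form; a no-op for leaves already on the CPU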
Torch Version Related
This documentation is based on torch.Tensor.to in torch v1.9.0+cu102. The accepted arguments and their arrangement depend on the version of PyTorch you have installed.
If some arguments listed here do not work properly, check your installed PyTorch version with the following command and refer to the documentation for that version.
python -c 'import torch;print(torch.__version__)'
The arguments and keyword arguments supported in torch v1.9.0+cu102 are listed below.
Description From Torch v1.9.0+cu102¶
class torch.Tensor¶

    to(*args, **kwargs) → Tensor¶

    Performs Tensor dtype and/or device conversion. A torch.dtype and torch.device are inferred from the arguments of self.to(*args, **kwargs).

    Note

    If the self Tensor already has the correct torch.dtype and torch.device, then self is returned. Otherwise, the returned tensor is a copy of self with the desired torch.dtype and torch.device.

    Here are the ways to call to:
:-
to
(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor Returns a Tensor with the specified
dtype
- Args:
memory_format (
torch.memory_format
, optional): the desired memory format of returned Tensor. Default:torch.preserve_format
.
    to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) → Tensor

        Returns a Tensor with the specified device and (optional) dtype. If dtype is None it is inferred to be self.dtype. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.

        Args:
            memory_format (torch.memory_format, optional): the desired memory format of the returned Tensor. Default: torch.preserve_format.
    to(other, non_blocking=False, copy=False) → Tensor

        Returns a Tensor with the same torch.dtype and torch.device as the Tensor other. When non_blocking, tries to convert asynchronously with respect to the host if possible, e.g., converting a CPU Tensor with pinned memory to a CUDA Tensor. When copy is set, a new Tensor is created even when the Tensor already matches the desired conversion.
Example:
>>> tensor = torch.randn(2, 2)  # Initially dtype=float32, device=cpu
>>> tensor.to(torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64)

>>> cuda0 = torch.device('cuda:0')
>>> tensor.to(cuda0)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], device='cuda:0')

>>> tensor.to(cuda0, dtype=torch.float64)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')

>>> other = torch.randn((), dtype=torch.float64, device=cuda0)
>>> tensor.to(other, non_blocking=True)
tensor([[-0.5044,  0.0005],
        [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')
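The example above does not exercise the copy and memory_format arguments; the following sketch (added here, not taken from the torch documentation) shows their effect using only calls documented for torch.Tensor:

>>> x = torch.randn(8, 3, 32, 32)                # float32, contiguous (NCHW) layout
>>> y = x.to(memory_format=torch.channels_last)  # same dtype and device, NHWC strides
>>> y.is_contiguous(memory_format=torch.channels_last)
True
>>> z = x.to(torch.float32, copy=True)           # copy=True forces a new tensor even though dtype already matches
>>> z is x
False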