Comparison Between TreeValue and Tianshou Batch¶
In this section, we compare the features and performance of TreeValue with those of the Tianshou Batch library, which is developed by the Tsinghua Machine Learning Group.
Before starting the comparison, let us define some test data.
[1]:
import torch

# small nested dict of plain Python integers
_TREE_DATA_1 = {'a': 1, 'b': 2, 'x': {'c': 3, 'd': 4}}

# nested dict of small tensors
_TREE_DATA_2 = {
    'a': torch.randn(2, 3),
    'x': {
        'c': torch.randn(3, 4)
    },
}

# flat dict mimicking a single reinforcement-learning transition
_TREE_DATA_3 = {
    'obs': torch.randn(4, 84, 84),
    'action': torch.randint(0, 6, size=(1,)),
    'reward': torch.rand(1),
}
Read and Write Operations¶
Reading and writing are the two most common operations on tree-structured data models (both TreeValue and Tianshou Batch belong to this category), so this section compares the read and write performance of the two libraries.
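Both libraries expose tree nodes through attribute access, so nested values can be read and written with chained attributes. The unexecuted cell below is a minimal sketch of the access patterns being benchmarked; it reuses _TREE_DATA_2 from above, and the names _t and _b are introduced here purely for illustration.
[ ]:
import torch
from treevalue import FastTreeValue
from tianshou.data import Batch

_t = FastTreeValue(_TREE_DATA_2)
_b = Batch(**_TREE_DATA_2)

# nested read: both return the tensor stored under x -> c
_t.x.c
_b.x.c

# nested write: both replace the leaf
_t.x.c = torch.zeros(3, 4)
_b.x.c = torch.zeros(3, 4)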
TreeValue’s Get and Set¶
[2]:
from treevalue import FastTreeValue
t = FastTreeValue(_TREE_DATA_2)
[3]:
t
[3]:
<FastTreeValue 0x7f8bbc9ff9d0>
├── 'a' --> tensor([[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]])
└── 'x' --> <FastTreeValue 0x7f8bbc9ff2b0>
└── 'c' --> tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]])
[4]:
t.a
[4]:
tensor([[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]])
[5]:
%timeit t.a
49.4 ns ± 0.39 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
[6]:
new_value = torch.randn(2, 3)
t.a = new_value
t
[6]:
<FastTreeValue 0x7f8bbc9ff9d0>
├── 'a' --> tensor([[-0.5772, 0.2319, -1.2415],
│ [-1.3844, 0.1663, -0.6257]])
└── 'x' --> <FastTreeValue 0x7f8bbc9ff2b0>
└── 'c' --> tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]])
[7]:
%timeit t.a = new_value
53.4 ns ± 0.121 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
Tianshou Batch’s Get and Set¶
[8]:
from tianshou.data import Batch
b = Batch(**_TREE_DATA_2)
[9]:
b
[9]:
Batch(
a: tensor([[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]]),
x: Batch(
c: tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]]),
),
)
[10]:
b.a
[10]:
tensor([[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]])
[11]:
%timeit b.a
41.1 ns ± 0.336 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
[12]:
new_value = torch.randn(2, 3)
b.a = new_value
b
[12]:
Batch(
a: tensor([[-6.5475e-01, -4.4569e-01, 5.2007e-01],
[-1.1818e+00, 1.8087e-04, 9.2330e-01]]),
x: Batch(
c: tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]]),
),
)
[13]:
%timeit b.a = new_value
374 ns ± 3.09 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
Initialization¶
TreeValue’s Initialization¶
[14]:
%timeit FastTreeValue(_TREE_DATA_1)
608 ns ± 3.41 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
Tianshou Batch’s Initialization¶
[15]:
%timeit Batch(**_TREE_DATA_1)
8.52 µs ± 101 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
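The timings above use the small integer tree _TREE_DATA_1; initialization from the tensor tree _TREE_DATA_2 can be measured the same way. The cell below is left unexecuted as a sketch for readers who want to reproduce the comparison on that data.
[ ]:
%timeit FastTreeValue(_TREE_DATA_2)
%timeit Batch(**_TREE_DATA_2)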
Deep Copy Operation¶
[16]:
import copy
Deep Copy of TreeValue¶
[17]:
t3 = FastTreeValue(_TREE_DATA_3)
%timeit copy.deepcopy(t3)
132 µs ± 852 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Deep Copy of Tianshou Batch¶
[18]:
b3 = Batch(**_TREE_DATA_3)
%timeit copy.deepcopy(b3)
130 µs ± 1.15 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
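As a quick sanity check that both deep copies are fully independent of their sources, the unexecuted cell below mutates each copy and asserts that the original is untouched (a minimal sketch reusing t3 and b3 from the cells above).
[ ]:
t3_copy = copy.deepcopy(t3)
t3_copy.reward += 1
assert not torch.equal(t3_copy.reward, t3.reward)  # original tree unchanged

b3_copy = copy.deepcopy(b3)
b3_copy.reward += 1
assert not torch.equal(b3_copy.reward, b3.reward)  # original batch unchanged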
Stack, Concat and Split Operations¶
Performance of TreeValue¶
[19]:
trees = [FastTreeValue(_TREE_DATA_2) for _ in range(8)]
[20]:
# subside=True sinks the list of trees into a tree of lists,
# so torch.stack is applied leaf by leaf
t_stack = FastTreeValue.func(subside=True)(torch.stack)
t_stack(trees)
[20]:
<FastTreeValue 0x7f8ad2b2fc40>
├── 'a' --> tensor([[[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]],
│
│ [[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]]])
└── 'x' --> <FastTreeValue 0x7f8bbc9ffa30>
└── 'c' --> tensor([[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]]])
[21]:
%timeit t_stack(trees)
23.7 µs ± 30.8 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[22]:
t_cat = FastTreeValue.func(subside=True)(torch.cat)
t_cat(trees)
[22]:
<FastTreeValue 0x7f8ad2b2fd90>
├── 'a' --> tensor([[-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896],
│ [-0.3743, -0.9320, -0.5447],
│ [-2.2296, 0.0064, -0.0896]])
└── 'x' --> <FastTreeValue 0x7f8ad2b7db80>
└── 'c' --> tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]])
[23]:
%timeit t_cat(trees)
21.7 µs ± 31 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[24]:
# rise=True lifts the tuples returned by torch.split out of the tree,
# yielding a sequence of trees instead of a tree of tuples
t_split = FastTreeValue.func(rise=True)(torch.split)
tree = FastTreeValue({
    'obs': torch.randn(8, 4, 84, 84),
    'action': torch.randint(0, 6, size=(8, 1,)),
    'reward': torch.rand(8, 1),
})
%timeit t_split(tree, 1)
50.6 µs ± 245 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
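The split result itself is not printed above. With rise=True, the tuple that torch.split returns at each leaf is lifted out of the tree, so t_split(tree, 1) should yield a sequence of eight trees, one per slice. A minimal sketch, reusing the tree object from the previous cell:
[ ]:
chunks = t_split(tree, 1)       # sequence of FastTreeValue objects
print(len(chunks))              # expected: 8
print(chunks[0].obs.shape)      # expected: torch.Size([1, 4, 84, 84])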
Performance of Tianshou Batch¶
[25]:
batches = [Batch(**_TREE_DATA_2) for _ in range(8)]
Batch.stack(batches)
[25]:
Batch(
x: Batch(
c: tensor([[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]],
[[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]]]),
),
a: tensor([[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]],
[[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]]]),
)
[26]:
%timeit Batch.stack(batches)
63 µs ± 243 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[27]:
Batch.cat(batches)
[27]:
Batch(
x: Batch(
c: tensor([[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452],
[-0.0560, 1.0876, 0.1732, 2.0784],
[ 1.2565, 0.5128, 0.9535, 0.1456],
[ 1.4677, 0.0500, 0.5396, 0.0452]]),
),
a: tensor([[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896],
[-0.3743, -0.9320, -0.5447],
[-2.2296, 0.0064, -0.0896]]),
)
[28]:
%timeit Batch.cat(batches)
119 µs ± 364 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[29]:
batch = Batch({
    'obs': torch.randn(8, 4, 84, 84),
    'action': torch.randint(0, 6, size=(8, 1,)),
    'reward': torch.rand(8, 1),
})
%timeit list(Batch.split(batch, 1, shuffle=False, merge_last=True))
280 µs ± 2.31 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
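Similarly, Batch.split is a generator over sub-batches; the unexecuted cell below sketches the shape of its output, reusing the batch object from the previous cell.
[ ]:
chunks = list(Batch.split(batch, 1, shuffle=False, merge_last=True))
print(len(chunks))              # expected: 8
print(chunks[0].obs.shape)      # expected: torch.Size([1, 4, 84, 84])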