Comparison Between TreeValue and Tianshou Batch

In this section, we will take a look at the features and performance of the Tianshou Batch library, which is developed by the Tsinghua Machine Learning Group, and compare them with those of TreeValue.

Before starting the comparison, let us define some test data.

[1]:
import torch

_TREE_DATA_1 = {'a': 1, 'b': 2, 'x': {'c': 3, 'd': 4}}
_TREE_DATA_2 = {
    'a': torch.randn(2, 3),
    'x': {
        'c': torch.randn(3, 4)
    },
}
_TREE_DATA_3 = {
    'obs': torch.randn(4, 84, 84),
    'action': torch.randint(0, 6, size=(1,)),
    'reward': torch.rand(1),
}

Read and Write Operations

Reading and writing are the two most common operations on tree-structured data models (both TreeValue and Tianshou Batch belong to this category), so this section compares the read and write performance of the two libraries.
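All timings below are collected with IPython's %timeit magic. For readers running the snippets outside a notebook, a rough standard-library equivalent looks like the sketch below (absolute figures will naturally vary by machine and library version):

import timeit

from treevalue import FastTreeValue

t = FastTreeValue(_TREE_DATA_2)

# Rough stand-in for `%timeit t.a`: mean nanoseconds per attribute read.
n = 1_000_000
print(timeit.timeit(lambda: t.a, number=n) / n * 1e9, 'ns per read')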

TreeValue’s Get and Set

[2]:
from treevalue import FastTreeValue

t = FastTreeValue(_TREE_DATA_2)
[3]:
t
[3]:
../_images/comparison_tianshou_batch.result_8_0.svg
[4]:
t.a
[4]:
tensor([[ 1.2617,  0.6692,  0.3927],
        [ 0.1078, -0.8699,  0.2366]])
[5]:
%timeit t.a
85.7 ns ± 1.4 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
[6]:
new_value = torch.randn(2, 3)
t.a = new_value

t
[6]:
../_images/comparison_tianshou_batch.result_11_0.svg
[7]:
%timeit t.a = new_value
85.9 ns ± 0.631 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)

Tianshou Batch’s Get and Set

[8]:
from tianshou.data import Batch

b = Batch(**_TREE_DATA_2)
[9]:
b
[9]:
Batch(
    a: tensor([[ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366]]),
    x: Batch(
           c: tensor([[-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729]]),
       ),
)
[10]:
b.a
[10]:
tensor([[ 1.2617,  0.6692,  0.3927],
        [ 0.1078, -0.8699,  0.2366]])
[11]:
%timeit b.a
79.5 ns ± 1.18 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
[12]:
new_value = torch.randn(2, 3)
b.a = new_value

b
[12]:
Batch(
    a: tensor([[-1.3736,  0.7020,  0.7005],
               [-0.2142, -0.7718, -0.4262]]),
    x: Batch(
           c: tensor([[-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729]]),
       ),
)
[13]:
%timeit b.a = new_value
626 ns ± 7.53 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)

Initialization

TreeValue’s Initialization

[14]:
%timeit FastTreeValue(_TREE_DATA_1)
9.37 µs ± 133 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)

Tianshou Batch’s Initialization

[15]:
%timeit Batch(**_TREE_DATA_1)
11.9 µs ± 233 ns per loop (mean ± std. dev. of 7 runs, 100,000 loops each)
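The two constructor calls above differ only in calling convention: FastTreeValue takes the nested mapping itself, while Batch is given the top-level keys as keyword arguments. A minimal side-by-side sketch:

t1 = FastTreeValue(_TREE_DATA_1)   # the mapping is passed directly
b1 = Batch(**_TREE_DATA_1)         # top-level keys are unpacked as kwargs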

Deep Copy Operation

[16]:
import copy

Deep Copy of TreeValue

[17]:
t3 = FastTreeValue(_TREE_DATA_3)
%timeit copy.deepcopy(t3)
171 µs ± 1.81 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)

Deep Copy of Tianshou Batch

[18]:
b3 = Batch(**_TREE_DATA_3)
%timeit copy.deepcopy(b3)
173 µs ± 2.19 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
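As a quick sanity check (not part of the benchmark), deepcopy produces fully independent trees in both libraries, so mutating a copy never touches the original:

# Both copies own their own tensor storage after deepcopy.
t3_copy, b3_copy = copy.deepcopy(t3), copy.deepcopy(b3)
t3_copy.reward += 1
b3_copy.reward += 1
assert not torch.equal(t3_copy.reward, t3.reward)
assert not torch.equal(b3_copy.reward, b3.reward)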

Stack, Concat and Split Operations

Performance of TreeValue

[19]:
trees = [FastTreeValue(_TREE_DATA_2) for _ in range(8)]
[20]:
t_stack = FastTreeValue.func(subside=True)(torch.stack)

t_stack(trees)
[20]:
../_images/comparison_tianshou_batch.result_34_0.svg
[21]:
%timeit t_stack(trees)
39 µs ± 380 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
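Here FastTreeValue.func(subside=True) wraps torch.stack so that the list of trees is first "subsided" into a tree whose leaves are lists, and torch.stack is then applied at every leaf. Conceptually, this is roughly equivalent to the plain-dict sketch below (a simplification for illustration; TreeValue handles arbitrary nesting generically), and t_cat below behaves the same way with torch.cat:

def dict_stack(dicts):
    # Stack the value at each leaf path across all input dicts.
    out = {}
    for key in dicts[0]:
        values = [d[key] for d in dicts]
        out[key] = dict_stack(values) if isinstance(values[0], dict) else torch.stack(values)
    return out

dict_stack([_TREE_DATA_2 for _ in range(8)])['a'].shape   # torch.Size([8, 2, 3])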
[22]:
t_cat = FastTreeValue.func(subside=True)(torch.cat)

t_cat(trees)
[22]:
../_images/comparison_tianshou_batch.result_36_0.svg
[23]:
%timeit t_cat(trees)
36 µs ± 163 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[24]:
t_split = FastTreeValue.func(rise=True)(torch.split)
tree = FastTreeValue({
    'obs': torch.randn(8, 4, 84, 84),
    'action': torch.randint(0, 6, size=(8, 1,)),
    'reward': torch.rand(8, 1),
})

%timeit t_split(tree, 1)
75.6 µs ± 680 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
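Similarly, FastTreeValue.func(rise=True) applies torch.split at every leaf and then "rises" the per-leaf tuples back to the top, yielding a tuple of trees. A rough plain-dict sketch of the same idea (again a simplification rather than TreeValue's actual implementation):

plain = {
    'obs': torch.randn(8, 4, 84, 84),
    'action': torch.randint(0, 6, size=(8, 1)),
    'reward': torch.rand(8, 1),
}
# Split every leaf into chunks of size 1 along dim 0 ...
chunks = {key: torch.split(value, 1) for key, value in plain.items()}
# ... then regroup chunk i of every leaf into the i-th output dict.
pieces = [{key: parts[i] for key, parts in chunks.items()} for i in range(8)]
len(pieces), pieces[0]['obs'].shape   # (8, torch.Size([1, 4, 84, 84]))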

Performance of Tianshou Batch

[25]:
batches = [Batch(**_TREE_DATA_2) for _ in range(8)]

Batch.stack(batches)
[25]:
Batch(
    x: Batch(
           c: tensor([[[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]],

                      [[-0.7692,  0.8994,  0.8449,  0.5440],
                       [-1.1863, -1.6055, -3.1928,  0.0949],
                       [-0.0421, -2.1891,  1.2264, -0.7729]]]),
       ),
    a: tensor([[[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]],

               [[ 1.2617,  0.6692,  0.3927],
                [ 0.1078, -0.8699,  0.2366]]]),
)
[26]:
%timeit Batch.stack(batches)
96.7 µs ± 865 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[27]:
Batch.cat(batches)
[27]:
Batch(
    x: Batch(
           c: tensor([[-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729],
                      [-0.7692,  0.8994,  0.8449,  0.5440],
                      [-1.1863, -1.6055, -3.1928,  0.0949],
                      [-0.0421, -2.1891,  1.2264, -0.7729]]),
       ),
    a: tensor([[ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366],
               [ 1.2617,  0.6692,  0.3927],
               [ 0.1078, -0.8699,  0.2366]]),
)
[28]:
%timeit Batch.cat(batches)
176 µs ± 1.34 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
[29]:
batch = Batch({
    'obs': torch.randn(8, 4, 84, 84),
    'action': torch.randint(0, 6, size=(8, 1,)),
    'reward': torch.rand(8, 1)}
)

%timeit list(Batch.split(batch, 1, shuffle=False, merge_last=True))
445 µs ± 8.75 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)