lightrft.utils.distributed_sampler¶
- class lightrft.utils.distributed_sampler.DistributedSampler(*args: Any, **kwargs: Any)[source]¶
Bases: Sampler[_T_co]
Sampler that restricts data loading to a subset of the dataset.
It is especially useful in conjunction with
torch.nn.parallel.DistributedDataParallel. In such a case, each process can pass a DistributedSampler instance as a DataLoader sampler, and load a subset of the original dataset that is exclusive to it.
Note
Dataset is assumed to be of constant size and that any instance of it always returns the same elements in the same order.
- Parameters:
- dataset – Dataset used for sampling.
- num_replicas (int, optional) – Number of processes participating in distributed training. By default, world_size is retrieved from the current distributed group.
- rank (int, optional) – Rank of the current process within num_replicas. By default, rank is retrieved from the current distributed group.
- shuffle (bool, optional) – If True (default), the sampler will shuffle the indices.
- seed (int, optional) – Random seed used to shuffle the sampler if shuffle=True. This number should be identical across all processes in the distributed group. Default: 0.
- drop_last (bool, optional) – If True, the sampler will drop the tail of the data to make it evenly divisible across the number of replicas. If False, the sampler will add extra indices to make the data evenly divisible across the replicas. Default: False.
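The drop_last behavior described above can be illustrated with a short, self-contained sketch. This is not the library's code; `partition_indices` is a hypothetical helper showing how a distributed sampler typically pads or truncates the index list so every replica receives the same number of samples, then assigns each rank an interleaved slice:

```python
import math

def partition_indices(dataset_len, num_replicas, rank, drop_last):
    """Illustrative sketch of how a distributed sampler splits indices
    across replicas (not the actual lightrft implementation)."""
    indices = list(range(dataset_len))
    if drop_last:
        # Drop the tail so the total is evenly divisible by num_replicas.
        num_samples = dataset_len // num_replicas
        indices = indices[:num_samples * num_replicas]
    else:
        # Pad by repeating indices from the front until evenly divisible.
        num_samples = math.ceil(dataset_len / num_replicas)
        padding = num_samples * num_replicas - dataset_len
        indices += indices[:padding]
    # Each rank takes every num_replicas-th index, starting at its rank.
    return indices[rank::num_replicas]

# 10 samples over 3 replicas: padded to 12 when drop_last=False,
# truncated to 9 when drop_last=True.
print(partition_indices(10, 3, 0, drop_last=False))  # [0, 3, 6, 9]
print(partition_indices(10, 3, 2, drop_last=True))   # [2, 5, 8]
```

With drop_last=False, rank 2 receives [2, 5, 8, 1]: the padded index 1 is a repeated sample, which is why padding can slightly skew metrics computed over a validation set.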
Warning
In distributed mode, calling the set_epoch() method at the beginning of each epoch, before creating the DataLoader iterator, is necessary to make shuffling work properly across multiple epochs. Otherwise, the same ordering will always be used.
Example:
>>> # xdoctest: +SKIP
>>> sampler = DistributedSampler(dataset) if is_distributed else None
>>> loader = DataLoader(dataset, shuffle=(sampler is None),
...                     sampler=sampler)
>>> for epoch in range(start_epoch, n_epochs):
...     if is_distributed:
...         sampler.set_epoch(epoch)
...     train(loader)