nncore.parallel
Data Container
- class nncore.parallel.container.DataContainer(data, stack=True, pad_value=0, pad_dims=-1, cpu_only=False)[source]
A wrapper for data that makes it easy to pad the data and scatter it to different GPUs.
- Parameters:
data (any) – The object to be wrapped.
stack (bool, optional) – Whether to stack the data during scattering. This argument is valid only when the data is a torch.Tensor. Default: True.
pad_value (int, optional) – The padding value. Default: 0.
pad_dims (int, optional) – Number of dimensions to be padded. Expected values include None, -1, 1, 2, and 3. Default: -1.
cpu_only (bool, optional) – Whether to keep the data on CPU only. Default: False.
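To illustrate how these arguments travel together with the wrapped data, here is a minimal pure-Python sketch. It is not nncore's actual implementation; MiniDataContainer and pad_rows are invented names for illustration only.

```python
# A simplified stand-in for DataContainer (illustration only, not nncore's
# real code): it stores the wrapped object together with the batching hints
# that a collate function can read later.
class MiniDataContainer:
    def __init__(self, data, stack=True, pad_value=0, pad_dims=-1,
                 cpu_only=False):
        self._data = data
        self.stack = stack
        self.pad_value = pad_value
        self.pad_dims = pad_dims
        self.cpu_only = cpu_only

    @property
    def data(self):
        return self._data


def pad_rows(rows, pad_value=0):
    """Right-pad 1-D lists with pad_value so they all share the same
    length, mimicking how pad_value is used before stacking tensors."""
    width = max(len(r) for r in rows)
    return [r + [pad_value] * (width - len(r)) for r in rows]


# Metadata such as file names should stay on CPU and is never stacked.
meta = MiniDataContainer({'filename': 'img.jpg'}, stack=False, cpu_only=True)
padded = pad_rows([[1, 2], [3]], pad_value=0)  # → [[1, 2], [3, 0]]
```

In the real library, the collate function inspects these attributes to decide whether to stack tensors, pad them first, or leave the objects on CPU.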
Data Parallel
Collate
- nncore.parallel.collate.collate(batch, samples_per_gpu=-1)[source]
A collate function for DataLoader with DataContainer support.
- Parameters:
batch (any) – The batch of data to be collated.
samples_per_gpu (int, optional) – Number of samples per GPU. -1 means moving all the data to a single GPU. Default: -1.
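The samples_per_gpu behavior can be sketched as follows. This is a hedged illustration of the chunking semantics described above, not nncore's implementation; chunk_batch is an invented name.

```python
def chunk_batch(batch, samples_per_gpu=-1):
    """Split a batch (a list of samples) into per-GPU groups.

    Illustration only: -1 keeps the whole batch together for a single
    GPU; otherwise the batch is split into consecutive chunks of
    samples_per_gpu samples, one chunk per GPU.
    """
    if samples_per_gpu == -1:
        return [batch]
    return [batch[i:i + samples_per_gpu]
            for i in range(0, len(batch), samples_per_gpu)]


groups = chunk_batch([1, 2, 3, 4], samples_per_gpu=2)  # → [[1, 2], [3, 4]]
```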