nncore.parallel

Data Container

class nncore.parallel.container.DataContainer(data, stack=True, pad_value=0, pad_dims=-1, cpu_only=False)[source]

A wrapper that allows data to be easily padded and scattered to different GPUs.

Parameters:
  • data (any) – The object to be wrapped.

  • stack (bool, optional) – Whether to stack the data during scattering. This argument is valid only when the data is a torch.Tensor. Default: True.

  • pad_value (int, optional) – The padding value. Default: 0.

  • pad_dims (int, optional) – Number of dimensions to be padded. Expected values include None, -1, 1, 2, and 3. Default: -1.

  • cpu_only (bool, optional) – Whether to keep the data on CPU only. Default: False.
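To illustrate what such a wrapper looks like, here is a minimal, self-contained sketch in plain Python. The class name `MiniContainer` is hypothetical and this is not the nncore implementation; it only mirrors the constructor arguments documented above (real `DataContainer` objects wrap tensors and interact with the collate and scatter machinery).

```python
# Illustrative sketch of a DataContainer-style wrapper.
# NOT nncore code: a plain-Python stand-in mirroring the
# documented constructor arguments.

class MiniContainer:
    """Wraps a payload together with batching/scattering hints."""

    def __init__(self, data, stack=True, pad_value=0, pad_dims=-1,
                 cpu_only=False):
        # Mirror the documented constraint on pad_dims.
        if pad_dims not in (None, -1, 1, 2, 3):
            raise ValueError("pad_dims must be None, -1, 1, 2, or 3")
        self._data = data
        self.stack = stack
        self.pad_value = pad_value
        self.pad_dims = pad_dims
        self.cpu_only = cpu_only

    @property
    def data(self):
        # The wrapped payload stays accessible via .data.
        return self._data

    def __repr__(self):
        return f"MiniContainer({self._data!r})"


c = MiniContainer([1, 2, 3], stack=False)
print(c.data)      # the wrapped payload
print(c.cpu_only)  # False
```

In practice, a training pipeline wraps each sample field in a container like this so the collate function and the parallel wrappers know how to pad, stack, and place it.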

Data Parallel

class nncore.parallel.parallel.NNDataParallel(*args: Any, **kwargs: Any)[source]

A nn.DataParallel class with DataContainer support. This class only bundles single-device modules.

class nncore.parallel.parallel.NNDistributedDataParallel(*args: Any, **kwargs: Any)[source]

A nn.DistributedDataParallel class with DataContainer support. This class only bundles single-device modules.
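Conceptually, "DataContainer support" means the parallel wrapper unwraps container arguments before forwarding them to the underlying module. The sketch below shows that idea in plain Python; `MiniContainer`, `scatter_args`, and `forward` are illustrative names, not nncore API, and real device placement is omitted.

```python
# Hedged sketch: how a DataParallel-style wrapper might unwrap
# container arguments before calling the wrapped module.
# All names here are illustrative, not nncore API.

class MiniContainer:
    """Minimal stand-in for a DataContainer."""
    def __init__(self, data, cpu_only=False):
        self.data = data
        self.cpu_only = cpu_only


def scatter_args(args):
    """Replace any MiniContainer in args with its payload, leaving
    ordinary values untouched (device transfer is omitted here)."""
    return [a.data if isinstance(a, MiniContainer) else a for a in args]


def forward(module, *args):
    # The wrapped callable receives plain payloads, not containers.
    return module(*scatter_args(args))


result = forward(lambda x, y: x + y, MiniContainer([1, 2]), [3])
print(result)  # [1, 2, 3]
```

The real classes delegate to `nn.DataParallel` / `nn.DistributedDataParallel` for replication and gradient synchronization; only the argument-scattering step is container-aware.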

Collate

nncore.parallel.collate.collate(batch, samples_per_gpu=-1)[source]

A collate function for DataLoader with DataContainer support.

Parameters:
  • batch (any) – The batch of data to be collated.

  • samples_per_gpu (int, optional) – Number of samples per GPU. -1 means moving all the samples to a single GPU. Default: -1.
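The two jobs of such a collate function are padding variable-length samples and splitting the batch into per-GPU groups. The sketch below demonstrates both steps with plain Python lists standing in for tensors; `pad_and_group` is a hypothetical name and this is not the nncore implementation.

```python
# Illustrative sketch of the padding and grouping a DataContainer-aware
# collate might perform. Plain lists stand in for tensors; NOT nncore code.

def pad_and_group(batch, samples_per_gpu=-1, pad_value=0):
    """Pad 1-D samples to the longest length, then split the batch
    into per-GPU chunks; samples_per_gpu == -1 keeps a single chunk."""
    width = max(len(sample) for sample in batch)
    padded = [sample + [pad_value] * (width - len(sample))
              for sample in batch]
    if samples_per_gpu == -1:
        return [padded]
    return [padded[i:i + samples_per_gpu]
            for i in range(0, len(padded), samples_per_gpu)]


groups = pad_and_group([[1, 2, 3], [4]], samples_per_gpu=1)
print(groups)  # [[[1, 2, 3]], [[4, 0, 0]]]
```

With the default `samples_per_gpu=-1`, the whole padded batch stays in one group, matching the documented behavior of moving all the data to a single GPU.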