megatron.core.parallel_state.initialize_model_parallel

megatron.core.parallel_state.initialize_model_parallel(tensor_model_parallel_size: int = 1, pipeline_model_parallel_size: int = 1, virtual_pipeline_model_parallel_size: int | None = None, pipeline_model_parallel_split_rank: int | None = None) → None

Initialize model and data parallel groups.

Parameters:
  • tensor_model_parallel_size – number of GPUs used for tensor model parallelism.

  • pipeline_model_parallel_size – number of GPUs used for pipeline model parallelism.

  • virtual_pipeline_model_parallel_size – number of virtual stages (interleaved pipeline).

  • pipeline_model_parallel_split_rank – for models with both encoder and decoder, rank in pipeline with split point.
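
A minimal usage sketch, assuming a torchrun-style launcher that sets RANK and LOCAL_RANK and starts one process per GPU; the 2-way tensor / 4-way pipeline sizing matches the walkthrough below:

    import os

    import torch
    from megatron.core import parallel_state

    # Join the default process group first; the model-parallel groups are
    # carved out of it. Rank and world size come from the launcher's
    # environment variables.
    torch.distributed.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # With 16 GPUs, 16 / (2 tensor * 4 pipeline) leaves a data-parallel
    # size of 2.
    parallel_state.initialize_model_parallel(
        tensor_model_parallel_size=2,
        pipeline_model_parallel_size=4,
    )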

Let’s say we have a total of 16 GPUs, denoted g0 … g15, and we use 2 GPUs for tensor model parallelism and 4 GPUs for pipeline model parallelism. This function will then create 8 tensor model-parallel groups, 4 pipeline model-parallel groups, and 8 data-parallel groups as:

8 data_parallel groups:

[g0, g2], [g1, g3], [g4, g6], [g5, g7], [g8, g10], [g9, g11], [g12, g14], [g13, g15]

8 tensor model-parallel groups:

[g0, g1], [g2, g3], [g4, g5], [g6, g7], [g8, g9], [g10, g11], [g12, g13], [g14, g15]

4 pipeline model-parallel groups:

[g0, g4, g8, g12], [g1, g5, g9, g13], [g2, g6, g10, g14], [g3, g7, g11, g15]
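
The grouping follows from the rank ordering: the tensor rank varies fastest, then the data-parallel rank, then the pipeline stage. Below is a standalone sketch that reproduces the lists above (plain Python, no distributed setup; the loops mirror the decomposition described here rather than the library’s internals verbatim):

    world_size, tp, pp = 16, 2, 4
    ranks_per_stage = world_size // pp  # 4 ranks per pipeline stage

    # Tensor model-parallel groups: tp consecutive ranks each.
    tensor_groups = [list(range(i * tp, (i + 1) * tp))
                     for i in range(world_size // tp)]

    # Pipeline model-parallel groups: the same offset in every stage, so
    # the stride between members is world_size // pp.
    pipeline_groups = [list(range(i, world_size, ranks_per_stage))
                       for i in range(ranks_per_stage)]

    # Data-parallel groups: within one pipeline stage, the ranks that
    # share a tensor-parallel offset, i.e. a stride of tp.
    data_groups = [list(range(stage * ranks_per_stage + j,
                              (stage + 1) * ranks_per_stage, tp))
                   for stage in range(pp)
                   for j in range(tp)]

    print(data_groups)      # [[0, 2], [1, 3], [4, 6], [5, 7], ...]
    print(tensor_groups)    # [[0, 1], [2, 3], ..., [14, 15]]
    print(pipeline_groups)  # [[0, 4, 8, 12], [1, 5, 9, 13], ...]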

Note that, for efficiency, the caller should make sure adjacent ranks are on the same DGX box. For example, if we are using 2 DGX-1 boxes with a total of 16 GPUs, ranks 0 to 7 belong to the first box and ranks 8 to 15 belong to the second box.
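
Once initialize_model_parallel has run, each rank can locate itself within the three kinds of groups through the query helpers in megatron.core.parallel_state, for example:

    from megatron.core import parallel_state

    # Valid only after initialize_model_parallel() has been called.
    tp_rank = parallel_state.get_tensor_model_parallel_rank()
    pp_rank = parallel_state.get_pipeline_model_parallel_rank()
    dp_rank = parallel_state.get_data_parallel_rank()
    print(f"tensor rank {tp_rank}, pipeline rank {pp_rank}, "
          f"data rank {dp_rank}")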