megatron.model.bert_model.BertLMHead#

class megatron.model.bert_model.BertLMHead(mpu_vocab_size, hidden_size, init_method, layernorm_epsilon, parallel_output)#

Bases: MegatronModule

Masked language-model (MLM) head for BERT.

Parameters:
  • mpu_vocab_size – per-partition vocabulary size under model (tensor) parallelism; sets the size of the output logit bias.

  • hidden_size – hidden size of the transformer hidden states.

  • init_method – callable used to initialize the weights of the head's dense layer.

  • layernorm_epsilon – epsilon added to the variance in layer normalization for numerical stability.

  • parallel_output – if True, the output logits remain partitioned across tensor-parallel ranks; if False, they are gathered so every rank holds the full logits.
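
A minimal construction sketch (the sizes below are hypothetical). It assumes Megatron's global state (command-line args and model-parallel process groups) has already been set up, e.g. via megatron.initialize.initialize_megatron, since the module depends on both:

   from megatron.model.bert_model import BertLMHead
   from megatron.model.utils import init_method_normal

   padded_vocab_size = 30592  # hypothetical per-partition vocabulary size
   hidden_size = 1024         # hypothetical hidden size

   lm_head = BertLMHead(
       mpu_vocab_size=padded_vocab_size,
       hidden_size=hidden_size,
       init_method=init_method_normal(0.02),  # normal init, std 0.02
       layernorm_epsilon=1e-5,
       parallel_output=True,  # keep logits sharded across tensor-parallel ranks
   )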

forward(hidden_states, word_embeddings_weight)#

Define the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the forward computation must be defined inside this function, call the Module instance itself rather than forward() directly: calling the instance takes care of running any registered hooks, while calling forward() silently skips them.
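
In the Megatron-LM source, forward applies a dense projection, GeLU, and layer normalization to hidden_states, then computes vocabulary logits against the tied word_embeddings_weight plus a learned bias. A minimal usage sketch, continuing the construction example above (shapes are illustrative; in practice word_embeddings_weight is the model's tied input-embedding weight, not a fresh random tensor):

   import torch

   seq_len, batch_size = 128, 8
   # Megatron orders activations as [sequence, batch, hidden].
   hidden_states = torch.randn(seq_len, batch_size, hidden_size)
   word_embeddings_weight = torch.randn(padded_vocab_size, hidden_size)

   # Call the module instance, not lm_head.forward, so hooks run.
   lm_logits = lm_head(hidden_states, word_embeddings_weight)
   # With parallel_output=True, the vocabulary dimension of lm_logits
   # stays sharded across tensor-parallel ranks.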