megatron.model.fused_softmax.FusedScaleMaskSoftmax
- class megatron.model.fused_softmax.FusedScaleMaskSoftmax(input_in_fp16, input_in_bf16, attn_mask_type, scaled_masked_softmax_fusion, mask_func, softmax_in_fp32, scale)
Bases: torch.nn.Module
Fused operation: scaling + mask + softmax.
- Parameters:
input_in_fp16 – flag indicating whether the input is in fp16 format.
input_in_bf16 – flag indicating whether the input is in bf16 format.
attn_mask_type – attention mask type (padding or causal).
scaled_masked_softmax_fusion – flag indicating whether to use the fused softmax kernel.
mask_func – mask function to be applied to the input.
softmax_in_fp32 – if true, softmax is performed at fp32 precision.
scale – scaling factor applied to the input tensor.
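A minimal construction sketch (not from the source): it assumes the AttnMaskType enum from megatron.model.enums and a masked_fill-style mask_func, both as commonly used elsewhere in Megatron-LM.

```python
import torch
from megatron.model.enums import AttnMaskType  # assumed location of the mask-type enum
from megatron.model.fused_softmax import FusedScaleMaskSoftmax

def attention_mask_func(attention_scores, attention_mask):
    # Illustrative (hypothetical) mask function: fill masked positions with a
    # large negative value so they receive ~zero probability after softmax.
    return attention_scores.masked_fill(attention_mask, -10000.0)

softmax = FusedScaleMaskSoftmax(
    input_in_fp16=True,
    input_in_bf16=False,
    attn_mask_type=AttnMaskType.causal,
    scaled_masked_softmax_fusion=True,  # falls back to the unfused path if the kernel is unavailable
    mask_func=attention_mask_func,
    softmax_in_fp32=True,
    scale=None,  # no additional scaling of the input scores
)
```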
- forward(input, mask)
Define the computation performed at every call.
Should be overridden by all subclasses.
Note
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
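Continuing the sketch above, and per the note, the module instance is invoked rather than forward() directly. The [batch, heads, query_len, key_len] score layout is an assumption for illustration; with a causal mask type the triangular mask is implicit, so no explicit mask tensor is passed here.

```python
# Attention scores, fp16 on GPU to exercise the fused kernel.
scores = torch.randn(2, 8, 128, 128, dtype=torch.float16, device="cuda")

# Call the module instance, not softmax.forward(...), so registered hooks run.
probs = softmax(scores, None)
```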