MolecularDiffusion.modules.layers.equiformer_v2.activation¶
Copyright (c) Meta Platforms, Inc. and affiliates.
Classes¶
GateActivation
S2Activation: Assume we only have one resolution.
ScaledSiLU
ScaledSigmoid
ScaledSmoothLeakyReLU
ScaledSwiGLU
SeparableS2Activation
SmoothLeakyReLU
Module Contents¶
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.GateActivation(lmax, mmax, num_channels)¶
Bases: torch.nn.Module
- forward(gating_scalars, input_tensors)¶
gating_scalars: shape [N, lmax * num_channels]
input_tensors: shape [N, (lmax + 1) ** 2, num_channels]
- gate_act¶
- lmax¶
- mmax¶
- num_channels¶
- scalar_act¶
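A minimal usage sketch based on the signature and shapes documented above; the import path, the chosen hyperparameters, and the expectation that the output shape matches input_tensors are assumptions:

    import torch
    from MolecularDiffusion.modules.layers.equiformer_v2.activation import GateActivation

    lmax, mmax, num_channels, N = 2, 2, 8, 4
    gate = GateActivation(lmax, mmax, num_channels)

    gating_scalars = torch.randn(N, lmax * num_channels)           # [N, lmax * num_channels]
    input_tensors = torch.randn(N, (lmax + 1) ** 2, num_channels)  # [N, (lmax + 1) ** 2, num_channels]

    out = gate(gating_scalars, input_tensors)
    print(out.shape)  # expected: torch.Size([4, 9, 8]), assuming output matches input_tensors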
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.S2Activation(lmax, mmax)¶
Bases: torch.nn.Module
Assume we only have one resolution.
- forward(inputs, SO3_grid)¶
- act¶
- lmax¶
- mmax¶
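The forward(inputs, SO3_grid) signature suggests the standard S2 activation pattern: sample the spherical-harmonic coefficients onto a grid over the sphere, apply a pointwise nonlinearity, and project back. A conceptual sketch of that pattern; the to_grid_mat / from_grid_mat matrices, their layouts, and the choice of SiLU are assumptions, not the actual SO3_grid API:

    import torch

    def s2_activation_sketch(x, to_grid_mat, from_grid_mat):
        # x:             [N, (lmax + 1) ** 2, C]  spherical-harmonic coefficients
        # to_grid_mat:   [res_beta, res_alpha, (lmax + 1) ** 2]  (assumed layout)
        # from_grid_mat: [res_beta, res_alpha, (lmax + 1) ** 2]  (assumed layout)
        x_grid = torch.einsum("bai,nic->nbac", to_grid_mat, x)       # sample on the sphere
        x_grid = torch.nn.functional.silu(x_grid)                    # pointwise nonlinearity
        return torch.einsum("bai,nbac->nic", from_grid_mat, x_grid)  # back to coefficients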
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.ScaledSiLU(inplace=False)¶
Bases: torch.nn.Module
- extra_repr()¶
- forward(inputs)¶
- inplace = False¶
- scale_factor = 1.6791767923989418¶
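A sketch of the documented behavior, assuming forward applies SiLU and rescales by the fixed constant. Note that 1 / 1.6791767923989418 ** 2 ≈ 0.355, consistent with the second moment of SiLU under a standard-normal input, so the constant plausibly restores roughly unit variance:

    import torch

    class ScaledSiLUSketch(torch.nn.Module):
        """Sketch: SiLU followed by a fixed rescaling (constant from the docs above)."""

        def __init__(self, inplace: bool = False):
            super().__init__()
            self.inplace = inplace
            self.scale_factor = 1.6791767923989418

        def forward(self, inputs):
            return torch.nn.functional.silu(inputs, inplace=self.inplace) * self.scale_factor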
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.ScaledSigmoid¶
Bases: torch.nn.Module
- forward(x)¶
- scale_factor = 1.8467055342154763¶
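Presumably the same pattern with a sigmoid; a one-function sketch under that assumption:

    import torch

    def scaled_sigmoid_sketch(x: torch.Tensor) -> torch.Tensor:
        # assumed forward(x): sigmoid rescaled by the documented constant
        return 1.8467055342154763 * torch.sigmoid(x)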
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.ScaledSmoothLeakyReLU¶
Bases: torch.nn.Module
- extra_repr()¶
- forward(x)¶
- act¶
- scale_factor = 1.531320475574866¶
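By analogy with the other Scaled* classes, act is presumably the SmoothLeakyReLU documented at the end of this page, and forward rescales its output by the constant above. A self-contained sketch under those assumptions (the smooth form itself is also an assumption; see the SmoothLeakyReLU entry below):

    import torch

    def scaled_smooth_leaky_relu_sketch(x: torch.Tensor, alpha: float = 0.2) -> torch.Tensor:
        # assumed smooth leaky ReLU (see SmoothLeakyReLU below), rescaled by the
        # documented constant
        smooth = ((1 + alpha) / 2) * x + ((1 - alpha) / 2) * x * (2 * torch.sigmoid(x) - 1)
        return 1.531320475574866 * smooth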
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.ScaledSwiGLU(in_channels, out_channels, bias=True)¶
Bases: torch.nn.Module
- forward(inputs)¶
- act¶
- in_channels¶
- out_channels¶
- w¶
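Given the attributes (a single linear layer w plus an activation act), a plausible reading is a SwiGLU-style gate: w projects to 2 * out_channels, one half passes through a scaled SiLU, and the two halves are multiplied. A sketch under those assumptions:

    import torch

    class ScaledSwiGLUSketch(torch.nn.Module):
        def __init__(self, in_channels: int, out_channels: int, bias: bool = True):
            super().__init__()
            self.in_channels = in_channels
            self.out_channels = out_channels
            # assumption: one projection producing both the value and gate halves
            self.w = torch.nn.Linear(in_channels, 2 * out_channels, bias=bias)
            self.scale_factor = 1.6791767923989418  # ScaledSiLU constant from above

        def forward(self, inputs):
            value, gate = self.w(inputs).chunk(2, dim=-1)
            return torch.nn.functional.silu(value) * self.scale_factor * gate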
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.SeparableS2Activation(lmax, mmax)¶
Bases: torch.nn.Module
- forward(input_scalars, input_tensors, SO3_grid)¶
- lmax¶
- mmax¶
- s2_act¶
- scalar_act¶
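The attribute pair scalar_act / s2_act suggests that forward activates the scalar (l = 0) channel pointwise and runs the full tensor through an S2 activation, then concatenates the two. A sketch under that assumption, reusing the hypothetical s2_activation_sketch from the S2Activation entry above:

    import torch

    def separable_s2_activation_sketch(input_scalars, input_tensors, to_grid_mat, from_grid_mat):
        # input_scalars: [N, C]; input_tensors: [N, (lmax + 1) ** 2, C]
        scalars = torch.nn.functional.silu(input_scalars).unsqueeze(1)  # l = 0 part, [N, 1, C]
        tensors = s2_activation_sketch(input_tensors, to_grid_mat, from_grid_mat)
        # keep the separately activated scalars; take the l > 0 rows from the S2 path
        return torch.cat((scalars, tensors[:, 1:, :]), dim=1)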
- class MolecularDiffusion.modules.layers.equiformer_v2.activation.SmoothLeakyReLU(negative_slope=0.2)¶
Bases: torch.nn.Module
- extra_repr()¶
- forward(x)¶
- alpha = 0.2¶
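A sketch assuming the common sigmoid-blended smooth leaky ReLU, which approaches slope 1 for large positive inputs and slope negative_slope for large negative inputs while staying smooth at the origin:

    import torch

    class SmoothLeakyReLUSketch(torch.nn.Module):
        def __init__(self, negative_slope: float = 0.2):
            super().__init__()
            self.alpha = negative_slope

        def forward(self, x):
            # blends identity with x * (2 * sigmoid(x) - 1): the asymptotic slopes
            # are 1 (x -> +inf) and alpha (x -> -inf)
            return ((1 + self.alpha) / 2) * x + ((1 - self.alpha) / 2) * x * (2 * torch.sigmoid(x) - 1)

        def extra_repr(self):
            return f"negative_slope={self.alpha}"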