MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations¶
This file contains a selection of e3nn neural network operations defined by the original Equiformer, copied from the repo: https://github.com/atomicarchitects/equiformer
This file is a (partial) concatenation of: equiformer/nets/graph_norm.py, equiformer/nets/tensor_product_rescale.py, equiformer/nets/fast_activation.py, and equiformer/nets/graph_attention_transformer.py.
Classes¶
| Activation | Directly apply activation when irreps is type-0. |
| AttnHeads2Vec | Convert vectors of shape [N, num_heads, irreps_head] into vectors of shape [N, irreps_head * num_heads]. |
| EquivariantGraphNorm | Instance normalization for orthonormal representations. |
| FeedForwardNetwork_equiformer | Use two (FCTP + Gate) blocks. |
| SeparableFCTP | Use separable FCTP for spatial convolution. |
| Vec2AttnHeads | Reshape vectors of shape [N, irreps_mid] to vectors of shape [N, num_heads, irreps_head]. |
Functions¶
| DepthwiseTensorProduct | The output irreps are pre-determined. |
| convert_e3nn_to_equiformerv2 |  |
| convert_equiformerv2_to_e3nn |  |
| get_mul_0 |  |
| get_norm_layer |  |
| irreps2gate |  |
| sort_irreps_even_first |  |
Module Contents¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.Activation(irreps_in, acts)¶
Bases: torch.nn.Module
Directly apply activation when irreps is type-0.
- extra_repr()¶
- forward(features, dim=-1)¶
- acts¶
- irreps_in¶
- irreps_out¶
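The "activation only when irreps is type-0" rule can be sketched in plain Python. The flat block layout (`block_dims`) and the `acts` list below are hypothetical illustrations, not the module's real e3nn-based API:

```python
# Sketch: each irrep block gets an activation only if it is scalar (l=0,
# act is not None); higher-l blocks pass through unchanged, since a
# pointwise nonlinearity would break equivariance there.
def apply_activation(features, block_dims, acts):
    out, offset = [], 0
    for dim, act in zip(block_dims, acts):
        block = features[offset:offset + dim]
        out += [act(v) for v in block] if act is not None else block
        offset += dim
    return out
```

For example, with one scalar block passed through `abs` and one 2-component block left untouched, only the first entry changes sign.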
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.AttnHeads2Vec(irreps_head)¶
Bases: torch.nn.Module
Convert vectors of shape [N, num_heads, irreps_head] into vectors of shape [N, irreps_head * num_heads].
- forward(x)¶
- head_indices = []¶
- irreps_head¶
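The regrouping can be sketched with nested lists: for each irrep block of the per-head representation, the copies from all heads are made contiguous in the flat output. The `head_dims` argument is an illustrative stand-in for `irreps_head` (e.g. a per-head "2x0e + 1x1e" would give `head_dims = [2, 3]`); the real module operates on torch tensors via narrow/reshape:

```python
# Pure-Python sketch of the head-to-vector regrouping.
def heads2vec(x, head_dims):
    out = []
    for sample in x:                      # sample: [num_heads][head_dim]
        row, offset = [], 0
        for dim in head_dims:             # one irrep block at a time
            for head in sample:           # gather that block from every head
                row.extend(head[offset:offset + dim])
            offset += dim
        out.append(row)
    return out
```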
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.ConcatIrrepsTensor(irreps_1, irreps_2)¶
Bases: torch.nn.Module
- check_sorted(irreps)¶
- forward(feature_1, feature_2)¶
- get_ir_index(ir, irreps)¶
- get_irreps_dim(irreps)¶
- ir_mul_list = []¶
- irreps_1¶
- irreps_2¶
- irreps_out¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.EquivariantGraphNorm(irreps, eps=1e-05, affine=True, reduce='mean', normalization='component')¶
Bases: torch.nn.Module
Instance normalization for orthonormal representations. It normalizes by the norm of the representations. Note that the norm is invariant only for orthonormal representations; irreducible representations wigner_D are orthonormal.
Parameters:
- irreps (Irreps) – representation
- eps (float) – avoid division by zero when we normalize by the variance
- affine (bool) – whether to include weight and bias parameters
- reduce ({'mean', 'max'}) – method used to reduce
- forward(node_input, batch, **kwargs)¶
Evaluate.
Parameters:
- node_input (torch.Tensor) – tensor of shape (batch, ..., irreps.dim)
Returns:
tensor of shape (batch, ..., irreps.dim)
Return type:
torch.Tensor
- affine = True¶
- eps = 1e-05¶
- irreps¶
- mean_shift¶
- normalization = 'component'¶
- reduce = 'mean'¶
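The 'component' normalization convention can be illustrated for a single l=1 field and a single graph (so the batch reduction is a plain mean over nodes). This is a deliberately simplified toy, not the module's real multi-irrep, batched implementation:

```python
# Toy sketch of 'component' normalization with 'mean' reduction: the mean
# over components of the squared values is averaged over all nodes in the
# graph, and every node is divided by the square root of that average
# (eps guards against division by zero).
def component_norm(nodes, eps=1e-5):
    sq = [sum(c * c for c in v) / len(v) for v in nodes]  # per-node component mean-square
    scale = (sum(sq) / len(sq) + eps) ** -0.5             # graph-level inverse RMS
    return [[c * scale for c in v] for v in nodes]
```

After normalization the graph-averaged component mean-square is 1 (up to eps), which is the invariant the layer enforces.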
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.EquivariantGraphNormV2(irreps, eps=1e-05, affine=True, reduce='mean', normalization='component')¶
Bases: torch.nn.Module
- forward(node_input, batch, **kwargs)¶
- affine = True¶
- eps = 1e-05¶
- irreps¶
- mean_shift¶
- normalization = 'component'¶
- reduce = 'mean'¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.FeedForwardNetwork_equiformer(irreps_node_input, irreps_node_attr, irreps_node_output, irreps_mlp_mid=None, proj_drop=0.0)¶
Bases: torch.nn.Module
Use two (FCTP + Gate) blocks.
- forward(node_input, node_attr, **kwargs)¶
- fctp_1¶
- fctp_2¶
- irreps_mlp_mid¶
- irreps_node_attr¶
- irreps_node_input¶
- irreps_node_output¶
- proj_drop = None¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.FullyConnectedTensorProductRescale(irreps_in1, irreps_in2, irreps_out, bias=True, rescale=True, internal_weights=None, shared_weights=None, normalization=None)¶
Bases: TensorProductRescale
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.FullyConnectedTensorProductRescaleNorm(irreps_in1, irreps_in2, irreps_out, bias=True, rescale=True, internal_weights=None, shared_weights=None, normalization=None, norm_layer='graph')¶
Bases: FullyConnectedTensorProductRescale
- forward(x, y, batch, weight=None)¶
- norm¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.FullyConnectedTensorProductRescaleNormSwishGate(irreps_in1, irreps_in2, irreps_out, bias=True, rescale=True, internal_weights=None, shared_weights=None, normalization=None, norm_layer='graph')¶
Bases: FullyConnectedTensorProductRescaleNorm
- forward(x, y, batch, weight=None)¶
- gate¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.FullyConnectedTensorProductRescaleSwishGate(irreps_in1, irreps_in2, irreps_out, bias=True, rescale=True, internal_weights=None, shared_weights=None, normalization=None)¶
Bases: FullyConnectedTensorProductRescale
- forward(x, y, weight=None)¶
- gate¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.Gate(irreps_scalars, act_scalars, irreps_gates, act_gates, irreps_gated)¶
Bases: torch.nn.Module
Uses narrow to split the input tensor, and the Activation class defined in this file.
- forward(features)¶
- irreps_in()¶
Input representations.
- irreps_out()¶
Output representations.
- act_gates¶
- act_scalars¶
- irreps_gated¶
- irreps_gates¶
- irreps_scalars¶
- mul¶
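The gating scheme can be sketched on a flat feature vector laid out as [plain scalars | gate scalars | gated features]: plain scalars get their own activation, while each gate scalar (after its activation) multiplies one higher-l block. The concrete sizes and activations below (2 SiLU scalars, 1 sigmoid gate, one 3-component gated block) are hypothetical examples:

```python
import math

# Illustrative gate split; the real module slices with torch narrow and
# handles arbitrary irreps, this toy fixes one layout for clarity.
def gate(features, n_scalars=2, n_gates=1, gated_dims=(3,)):
    silu = lambda x: x / (1.0 + math.exp(-x))
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    out = [silu(v) for v in features[:n_scalars]]
    gates = [sigmoid(v) for v in features[n_scalars:n_scalars + n_gates]]
    offset = n_scalars + n_gates
    for g, dim in zip(gates, gated_dims):
        out += [g * v for v in features[offset:offset + dim]]  # scale whole block by its gate
        offset += dim
    return out
```

Multiplying a higher-l block by an invariant scalar preserves equivariance, which is why gating is the standard nonlinearity for l > 0 features.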
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.LinearRS(irreps_in, irreps_out, bias=True, rescale=True)¶
Bases: FullyConnectedTensorProductRescale
- forward(x)¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.ScaledScatter(avg_aggregate_num)¶
Bases: torch.nn.Module
- extra_repr()¶
- forward(x, index, **kwargs)¶
- avg_aggregate_num¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.SeparableFCTP(irreps_node_input, irreps_edge_attr, irreps_node_output, fc_neurons, use_activation=False, norm_layer='graph', internal_weights=False)¶
Bases: torch.nn.Module
Use separable FCTP for spatial convolution.
- forward(node_input, edge_attr, edge_scalars, batch=None, **kwargs)¶
Depthwise tensor product of node_input with edge_attr, with the TP weights parametrized by self.dtp_rad(edge_scalars).
- dtp¶
- dtp_rad = None¶
- gate = None¶
- irreps_edge_attr¶
- irreps_node_input¶
- irreps_node_output¶
- lin¶
- norm = None¶
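The radial parametrization idea can be sketched without e3nn: a radial network on the edge scalars emits one weight per tensor-product path, and each path's contribution is scaled by its weight. The elementwise product below is a hypothetical stand-in for the real depthwise tensor product, and `radial_fn` stands in for `dtp_rad`:

```python
# Toy sketch of SeparableFCTP's radial weighting: per-edge scalars
# determine the weight of every tensor-product path for that edge.
def separable_tp(node_feat, edge_attr, edge_scalar, radial_fn):
    weights = radial_fn(edge_scalar)      # one weight per path
    return [w * n * e for w, n, e in zip(weights, node_feat, edge_attr)]
```

In the real module the subsequent linear layer (`lin`) mixes the depthwise outputs across channels, which is what makes the convolution "separable".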
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.SmoothLeakyReLU(negative_slope=0.2)¶
Bases: torch.nn.Module
- extra_repr()¶
- forward(x)¶
- alpha = 0.2¶
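A scalar sketch of this activation, assuming the upstream Equiformer form ((1 + a)/2)·x + ((1 - a)/2)·x·(2·sigmoid(x) − 1), and using the identity 2·sigmoid(x) − 1 = tanh(x/2):

```python
import math

# Smooth approximation of LeakyReLU: differentiable everywhere, and
# approaches x for large positive inputs and a*x for large negative ones.
def smooth_leaky_relu(x, negative_slope=0.2):
    a = negative_slope
    return ((1 + a) / 2) * x + ((1 - a) / 2) * x * math.tanh(x / 2)
```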
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.TensorProductRescale(irreps_in1, irreps_in2, irreps_out, instructions, bias=True, rescale=True, internal_weights=None, shared_weights=None, normalization=None)¶
Bases: torch.nn.Module
- calculate_fan_in(ins)¶
- forward(x, y, weight=None)¶
- forward_tp_rescale_bias(x, y, weight=None)¶
- irreps_in1¶
- irreps_in2¶
- irreps_out¶
- rescale = True¶
- tp¶
- use_bias = True¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.Vec2AttnHeads(irreps_head, num_heads)¶
Bases: torch.nn.Module
Reshape vectors of shape [N, irreps_mid] to vectors of shape [N, num_heads, irreps_head].
- forward(x)¶
- irreps_head¶
- irreps_mid_in = []¶
- mid_in_indices = []¶
- num_heads¶
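The inverse regrouping can likewise be sketched with nested lists: each irrep block of the flat [N, irreps_mid] vector holds num_heads contiguous copies, which are dealt back out to the per-head representation. `head_dims` is an illustrative stand-in for `irreps_head`, and the real module works on torch tensors via narrow/reshape:

```python
# Pure-Python sketch of the vector-to-heads regrouping.
def vec2heads(x, head_dims, num_heads):
    out = []
    for row in x:
        heads = [[] for _ in range(num_heads)]
        offset = 0
        for dim in head_dims:             # one irrep block at a time
            for h in range(num_heads):    # deal its copies out to each head
                heads[h].extend(row[offset:offset + dim])
                offset += dim
        out.append(heads)
    return out
```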
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.DepthwiseTensorProduct(irreps_node_input, irreps_edge_attr, irreps_node_output, internal_weights=False, bias=True)¶
The output irreps are pre-determined; irreps_node_output is used only to select which types of vectors appear in the output.
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.convert_e3nn_to_equiformerv2(input_tensor, lmax, num_channels)¶
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.convert_equiformerv2_to_e3nn(input_tensor, lmax)¶
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.get_mul_0(irreps)¶
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.get_norm_layer(norm_type)¶
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.irreps2gate(irreps)¶
- MolecularDiffusion.modules.models.shepherd_arch.equiformer_operations.sort_irreps_even_first(irreps)¶