MolecularDiffusion.modules.models.shepherd_arch.equiformer_v2_encoder¶
Classes¶
EdgeDegreeEmbedding
EquiformerV2 – Equiformer with graph attention built upon SO(2) convolution and feedforward network built upon S2 activation
Module Contents¶
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_v2_encoder.EdgeDegreeEmbedding(input_sphere_channels, sphere_channels, lmax_list, mmax_list, SO3_rotation, mappingReduced, edge_channels_list, use_atom_edge_embedding, rescale_factor)¶
Bases:
torch.nn.Module
- Parameters:
input_sphere_channels (int) – Number of input spherical channels (for nodes)
sphere_channels (int) – Number of spherical channels
lmax_list (list of int) – List of degrees (l) for each resolution
mmax_list (list of int) – List of orders (m) for each resolution
SO3_rotation (list of SO3_Rotation) – Class to calculate Wigner-D matrices and rotate embeddings
mappingReduced (CoefficientMappingModule) – Class to convert l and m indices once node embedding is rotated
edge_channels_list (list of int) – List of sizes of the invariant edge embedding, for example [input_channels, hidden_channels, hidden_channels]. The last entry is used as the hidden size when use_atom_edge_embedding is True.
use_atom_edge_embedding (bool) – Whether to use atomic embedding along with relative distance for edge scalar features
rescale_factor (float) – Rescale the sum aggregation
- forward(x_input, edge_distance, edge_index)¶
    if self.use_atom_edge_embedding:
        source_element = atomic_numbers[edge_index[0]]  # Source atom atomic number
        target_element = atomic_numbers[edge_index[1]]  # Target atom atomic number
        source_embedding = self.source_embedding(source_element)
        target_embedding = self.target_embedding(target_element)
        x_edge = torch.cat((edge_distance, source_embedding, target_embedding), dim=1)
    else:
        x_edge = edge_distance
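A standalone sketch of the same edge-feature construction, with illustrative sizes (5 atoms, 4 edges, a 64-dim distance basis, 32-dim atom embeddings); all tensors and dimensions here are assumptions chosen for demonstration, not values fixed by the module:

    import torch
    import torch.nn as nn

    max_num_elements, embed_dim = 90, 32
    source_embedding = nn.Embedding(max_num_elements, embed_dim)
    target_embedding = nn.Embedding(max_num_elements, embed_dim)

    atomic_numbers = torch.tensor([1, 6, 7, 8, 6])        # 5 atoms
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 4]])             # 4 edges
    edge_distance = torch.randn(4, 64)                    # already-expanded distance basis

    src = source_embedding(atomic_numbers[edge_index[0]])
    tgt = target_embedding(atomic_numbers[edge_index[1]])
    x_edge = torch.cat((edge_distance, src, tgt), dim=1)
    print(x_edge.shape)                                   # torch.Size([4, 128])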
- SO3_rotation¶
- edge_channels_list¶
- input_sphere_channels¶
- lmax_list¶
- m_0_num_coefficients¶
- m_all_num_coefficents¶
- mappingReduced¶
- mmax_list¶
- num_resolutions¶
- rad_func¶
- rescale_factor¶
- sphere_channels¶
- use_atom_edge_embedding¶
    if self.use_atom_edge_embedding:
        self.source_embedding = nn.Embedding(self.max_num_elements, self.edge_channels_list[-1])
        self.target_embedding = nn.Embedding(self.max_num_elements, self.edge_channels_list[-1])
        nn.init.uniform_(self.source_embedding.weight.data, -0.001, 0.001)
        nn.init.uniform_(self.target_embedding.weight.data, -0.001, 0.001)
        # Widen the radial function's input to account for the two concatenated atom embeddings
        self.edge_channels_list[0] = self.edge_channels_list[0] + 2 * self.edge_channels_list[-1]
    else:
        self.source_embedding, self.target_embedding = None, None
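A minimal sketch (with assumed values) of the channel bookkeeping above: when atom edge embeddings are enabled, the radial function's input width grows by twice the atom-embedding size, matching the concatenation in forward:

    edge_channels_list = [512, 128, 128]  # [input_channels, hidden, hidden] (illustrative)
    embed_dim = edge_channels_list[-1]    # hidden size reused for the atom embeddings
    edge_channels_list[0] += 2 * embed_dim
    print(edge_channels_list)             # [768, 128, 128]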
- class MolecularDiffusion.modules.models.shepherd_arch.equiformer_v2_encoder.EquiformerV2(final_block_channels=0, num_layers=12, input_sphere_channels=128, sphere_channels=128, attn_hidden_channels=128, num_heads=8, attn_alpha_channels=32, attn_value_channels=16, ffn_hidden_channels=512, norm_type='rms_norm_sh', lmax_list=[6], mmax_list=[2], grid_resolution=None, cutoff=5.0, num_sphere_samples=128, edge_attr_input_channels=0, edge_channels=128, use_atom_edge_embedding=True, share_atom_edge_embedding=False, use_m_share_rad=False, distance_function='gaussian', num_distance_basis=512, attn_activation='scaled_silu', use_s2_act_attn=False, use_attn_renorm=True, ffn_activation='scaled_silu', use_gate_act=False, use_grid_mlp=False, use_sep_s2_act=True, alpha_drop=0.0, drop_path_rate=0.0, proj_drop=0.0, weight_init='normal')¶
Bases:
MolecularDiffusion.modules.models.shepherd_arch.ocp_compat.BaseModel
Equiformer with graph attention built upon SO(2) convolution and feedforward network built upon S2 activation
- Parameters:
final_block_channels (int) – Number of spherical channels in final x embedding; if 0, there is no final block.
num_layers (int) – Number of layers in the GNN
input_sphere_channels (int) – Number of spherical channels in input x embedding, used for edge embedding
sphere_channels (int) – Number of spherical channels (one set per resolution)
attn_hidden_channels (int) – Number of hidden channels used during SO(2) graph attention
num_heads (int) – Number of attention heads
attn_alpha_channels (int) – Number of channels for the alpha vector in each attention head
attn_value_channels (int) – Number of channels for the value vector in each attention head
ffn_hidden_channels (int) – Number of hidden channels used during feedforward network
norm_type (str) – Type of normalization layer ([‘layer_norm’, ‘layer_norm_sh’, ‘rms_norm_sh’])
lmax_list (list of int) – List of maximum degrees of the spherical harmonics (1 to 10)
mmax_list (list of int) – List of maximum orders of the spherical harmonics (0 to lmax)
grid_resolution (int) – Resolution of SO3_Grid
num_sphere_samples (int) – Number of samples used to approximate the integration of the sphere in the output blocks
edge_attr_input_channels (int) – Number of channels of the optional precomputed edge features (edge_attr); if 0, no edge attribute embedding is created
edge_channels (int) – Number of channels for the edge invariant features
use_atom_edge_embedding (bool) – Whether to use atomic embedding along with relative distance for edge scalar features
share_atom_edge_embedding (bool) – Whether to share atom_edge_embedding across all blocks
use_m_share_rad (bool) – Whether all m components within a type-L vector of one channel share radial function weights
distance_function ("gaussian", "sigmoid", "linearsigmoid", "silu") – Basis function used for distances
num_distance_basis (int) – Number of basis functions used for the distance expansion
attn_activation (str) – Type of activation function for SO(2) graph attention
use_s2_act_attn (bool) – Whether to use attention after S2 activation. Otherwise, use the same attention as Equiformer
use_attn_renorm (bool) – Whether to re-normalize attention weights
ffn_activation (str) – Type of activation function for feedforward network
use_gate_act (bool) – If True, use gate activation. Otherwise, use S2 activation
use_grid_mlp (bool) – If True, use projecting to grids and performing MLPs for FFNs.
use_sep_s2_act (bool) – If True, use separable S2 activation when use_gate_act is False.
alpha_drop (float) – Dropout rate for attention weights
drop_path_rate (float) – Drop path rate
proj_drop (float) – Dropout rate for outputs of attention and FFN in Transformer blocks
weight_init (str) – [‘normal’, ‘uniform’] initialization of weights of linear layers except those in radial functions
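A minimal construction sketch; the keyword values below mirror or shrink the defaults in the signature and are illustrative rather than prescriptive:

    from MolecularDiffusion.modules.models.shepherd_arch.equiformer_v2_encoder import (
        EquiformerV2,
    )

    model = EquiformerV2(
        num_layers=4,          # smaller than the default 12, for a quick test
        sphere_channels=128,
        attn_hidden_channels=128,
        num_heads=8,
        lmax_list=[6],
        mmax_list=[2],
        cutoff=5.0,
    )
    print(model.num_params)    # total parameter count (see the num_params property below)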
- forward(x_input, pos, edge_index, edge_distance, edge_distance_vec, batch, edge_attr=None)¶
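Continuing from the construction sketch above, a hedged example of calling forward on a single small graph. The shapes, the meaning of x_input as invariant node features, and the sign convention for edge_distance_vec are assumptions; in practice the edge list would come from a radius graph built with the model's cutoff:

    import torch

    num_nodes, num_edges = 8, 24
    x_input = torch.randn(num_nodes, 128)                 # assumed [N, input_sphere_channels]
    pos = torch.randn(num_nodes, 3)                       # 3D coordinates
    edge_index = torch.randint(0, num_nodes, (2, num_edges))
    edge_distance_vec = pos[edge_index[0]] - pos[edge_index[1]]  # sign convention assumed
    edge_distance = edge_distance_vec.norm(dim=-1)
    batch = torch.zeros(num_nodes, dtype=torch.long)      # single graph

    out = model(x_input, pos, edge_index, edge_distance, edge_distance_vec, batch)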
- no_weight_decay() (see the optimizer sketch at the end of this section)¶
- SO3_grid¶
- SO3_rotation¶
- alpha_drop = 0.0¶
- attn_activation = 'scaled_silu'¶
- attn_alpha_channels = 32¶
- attn_value_channels = 16¶
- blocks¶
- cutoff = 5.0¶
- device = 'cpu'¶
- distance_function = 'gaussian'¶
- drop_path_rate = 0.0¶
- edge_attr_embedding = None¶
- edge_attr_input_channels = 0¶
- edge_channels = 128¶
- edge_channels_list¶
    if self.share_atom_edge_embedding and self.use_atom_edge_embedding:
        self.source_embedding = nn.Embedding(self.max_num_elements, self.edge_channels_list[-1])
        self.target_embedding = nn.Embedding(self.max_num_elements, self.edge_channels_list[-1])
        self.edge_channels_list[0] = self.edge_channels_list[0] + 2 * self.edge_channels_list[-1]
    else:
        self.source_embedding, self.target_embedding = None, None
- edge_degree_embedding¶
- ffn_activation = 'scaled_silu'¶
- final_block_channels = 0¶
- grad_forces = False¶
- grid_resolution = None¶
- input_sphere_channels = 128¶
- lmax_list = [6]¶
- mappingReduced¶
- mmax_list = [2]¶
- norm¶
- norm_type = 'rms_norm_sh'¶
- num_distance_basis = 512¶
- num_heads = 8¶
- num_layers = 12¶
- property num_params¶
- num_resolutions¶
- num_sphere_samples = 128¶
- proj_drop = 0.0¶
- sphere_channels = 128¶
- sphere_channels_all¶
- use_atom_edge_embedding = True¶
- use_attn_renorm = True¶
- use_gate_act = False¶
- use_grid_mlp = False¶
- use_s2_act_attn = False¶
- use_sep_s2_act = True¶
- weight_init = 'normal'¶
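The no_weight_decay() method is commonly used when building optimizer parameter groups. The sketch below assumes it returns a collection of parameter-name substrings to exclude from weight decay (the usual convention in models of this family); the optimizer settings are illustrative:

    import torch

    skip = model.no_weight_decay()
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if any(s in name for s in skip):
            no_decay.append(param)   # e.g. norms and biases, exempt from decay
        else:
            decay.append(param)
    optimizer = torch.optim.AdamW(
        [{"params": decay, "weight_decay": 0.01},
         {"params": no_decay, "weight_decay": 0.0}],
        lr=1e-4,
    )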