MolecularDiffusion.modules.layers.conv¶
Classes¶
- EquivariantBlock
- EquivariantUpdate
- EtoX
- Etoy: Map edge features to global features.
- GCL
- NodeEdgeBlock: Self attention layer that also updates the representations on the edges.
- PositionsMLP
- SE3Norm: Note: There is a relatively similar layer implemented by NVIDIA.
- XEyTransformerLayer: Transformer that updates node, edge and global features.
- Xtoy: Map node features to global features.
Functions¶
- masked_softmax
- unsorted_segment_sum: Custom PyTorch op to replicate TensorFlow's unsorted_segment_sum.
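The segment-sum op described above can be sketched with `scatter_add_`; this is a minimal illustration of the technique, not necessarily the module's exact implementation.

```python
import torch

def unsorted_segment_sum(data, segment_ids, num_segments):
    """Sum rows of `data` that share a segment id, like TF's unsorted_segment_sum."""
    result = data.new_zeros((num_segments, data.size(1)))
    segment_ids = segment_ids.unsqueeze(-1).expand(-1, data.size(1))
    result.scatter_add_(0, segment_ids, data)  # result[ids[i], :] += data[i, :]
    return result

data = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
ids = torch.tensor([0, 1, 0])
print(unsorted_segment_sum(data, ids, 2))  # rows 0 and 2 summed into segment 0
```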
Module Contents¶
- class MolecularDiffusion.modules.layers.conv.EquivariantBlock(hidden_nf, edge_feat_nf=2, act_fn=nn.SiLU(), n_layers=2, attention=True, norm_diff=True, tanh=False, coords_range=15, norm_constant=1, sin_embedding=None, normalization_factor=100, aggregation_method='sum', dropout=0.0, normalization=False)¶
Bases: torch.nn.Module
- forward(h, x, edge_index, node_mask=None, edge_mask=None, edge_attr=None)¶
- aggregation_method = 'sum'¶
- coords_range_layer¶
- n_layers = 2¶
- norm_constant = 1¶
- norm_diff = True¶
- normalization_factor = 100¶
- sin_embedding = None¶
- class MolecularDiffusion.modules.layers.conv.EquivariantUpdate(hidden_nf, normalization_factor, aggregation_method, edges_in_d=1, act_fn=nn.SiLU(), tanh=False, coords_range=10.0)¶
Bases: torch.nn.Module
- coord_model(h, coord, edge_index, coord_diff, edge_attr, edge_mask)¶
- forward(h, coord, edge_index, coord_diff, edge_attr=None, node_mask=None, edge_mask=None)¶
- aggregation_method¶
- coord_mlp¶
- coords_range = 10.0¶
- normalization_factor¶
- tanh = False¶
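The coordinate update behind `coord_model` can be sketched as the usual E(3)-equivariant residual step: each node moves along the relative vectors to its neighbours, scaled by an edge weight (here passed in directly; in the real layer it comes from `coord_mlp`). Function and argument names are hypothetical.

```python
import torch

def coord_update(coord, edge_index, coord_diff, edge_weight, normalization_factor=100.0):
    # coord: (n, 3); edge_index: (2, m) as (receiver, sender)
    # coord_diff: (m, 3), the per-edge relative vector x_i - x_j
    # edge_weight: (m, 1), e.g. the output of a small MLP on the edge message
    row = edge_index[0]
    trans = coord_diff * edge_weight               # scale each relative vector
    agg = coord.new_zeros(coord.size(0), 3)
    agg.index_add_(0, row, trans)                  # sum contributions per receiving node
    return coord + agg / normalization_factor      # normalized residual update
```

Because the update is a weighted sum of difference vectors, rotating or translating the inputs rotates or translates the output the same way.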
- class MolecularDiffusion.modules.layers.conv.EtoX(de, dx)¶
Bases: torch.nn.Module
- forward(E, e_mask2)¶
E: (bs, n, n, de)
- lin¶
- class MolecularDiffusion.modules.layers.conv.Etoy(d, dy)¶
Bases: torch.nn.Module
Map edge features to global features.
- forward(E, e_mask1, e_mask2)¶
E: (bs, n, n, de). Features relative to the diagonal of E could potentially be added.
- lin¶
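A minimal sketch of the edge-to-global mapping: masked mean-pooling over the edge tensor followed by a linear projection. The class name and pooling choice here are illustrative assumptions; the documented class exposes only `lin` and `forward(E, e_mask1, e_mask2)`.

```python
import torch
import torch.nn as nn

class EtoyStub(nn.Module):
    """Illustrative stand-in: pool masked edge features E into a global vector y."""
    def __init__(self, de, dy):
        super().__init__()
        self.lin = nn.Linear(de, dy)

    def forward(self, E, e_mask1, e_mask2):
        # E: (bs, n, n, de); e_mask1: (bs, n, 1, 1); e_mask2: (bs, 1, n, 1)
        mask = e_mask1 * e_mask2                                    # (bs, n, n, 1) valid edges
        mean = (E * mask).sum(dim=(1, 2)) / mask.sum(dim=(1, 2)).clamp(min=1)
        return self.lin(mean)                                       # (bs, dy)
```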
- class MolecularDiffusion.modules.layers.conv.GCL(input_nf, output_nf, hidden_nf, normalization_factor, aggregation_method, edges_in_d=0, nodes_att_dim=0, act_fn=nn.SiLU(), attention=False, dropout=0.0, normalization=False)¶
Bases: torch.nn.Module
- edge_model(source, target, edge_attr, edge_mask)¶
- forward(h, edge_index, edge_attr=None, node_attr=None, node_mask=None, edge_mask=None)¶
- node_model(x, edge_index, edge_attr, node_attr)¶
- aggregation_method¶
- attention = False¶
- dropout¶
- edge_mlp¶
- node_mlp¶
- normalization = False¶
- normalization_factor¶
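The `edge_model`/`node_model` split follows the standard message-passing pattern: an edge MLP builds messages from sender and receiver features, and a node MLP updates each node from its aggregated messages. The sketch below assumes equal input/output widths and omits attention, edge attributes and masks.

```python
import torch
import torch.nn as nn

class GCLStub(nn.Module):
    """Minimal message-passing sketch (hypothetical simplification of GCL)."""
    def __init__(self, nf, normalization_factor=1.0):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * nf, nf), nn.SiLU())
        self.node_mlp = nn.Sequential(nn.Linear(2 * nf, nf), nn.SiLU())
        self.normalization_factor = normalization_factor

    def forward(self, h, edge_index):
        row, col = edge_index                                   # receiver, sender
        m = self.edge_mlp(torch.cat([h[row], h[col]], dim=1))   # per-edge messages
        agg = h.new_zeros(h.size(0), m.size(1))
        agg.index_add_(0, row, m)                               # sum messages per node
        agg = agg / self.normalization_factor
        return self.node_mlp(torch.cat([h, agg], dim=1))
```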
- class MolecularDiffusion.modules.layers.conv.NodeEdgeBlock(dx, de, dy, n_head, last_layer=False)¶
Bases: torch.nn.Module
Self attention layer that also updates the representations on the edges.
- forward(X, E, y, pos, node_mask)¶
- Parameters:
X – (bs, n, d) node features
E – (bs, n, n, d) edge features
y – (bs, dz) global features
pos – (bs, n, 3)
node_mask – (bs, n)
- Returns:
newX, newE, new_y with the same shape.
- a¶
- de¶
- df¶
- dist_add_e¶
- dist_mul_e¶
- dx¶
- dy¶
- e_att_mul¶
- e_out¶
- e_pos1¶
- e_pos2¶
- e_x_mul¶
- in_E¶
- k¶
- last_layer = False¶
- lin_dist1¶
- lin_norm_pos1¶
- lin_norm_pos2¶
- n_head¶
- out¶
- pos_att_mul¶
- pos_x_mul¶
- pre_softmax¶
- q¶
- v¶
- x_e_mul1¶
- x_e_mul2¶
- x_out¶
- y_e_add¶
- y_e_mul¶
- y_x_add¶
- y_x_mul¶
- class MolecularDiffusion.modules.layers.conv.PositionsMLP(hidden_dim, eps=1e-05)¶
Bases: torch.nn.Module
- forward(pos, node_mask)¶
- eps = 1e-05¶
- mlp¶
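Given the `mlp` and `eps` attributes, a plausible reading of this layer is a direction-preserving rescaling of positions: an MLP of each point's norm sets a new radial magnitude, which keeps the layer rotation-equivariant. This is an assumption about the pattern, not a confirmed implementation.

```python
import torch
import torch.nn as nn

class PositionsMLPStub(nn.Module):
    """Illustrative guess at the pattern: rescale positions by an MLP of their norm."""
    def __init__(self, hidden_dim, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.mlp = nn.Sequential(nn.Linear(1, hidden_dim), nn.SiLU(),
                                 nn.Linear(hidden_dim, 1))

    def forward(self, pos, node_mask):
        # pos: (bs, n, 3); node_mask: (bs, n)
        norm = pos.norm(dim=-1, keepdim=True)            # (bs, n, 1)
        scale = self.mlp(norm) / (norm + self.eps)       # direction-preserving rescale
        return pos * scale * node_mask.unsqueeze(-1)     # zero out padded nodes
```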
- class MolecularDiffusion.modules.layers.conv.SE3Norm(eps: float = 1e-05, device=None, dtype=None)¶
Bases: torch.nn.Module
Note: There is a relatively similar layer implemented by NVIDIA: https://catalog.ngc.nvidia.com/orgs/nvidia/resources/se3transformer_for_pytorch. It computes a ReLU on a mean-zero normalized norm, which I find surprising.
- forward(pos, node_mask)¶
- eps = 1e-05¶
- normalized_shape = (1,)¶
- weight¶
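Consistent with the attributes above (`eps`, `normalized_shape = (1,)`, a single `weight`), a rotation-equivariant norm can be sketched as dividing positions by their masked mean length and scaling by one learnable parameter. The exact reduction used by the real layer is an assumption here.

```python
import torch
import torch.nn as nn

class SE3NormStub(nn.Module):
    """Sketch of a rotation-equivariant norm: divide positions by their mean length."""
    def __init__(self, eps=1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(1))   # learnable overall scale

    def forward(self, pos, node_mask):
        # pos: (bs, n, 3); node_mask: (bs, n, 1)
        norms = pos.norm(dim=-1, keepdim=True)      # (bs, n, 1), rotation-invariant
        mean_norm = (norms * node_mask).sum(1, keepdim=True) / node_mask.sum(1, keepdim=True)
        return self.weight * pos / (mean_norm + self.eps)
```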
- class MolecularDiffusion.modules.layers.conv.XEyTransformerLayer(dx: int, de: int, dy: int, n_head: int, dim_ffX: int = 2048, dim_ffE: int = 128, dim_ffy: int = 2048, dropout: float = 0.0, layer_norm_eps: float = 1e-05, last_layer=False)¶
Bases: torch.nn.Module
Transformer that updates node, edge and global features.
d_x: node features
d_e: edge features
dz: global features
n_head: the number of heads in the multi_head_attention
dim_feedforward: the dimension of the feedforward network model after self-attention
dropout: dropout probability; 0 to disable
layer_norm_eps: eps value in layer normalizations
- forward(X, E, y, pos, node_mask)¶
Pass the input through the encoder layer.
X: (bs, n, d)
E: (bs, n, n, d)
y: (bs, dy)
pos: (bs, n, 3)
node_mask: (bs, n), mask for the src keys per batch (optional)
Output: newX, newE, new_y with the same shape.
- activation¶
- dropoutE1¶
- dropoutE2¶
- dropoutE3¶
- dropoutX1¶
- dropoutX2¶
- dropoutX3¶
- last_layer = False¶
- linE1¶
- linE2¶
- linX1¶
- linX2¶
- normE1¶
- normE2¶
- normX1¶
- normX2¶
- norm_pos1¶
- self_attn¶
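The attribute triples above (`linX1`/`linX2`, `normX1`/`normX2`, `dropoutX1`-`dropoutX3`, and their E counterparts) suggest that after `self_attn` each stream goes through a standard Transformer residual feed-forward sub-block. One such stream can be sketched as follows; the class name and signature are illustrative, not the library's API.

```python
import torch
import torch.nn as nn

class FeedForwardBlock(nn.Module):
    """One post-attention stream, Transformer-style: residual + norm, then
    a position-wise feed-forward network with another residual + norm."""
    def __init__(self, d, dim_ff, dropout=0.0, eps=1e-5):
        super().__init__()
        self.lin1 = nn.Linear(d, dim_ff)
        self.lin2 = nn.Linear(dim_ff, d)
        self.norm1 = nn.LayerNorm(d, eps=eps)
        self.norm2 = nn.LayerNorm(d, eps=eps)
        self.drop = nn.Dropout(dropout)

    def forward(self, x, new_x):
        # x: the stream's input; new_x: the attention output for that stream
        x = self.norm1(x + self.drop(new_x))                   # residual + norm
        ff = self.lin2(self.drop(torch.relu(self.lin1(x))))    # feed-forward
        return self.norm2(x + self.drop(ff))                   # second residual + norm
```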
- class MolecularDiffusion.modules.layers.conv.Xtoy(dx, dy)¶
Bases: torch.nn.Module
Map node features to global features.
- forward(X, x_mask)¶
X: (bs, n, dx).
- lin¶
- MolecularDiffusion.modules.layers.conv.masked_softmax(x, mask, **kwargs)¶
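A masked softmax with this signature is conventionally implemented by sending masked positions to negative infinity before the softmax, so they receive exactly zero probability; the all-masked guard below is a common convention, assumed rather than confirmed for this module.

```python
import torch

def masked_softmax(x, mask, **kwargs):
    """Softmax that ignores masked-out positions by sending them to -inf first."""
    if mask.sum() == 0:
        return x  # nothing valid to normalize over (a common guard; assumed here)
    x_masked = x.masked_fill(mask == 0, float('-inf'))
    return torch.softmax(x_masked, **kwargs)

scores = torch.tensor([1.0, 2.0, 3.0])
mask = torch.tensor([1.0, 1.0, 0.0])
print(masked_softmax(scores, mask, dim=-1))  # last position gets probability 0
```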