MolecularDiffusion.modules.layers.common¶
Classes¶
| MLP | Multi-layer Perceptron. |
| SinusoidalPositionEmbedding | Positional embedding based on sine and cosine functions, proposed in Attention Is All You Need. |
Module Contents¶
- class MolecularDiffusion.modules.layers.common.MLP(input_dim, hidden_dims, short_cut=False, batch_norm=False, activation='relu', dropout=0)¶
Bases: torch.nn.Module

Multi-layer Perceptron. Note there is no batch normalization, activation or dropout in the last layer.
- Parameters:
  - input_dim (int) – input dimension
  - hidden_dims (list of int) – hidden dimensions
  - short_cut (bool, optional) – use a short cut (residual) connection or not
  - batch_norm (bool, optional) – apply batch normalization or not
  - activation (str or function, optional) – activation function
  - dropout (float, optional) – dropout rate
- forward(input)¶
- dims¶
- layers¶
- short_cut = False¶
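A minimal sketch of how such an MLP can be implemented, assuming the conventional reading of the signature above (the exact implementation may differ): each hidden layer is a linear transform followed by optional batch normalization, activation, and dropout, all of which are skipped in the last layer, and `short_cut` adds a residual connection when shapes match.

```python
import torch
from torch import nn


class MLP(nn.Module):
    """Sketch of the documented MLP: no batch normalization,
    activation or dropout in the last layer."""

    def __init__(self, input_dim, hidden_dims, short_cut=False,
                 batch_norm=False, activation="relu", dropout=0):
        super().__init__()
        self.dims = [input_dim] + list(hidden_dims)
        self.short_cut = short_cut
        self.activation = getattr(torch.nn.functional, activation)
        self.dropout = nn.Dropout(dropout) if dropout else None
        self.layers = nn.ModuleList(
            nn.Linear(self.dims[i], self.dims[i + 1])
            for i in range(len(self.dims) - 1))
        # one batch norm per hidden (non-final) layer output
        self.batch_norms = nn.ModuleList(
            nn.BatchNorm1d(dim) for dim in self.dims[1:-1]) if batch_norm else None

    def forward(self, input):
        hidden = input
        for i, layer in enumerate(self.layers):
            out = layer(hidden)
            if i < len(self.layers) - 1:  # extras are skipped on the last layer
                if self.batch_norms:
                    out = self.batch_norms[i](out)
                out = self.activation(out)
                if self.dropout:
                    out = self.dropout(out)
            if self.short_cut and out.shape == hidden.shape:
                out = out + hidden
            hidden = out
        return hidden
```

For example, `MLP(16, [32, 32, 8])` maps a `(batch, 16)` tensor to `(batch, 8)`.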
- class MolecularDiffusion.modules.layers.common.SinusoidalPositionEmbedding(output_dim)¶
Bases: torch.nn.Module

Positional embedding based on sine and cosine functions, proposed in Attention Is All You Need.
- Parameters:
output_dim (int) – output dimension
- forward(input)¶
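The embedding from Attention Is All You Need assigns position p the values sin(p / 10000^(2i/d)) and cos(p / 10000^(2i/d)) across frequency indices i. A sketch under that reading (the interface here, returning one `(seq_len, output_dim)` table for an input of shape `(batch, seq_len, ...)`, is an assumption):

```python
import torch
from torch import nn


class SinusoidalPositionEmbedding(nn.Module):
    """Sketch of the sine/cosine positional embedding from
    Attention Is All You Need."""

    def __init__(self, output_dim):
        super().__init__()
        # frequencies 1 / 10000^(2i / output_dim), one per sin/cos pair
        inverse_frequency = 1 / (
            10000 ** (torch.arange(0, output_dim, 2, dtype=torch.float) / output_dim))
        self.register_buffer("inverse_frequency", inverse_frequency)

    def forward(self, input):
        # input: (batch_size, seq_len, ...); output: (seq_len, output_dim)
        positions = torch.arange(input.shape[1], dtype=torch.float,
                                 device=input.device)
        angles = torch.outer(positions, self.inverse_frequency)
        return torch.cat([angles.sin(), angles.cos()], dim=-1)
```

Position 0 thus embeds to all zeros in the sine half and all ones in the cosine half, and nearby positions get smoothly varying codes.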