MolecularDiffusion.modules.layers.common

Classes

MLP

Multi-layer Perceptron.

SinusoidalPositionEmbedding

Positional embedding based on sine and cosine functions, proposed in "Attention Is All You Need" (Vaswani et al., 2017).

SinusoidsEmbeddingNew

Sinusoidal embedding with geometrically spaced frequencies between a maximum and a minimum resolution.

Module Contents

class MolecularDiffusion.modules.layers.common.MLP(input_dim, hidden_dims, short_cut=False, batch_norm=False, activation='relu', dropout=0)

Bases: torch.nn.Module

Multi-layer Perceptron. Note that there is no batch normalization, activation, or dropout in the last layer.

Parameters:
  • input_dim (int) – input dimension

  • hidden_dims (list of int) – hidden dimensions

  • short_cut (bool, optional) – use shortcut (residual) connections or not

  • batch_norm (bool, optional) – apply batch normalization or not

  • activation (str or function, optional) – activation function

  • dropout (float, optional) – dropout rate

forward(input)

dims

layers

short_cut = False
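
A minimal usage sketch (batch size and hidden widths here are illustrative, not from the source):

    import torch
    from MolecularDiffusion.modules.layers.common import MLP

    # Three hidden layers. Per the docstring, the final layer receives
    # no batch normalization, activation, or dropout.
    mlp = MLP(input_dim=128, hidden_dims=[256, 256, 64],
              short_cut=True, batch_norm=True, activation='relu', dropout=0.1)

    x = torch.randn(32, 128)   # (batch, input_dim)
    out = mlp(x)               # (batch, hidden_dims[-1]) == (32, 64)

The two equal consecutive widths (256, 256) are deliberate: residual shortcuts can typically only be added between layers of matching width.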
class MolecularDiffusion.modules.layers.common.SinusoidalPositionEmbedding(output_dim)

Bases: torch.nn.Module

Positional embedding based on sine and cosine functions, proposed in "Attention Is All You Need" (Vaswani et al., 2017).

Parameters:
  • output_dim (int) – output dimension

forward(input)
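
A sketch of the sin/cos scheme from the paper, assuming the common convention of concatenating the sine and cosine halves rather than interleaving them (the class internals may differ); the Sketch suffix marks the name as hypothetical:

    import torch
    from torch import nn

    class SinusoidalPositionEmbeddingSketch(nn.Module):
        """Illustrative sin/cos positional embedding (Vaswani et al., 2017)."""

        def __init__(self, output_dim):
            super().__init__()
            # Inverse frequencies 1 / 10000^(2i / output_dim)
            # for i = 0 .. output_dim/2 - 1.
            inverse_frequency = 1.0 / (
                10000 ** (torch.arange(0, output_dim, 2).float() / output_dim))
            self.register_buffer('inverse_frequency', inverse_frequency)

        def forward(self, input):
            # input: positions of shape (...,); output: (..., output_dim).
            sinusoid = input.float().unsqueeze(-1) * self.inverse_frequency
            return torch.cat([sinusoid.sin(), sinusoid.cos()], dim=-1)

For example, embedding positions torch.arange(100) with output_dim=64 yields a (100, 64) tensor.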
class MolecularDiffusion.modules.layers.common.SinusoidsEmbeddingNew(max_res=15.0, min_res=15.0 / 2000.0, div_factor=4)

Bases: torch.nn.Module

forward(x)

dim

frequencies

n_frequencies
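
No docstring is provided, but the defaults (max_res=15.0, min_res=15.0 / 2000.0, div_factor=4) together with the dim, frequencies, and n_frequencies attributes suggest a sin/cos embedding whose frequencies are geometrically spaced by div_factor between the two resolutions, as in EDM-style distance embeddings. A sketch under that assumption (the input semantics, taken here to be squared pairwise distances, are also an assumption):

    import math

    import torch
    from torch import nn

    class SinusoidsEmbeddingNewSketch(nn.Module):
        # Hypothetical reconstruction: frequencies are powers of div_factor
        # times 2*pi/max_res, spanning the range down to about min_res.

        def __init__(self, max_res=15.0, min_res=15.0 / 2000.0, div_factor=4):
            super().__init__()
            self.n_frequencies = int(math.log(max_res / min_res, div_factor)) + 1
            self.frequencies = (2 * math.pi
                                * div_factor ** torch.arange(self.n_frequencies)
                                / max_res)
            self.dim = 2 * self.n_frequencies  # one sin and one cos per frequency

        def forward(self, x):
            # x: squared distances of shape (..., 1) -- an assumption.
            emb = torch.sqrt(x + 1e-8) * self.frequencies.to(x.device)
            return torch.cat([emb.sin(), emb.cos()], dim=-1)

With the defaults this gives n_frequencies = 6 and dim = 12.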