MolecularDiffusion.callbacks.train_helper

Attributes

logger

Classes

EMA

Queue

SP_regularizer

Self-paced regularizer for curriculum learning

gradient_clipping

gradient_clipping_0

Module Contents

class MolecularDiffusion.callbacks.train_helper.EMA(beta)
update_average(old, new)
update_model_average(ma_model, current_model)
beta
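
The listed methods match the conventional exponential-moving-average pattern for maintaining a shadow copy of the model weights. A minimal sketch, assuming the standard decay rule old * beta + (1 - beta) * new; the rule and the deepcopy-based usage are assumptions, not taken from the source:

    import copy
    import torch

    class EMA:
        def __init__(self, beta):
            self.beta = beta  # decay rate, e.g. 0.999

        def update_average(self, old, new):
            if old is None:
                return new  # no history yet: start from the current value
            return old * self.beta + (1 - self.beta) * new

        def update_model_average(self, ma_model, current_model):
            # Blend every parameter of the shadow model towards the live model.
            for ma_p, cur_p in zip(ma_model.parameters(),
                                   current_model.parameters()):
                ma_p.data = self.update_average(ma_p.data, cur_p.data)

    # Usage: keep a frozen deep copy and update it after each optimizer step.
    model = torch.nn.Linear(4, 4)
    ema_model = copy.deepcopy(model)
    EMA(beta=0.999).update_model_average(ema_model, model)
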
class MolecularDiffusion.callbacks.train_helper.Queue(max_len=50)
add(item)
mean()
std()
items = []
max_len = 50
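
A plausible sketch of this helper, assuming it keeps a rolling window of scalar values (e.g. gradient norms) and exposes running statistics over that window; the eviction order is an assumption:

    import numpy as np

    class Queue:
        def __init__(self, max_len=50):
            self.items = []
            self.max_len = max_len

        def add(self, item):
            self.items.insert(0, item)  # newest value at the front
            if len(self.items) > self.max_len:
                self.items.pop()  # evict the oldest once the window is full

        def mean(self):
            return np.mean(self.items)

        def std(self):
            return np.std(self.items)

Because only the last max_len values are retained, mean() and std() track recent training dynamics rather than the whole run, which is what makes the adaptive clipping sketched further below responsive.
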
class MolecularDiffusion.callbacks.train_helper.SP_regularizer(regularizer: str, lambda_: float = 10, lambda_2: float = 100, lambda_update_value: float = 50, lambda_update_step: int = 2500, polynomial_p: float = 1.5, warm_up_steps: int = 100)

Self-paced regularizer for curriculum learning.

Parameters:
  • regularizer (str) – Regularizer to use. Options are:

    • hard

    • linear

    • logaritmic

    • logistic

  • lambda_ (float) – Initial value of the lambda threshold

  • lambda_2 (float) – Initial lambda value for the second regularizer

  • lambda_update_value (float) – Amount by which lambda is adjusted at each update

  • lambda_update_step (int) – Number of steps between lambda updates

  • polynomial_p (float) – Value of p for the polynomial regularizer

  • warm_up_steps (int) – Number of warm-up steps before the regularizer is applied
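
For reference, the option names correspond to standard weighting schemes from the self-paced learning literature. Given a per-sample loss ℓᵢ and threshold λ, the schemes are commonly written as below; these exact forms are an assumption based on that convention, not taken from this module, whose parameterization may differ:

    v_hard(ℓᵢ)     = 1 if ℓᵢ < λ, else 0
    v_linear(ℓᵢ)   = max(0, 1 − ℓᵢ/λ)
    v_log(ℓᵢ)      = log(ℓᵢ + ζ) / log(ζ)   with ζ = 1 − λ,   for ℓᵢ < λ (else 0)
    v_logistic(ℓᵢ) = (1 + e^(−λ)) / (1 + e^(ℓᵢ − λ))
    v_poly(ℓᵢ)     = (1 − ℓᵢ/λ)^(1/(p−1))   for ℓᵢ < λ (else 0)

Each scheme down-weights high-loss (hard) samples relative to low-loss (easy) ones.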

hard(losses: torch.Tensor)
hard_relax(losses: torch.Tensor)
linear(losses: torch.Tensor)
logaritmic(losses: torch.Tensor)
logistic(losses: torch.Tensor)
polynomial(losses: torch.Tensor)
update_lambda()
lambda_ = 10
lambda_2 = 100
lambda_update_step = 2500
lambda_update_value = 50
n_calls = 1
p = 1.5
regularizer
warm_up_steps = 100
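
To make the interface concrete, here is a hedged sketch of how the regularizer might be driven from a training loop. The weighting call, the warm-up behaviour, and the choice of the hard scheme are assumptions based on the documented attributes, not the module's actual API:

    import torch

    def sp_weighted_loss(sp_reg, losses: torch.Tensor) -> torch.Tensor:
        """Reduce per-sample losses under self-paced weighting (sketch)."""
        # Assumption: during warm-up the plain mean loss is used.
        if sp_reg.n_calls <= sp_reg.warm_up_steps:
            return losses.mean()
        # Assumption: hard() returns 0/1 weights from the lambda_ threshold.
        weights = sp_reg.hard(losses)
        return (weights * losses).sum() / weights.sum().clamp(min=1)

Every lambda_update_step calls, update_lambda() presumably adjusts lambda_ by lambda_update_value so that progressively harder samples contribute to the loss.
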
class MolecularDiffusion.callbacks.train_helper.gradient_clipping(m=1, max_len=200)
FACTOR = 100
m = 1
max_grad_norm = None
max_grad_norms = []
max_len = 200
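
The attribute list (a rolling window of max_grad_norms, a multiplier m, and a FACTOR cap) suggests adaptive clipping based on running statistics of recent gradient norms. A plausible sketch under that assumption; the exact threshold rule (here mean + m * std, capped by FACTOR) is not confirmed by the source:

    import torch
    from torch.nn.utils import clip_grad_norm_

    def adaptive_clip(model, norm_queue, m=1.0, factor=100.0):
        # A norm far above the running mean indicates an outlier batch.
        if norm_queue.items:
            max_grad_norm = min(norm_queue.mean() + m * norm_queue.std(), factor)
        else:
            max_grad_norm = factor  # no history yet: fall back to the cap
        grad_norm = clip_grad_norm_(model.parameters(), max_norm=max_grad_norm)
        # Record the clipped norm so the threshold keeps adapting.
        norm_queue.add(min(float(grad_norm), max_grad_norm))
        return grad_norm

A typical call site is between loss.backward() and optimizer.step(), sharing a single Queue(max_len=200) instance across steps.
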
class MolecularDiffusion.callbacks.train_helper.gradient_clipping_0(m=1, max_len=200)
m = 1
max_grad_norm = None
max_grad_norms = []
max_len = 200
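
gradient_clipping_0 documents the same attributes as gradient_clipping apart from FACTOR, so it is presumably a variant of the same adaptive scheme without the fixed cap; the sketch above applies with the FACTOR fallback removed.
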
MolecularDiffusion.callbacks.train_helper.logger