class BasicLogger(*args: Any, **kwargs: Any)[source]#

BasicLogger has changed its name to TensorboardLogger in #427.

This class is kept for backward compatibility.

class TensorboardLogger(writer: SummaryWriter, train_interval: int = 1000, test_interval: int = 1, update_interval: int = 1000, info_interval: int = 1, save_interval: int = 1, write_flush: bool = True)[source]#

A logger that relies on the TensorBoard SummaryWriter by default to visualize and log statistics.

  • writer (SummaryWriter) – the writer used to log data.

  • train_interval – the log interval of log_train_data(). Defaults to 1000.

  • test_interval – the log interval of log_test_data(). Defaults to 1.

  • update_interval – the log interval of log_update_data(). Defaults to 1000.

  • info_interval – the log interval of log_info_data(). Defaults to 1.

  • save_interval – the save interval of save_data(). Defaults to 1 (save at the end of each epoch).

  • write_flush – whether to flush the TensorBoard result after each add_scalar operation. Defaults to True.
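To illustrate how the interval parameters above gate logging, here is a minimal pure-Python sketch. The class name `IntervalGate` is hypothetical and a plain list stands in for the SummaryWriter; the gating rule (log only when at least `interval` steps have elapsed since the last logged step) is an assumption based on the parameter descriptions, not the library's exact implementation:

```python
class IntervalGate:
    """Sketch of the *_interval behavior: emit a value only when at least
    `interval` steps have passed since the last emitted step."""

    def __init__(self, interval: int) -> None:
        self.interval = interval
        self.last_step = -interval  # ensures the very first step is logged

    def should_log(self, step: int) -> bool:
        if step - self.last_step >= self.interval:
            self.last_step = step
            return True
        return False


# with train_interval=1000, only every ~1000th env step is logged
gate = IntervalGate(interval=1000)
logged = [step for step in range(0, 3500, 100) if gate.should_log(step)]
# → [0, 1000, 2000, 3000]
```

Setting an interval of 1 (as `test_interval` and `info_interval` default to) makes the gate pass every step.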

restore_data() → tuple[int, int, int][source]#

Return the metadata from an existing log.

If nothing is found, or an error occurs during the recovery process, the default parameters are returned.

Returns:

epoch, env_step, gradient_step.
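The fallback behavior can be sketched as follows. `restore_data_sketch` and `read_metadata` are hypothetical names; the real method reads the writer's event files, which is not reproduced here, and the defaults of all zeros are an assumption:

```python
def restore_data_sketch(read_metadata) -> tuple:
    """Return (epoch, env_step, gradient_step) from an existing log,
    falling back to defaults when nothing is found or recovery fails."""
    try:
        epoch, env_step, gradient_step = read_metadata()
    except Exception:
        return 0, 0, 0  # assumed default parameters
    return epoch, env_step, gradient_step


def missing_log():
    raise FileNotFoundError("no existing log")


resumed = restore_data_sketch(lambda: (5, 10_000, 2_000))  # → (5, 10000, 2000)
fresh = restore_data_sketch(missing_log)                   # → (0, 0, 0)
```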

save_data(epoch: int, env_step: int, gradient_step: int, save_checkpoint_fn: Callable[[int, int, int], str] | None = None) → None[source]#

Use the writer to log metadata when calling save_checkpoint_fn in the trainer.

  • epoch – the epoch in trainer.

  • env_step – the env_step in trainer.

  • gradient_step – the gradient_step in trainer.

  • save_checkpoint_fn (function) – a user-defined hook; see the trainer documentation for details.
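The interaction between `save_interval` and the user hook can be sketched as below. `maybe_save` and its bookkeeping of the last saved epoch are hypothetical helpers for illustration; only the hook signature `(epoch, env_step, gradient_step) -> str` comes from the documented parameter above:

```python
from typing import Callable, Optional


def maybe_save(
    epoch: int,
    env_step: int,
    gradient_step: int,
    save_interval: int,
    last_save_epoch: int,
    save_checkpoint_fn: Optional[Callable[[int, int, int], str]],
) -> int:
    """Call the user-defined checkpoint hook only every `save_interval`
    epochs (a sketch of the gating); returns the epoch of the last save."""
    if save_checkpoint_fn and epoch - last_save_epoch >= save_interval:
        save_checkpoint_fn(epoch, env_step, gradient_step)
        return epoch
    return last_save_epoch


saved = []
hook = lambda e, s, g: (saved.append(e), f"checkpoint_{e}.pth")[1]

last = 0
for epoch in range(1, 7):
    last = maybe_save(epoch, epoch * 100, epoch * 10, 2, last, hook)
# with save_interval=2, checkpoints land on epochs [2, 4, 6]
```

With the default `save_interval=1`, the hook fires at the end of every epoch.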

write(step_type: str, step: int, data: dict[str, int | Number | number | ndarray | float]) → None[source]#

Specify how the writer is used to log data.

  • step_type (str) – the namespace to which the data dict belongs.

  • step – the ordinate (x-axis value) of the data dict.

  • data – the data to write, in {key: value} format.
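How the three arguments fit together can be sketched in pure Python. `write_sketch` is a hypothetical stand-in: a dict of lists replaces the SummaryWriter, and namespacing each key under `step_type` is an assumption about the intended behavior, not the library's exact key scheme:

```python
def write_sketch(records: dict, step_type: str, step: int, data: dict) -> None:
    """Record each value from `data` under its namespaced key at the
    given step (a dict of (step, value) lists stands in for the writer)."""
    for key, value in data.items():
        records.setdefault(f"{step_type}/{key}", []).append((step, value))


records: dict = {}
write_sketch(records, "train", 1000, {"reward": 120.5, "length": 200})
write_sketch(records, "train", 2000, {"reward": 135.0, "length": 180})
# records["train/reward"] → [(1000, 120.5), (2000, 135.0)]
```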