tensorboard#
Source code: tianshou/utils/logger/tensorboard.py
- class TensorboardLogger(writer: SummaryWriter, train_interval: int = 1000, test_interval: int = 1, update_interval: int = 1000, info_interval: int = 1, save_interval: int = 1, write_flush: bool = True)[source]#
Bases:
BaseLogger
A logger that relies on tensorboard SummaryWriter by default to visualize and log statistics.
- Parameters:
writer (SummaryWriter) – the writer used to log data.
train_interval – the log interval in log_train_data(). Defaults to 1000.
test_interval – the log interval in log_test_data(). Defaults to 1.
update_interval – the log interval in log_update_data(). Defaults to 1000.
info_interval – the log interval in log_info_data(). Defaults to 1.
save_interval – the save interval in save_data(). Defaults to 1 (save at the end of each epoch).
write_flush – whether to flush the tensorboard result after each add_scalar operation. Defaults to True.
exclude_arrays – whether to exclude numpy arrays from the logger’s output.
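The interval parameters above gate how often each log_* method actually writes: data is emitted only when enough steps have elapsed since the last write. A minimal sketch of this gating logic (an assumption for illustration; BaseLogger's actual bookkeeping may differ):

```python
# Sketch of interval-based gating, as suggested by the train_interval /
# test_interval / update_interval parameters. This is an illustrative
# re-implementation, not the library's code.
class IntervalGate:
    def __init__(self, interval: int = 1000):
        self.interval = interval
        # Initialized so that the very first step is logged.
        self.last_log_step = -interval

    def should_log(self, step: int) -> bool:
        """Return True (and record the step) if `interval` steps have passed."""
        if step - self.last_log_step >= self.interval:
            self.last_log_step = step
            return True
        return False

gate = IntervalGate(interval=1000)
gate.should_log(0)     # True: first step is logged
gate.should_log(500)   # False: only 500 steps since last write
gate.should_log(1200)  # True: 1200 steps since last write
```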
- prepare_dict_for_logging(input_dict: dict[str, Any], parent_key: str = '', delimiter: str = '/', exclude_arrays: bool = True) dict[str, int | Number | number | ndarray | float] [source]#
Flattens and filters a nested dictionary by recursively traversing all levels and compressing the keys.
Filtering is performed with respect to valid logging data types.
- Parameters:
input_dict – The nested dictionary to be flattened and filtered.
parent_key – The parent key used as a prefix before the input_dict keys.
delimiter – The delimiter used to separate the keys.
exclude_arrays – Whether to exclude numpy arrays from the output.
- Returns:
A flattened dictionary where the keys are compressed and values are filtered.
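The flattening described above can be sketched as follows. This is a hypothetical re-implementation of the documented behaviour (not the library's source): nested keys are joined with the delimiter, and numpy arrays are dropped when exclude_arrays is true.

```python
# Illustrative sketch of flattening a nested stats dict for logging.
import numbers

import numpy as np


def flatten_dict(d, parent_key="", delimiter="/", exclude_arrays=True):
    """Flatten nested dicts, joining keys with `delimiter` and filtering values."""
    out = {}
    for key, value in d.items():
        new_key = f"{parent_key}{delimiter}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse, carrying the compressed key as the new prefix.
            out.update(flatten_dict(value, new_key, delimiter, exclude_arrays))
        elif isinstance(value, np.ndarray):
            if not exclude_arrays:
                out[new_key] = value
        elif isinstance(value, numbers.Number):
            out[new_key] = value
        # Other types are filtered out as invalid logging data.
    return out


stats = {"loss": {"actor": 0.5, "critic": 1.2}, "returns": np.zeros(3)}
flatten_dict(stats)  # {"loss/actor": 0.5, "loss/critic": 1.2}
```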
- write(step_type: str, step: int, data: dict[str, Any]) None [source]#
Specify how the writer is used to log data.
- Parameters:
step_type (str) – namespace which the data dict belongs to.
step – the step value, used as the ordinate (x-axis value) for the data dict.
data – the data to write, with format {key: value}.
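Conceptually, write dispatches each key/value pair to the underlying SummaryWriter. The sketch below (an assumption about the internals; DummyWriter stands in for SummaryWriter so the example is self-contained) shows that shape, including the write_flush behaviour from the constructor:

```python
# Self-contained sketch of write()'s dispatch pattern. DummyWriter is a
# stand-in for torch.utils.tensorboard.SummaryWriter.
class DummyWriter:
    def __init__(self):
        self.scalars = []

    def add_scalar(self, tag, value, global_step):
        self.scalars.append((tag, value, global_step))

    def flush(self):
        pass


def write(writer, step_type, step, data, write_flush=True):
    """Log each {key: value} pair at the given step, then optionally flush."""
    for key, value in data.items():
        writer.add_scalar(key, value, global_step=step)
    if write_flush:  # mirrors the write_flush constructor flag
        writer.flush()


w = DummyWriter()
write(w, "train/env_step", 1000, {"train/reward": 7.5, "train/length": 201})
```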
- save_data(epoch: int, env_step: int, gradient_step: int, save_checkpoint_fn: Callable[[int, int, int], str] | None = None) None [source]#
Use writer to log metadata when calling save_checkpoint_fn in trainer.
- Parameters:
epoch – the epoch in trainer.
env_step – the env_step in trainer.
gradient_step – the gradient_step in trainer.
save_checkpoint_fn (function) – a hook defined by user, see trainer documentation for detail.
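A user-defined save_checkpoint_fn matching the documented Callable[[int, int, int], str] signature might look like the sketch below. The path layout and the commented torch.save call are illustrative assumptions, not prescribed by the library:

```python
# Hedged sketch of a user-defined checkpoint hook. It receives the
# trainer's epoch, env_step and gradient_step, persists a checkpoint,
# and returns the checkpoint path (the path scheme here is illustrative).
import os


def save_checkpoint_fn(epoch: int, env_step: int, gradient_step: int) -> str:
    path = os.path.join("log", f"checkpoint_{epoch}.pth")
    # In real code you would persist state here, e.g.:
    # torch.save({"model": model.state_dict(), "optim": optim.state_dict()}, path)
    return path


# The trainer would then pass this hook through, e.g.:
# logger.save_data(epoch, env_step, gradient_step, save_checkpoint_fn)
```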
- restore_data() tuple[int, int, int] [source]#
Restore internal data if present and return the metadata from the existing log so that training can be resumed.
If nothing is found, or an error occurs during recovery, the default parameters are returned.
- Returns:
epoch, env_step, gradient_step.
- static restore_logged_data(log_path: str) dict[str, ndarray | dict[str, dict[str, ndarray | dict[str, TRestoredData]]]] [source]#
Restores the logged data from the tensorboard log directory.
The result is a nested dictionary where the keys are the tensorboard keys and the values are the corresponding numpy arrays. The keys in each level form a nested structure, where the hierarchy is represented by the slashes in the tensorboard key-strings.
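The mapping from slash-delimited tensorboard keys to the nested result can be sketched as follows (an illustrative re-implementation of the described structure, not the library's code):

```python
# Sketch: turn flat slash-delimited tensorboard keys into the nested
# dictionary structure described above.
def nest_keys(flat):
    nested = {}
    for key, value in flat.items():
        parts = key.split("/")
        node = nested
        for part in parts[:-1]:
            # Descend, creating intermediate dicts for each path segment.
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return nested


flat = {"train/reward": [1.0, 2.0], "train/length": [10, 20]}
nest_keys(flat)  # {"train": {"reward": [1.0, 2.0], "length": [10, 20]}}
```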