PS C:\Users\shadow\Desktop\tb> tb-reducer -i '1-1 full*' -o '1-1' --lax-steps --lax-tag --handle-dup-steps mean
Traceback (most recent call last):
  File "c:\users\shadow\appdata\local\programs\python\python38\lib\runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "c:\users\shadow\appdata\local\programs\python\python38\lib\runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "C:\Users\shadow\AppData\Local\Programs\Python\Python38\Scripts\tb-reducer.exe\__main__.py", line 7, in <module>
  File "c:\users\shadow\appdata\local\programs\python\python38\lib\site-packages\tensorboard_reducer\main.py", line 109, in main
    events_dict = load_tb_events(
  File "c:\users\shadow\appdata\local\programs\python\python38\lib\site-packages\tensorboard_reducer\io.py", line 82, in load_tb_events
    assert df.index.is_unique, (
AssertionError: Tag 'new episodes per training epoch' from run directory '1-1 full' contains duplicate steps. Please make sure your data wasn't corrupted. If this is expected/you want to proceed anyway, specify how to handle duplicate values recorded for the same tag and step in a single run by passing --handle-dup-steps to the CLI or handle_dup_steps='keep-first'|'keep-last'|'mean' to the Python API. This will keep the first/last occurrence of duplicate steps or take their mean.
The same error is raised with --handle-dup-steps keep-first.
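For reference, the three --handle-dup-steps modes described in the error message can be sketched with a toy pandas frame. This only illustrates the documented behavior on made-up step/value data, not tb-reducer's actual implementation; the assertion that fails above is the df.index.is_unique check shown here:

```python
import pandas as pd

# Toy scalar log with step 2 recorded twice, mirroring the condition
# that trips tb-reducer's duplicate-step assertion.
df = pd.DataFrame({"value": [1.0, 2.0, 4.0]},
                  index=pd.Index([1, 2, 2], name="step"))
assert not df.index.is_unique  # this is what raises the AssertionError

# What each mode is documented to produce:
keep_first = df[~df.index.duplicated(keep="first")]  # keep first occurrence
keep_last = df[~df.index.duplicated(keep="last")]    # keep last occurrence
mean = df.groupby(level=0).mean()                    # average duplicates

print(keep_first["value"].tolist())  # [1.0, 2.0]
print(keep_last["value"].tolist())   # [1.0, 4.0]
print(mean["value"].tolist())        # [1.0, 3.0]
```

Whichever mode is chosen, the resulting index is unique, so the assertion should no longer fire; the bug here is that the flag appears to have no effect.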
Attachment: f.zip