hmomin / FinEnvs

Fast Parallel Simulation of Financial Time Series Environments for Reinforcement Learning
GNU General Public License v3.0

SAC Issues #2

Closed hmomin closed 2 years ago

hmomin commented 2 years ago

@mugiwarakaizoku

I'm having trouble getting SAC to learn Cartpole effectively. Below is sample output from one of the better trials; in most trials, it can't even break above a total reward of 10.

Also, there is a memory leak somewhere that triggers after about 2.2 million samples for me: based on the error message, it looks like it results from not detaching the output of mean = self.forward(states) in the actor file, but I'll let you see to it.
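For what it's worth, this kind of growth is usually graph retention rather than a true leak: if the actor's forward output gets stored (e.g. in a replay buffer) without being detached, every stored tensor keeps its whole autograd graph alive. A minimal sketch of the mechanism (hypothetical names, not the repo's actual code):

```python
import torch

# Hypothetical illustration of the suspected leak, not FinEnvs code:
# a network output carries a grad_fn, so storing it retains the entire
# autograd graph; memory then grows every step until CUDA runs out.
net = torch.nn.Linear(4, 1)
states = torch.randn(8, 4)

leaky = net(states)      # has grad_fn -> retains the graph if stored
with torch.no_grad():
    safe = net(states)   # inference only: no graph is built at all
```

Wrapping the agent's action selection in torch.no_grad() (or calling .detach() before anything is stored) should keep GPU memory flat.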

Importing module 'gym_37' (/home/momin/Documents/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_37.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/momin/Documents/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.2+cu113
Device count 1
/home/momin/Documents/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/momin/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Emitting ninja build file /home/momin/.cache/torch_extensions/py37_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/gym/spaces/box.py:112: UserWarning: WARN: Box bound precision lowered by casting to float32
  logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
num samples: 48640 - evaluation return: 42.664555 - mean training return: 19.240843 - std dev training return: 21.298613
num samples: 75264 - evaluation return: 17.117111 - mean training return: 37.337551 - std dev training return: 24.090464
num samples: 96256 - evaluation return: 9.022424 - mean training return: 34.875938 - std dev training return: 26.308626
num samples: 116224 - evaluation return: 7.290071 - mean training return: 33.133209 - std dev training return: 25.416498
num samples: 136192 - evaluation return: 8.945022 - mean training return: 32.170544 - std dev training return: 25.197166
num samples: 157184 - evaluation return: 11.210396 - mean training return: 32.678501 - std dev training return: 22.590132
num samples: 178176 - evaluation return: 8.651594 - mean training return: 33.387737 - std dev training return: 24.583792
num samples: 203264 - evaluation return: 19.072805 - mean training return: 33.129204 - std dev training return: 22.507452
num samples: 224256 - evaluation return: 10.693352 - mean training return: 33.824829 - std dev training return: 22.631231
num samples: 242688 - evaluation return: 7.536970 - mean training return: 35.717468 - std dev training return: 26.653385
num samples: 263168 - evaluation return: 9.159291 - mean training return: 39.677971 - std dev training return: 26.461964
num samples: 284672 - evaluation return: 13.753101 - mean training return: 38.719288 - std dev training return: 25.779493
num samples: 306688 - evaluation return: 11.256489 - mean training return: 41.800442 - std dev training return: 27.398203
num samples: 326656 - evaluation return: 7.761618 - mean training return: 41.407909 - std dev training return: 30.619041
num samples: 349184 - evaluation return: 12.650271 - mean training return: 39.605946 - std dev training return: 26.463860
num samples: 368640 - evaluation return: 7.406431 - mean training return: 43.363243 - std dev training return: 31.207508
num samples: 391168 - evaluation return: 13.261372 - mean training return: 40.798859 - std dev training return: 28.053375
num samples: 411136 - evaluation return: 8.921464 - mean training return: 47.900318 - std dev training return: 34.384724
num samples: 431616 - evaluation return: 10.965508 - mean training return: 41.790977 - std dev training return: 30.708284
num samples: 459264 - evaluation return: 24.945358 - mean training return: 42.953075 - std dev training return: 33.013519
num samples: 480256 - evaluation return: 10.809840 - mean training return: 41.681019 - std dev training return: 29.536516
num samples: 500736 - evaluation return: 10.699786 - mean training return: 42.309608 - std dev training return: 30.666838
num samples: 519680 - evaluation return: 7.754902 - mean training return: 38.211960 - std dev training return: 29.142509
num samples: 538112 - evaluation return: 7.239990 - mean training return: 40.609222 - std dev training return: 30.046165
num samples: 557568 - evaluation return: 7.792455 - mean training return: 41.511486 - std dev training return: 29.129290
num samples: 579584 - evaluation return: 15.246098 - mean training return: 43.569519 - std dev training return: 30.588730
num samples: 598528 - evaluation return: 6.837287 - mean training return: 44.639370 - std dev training return: 32.618675
num samples: 616960 - evaluation return: 8.379328 - mean training return: 44.910011 - std dev training return: 30.007103
num samples: 635392 - evaluation return: 7.636024 - mean training return: 41.894653 - std dev training return: 29.709085
num samples: 654848 - evaluation return: 10.105590 - mean training return: 42.790398 - std dev training return: 29.942787
num samples: 674304 - evaluation return: 9.454279 - mean training return: 42.145344 - std dev training return: 30.102503
num samples: 696832 - evaluation return: 15.113770 - mean training return: 42.603848 - std dev training return: 26.551289
num samples: 743936 - evaluation return: 69.809341 - mean training return: 44.716377 - std dev training return: 31.713352
num samples: 765952 - evaluation return: 14.800228 - mean training return: 48.687096 - std dev training return: 35.265812
num samples: 784384 - evaluation return: 8.898764 - mean training return: 44.878021 - std dev training return: 29.701893
num samples: 810496 - evaluation return: 22.929743 - mean training return: 42.003948 - std dev training return: 29.101030
num samples: 830464 - evaluation return: 8.730614 - mean training return: 46.895416 - std dev training return: 30.673267
num samples: 850944 - evaluation return: 10.750460 - mean training return: 44.366295 - std dev training return: 32.735119
num samples: 869376 - evaluation return: 7.646038 - mean training return: 42.031437 - std dev training return: 30.974838
num samples: 888320 - evaluation return: 8.660542 - mean training return: 45.897411 - std dev training return: 35.273087
num samples: 910336 - evaluation return: 14.657757 - mean training return: 42.573399 - std dev training return: 29.213062
num samples: 951808 - evaluation return: 59.844833 - mean training return: 44.369228 - std dev training return: 32.552788
num samples: 972288 - evaluation return: 10.970460 - mean training return: 42.581337 - std dev training return: 26.832909
num samples: 990208 - evaluation return: 8.688063 - mean training return: 42.989204 - std dev training return: 27.803591
num samples: 1009664 - evaluation return: 10.115323 - mean training return: 44.869339 - std dev training return: 32.852955
num samples: 1028608 - evaluation return: 7.315423 - mean training return: 41.035736 - std dev training return: 32.797501
num samples: 1051648 - evaluation return: 17.410482 - mean training return: 43.608242 - std dev training return: 32.394970
num samples: 1070080 - evaluation return: 8.257707 - mean training return: 44.351231 - std dev training return: 29.345806
num samples: 1089024 - evaluation return: 7.072944 - mean training return: 44.150719 - std dev training return: 31.034515
num samples: 1107968 - evaluation return: 7.315763 - mean training return: 45.740803 - std dev training return: 29.843706
num samples: 1126912 - evaluation return: 8.030341 - mean training return: 48.802032 - std dev training return: 32.735664
num samples: 1147904 - evaluation return: 12.481560 - mean training return: 46.902039 - std dev training return: 30.762377
num samples: 1165824 - evaluation return: 7.350004 - mean training return: 49.774536 - std dev training return: 34.108013
num samples: 1184768 - evaluation return: 8.855827 - mean training return: 48.475475 - std dev training return: 33.205433
num samples: 1203200 - evaluation return: 6.800958 - mean training return: 43.822147 - std dev training return: 27.918304
num samples: 1249280 - evaluation return: 60.188492 - mean training return: 48.652798 - std dev training return: 32.888950
num samples: 1267200 - evaluation return: 7.280651 - mean training return: 43.635883 - std dev training return: 29.472729
num samples: 1314816 - evaluation return: 68.751907 - mean training return: 45.681065 - std dev training return: 32.199825
num samples: 1334784 - evaluation return: 10.479751 - mean training return: 46.177887 - std dev training return: 33.436707
num samples: 1363456 - evaluation return: 27.123913 - mean training return: 45.143280 - std dev training return: 32.398781
num samples: 1382912 - evaluation return: 10.328647 - mean training return: 43.507858 - std dev training return: 35.936104
num samples: 1401856 - evaluation return: 7.638084 - mean training return: 44.668758 - std dev training return: 31.289669
num samples: 1432576 - evaluation return: 32.943344 - mean training return: 44.900688 - std dev training return: 31.360880
num samples: 1452544 - evaluation return: 9.221864 - mean training return: 41.564133 - std dev training return: 27.927759
num samples: 1473536 - evaluation return: 11.704432 - mean training return: 48.011837 - std dev training return: 34.653778
num samples: 1496064 - evaluation return: 15.954937 - mean training return: 50.346596 - std dev training return: 35.377712
num samples: 1514496 - evaluation return: 8.035228 - mean training return: 47.771240 - std dev training return: 33.395077
num samples: 1533952 - evaluation return: 8.281386 - mean training return: 41.216488 - std dev training return: 30.946314
num samples: 1553408 - evaluation return: 10.508433 - mean training return: 44.966591 - std dev training return: 29.735842
num samples: 1575936 - evaluation return: 16.217566 - mean training return: 44.983177 - std dev training return: 33.251244
num samples: 1594368 - evaluation return: 8.081646 - mean training return: 44.372837 - std dev training return: 34.626404
num samples: 1615872 - evaluation return: 12.964675 - mean training return: 45.627056 - std dev training return: 30.419598
num samples: 1636864 - evaluation return: 11.474453 - mean training return: 44.596386 - std dev training return: 30.335295
num samples: 1656320 - evaluation return: 9.743287 - mean training return: 48.475723 - std dev training return: 34.589176
num samples: 1676288 - evaluation return: 9.889929 - mean training return: 45.983326 - std dev training return: 32.190174
num samples: 1706496 - evaluation return: 31.201733 - mean training return: 44.044250 - std dev training return: 29.999483
num samples: 1751552 - evaluation return: 67.660858 - mean training return: 47.377201 - std dev training return: 33.140793
num samples: 1771520 - evaluation return: 11.536253 - mean training return: 48.449409 - std dev training return: 33.765598
num samples: 1788928 - evaluation return: 7.400703 - mean training return: 46.131039 - std dev training return: 34.952114
num samples: 1810944 - evaluation return: 14.474745 - mean training return: 42.301899 - std dev training return: 35.764179
num samples: 1831936 - evaluation return: 10.688743 - mean training return: 46.439331 - std dev training return: 31.543478
num samples: 1860096 - evaluation return: 26.456173 - mean training return: 45.223267 - std dev training return: 31.939152
num samples: 1881600 - evaluation return: 10.944726 - mean training return: 44.479214 - std dev training return: 27.474779
num samples: 1905152 - evaluation return: 18.194380 - mean training return: 50.965813 - std dev training return: 35.656303
num samples: 1925120 - evaluation return: 9.182425 - mean training return: 46.331676 - std dev training return: 33.462132
num samples: 1943040 - evaluation return: 7.370675 - mean training return: 49.256039 - std dev training return: 32.469273
num samples: 1963520 - evaluation return: 10.486354 - mean training return: 46.099960 - std dev training return: 32.586018
num samples: 1980416 - evaluation return: 6.084354 - mean training return: 45.396633 - std dev training return: 33.130478
num samples: 2000384 - evaluation return: 11.129582 - mean training return: 45.089146 - std dev training return: 33.360134
num samples: 2021376 - evaluation return: 11.500680 - mean training return: 46.762657 - std dev training return: 32.203106
num samples: 2042880 - evaluation return: 12.560404 - mean training return: 49.993290 - std dev training return: 33.039204
num samples: 2061824 - evaluation return: 9.376321 - mean training return: 45.685165 - std dev training return: 33.842228
num samples: 2081792 - evaluation return: 11.399903 - mean training return: 44.692337 - std dev training return: 33.136116
num samples: 2101760 - evaluation return: 10.711651 - mean training return: 44.341721 - std dev training return: 31.529741
num samples: 2128896 - evaluation return: 24.776752 - mean training return: 47.567459 - std dev training return: 30.484776
num samples: 2200064 - evaluation return: 87.675751 - mean training return: 46.126362 - std dev training return: 31.622690
Traceback (most recent call last):
  File "examples/SAC_MLP_Isaac_Gym.py", line 30, in <module>
    train_SAC_MLP_on_environiment("Cartpole")
  File "examples/SAC_MLP_Isaac_Gym.py", line 20, in train_SAC_MLP_on_environiment
    actions = agent.step(states)
  File "/home/momin/Documents/GitHub/FinEnvs/finenvs/agents/SAC/SAC_agent.py", line 117, in step
    actions = self.actor.get_actions_and_log_probs(states)[0]
  File "/home/momin/Documents/GitHub/FinEnvs/finenvs/agents/SAC/actor.py", line 58, in get_actions_and_log_probs
    distribution = self.get_distribution(states)
  File "/home/momin/Documents/GitHub/FinEnvs/finenvs/agents/SAC/actor.py", line 51, in get_distribution
    mean = self.forward(states)
  File "/home/momin/Documents/GitHub/FinEnvs/finenvs/agents/networks/multilayer_perceptron.py", line 26, in forward
    return self.network(inputs)
  File "/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/modules/activation.py", line 499, in forward
    return F.elu(input, self.alpha, self.inplace)
  File "/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/torch/nn/functional.py", line 1391, in elu
    result = torch._C._nn.elu(input, alpha)
RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 10.75 GiB total capacity; 8.52 GiB already allocated; 31.38 MiB free; 8.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

real    0m36.206s
user    0m37.513s
sys 0m3.575s
hmomin commented 2 years ago

I can confirm after the latest commit (8279ba9) that the agent trains much better now. It still seems slightly slower than TD3 (unexpected), but it's definitely more stable (expected). Also, GPU memory stays roughly constant while training (no memory leaks).

That it can't break above an evaluation return of ~485 is one of the strangest things I've ever seen 😂. I've never seen an RL agent get so close to the optimal return yet remain unable to reach it... It may be due to a variety of things, but I can offer some pointers:

  1. It might be useful to see what the agents are actually doing by setting headless=False in the example file. This might provide some clues about how the agent is qualitatively behaving: is it acting with controlled randomness at a high number of samples, or is it very stable at first, followed by "freaking out" near the end of the trial?
  2. Initially, I suspected that noise was being added to the evaluation actions, which would cap the best returns they could achieve in Cartpole, but after looking through the code, that doesn't seem to be the case. Additionally, the evaluation returns are consistently better than the training returns at a high number of samples, which also makes this unlikely. It still might be worth double-checking, though.
  3. There may be something weird going on with alpha / the entropy term that isn't present in TD3. I'll have to dig into this further to understand it better, though.
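On the alpha point, here's a rough sketch of the automatic entropy tuning I'd expect a SAC implementation to use (the assumed textbook form from Haarnoja et al., not necessarily what this repo does): alpha is optimized so the policy's entropy tracks a target of -action_dim, and a wrong target or a sign error here can cap how deterministic the policy is allowed to become.

```python
import torch

# Assumed textbook form of SAC's automatic entropy tuning, not necessarily
# the repo's implementation. Cartpole has one actuated DOF, so the usual
# heuristic target entropy is -1.
target_entropy = -1.0
log_alpha = torch.zeros(1, requires_grad=True)
optimizer = torch.optim.Adam([log_alpha], lr=3e-4)

def update_alpha(log_probs: torch.Tensor) -> float:
    # log_probs: log pi(a|s) for a batch of freshly sampled actions
    loss = -(log_alpha * (log_probs + target_entropy).detach()).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return log_alpha.exp().item()

# If the policy is more deterministic than the target (high log-probs),
# alpha rises, pushing the actor back toward exploration; if it's more
# random than the target, alpha falls.
```

If alpha never decays (or the target entropy is too high for a 1-D action space), the actor is forced to stay stochastic, which would explain plateauing just below the optimal return.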

Sample trial below (20M samples):

num samples: 8192 - evaluation return: 3.488162 - mean training return: 0.643829 - std dev training return: 1.328996
num samples: 15360 - evaluation return: -0.327156 - mean training return: 2.215623 - std dev training return: 2.826911
num samples: 22016 - evaluation return: 0.669242 - mean training return: 0.468180 - std dev training return: 1.335796
num samples: 28672 - evaluation return: -0.012329 - mean training return: 0.454241 - std dev training return: 1.314234
num samples: 35328 - evaluation return: 0.338480 - mean training return: 0.347813 - std dev training return: 1.319118
num samples: 41984 - evaluation return: -1.226050 - mean training return: 0.264227 - std dev training return: 1.248378
num samples: 48640 - evaluation return: 0.334111 - mean training return: 0.329446 - std dev training return: 1.285681
num samples: 55296 - evaluation return: -0.963843 - mean training return: 0.264418 - std dev training return: 1.289434
num samples: 62464 - evaluation return: -0.838776 - mean training return: 0.276530 - std dev training return: 1.233429
num samples: 69120 - evaluation return: -1.681655 - mean training return: 0.217392 - std dev training return: 1.203373
num samples: 75776 - evaluation return: -0.931664 - mean training return: 0.235934 - std dev training return: 1.416092
num samples: 96256 - evaluation return: 7.771082 - mean training return: 0.649114 - std dev training return: 3.296532
num samples: 117248 - evaluation return: 7.551637 - mean training return: 16.831919 - std dev training return: 9.504556
num samples: 139264 - evaluation return: 10.972750 - mean training return: 24.575808 - std dev training return: 15.387838
num samples: 160768 - evaluation return: 13.494565 - mean training return: 25.510498 - std dev training return: 18.210335
num samples: 180224 - evaluation return: 11.014523 - mean training return: 23.648209 - std dev training return: 16.348949
num samples: 199680 - evaluation return: 10.217376 - mean training return: 21.436346 - std dev training return: 15.157605
num samples: 220672 - evaluation return: 11.364716 - mean training return: 23.392260 - std dev training return: 17.096407
num samples: 241664 - evaluation return: 10.445755 - mean training return: 23.036200 - std dev training return: 15.281807
num samples: 260608 - evaluation return: 7.636363 - mean training return: 24.429182 - std dev training return: 17.414497
num samples: 280064 - evaluation return: 9.096969 - mean training return: 26.331076 - std dev training return: 19.043030
num samples: 300032 - evaluation return: 11.605409 - mean training return: 24.658556 - std dev training return: 17.882175
num samples: 320000 - evaluation return: 9.166702 - mean training return: 25.644745 - std dev training return: 17.670046
num samples: 343552 - evaluation return: 14.409164 - mean training return: 26.844248 - std dev training return: 18.247229
num samples: 365056 - evaluation return: 9.136948 - mean training return: 28.209591 - std dev training return: 18.607645
num samples: 386560 - evaluation return: 10.837508 - mean training return: 30.981911 - std dev training return: 21.855999
num samples: 410624 - evaluation return: 13.020391 - mean training return: 31.943718 - std dev training return: 23.282526
num samples: 451072 - evaluation return: 39.672935 - mean training return: 35.870770 - std dev training return: 21.563026
num samples: 482816 - evaluation return: 17.039402 - mean training return: 46.493633 - std dev training return: 28.515078
num samples: 520704 - evaluation return: 42.144985 - mean training return: 62.038685 - std dev training return: 33.706165
num samples: 571392 - evaluation return: 77.432724 - mean training return: 87.507156 - std dev training return: 34.280529
num samples: 618496 - evaluation return: 54.201786 - mean training return: 90.531258 - std dev training return: 41.380157
num samples: 698880 - evaluation return: 143.842499 - mean training return: 86.882591 - std dev training return: 41.644753
num samples: 819712 - evaluation return: 220.994217 - mean training return: 102.187485 - std dev training return: 41.245975
num samples: 922112 - evaluation return: 160.113892 - mean training return: 111.450905 - std dev training return: 41.695702
num samples: 1015808 - evaluation return: 172.467468 - mean training return: 122.468384 - std dev training return: 46.069984
num samples: 1131008 - evaluation return: 212.931671 - mean training return: 129.441467 - std dev training return: 49.470116
num samples: 1252864 - evaluation return: 224.074539 - mean training return: 133.241928 - std dev training return: 47.812256
num samples: 1364992 - evaluation return: 207.197220 - mean training return: 136.031708 - std dev training return: 50.740196
num samples: 1456640 - evaluation return: 169.665100 - mean training return: 144.258820 - std dev training return: 55.848457
num samples: 1615360 - evaluation return: 297.634644 - mean training return: 144.873230 - std dev training return: 55.832222
num samples: 1751552 - evaluation return: 254.361603 - mean training return: 151.457870 - std dev training return: 58.154392
num samples: 1888768 - evaluation return: 257.200592 - mean training return: 142.661362 - std dev training return: 62.508335
num samples: 1995264 - evaluation return: 197.217499 - mean training return: 149.547668 - std dev training return: 60.341419
num samples: 2141184 - evaluation return: 272.687225 - mean training return: 152.466644 - std dev training return: 68.575684
num samples: 2238976 - evaluation return: 179.612778 - mean training return: 150.204773 - std dev training return: 63.577446
num samples: 2376192 - evaluation return: 256.767670 - mean training return: 147.317993 - std dev training return: 61.568192
num samples: 2521088 - evaluation return: 270.617371 - mean training return: 154.793381 - std dev training return: 64.879234
num samples: 2612736 - evaluation return: 168.170685 - mean training return: 159.500473 - std dev training return: 69.554070
num samples: 2721280 - evaluation return: 198.628647 - mean training return: 145.537323 - std dev training return: 64.571823
num samples: 2845696 - evaluation return: 229.646759 - mean training return: 146.200073 - std dev training return: 59.871971
num samples: 2965504 - evaluation return: 222.987366 - mean training return: 156.366089 - std dev training return: 66.380096
num samples: 3053568 - evaluation return: 160.999130 - mean training return: 159.297653 - std dev training return: 71.229660
num samples: 3215872 - evaluation return: 304.151978 - mean training return: 157.044312 - std dev training return: 66.798119
num samples: 3335168 - evaluation return: 220.925705 - mean training return: 168.035522 - std dev training return: 74.676682
num samples: 3456000 - evaluation return: 223.169220 - mean training return: 157.664169 - std dev training return: 70.436905
num samples: 3559424 - evaluation return: 191.635559 - mean training return: 155.961914 - std dev training return: 68.455460
num samples: 3692032 - evaluation return: 246.965332 - mean training return: 161.680435 - std dev training return: 66.274536
num samples: 3805184 - evaluation return: 210.468262 - mean training return: 160.010269 - std dev training return: 71.523735
num samples: 3923456 - evaluation return: 218.636658 - mean training return: 152.428497 - std dev training return: 66.407768
num samples: 4099072 - evaluation return: 327.679779 - mean training return: 150.238525 - std dev training return: 65.795181
num samples: 4201472 - evaluation return: 188.963470 - mean training return: 167.841919 - std dev training return: 77.009232
num samples: 4380672 - evaluation return: 335.464508 - mean training return: 178.044235 - std dev training return: 78.071785
num samples: 4534784 - evaluation return: 287.554932 - mean training return: 201.458755 - std dev training return: 94.620529
num samples: 4691968 - evaluation return: 294.301331 - mean training return: 189.829102 - std dev training return: 94.107971
num samples: 4839424 - evaluation return: 276.987854 - mean training return: 183.183289 - std dev training return: 90.967201
num samples: 4948480 - evaluation return: 203.896500 - mean training return: 208.803207 - std dev training return: 96.538383
num samples: 5124608 - evaluation return: 331.575043 - mean training return: 210.818268 - std dev training return: 103.290726
num samples: 5237760 - evaluation return: 210.392258 - mean training return: 224.152725 - std dev training return: 115.816483
num samples: 5424128 - evaluation return: 350.454620 - mean training return: 237.337601 - std dev training return: 115.947853
num samples: 5535744 - evaluation return: 206.989563 - mean training return: 218.518997 - std dev training return: 119.881752
num samples: 5702144 - evaluation return: 312.179352 - mean training return: 187.776306 - std dev training return: 91.197075
num samples: 5814272 - evaluation return: 208.166214 - mean training return: 199.617691 - std dev training return: 96.476425
num samples: 5954560 - evaluation return: 259.792816 - mean training return: 179.893158 - std dev training return: 91.740097
num samples: 6048256 - evaluation return: 170.624924 - mean training return: 184.748383 - std dev training return: 86.946022
num samples: 6208512 - evaluation return: 300.727509 - mean training return: 205.221451 - std dev training return: 98.233040
num samples: 6365696 - evaluation return: 293.053253 - mean training return: 241.465881 - std dev training return: 119.653580
num samples: 6487040 - evaluation return: 225.250549 - mean training return: 194.907562 - std dev training return: 107.958015
num samples: 6593536 - evaluation return: 193.988159 - mean training return: 199.996887 - std dev training return: 99.050171
num samples: 6742016 - evaluation return: 276.480591 - mean training return: 195.422470 - std dev training return: 104.195023
num samples: 6998016 - evaluation return: 486.021118 - mean training return: 263.100311 - std dev training return: 131.967285
num samples: 7254016 - evaluation return: 485.916626 - mean training return: 253.981689 - std dev training return: 132.261810
num samples: 7510016 - evaluation return: 485.637695 - mean training return: 257.377838 - std dev training return: 131.768417
num samples: 7722496 - evaluation return: 398.998444 - mean training return: 254.963501 - std dev training return: 135.425034
num samples: 7978496 - evaluation return: 485.985352 - mean training return: 239.646576 - std dev training return: 125.887421
num samples: 8234496 - evaluation return: 485.616882 - mean training return: 245.275818 - std dev training return: 127.234177
num samples: 8443392 - evaluation return: 391.363342 - mean training return: 222.081650 - std dev training return: 118.061279
num samples: 8604160 - evaluation return: 299.179749 - mean training return: 269.316315 - std dev training return: 135.701431
num samples: 8731648 - evaluation return: 236.547501 - mean training return: 211.098434 - std dev training return: 113.231621
num samples: 8987648 - evaluation return: 485.981049 - mean training return: 229.094666 - std dev training return: 116.304108
num samples: 9105920 - evaluation return: 217.072159 - mean training return: 245.748947 - std dev training return: 133.239334
num samples: 9361920 - evaluation return: 485.693909 - mean training return: 238.961792 - std dev training return: 123.812737
num samples: 9617920 - evaluation return: 486.058685 - mean training return: 297.450195 - std dev training return: 139.661041
num samples: 9873920 - evaluation return: 485.685059 - mean training return: 285.089569 - std dev training return: 137.288132
num samples: 10129920 - evaluation return: 485.495667 - mean training return: 321.872131 - std dev training return: 140.850494
num samples: 10385920 - evaluation return: 484.907135 - mean training return: 269.709106 - std dev training return: 135.768997
num samples: 10641920 - evaluation return: 484.973480 - mean training return: 280.558899 - std dev training return: 139.547714
num samples: 10897920 - evaluation return: 486.170288 - mean training return: 294.440338 - std dev training return: 137.781342
num samples: 11153920 - evaluation return: 484.449738 - mean training return: 306.219421 - std dev training return: 141.293259
num samples: 11409920 - evaluation return: 484.503052 - mean training return: 311.662384 - std dev training return: 137.566116
num samples: 11665920 - evaluation return: 484.989502 - mean training return: 321.879517 - std dev training return: 137.898117
num samples: 11921920 - evaluation return: 485.141937 - mean training return: 377.592560 - std dev training return: 128.969864
num samples: 12177920 - evaluation return: 485.646759 - mean training return: 350.790558 - std dev training return: 132.366074
num samples: 12433920 - evaluation return: 485.457703 - mean training return: 323.872925 - std dev training return: 138.868729
num samples: 12689920 - evaluation return: 484.718842 - mean training return: 387.379059 - std dev training return: 126.711174
num samples: 12945920 - evaluation return: 483.729279 - mean training return: 371.488586 - std dev training return: 135.266769
num samples: 13201920 - evaluation return: 484.393494 - mean training return: 392.294312 - std dev training return: 119.653877
num samples: 13457920 - evaluation return: 484.599182 - mean training return: 374.750031 - std dev training return: 130.017380
num samples: 13713920 - evaluation return: 483.676392 - mean training return: 355.072479 - std dev training return: 134.248459
num samples: 13969920 - evaluation return: 485.613007 - mean training return: 309.503632 - std dev training return: 142.383728
num samples: 14225920 - evaluation return: 483.690857 - mean training return: 316.050293 - std dev training return: 130.748535
num samples: 14481920 - evaluation return: 481.593994 - mean training return: 327.932892 - std dev training return: 144.335403
num samples: 14737920 - evaluation return: 484.603943 - mean training return: 388.102295 - std dev training return: 121.267220
num samples: 14993920 - evaluation return: 485.441559 - mean training return: 346.512268 - std dev training return: 131.820969
num samples: 15249920 - evaluation return: 483.532410 - mean training return: 380.665680 - std dev training return: 127.549652
num samples: 15505920 - evaluation return: 485.707153 - mean training return: 359.024658 - std dev training return: 131.556458
num samples: 15761920 - evaluation return: 485.201904 - mean training return: 385.690826 - std dev training return: 130.568054
num samples: 16017920 - evaluation return: 485.456482 - mean training return: 369.026550 - std dev training return: 132.289673
num samples: 16273920 - evaluation return: 485.313141 - mean training return: 386.011597 - std dev training return: 119.361526
num samples: 16529920 - evaluation return: 485.614227 - mean training return: 367.021790 - std dev training return: 132.150848
num samples: 16785920 - evaluation return: 482.936615 - mean training return: 367.756409 - std dev training return: 134.502914
num samples: 17041920 - evaluation return: 484.552917 - mean training return: 359.343506 - std dev training return: 135.115891
num samples: 17297920 - evaluation return: 482.189667 - mean training return: 378.499054 - std dev training return: 128.254501
num samples: 17553920 - evaluation return: 483.053253 - mean training return: 305.417023 - std dev training return: 142.657074
num samples: 17809920 - evaluation return: 484.054047 - mean training return: 360.078430 - std dev training return: 131.515213
num samples: 18065920 - evaluation return: 483.943115 - mean training return: 367.913879 - std dev training return: 132.052582
num samples: 18321920 - evaluation return: 482.476654 - mean training return: 382.737671 - std dev training return: 123.678658
num samples: 18577920 - evaluation return: 483.497253 - mean training return: 392.316711 - std dev training return: 121.742180
num samples: 18833920 - evaluation return: 482.082977 - mean training return: 354.310455 - std dev training return: 137.298965
num samples: 19089920 - evaluation return: 483.364532 - mean training return: 404.826965 - std dev training return: 116.172096
num samples: 19345920 - evaluation return: 483.278900 - mean training return: 375.872711 - std dev training return: 130.132767
num samples: 19601920 - evaluation return: 484.912048 - mean training return: 390.773285 - std dev training return: 123.242622
num samples: 19857920 - evaluation return: 480.117340 - mean training return: 364.871185 - std dev training return: 130.754211
num samples: 20113920 - evaluation return: 483.538849 - mean training return: 388.082611 - std dev training return: 123.215248

real    6m44.083s
user    7m14.784s
sys 0m20.788s
hmomin commented 2 years ago

Seems to be training fine on my end (a couple of sample trials below):

I'm not sure whether it can be improved much from here. I'll try training it on tougher environments like Ant and Humanoid.
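For reference, the "mean training return" and "std dev training return" columns in these logs are just aggregate statistics over the episode returns collected in each reporting window. A minimal sketch of how such running statistics can be computed (Welford's online algorithm; this is a hypothetical `ReturnTracker` helper for illustration, not the actual FinEnvs implementation, and it uses the population std dev, which may differ from what the logger uses):

```python
import math


class ReturnTracker:
    """Online mean / std dev of episode returns via Welford's algorithm."""

    def __init__(self) -> None:
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, episode_return: float) -> None:
        # Incorporate one finished episode's return into the running stats.
        self.count += 1
        delta = episode_return - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (episode_return - self.mean)

    def std_dev(self) -> float:
        # Population standard deviation over all returns seen so far.
        return math.sqrt(self.m2 / self.count) if self.count > 0 else 0.0


tracker = ReturnTracker()
for r in [19.2, 37.3, 42.6]:
    tracker.update(r)
print(f"mean training return: {tracker.mean:.6f}"
      f" - std dev training return: {tracker.std_dev():.6f}")
```

The advantage of the online form over storing every return and calling `numpy.std` at report time is constant memory per environment, which matters when thousands of parallel environments each finish many episodes per reporting window.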

num samples: 51200 - evaluation return: 81.125275 - mean training return: 57.371201 - std dev training return: 16.665144
num samples: 62464 - evaluation return: 7.753183 - mean training return: 64.942276 - std dev training return: 37.614052
num samples: 69632 - evaluation return: -0.883633 - mean training return: 11.195177 - std dev training return: 26.197699
num samples: 76288 - evaluation return: -0.683245 - mean training return: -0.306751 - std dev training return: 0.745774
num samples: 91648 - evaluation return: 1.029562 - mean training return: -0.773028 - std dev training return: 1.233858
num samples: 112128 - evaluation return: 10.089727 - mean training return: 9.814303 - std dev training return: 6.622053
num samples: 135168 - evaluation return: 12.574868 - mean training return: 28.165398 - std dev training return: 9.528390
num samples: 155136 - evaluation return: 8.648206 - mean training return: 33.556808 - std dev training return: 17.357323
num samples: 175104 - evaluation return: 6.792558 - mean training return: 34.376022 - std dev training return: 19.984034
num samples: 196608 - evaluation return: 10.568657 - mean training return: 29.922438 - std dev training return: 16.706631
num samples: 216576 - evaluation return: 10.423387 - mean training return: 29.523966 - std dev training return: 16.788630
num samples: 238080 - evaluation return: 12.994385 - mean training return: 28.580505 - std dev training return: 17.220018
num samples: 260096 - evaluation return: 14.049004 - mean training return: 26.727781 - std dev training return: 13.821986
num samples: 283136 - evaluation return: 15.625538 - mean training return: 31.576477 - std dev training return: 16.067234
num samples: 338432 - evaluation return: 82.117401 - mean training return: 50.109371 - std dev training return: 23.721992
num samples: 374272 - evaluation return: 44.100426 - mean training return: 52.063278 - std dev training return: 20.542089
num samples: 487936 - evaluation return: 146.891052 - mean training return: 78.732880 - std dev training return: 40.469406
num samples: 585216 - evaluation return: 168.670822 - mean training return: 220.144867 - std dev training return: 66.318436
num samples: 652288 - evaluation return: 118.117874 - mean training return: 145.766769 - std dev training return: 38.091496
num samples: 707072 - evaluation return: 97.420181 - mean training return: 110.983658 - std dev training return: 22.484354
num samples: 768512 - evaluation return: 110.467491 - mean training return: 114.189964 - std dev training return: 11.008554
num samples: 833024 - evaluation return: 114.326782 - mean training return: 115.040733 - std dev training return: 12.389016
num samples: 899584 - evaluation return: 120.891159 - mean training return: 114.932457 - std dev training return: 13.422668
num samples: 995840 - evaluation return: 177.312164 - mean training return: 131.401566 - std dev training return: 16.503223
num samples: 1069056 - evaluation return: 130.625305 - mean training return: 135.230362 - std dev training return: 19.297277
num samples: 1147904 - evaluation return: 142.730682 - mean training return: 143.217728 - std dev training return: 28.197531
num samples: 1221632 - evaluation return: 133.262238 - mean training return: 162.330292 - std dev training return: 33.097839
num samples: 1298944 - evaluation return: 141.512619 - mean training return: 164.057922 - std dev training return: 45.637508
num samples: 1401344 - evaluation return: 189.087631 - mean training return: 162.134720 - std dev training return: 38.670311
num samples: 1501184 - evaluation return: 183.058121 - mean training return: 159.404861 - std dev training return: 42.172630
num samples: 1584128 - evaluation return: 149.857773 - mean training return: 174.881470 - std dev training return: 40.189331
num samples: 1734144 - evaluation return: 279.241882 - mean training return: 194.218246 - std dev training return: 56.473969
num samples: 1856512 - evaluation return: 227.317886 - mean training return: 203.102112 - std dev training return: 56.070274
num samples: 2061312 - evaluation return: 389.948120 - mean training return: 205.172531 - std dev training return: 57.789978
num samples: 2249728 - evaluation return: 351.088745 - mean training return: 222.393127 - std dev training return: 69.012886
num samples: 2441728 - evaluation return: 360.230896 - mean training return: 246.824707 - std dev training return: 79.436989
num samples: 2550784 - evaluation return: 199.690109 - mean training return: 233.286087 - std dev training return: 89.647781
num samples: 2689024 - evaluation return: 257.347351 - mean training return: 248.166580 - std dev training return: 74.226105
num samples: 2808832 - evaluation return: 219.899185 - mean training return: 295.964294 - std dev training return: 81.720566
num samples: 2950656 - evaluation return: 257.780640 - mean training return: 275.880005 - std dev training return: 82.625908
num samples: 3083264 - evaluation return: 238.950394 - mean training return: 267.841187 - std dev training return: 73.925125
num samples: 3269120 - evaluation return: 347.129669 - mean training return: 251.556732 - std dev training return: 84.380249
num samples: 3347968 - evaluation return: 138.702774 - mean training return: 272.011169 - std dev training return: 104.839882
num samples: 3603968 - evaluation return: 481.016022 - mean training return: 285.844238 - std dev training return: 92.963936
num samples: 3706368 - evaluation return: 173.795547 - mean training return: 233.493439 - std dev training return: 92.470581
num samples: 3881984 - evaluation return: 331.878937 - mean training return: 216.486008 - std dev training return: 79.812622
num samples: 4085248 - evaluation return: 385.428101 - mean training return: 218.459778 - std dev training return: 75.344215
num samples: 4172800 - evaluation return: 132.031937 - mean training return: 249.470215 - std dev training return: 95.391670
num samples: 4305920 - evaluation return: 250.740295 - mean training return: 251.058319 - std dev training return: 83.778870
num samples: 4534784 - evaluation return: 433.107178 - mean training return: 289.934662 - std dev training return: 90.615639
num samples: 4790784 - evaluation return: 490.277130 - mean training return: 383.269226 - std dev training return: 86.111870

real    1m12.373s
user    1m15.368s
sys 0m5.875s
num samples: 8704 - evaluation return: 3.577263 - mean training return: 2.127409 - std dev training return: 1.116628
num samples: 14848 - evaluation return: -0.230887 - mean training return: 2.456621 - std dev training return: 3.049357
num samples: 21504 - evaluation return: -0.924851 - mean training return: -0.106297 - std dev training return: 1.022633
num samples: 28160 - evaluation return: 0.494538 - mean training return: -0.210534 - std dev training return: 0.774051
num samples: 34816 - evaluation return: -1.099386 - mean training return: -0.239807 - std dev training return: 0.762321
num samples: 41472 - evaluation return: -0.199272 - mean training return: -0.259052 - std dev training return: 0.730690
num samples: 48128 - evaluation return: -1.340814 - mean training return: -0.262017 - std dev training return: 0.776615
num samples: 54784 - evaluation return: -0.815334 - mean training return: -0.335523 - std dev training return: 0.728658
num samples: 61440 - evaluation return: -1.645157 - mean training return: -0.281953 - std dev training return: 0.716910
num samples: 68096 - evaluation return: 0.492259 - mean training return: -0.294169 - std dev training return: 0.704579
num samples: 74752 - evaluation return: -0.381222 - mean training return: -0.330460 - std dev training return: 0.753243
num samples: 81408 - evaluation return: -0.597015 - mean training return: -0.316173 - std dev training return: 0.737555
num samples: 88064 - evaluation return: -0.389986 - mean training return: -0.279406 - std dev training return: 0.718335
num samples: 94720 - evaluation return: 0.519121 - mean training return: -0.288058 - std dev training return: 0.668145
num samples: 101376 - evaluation return: -1.387735 - mean training return: -0.293972 - std dev training return: 0.740699
num samples: 108032 - evaluation return: -1.520794 - mean training return: -0.307813 - std dev training return: 0.729989
num samples: 114688 - evaluation return: -1.435058 - mean training return: -0.336522 - std dev training return: 0.687554
num samples: 121344 - evaluation return: -0.113847 - mean training return: -0.356510 - std dev training return: 0.712520
num samples: 128000 - evaluation return: 0.402554 - mean training return: -0.235319 - std dev training return: 0.774322
num samples: 134656 - evaluation return: -0.986345 - mean training return: -0.272911 - std dev training return: 0.716621
num samples: 141312 - evaluation return: 0.628756 - mean training return: -0.313752 - std dev training return: 0.717850
num samples: 147968 - evaluation return: -1.739557 - mean training return: -0.275469 - std dev training return: 0.775684
num samples: 154624 - evaluation return: -0.834510 - mean training return: -0.262845 - std dev training return: 0.705004
num samples: 161280 - evaluation return: -0.272222 - mean training return: -0.296579 - std dev training return: 0.733768
num samples: 167936 - evaluation return: 0.206971 - mean training return: -0.262108 - std dev training return: 0.780234
num samples: 174080 - evaluation return: -0.420270 - mean training return: -0.286304 - std dev training return: 0.743619
num samples: 180736 - evaluation return: -1.053610 - mean training return: -0.296410 - std dev training return: 0.755525
num samples: 187392 - evaluation return: -1.006924 - mean training return: -0.326323 - std dev training return: 0.725161
num samples: 194048 - evaluation return: -0.053783 - mean training return: -0.309204 - std dev training return: 0.721851
num samples: 200192 - evaluation return: -0.465294 - mean training return: -0.305436 - std dev training return: 0.725301
num samples: 206336 - evaluation return: -0.361792 - mean training return: -0.322708 - std dev training return: 0.732774
num samples: 212992 - evaluation return: -0.108000 - mean training return: -0.289923 - std dev training return: 0.690331
num samples: 219648 - evaluation return: 0.174712 - mean training return: -0.223083 - std dev training return: 0.744502
num samples: 226304 - evaluation return: -1.059077 - mean training return: -0.297530 - std dev training return: 0.695558
num samples: 232960 - evaluation return: -0.553149 - mean training return: -0.269614 - std dev training return: 0.732810
num samples: 239616 - evaluation return: -0.321520 - mean training return: -0.293785 - std dev training return: 0.716011
num samples: 246272 - evaluation return: -1.304256 - mean training return: -0.274212 - std dev training return: 0.751698
num samples: 252928 - evaluation return: -0.055993 - mean training return: -0.258188 - std dev training return: 0.736136
num samples: 260096 - evaluation return: -0.564772 - mean training return: -0.305612 - std dev training return: 0.750613
num samples: 266752 - evaluation return: -0.185625 - mean training return: -0.344069 - std dev training return: 0.698264
num samples: 273408 - evaluation return: -1.000286 - mean training return: -0.296448 - std dev training return: 0.688095
num samples: 280064 - evaluation return: -0.725463 - mean training return: -0.357293 - std dev training return: 0.679751
num samples: 286720 - evaluation return: 0.686718 - mean training return: -0.349880 - std dev training return: 0.718926
num samples: 293376 - evaluation return: -0.217141 - mean training return: -0.325731 - std dev training return: 0.741199
num samples: 300032 - evaluation return: -0.829026 - mean training return: -0.312862 - std dev training return: 0.726725
num samples: 306688 - evaluation return: -0.065261 - mean training return: -0.331612 - std dev training return: 0.686392
num samples: 313344 - evaluation return: -1.082256 - mean training return: -0.324666 - std dev training return: 0.731677
num samples: 320000 - evaluation return: -0.515805 - mean training return: -0.304830 - std dev training return: 0.715038
num samples: 327168 - evaluation return: -0.787327 - mean training return: -0.280498 - std dev training return: 0.751423
num samples: 333824 - evaluation return: -0.959075 - mean training return: -0.324990 - std dev training return: 0.718457
num samples: 339968 - evaluation return: -0.329482 - mean training return: -0.330447 - std dev training return: 0.721510
num samples: 346624 - evaluation return: -1.503552 - mean training return: -0.235162 - std dev training return: 0.754796
num samples: 353280 - evaluation return: 0.224749 - mean training return: -0.285831 - std dev training return: 0.746168
num samples: 359936 - evaluation return: 0.212472 - mean training return: -0.293262 - std dev training return: 0.744599
num samples: 366592 - evaluation return: -0.605769 - mean training return: -0.263021 - std dev training return: 0.730788
num samples: 373248 - evaluation return: 0.021487 - mean training return: -0.304388 - std dev training return: 0.781377
num samples: 379904 - evaluation return: 0.691861 - mean training return: -0.316312 - std dev training return: 0.740605
num samples: 386560 - evaluation return: -1.584773 - mean training return: -0.305333 - std dev training return: 0.733401
num samples: 393216 - evaluation return: -0.599975 - mean training return: -0.208702 - std dev training return: 0.698861
num samples: 400384 - evaluation return: -0.670436 - mean training return: -0.283109 - std dev training return: 0.726906
num samples: 407040 - evaluation return: 0.653630 - mean training return: -0.295850 - std dev training return: 0.719343
num samples: 413696 - evaluation return: 0.630509 - mean training return: -0.323050 - std dev training return: 0.730686
num samples: 421376 - evaluation return: -1.418671 - mean training return: -0.402701 - std dev training return: 0.755526
num samples: 442368 - evaluation return: 9.550839 - mean training return: 1.085398 - std dev training return: 5.900048
num samples: 465408 - evaluation return: 11.759722 - mean training return: 22.980482 - std dev training return: 8.006370
num samples: 487936 - evaluation return: 5.830169 - mean training return: 31.864887 - std dev training return: 13.246799
num samples: 517120 - evaluation return: 5.293938 - mean training return: 38.351994 - std dev training return: 18.240381
num samples: 551936 - evaluation return: 12.276600 - mean training return: 69.595329 - std dev training return: 24.367613
num samples: 582144 - evaluation return: 17.179407 - mean training return: 44.297493 - std dev training return: 37.208679
num samples: 588800 - evaluation return: -1.150188 - mean training return: 13.639528 - std dev training return: 33.040062
num samples: 595456 - evaluation return: -1.284384 - mean training return: 0.261641 - std dev training return: 9.033545
num samples: 602112 - evaluation return: -0.527396 - mean training return: -0.288252 - std dev training return: 0.762581
num samples: 608768 - evaluation return: -0.420142 - mean training return: -0.320719 - std dev training return: 0.711536
num samples: 615424 - evaluation return: -0.205829 - mean training return: -0.291373 - std dev training return: 0.738021
num samples: 622080 - evaluation return: -0.408827 - mean training return: -0.329399 - std dev training return: 0.733880
num samples: 628736 - evaluation return: -1.688763 - mean training return: -0.293566 - std dev training return: 0.703043
num samples: 635392 - evaluation return: 0.184839 - mean training return: -0.310084 - std dev training return: 0.727917
num samples: 642048 - evaluation return: -0.458322 - mean training return: -0.241170 - std dev training return: 0.713154
num samples: 648704 - evaluation return: -1.304393 - mean training return: -0.305543 - std dev training return: 0.682055
num samples: 655360 - evaluation return: -1.598623 - mean training return: -0.338319 - std dev training return: 0.773056
num samples: 662016 - evaluation return: -0.665520 - mean training return: -0.353980 - std dev training return: 0.715147
num samples: 668672 - evaluation return: -0.761950 - mean training return: -0.213237 - std dev training return: 0.794365
num samples: 675328 - evaluation return: -1.636009 - mean training return: -0.306591 - std dev training return: 0.730199
num samples: 681984 - evaluation return: -0.664783 - mean training return: -0.276579 - std dev training return: 0.761460
num samples: 688128 - evaluation return: -0.514676 - mean training return: -0.318079 - std dev training return: 0.726229
num samples: 694272 - evaluation return: -0.359087 - mean training return: -0.273025 - std dev training return: 0.751420
num samples: 700928 - evaluation return: 0.533606 - mean training return: -0.323964 - std dev training return: 0.774187
num samples: 707584 - evaluation return: 0.703636 - mean training return: -0.329618 - std dev training return: 0.780546
num samples: 714752 - evaluation return: -0.945183 - mean training return: -0.360262 - std dev training return: 0.730441
num samples: 721408 - evaluation return: 0.573922 - mean training return: -0.348083 - std dev training return: 0.729405
num samples: 728064 - evaluation return: -0.624416 - mean training return: -0.326547 - std dev training return: 0.714472
num samples: 734720 - evaluation return: -0.816424 - mean training return: -0.270996 - std dev training return: 0.746379
num samples: 740864 - evaluation return: -0.249636 - mean training return: -0.248523 - std dev training return: 0.698622
num samples: 747520 - evaluation return: 0.030489 - mean training return: -0.319504 - std dev training return: 0.758707
num samples: 753664 - evaluation return: -0.495504 - mean training return: -0.399719 - std dev training return: 0.751378
num samples: 760320 - evaluation return: 0.690846 - mean training return: -0.360300 - std dev training return: 0.710572
num samples: 766976 - evaluation return: 0.581781 - mean training return: -0.384584 - std dev training return: 0.734950
num samples: 773120 - evaluation return: -0.409827 - mean training return: -0.240463 - std dev training return: 0.735929
num samples: 780288 - evaluation return: -0.807088 - mean training return: -0.260167 - std dev training return: 0.760193
num samples: 786944 - evaluation return: 0.013580 - mean training return: -0.387564 - std dev training return: 0.684451
num samples: 793600 - evaluation return: -1.398109 - mean training return: -0.246117 - std dev training return: 0.737202
num samples: 800256 - evaluation return: -1.385463 - mean training return: -0.316410 - std dev training return: 0.751801
num samples: 806400 - evaluation return: -0.177583 - mean training return: -0.315617 - std dev training return: 0.736470
num samples: 813056 - evaluation return: -0.292684 - mean training return: -0.327488 - std dev training return: 0.703080
num samples: 820224 - evaluation return: -0.674989 - mean training return: -0.350183 - std dev training return: 0.715352
num samples: 826880 - evaluation return: -0.383924 - mean training return: -0.315123 - std dev training return: 0.745595
num samples: 833024 - evaluation return: -0.430322 - mean training return: -0.312982 - std dev training return: 0.754867
num samples: 839680 - evaluation return: 0.140292 - mean training return: -0.311781 - std dev training return: 0.754139
num samples: 846336 - evaluation return: 0.149468 - mean training return: -0.270274 - std dev training return: 0.759895
num samples: 852992 - evaluation return: 0.683025 - mean training return: -0.304343 - std dev training return: 0.732456
num samples: 859648 - evaluation return: -0.799096 - mean training return: -0.288090 - std dev training return: 0.689003
num samples: 866304 - evaluation return: 0.222651 - mean training return: -0.277866 - std dev training return: 0.762966
num samples: 873472 - evaluation return: -0.763902 - mean training return: -0.333535 - std dev training return: 0.764468
num samples: 879616 - evaluation return: -0.207717 - mean training return: -0.347197 - std dev training return: 0.730457
num samples: 886272 - evaluation return: -0.883278 - mean training return: -0.300680 - std dev training return: 0.743909
num samples: 892416 - evaluation return: -0.371258 - mean training return: -0.239150 - std dev training return: 0.731507
num samples: 899072 - evaluation return: -0.116722 - mean training return: -0.277151 - std dev training return: 0.725873
num samples: 905216 - evaluation return: -0.318228 - mean training return: -0.330360 - std dev training return: 0.732214
num samples: 911872 - evaluation return: -0.541674 - mean training return: -0.304018 - std dev training return: 0.760028
num samples: 918528 - evaluation return: -1.616357 - mean training return: -0.310715 - std dev training return: 0.744601
num samples: 925184 - evaluation return: 0.250226 - mean training return: -0.320312 - std dev training return: 0.768852
num samples: 932352 - evaluation return: -0.763868 - mean training return: -0.331806 - std dev training return: 0.728154
num samples: 939008 - evaluation return: -0.700556 - mean training return: -0.337780 - std dev training return: 0.751162
num samples: 945664 - evaluation return: -0.527823 - mean training return: -0.328711 - std dev training return: 0.698958
num samples: 951808 - evaluation return: -0.292189 - mean training return: -0.314321 - std dev training return: 0.760793
num samples: 958464 - evaluation return: 0.414775 - mean training return: -0.262468 - std dev training return: 0.714870
num samples: 965632 - evaluation return: -0.909565 - mean training return: -0.293268 - std dev training return: 0.754039
num samples: 972288 - evaluation return: -1.305481 - mean training return: -0.323424 - std dev training return: 0.738697
num samples: 978944 - evaluation return: -1.632725 - mean training return: -0.322148 - std dev training return: 0.729558
num samples: 985600 - evaluation return: -0.897159 - mean training return: -0.228336 - std dev training return: 0.721210
num samples: 992256 - evaluation return: -0.673560 - mean training return: -0.339880 - std dev training return: 0.767403
num samples: 998912 - evaluation return: 0.302334 - mean training return: -0.297791 - std dev training return: 0.733414
num samples: 1005568 - evaluation return: -1.606735 - mean training return: -0.291109 - std dev training return: 0.780853
num samples: 1012224 - evaluation return: -1.639484 - mean training return: -0.322800 - std dev training return: 0.735269
num samples: 1018880 - evaluation return: 0.024287 - mean training return: -0.326477 - std dev training return: 0.704170
num samples: 1025536 - evaluation return: -0.691031 - mean training return: -0.300165 - std dev training return: 0.706852
num samples: 1032192 - evaluation return: -0.693805 - mean training return: -0.321265 - std dev training return: 0.727185
num samples: 1038848 - evaluation return: -1.237633 - mean training return: -0.323832 - std dev training return: 0.732518
num samples: 1045504 - evaluation return: -0.673604 - mean training return: -0.330410 - std dev training return: 0.724474
num samples: 1052160 - evaluation return: -1.593603 - mean training return: -0.281787 - std dev training return: 0.781069
num samples: 1058816 - evaluation return: -0.115018 - mean training return: -0.281713 - std dev training return: 0.712764
num samples: 1065472 - evaluation return: -1.711280 - mean training return: -0.280165 - std dev training return: 0.724075
num samples: 1071616 - evaluation return: -0.319607 - mean training return: -0.217876 - std dev training return: 0.721013
num samples: 1078272 - evaluation return: -1.499112 - mean training return: -0.324122 - std dev training return: 0.715923
num samples: 1084928 - evaluation return: 0.616106 - mean training return: -0.335593 - std dev training return: 0.724741
num samples: 1091584 - evaluation return: -0.663339 - mean training return: -0.289018 - std dev training return: 0.706721
num samples: 1098240 - evaluation return: -1.276084 - mean training return: -0.351578 - std dev training return: 0.703720
num samples: 1104896 - evaluation return: -1.050008 - mean training return: -0.346642 - std dev training return: 0.684289
num samples: 1111552 - evaluation return: -1.639281 - mean training return: -0.292170 - std dev training return: 0.730611
num samples: 1118208 - evaluation return: -1.652530 - mean training return: -0.295525 - std dev training return: 0.734505
num samples: 1124352 - evaluation return: -0.487822 - mean training return: -0.294191 - std dev training return: 0.797307
num samples: 1131008 - evaluation return: -0.241092 - mean training return: -0.243099 - std dev training return: 0.781490
num samples: 1137664 - evaluation return: 0.598148 - mean training return: -0.298616 - std dev training return: 0.713244
num samples: 1144320 - evaluation return: -0.102481 - mean training return: -0.311176 - std dev training return: 0.725996
num samples: 1150976 - evaluation return: -1.105510 - mean training return: -0.355443 - std dev training return: 0.707010
num samples: 1157632 - evaluation return: 0.250203 - mean training return: -0.262843 - std dev training return: 0.726776
num samples: 1164288 - evaluation return: -0.860840 - mean training return: -0.284114 - std dev training return: 0.691858
num samples: 1170944 - evaluation return: -0.697817 - mean training return: -0.357776 - std dev training return: 0.684576
num samples: 1177600 - evaluation return: -0.523914 - mean training return: -0.326267 - std dev training return: 0.705797
num samples: 1184256 - evaluation return: -0.064892 - mean training return: -0.340581 - std dev training return: 0.687891
num samples: 1190912 - evaluation return: -1.289906 - mean training return: -0.262282 - std dev training return: 0.716627
num samples: 1197568 - evaluation return: -0.743008 - mean training return: -0.268564 - std dev training return: 0.735915
num samples: 1204224 - evaluation return: 0.559151 - mean training return: -0.277437 - std dev training return: 0.731550
num samples: 1210368 - evaluation return: -0.607211 - mean training return: -0.345539 - std dev training return: 0.732732
num samples: 1217024 - evaluation return: 0.545210 - mean training return: -0.346018 - std dev training return: 0.749102
num samples: 1223168 - evaluation return: -0.216784 - mean training return: -0.347395 - std dev training return: 0.720874
num samples: 1229824 - evaluation return: -1.411286 - mean training return: -0.289208 - std dev training return: 0.720285
num samples: 1236480 - evaluation return: -0.304197 - mean training return: -0.325529 - std dev training return: 0.780940
num samples: 1242624 - evaluation return: -0.157712 - mean training return: -0.364997 - std dev training return: 0.694920
num samples: 1249280 - evaluation return: 0.437140 - mean training return: -0.311915 - std dev training return: 0.687327
num samples: 1255936 - evaluation return: -0.673512 - mean training return: -0.243449 - std dev training return: 0.734862
num samples: 1262592 - evaluation return: 0.455575 - mean training return: -0.264269 - std dev training return: 0.748812
num samples: 1269248 - evaluation return: -1.201609 - mean training return: -0.300613 - std dev training return: 0.734225
num samples: 1275904 - evaluation return: 0.295182 - mean training return: -0.316899 - std dev training return: 0.744909
num samples: 1282560 - evaluation return: -1.741425 - mean training return: -0.319056 - std dev training return: 0.719632
num samples: 1289216 - evaluation return: -0.266765 - mean training return: -0.329298 - std dev training return: 0.712763
num samples: 1296384 - evaluation return: -0.620202 - mean training return: -0.341256 - std dev training return: 0.715451
num samples: 1326080 - evaluation return: 14.047243 - mean training return: -0.051493 - std dev training return: 7.203621
num samples: 1354240 - evaluation return: 13.502187 - mean training return: 33.755840 - std dev training return: 11.151319
num samples: 1385472 - evaluation return: 16.704943 - mean training return: 50.833996 - std dev training return: 17.058485
num samples: 1416192 - evaluation return: 18.223055 - mean training return: 66.199013 - std dev training return: 24.151640
num samples: 1449472 - evaluation return: 26.653809 - mean training return: 64.464775 - std dev training return: 28.641485
num samples: 1491968 - evaluation return: 29.257538 - mean training return: 59.228790 - std dev training return: 28.276449
num samples: 1518592 - evaluation return: 15.074642 - mean training return: 59.396503 - std dev training return: 27.209282
num samples: 1543680 - evaluation return: 13.520245 - mean training return: 53.042431 - std dev training return: 23.491385
num samples: 1568256 - evaluation return: 12.693990 - mean training return: 56.349098 - std dev training return: 30.669985
num samples: 1592832 - evaluation return: 14.548918 - mean training return: 48.321880 - std dev training return: 27.435207
num samples: 1615360 - evaluation return: 9.125124 - mean training return: 48.213757 - std dev training return: 29.546242
num samples: 1641984 - evaluation return: 15.934216 - mean training return: 47.426437 - std dev training return: 20.952158
num samples: 1669120 - evaluation return: 20.812475 - mean training return: 52.848259 - std dev training return: 20.939800
num samples: 1691648 - evaluation return: 10.199637 - mean training return: 54.987202 - std dev training return: 27.383293
num samples: 1716224 - evaluation return: 13.706933 - mean training return: 52.222088 - std dev training return: 24.187613
num samples: 1745408 - evaluation return: 22.485649 - mean training return: 54.050026 - std dev training return: 26.821083
num samples: 1769472 - evaluation return: 13.901758 - mean training return: 56.767685 - std dev training return: 27.617611
num samples: 1797120 - evaluation return: 21.212681 - mean training return: 53.003258 - std dev training return: 25.134924
num samples: 1822720 - evaluation return: 16.745039 - mean training return: 52.020813 - std dev training return: 26.241608
num samples: 1848832 - evaluation return: 16.744692 - mean training return: 49.441147 - std dev training return: 28.067682
num samples: 1875968 - evaluation return: 18.654871 - mean training return: 46.754799 - std dev training return: 27.402061
num samples: 1899520 - evaluation return: 13.553414 - mean training return: 46.579330 - std dev training return: 26.541412
num samples: 1930240 - evaluation return: 23.745800 - mean training return: 51.669601 - std dev training return: 24.731030
num samples: 1969152 - evaluation return: 43.359570 - mean training return: 58.314671 - std dev training return: 29.217480
num samples: 1994240 - evaluation return: 16.212206 - mean training return: 51.730072 - std dev training return: 29.579000
num samples: 2020864 - evaluation return: 18.237247 - mean training return: 49.346802 - std dev training return: 32.714294
num samples: 2045952 - evaluation return: 14.019270 - mean training return: 46.586708 - std dev training return: 28.077335
num samples: 2073600 - evaluation return: 19.899469 - mean training return: 49.552700 - std dev training return: 28.061085
num samples: 2099712 - evaluation return: 16.839905 - mean training return: 48.726578 - std dev training return: 23.912491
num samples: 2128896 - evaluation return: 20.886269 - mean training return: 47.306320 - std dev training return: 28.023022
num samples: 2157568 - evaluation return: 21.041683 - mean training return: 47.165974 - std dev training return: 25.752600
num samples: 2184704 - evaluation return: 19.858538 - mean training return: 49.746853 - std dev training return: 25.284559
num samples: 2215936 - evaluation return: 26.999073 - mean training return: 50.001957 - std dev training return: 27.909082
num samples: 2243584 - evaluation return: 21.706451 - mean training return: 50.129192 - std dev training return: 28.956474
num samples: 2269696 - evaluation return: 18.217480 - mean training return: 54.397087 - std dev training return: 33.212231
num samples: 2297856 - evaluation return: 22.156265 - mean training return: 61.765476 - std dev training return: 33.261806
num samples: 2324480 - evaluation return: 19.408173 - mean training return: 66.872536 - std dev training return: 34.575230
num samples: 2358272 - evaluation return: 34.657043 - mean training return: 59.466038 - std dev training return: 39.599918
num samples: 2391040 - evaluation return: 35.418736 - mean training return: 58.213425 - std dev training return: 34.900772
num samples: 2422272 - evaluation return: 31.515377 - mean training return: 61.915546 - std dev training return: 38.166824
num samples: 2454528 - evaluation return: 37.956020 - mean training return: 64.703735 - std dev training return: 37.911053
num samples: 2491904 - evaluation return: 44.696922 - mean training return: 63.767509 - std dev training return: 37.759171
num samples: 2521600 - evaluation return: 33.220581 - mean training return: 58.281063 - std dev training return: 24.729616
num samples: 2550784 - evaluation return: 29.452753 - mean training return: 48.926483 - std dev training return: 27.675325
num samples: 2589696 - evaluation return: 48.514973 - mean training return: 54.847080 - std dev training return: 29.009724
num samples: 2621952 - evaluation return: 38.353481 - mean training return: 54.243912 - std dev training return: 26.634745
num samples: 2650112 - evaluation return: 29.013971 - mean training return: 64.396606 - std dev training return: 28.851086
num samples: 2689024 - evaluation return: 49.110577 - mean training return: 59.816063 - std dev training return: 30.774923
num samples: 2724864 - evaluation return: 47.396458 - mean training return: 51.791744 - std dev training return: 24.082825
num samples: 2761728 - evaluation return: 49.357151 - mean training return: 62.273724 - std dev training return: 27.508331
num samples: 2788352 - evaluation return: 31.050671 - mean training return: 58.919521 - std dev training return: 30.630486
num samples: 2824192 - evaluation return: 51.544426 - mean training return: 47.119003 - std dev training return: 32.163364
num samples: 2860544 - evaluation return: 54.740978 - mean training return: 42.129929 - std dev training return: 26.790054
num samples: 2874880 - evaluation return: 9.921804 - mean training return: 33.815880 - std dev training return: 34.029007
num samples: 2881536 - evaluation return: -1.186124 - mean training return: 14.521569 - std dev training return: 27.427841
num samples: 2888704 - evaluation return: -1.560711 - mean training return: 7.604540 - std dev training return: 14.929857
num samples: 2896896 - evaluation return: -0.848794 - mean training return: 1.104980 - std dev training return: 9.708495
num samples: 2941440 - evaluation return: 69.554489 - mean training return: 6.415936 - std dev training return: 10.418206
num samples: 2977280 - evaluation return: 53.386860 - mean training return: 16.616173 - std dev training return: 24.705132
num samples: 3039744 - evaluation return: 105.142326 - mean training return: 25.068884 - std dev training return: 28.917677
num samples: 3081728 - evaluation return: 67.184288 - mean training return: 53.783405 - std dev training return: 48.833164
num samples: 3159552 - evaluation return: 134.902313 - mean training return: 93.410980 - std dev training return: 43.704010
num samples: 3375104 - evaluation return: 405.157166 - mean training return: 253.605209 - std dev training return: 113.650177
num samples: 3475968 - evaluation return: 187.386887 - mean training return: 306.584869 - std dev training return: 82.945526
num samples: 3550720 - evaluation return: 135.103867 - mean training return: 277.703125 - std dev training return: 83.071236
num samples: 3613696 - evaluation return: 110.419548 - mean training return: 198.335876 - std dev training return: 75.512650
num samples: 3665408 - evaluation return: 84.514061 - mean training return: 145.780762 - std dev training return: 42.510136
num samples: 3710976 - evaluation return: 71.547417 - mean training return: 117.657379 - std dev training return: 26.803980
num samples: 3761152 - evaluation return: 79.080544 - mean training return: 116.652176 - std dev training return: 19.102743
num samples: 3811328 - evaluation return: 81.951134 - mean training return: 106.417725 - std dev training return: 19.750528
num samples: 3862016 - evaluation return: 81.488068 - mean training return: 112.993454 - std dev training return: 17.909815
num samples: 3911168 - evaluation return: 78.857185 - mean training return: 97.178131 - std dev training return: 18.033991
num samples: 3953664 - evaluation return: 71.524231 - mean training return: 91.103500 - std dev training return: 12.342433
num samples: 4011008 - evaluation return: 101.271370 - mean training return: 109.643097 - std dev training return: 16.649637
num samples: 4065280 - evaluation return: 94.180649 - mean training return: 107.003029 - std dev training return: 13.822444
num samples: 4120576 - evaluation return: 95.153938 - mean training return: 94.568550 - std dev training return: 14.258013
num samples: 4167168 - evaluation return: 80.157478 - mean training return: 89.572609 - std dev training return: 14.981794
num samples: 4220928 - evaluation return: 94.017021 - mean training return: 99.882248 - std dev training return: 13.053928
num samples: 4273664 - evaluation return: 91.785789 - mean training return: 93.100937 - std dev training return: 10.268265
num samples: 4329472 - evaluation return: 98.320801 - mean training return: 90.117485 - std dev training return: 9.818995
num samples: 4378624 - evaluation return: 65.613556 - mean training return: 78.689690 - std dev training return: 17.212782
num samples: 4432896 - evaluation return: 82.235909 - mean training return: 67.187881 - std dev training return: 13.136176
num samples: 4482048 - evaluation return: 76.356354 - mean training return: 64.181396 - std dev training return: 15.851061
num samples: 4518912 - evaluation return: 48.346348 - mean training return: 58.880356 - std dev training return: 14.994464
num samples: 4566528 - evaluation return: 74.608376 - mean training return: 50.253464 - std dev training return: 16.315653
num samples: 4590080 - evaluation return: 20.772600 - mean training return: 39.540710 - std dev training return: 14.346633
num samples: 4605440 - evaluation return: 14.545864 - mean training return: 33.749016 - std dev training return: 17.493212
num samples: 4636160 - evaluation return: 26.039469 - mean training return: 24.162502 - std dev training return: 9.513905
num samples: 4673536 - evaluation return: 49.816788 - mean training return: 39.489616 - std dev training return: 11.022292
num samples: 4698112 - evaluation return: 23.897694 - mean training return: 38.774967 - std dev training return: 13.716680
num samples: 4737536 - evaluation return: 61.132095 - mean training return: 36.195271 - std dev training return: 12.347352
num samples: 4782592 - evaluation return: 67.720543 - mean training return: 57.307751 - std dev training return: 12.279572
num samples: 4820480 - evaluation return: 49.070065 - mean training return: 48.418224 - std dev training return: 16.034578
num samples: 4859392 - evaluation return: 51.272629 - mean training return: 39.908737 - std dev training return: 12.463587
num samples: 4891136 - evaluation return: 35.538101 - mean training return: 40.705002 - std dev training return: 12.133752
num samples: 4932096 - evaluation return: 51.835430 - mean training return: 49.898659 - std dev training return: 12.333365
num samples: 4971008 - evaluation return: 41.852074 - mean training return: 61.750072 - std dev training return: 13.803815
num samples: 5022720 - evaluation return: 74.437363 - mean training return: 60.357788 - std dev training return: 15.215582
num samples: 5065728 - evaluation return: 60.950554 - mean training return: 69.274529 - std dev training return: 18.758741
num samples: 5120000 - evaluation return: 83.744797 - mean training return: 85.742477 - std dev training return: 25.373329
num samples: 5171200 - evaluation return: 73.354271 - mean training return: 94.375343 - std dev training return: 32.156673
num samples: 5216768 - evaluation return: 66.283051 - mean training return: 93.942276 - std dev training return: 40.000980
num samples: 5269504 - evaluation return: 79.321487 - mean training return: 103.428329 - std dev training return: 49.622025
num samples: 5324288 - evaluation return: 83.266319 - mean training return: 108.432243 - std dev training return: 46.427795
num samples: 5379072 - evaluation return: 83.300369 - mean training return: 119.625656 - std dev training return: 53.741055
num samples: 5431296 - evaluation return: 80.618057 - mean training return: 134.015305 - std dev training return: 71.101700
num samples: 5500416 - evaluation return: 114.082138 - mean training return: 166.012894 - std dev training return: 89.970863
num samples: 5574656 - evaluation return: 120.792580 - mean training return: 167.389053 - std dev training return: 70.300941
num samples: 5658112 - evaluation return: 133.210083 - mean training return: 188.764481 - std dev training return: 73.278893
num samples: 5914112 - evaluation return: 482.090363 - mean training return: 281.054657 - std dev training return: 115.064636
num samples: 6028800 - evaluation return: 210.441910 - mean training return: 381.176758 - std dev training return: 136.529587
num samples: 6103040 - evaluation return: 133.147369 - mean training return: 262.005798 - std dev training return: 120.227722
num samples: 6183936 - evaluation return: 142.308701 - mean training return: 186.133408 - std dev training return: 70.372955
num samples: 6269440 - evaluation return: 154.774841 - mean training return: 163.140854 - std dev training return: 44.010231
num samples: 6326784 - evaluation return: 99.995064 - mean training return: 144.407837 - std dev training return: 37.959511
num samples: 6378496 - evaluation return: 84.904724 - mean training return: 111.769157 - std dev training return: 17.097399
num samples: 6449664 - evaluation return: 124.061813 - mean training return: 118.746384 - std dev training return: 18.606142
num samples: 6508544 - evaluation return: 96.781158 - mean training return: 93.489563 - std dev training return: 21.228756
num samples: 6575616 - evaluation return: 111.087349 - mean training return: 102.125084 - std dev training return: 13.899362
num samples: 6646272 - evaluation return: 123.143486 - mean training return: 121.197006 - std dev training return: 16.408484
num samples: 6719488 - evaluation return: 115.514763 - mean training return: 108.666359 - std dev training return: 17.906939
num samples: 6779392 - evaluation return: 97.430122 - mean training return: 82.360115 - std dev training return: 14.232504
num samples: 6836224 - evaluation return: 92.395454 - mean training return: 91.624207 - std dev training return: 14.414743
num samples: 6886400 - evaluation return: 75.717300 - mean training return: 96.319855 - std dev training return: 14.764734
num samples: 6948864 - evaluation return: 93.425652 - mean training return: 83.860275 - std dev training return: 13.914454
num samples: 7007744 - evaluation return: 87.724159 - mean training return: 88.986328 - std dev training return: 16.312195
num samples: 7069184 - evaluation return: 88.153137 - mean training return: 95.783844 - std dev training return: 27.664074
num samples: 7124992 - evaluation return: 82.169731 - mean training return: 112.725609 - std dev training return: 43.951904
num samples: 7211520 - evaluation return: 129.292587 - mean training return: 108.261658 - std dev training return: 42.078671
num samples: 7277056 - evaluation return: 101.900444 - mean training return: 146.767105 - std dev training return: 38.652050
num samples: 7355904 - evaluation return: 128.242599 - mean training return: 142.542145 - std dev training return: 45.551437
num samples: 7422976 - evaluation return: 105.185555 - mean training return: 126.653046 - std dev training return: 34.627037
num samples: 7507456 - evaluation return: 142.338455 - mean training return: 127.116066 - std dev training return: 22.047060
num samples: 7582720 - evaluation return: 125.599518 - mean training return: 136.537094 - std dev training return: 31.731882
num samples: 7666688 - evaluation return: 141.549942 - mean training return: 141.306992 - std dev training return: 28.757008
num samples: 7757824 - evaluation return: 152.934753 - mean training return: 153.242188 - std dev training return: 32.424362
num samples: 7849984 - evaluation return: 156.894592 - mean training return: 156.751450 - std dev training return: 34.448780
num samples: 7952384 - evaluation return: 185.434387 - mean training return: 189.436157 - std dev training return: 36.428951
num samples: 8109568 - evaluation return: 290.955414 - mean training return: 226.254257 - std dev training return: 52.402958
num samples: 8310784 - evaluation return: 377.526459 - mean training return: 333.047546 - std dev training return: 75.774178
num samples: 8507392 - evaluation return: 357.272064 - mean training return: 431.492981 - std dev training return: 90.512184
num samples: 8611328 - evaluation return: 172.607178 - mean training return: 278.182983 - std dev training return: 112.038712
num samples: 8688640 - evaluation return: 119.746758 - mean training return: 191.313889 - std dev training return: 113.834229
num samples: 8767488 - evaluation return: 113.137520 - mean training return: 132.616531 - std dev training return: 43.493763
num samples: 8898560 - evaluation return: 215.988739 - mean training return: 144.566055 - std dev training return: 52.063145
num samples: 8958464 - evaluation return: 82.011856 - mean training return: 93.156982 - std dev training return: 45.536160
num samples: 9019904 - evaluation return: 88.720024 - mean training return: 92.913017 - std dev training return: 26.413349
num samples: 9102848 - evaluation return: 130.998352 - mean training return: 103.958694 - std dev training return: 26.781437
num samples: 9175040 - evaluation return: 109.180443 - mean training return: 122.896729 - std dev training return: 43.284306
num samples: 9283072 - evaluation return: 170.861282 - mean training return: 160.585663 - std dev training return: 38.100487
num samples: 9539072 - evaluation return: 489.982605 - mean training return: 191.628510 - std dev training return: 85.870155
num samples: 9795072 - evaluation return: 486.946167 - mean training return: 304.137512 - std dev training return: 110.418457
num samples: 10051072 - evaluation return: 489.087494 - mean training return: 350.148041 - std dev training return: 96.706635
num samples: 10307072 - evaluation return: 484.501038 - mean training return: 469.524139 - std dev training return: 37.217403
num samples: 10563072 - evaluation return: 488.314209 - mean training return: 481.904999 - std dev training return: 3.545629
num samples: 10819072 - evaluation return: 485.336304 - mean training return: 483.892273 - std dev training return: 1.570298
num samples: 11075072 - evaluation return: 487.470276 - mean training return: 482.958099 - std dev training return: 1.600114
num samples: 11331072 - evaluation return: 486.684296 - mean training return: 481.597076 - std dev training return: 16.439320
num samples: 11587072 - evaluation return: 487.221924 - mean training return: 481.303619 - std dev training return: 14.267141
num samples: 11843072 - evaluation return: 484.917358 - mean training return: 482.106110 - std dev training return: 12.860074
num samples: 12096000 - evaluation return: 473.514404 - mean training return: 410.478241 - std dev training return: 64.468819
num samples: 12352000 - evaluation return: 488.048615 - mean training return: 394.664673 - std dev training return: 87.680992
num samples: 12608000 - evaluation return: 490.548187 - mean training return: 447.369476 - std dev training return: 64.363800

real    3m5.061s
user    3m15.976s
sys 0m13.105s
mugiwarakaizoku commented 2 years ago

The number of trials required to reach 496 for SAC is very inconsistent compared with TD3 / PPO. I observed that, most of the time, a high evaluation return is followed by a sudden drop in returns, so the agent needs a large number of trials to reach a high score. I tried changing different parameters (policy delay, rho, the initial value of alpha, etc.), but the issue still persists. Also, log_std_dev in actor.py is not getting updated during training; I'm not sure why that is happening. Fixing it would probably reduce the number of trials required.

hmomin commented 2 years ago

Hmm okay. If log_std_dev in actor.py is still not getting updated, then there is probably something being detached that's not supposed to be. I'll have time to investigate it tomorrow.
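For reference, a tiny PyTorch sketch (not code from the repo) of how a stray detach blocks learning: any parameter upstream of a detach call stops receiving gradients.

```python
import torch

w = torch.nn.Parameter(torch.tensor(2.0))

y = (w * 3.0).detach()      # detach cuts the autograd graph here
assert not y.requires_grad  # losses built from y can no longer update w

z = w * 3.0                 # same computation without detach
z.backward()                # gradients flow back to w as expected
assert w.grad is not None   # d(z)/d(w) == 3.0
```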

Managed to get a successful Ant trial in 2 hours 17 minutes (below). On CPU alone, it would probably take ages... There is some instability like you mentioned: it will reach a high reward and then drop down a lot.

num samples: 4091904 - evaluation return: 674.465637 - mean training return: 527.775391 - std dev training return: 240.721222
num samples: 4714496 - evaluation return: 70.914871 - mean training return: 91.053963 - std dev training return: 97.809959
num samples: 5173248 - evaluation return: 48.522472 - mean training return: 94.249649 - std dev training return: 93.994507
num samples: 5394432 - evaluation return: 78.480522 - mean training return: 93.720230 - std dev training return: 97.012863
num samples: 5689344 - evaluation return: 103.137886 - mean training return: 92.161057 - std dev training return: 88.077782
num samples: 6246400 - evaluation return: 6.003512 - mean training return: 90.719955 - std dev training return: 81.639381
num samples: 6471680 - evaluation return: 97.154755 - mean training return: 88.485771 - std dev training return: 86.933891
num samples: 10567680 - evaluation return: -64.959824 - mean training return: 208.211563 - std dev training return: 256.767883
num samples: 14663680 - evaluation return: 733.533508 - mean training return: 208.326355 - std dev training return: 268.350983
num samples: 14958592 - evaluation return: 78.220840 - mean training return: 181.846298 - std dev training return: 265.663788
num samples: 15405056 - evaluation return: 1.046708 - mean training return: 192.927841 - std dev training return: 276.052795

..........

num samples: 2198286336 - evaluation return: 5507.762207 - mean training return: 3710.189453 - std dev training return: 1296.688843
num samples: 2202382336 - evaluation return: 5371.157715 - mean training return: 3893.572266 - std dev training return: 1247.609009
num samples: 2203234304 - evaluation return: 667.075928 - mean training return: 3778.430908 - std dev training return: 1353.292236
num samples: 2207330304 - evaluation return: 5634.553223 - mean training return: 3825.384277 - std dev training return: 1327.463745
num samples: 2208370688 - evaluation return: 1111.988403 - mean training return: 3688.445068 - std dev training return: 1414.791626
num samples: 2210369536 - evaluation return: 2458.423828 - mean training return: 3819.116455 - std dev training return: 1305.974976
num samples: 2214465536 - evaluation return: 5419.958008 - mean training return: 3605.096436 - std dev training return: 1431.059326
num samples: 2215759872 - evaluation return: 1176.634399 - mean training return: 3979.385254 - std dev training return: 1192.949341
num samples: 2219855872 - evaluation return: 5791.973145 - mean training return: 3836.263672 - std dev training return: 1344.140381
num samples: 2223951872 - evaluation return: 6083.122070 - mean training return: 3729.227539 - std dev training return: 1369.081177

real    137m29.631s
user    160m40.280s
sys 5m35.702s
hmomin commented 2 years ago

I figured out why log_std_dev wasn't being updated in actor.py: the optimizer was created before the log_std_dev parameter was. In other words, we originally had:

self.optimizer = Adam(self.parameters(), learning_rate) # optimizer learns all parameters created up to this point
...
self.log_std_dev = nn.Parameter(...) # optimizer doesn't learn any newly created parameters

If we want the optimizer to recognize log_std_dev as a trainable parameter, then we just have to move its creation above the optimizer's creation:

self.log_std_dev = nn.Parameter(...)
self.optimizer = Adam(self.parameters(), learning_rate) # optimizer now learns log_std_dev as well
...

After making this change to the original SAC we had in the codebase, I found that log_std_dev did begin to train, but the agent became highly unstable for some reason - not sure why... So I switched the actor to use final mu and std_dev layers instead of a completely separate log_std_dev, and that seemed to stabilize learning a lot. This also seems to be what many other implementations do.
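A minimal runnable sketch of the ordering fix (the layer sizes and names here are illustrative, not the actual actor): because Adam snapshots the parameter list it is given at construction time, log_std_dev must exist before the optimizer is built.

```python
import torch
import torch.nn as nn
from torch.optim import Adam

class TinyActor(nn.Module):
    def __init__(self, obs_dim: int = 4, act_dim: int = 1):
        super().__init__()
        self.mean_layer = nn.Linear(obs_dim, act_dim)
        # Register log_std_dev BEFORE creating the optimizer, so that
        # self.parameters() already includes it when Adam snapshots the list.
        self.log_std_dev = nn.Parameter(torch.zeros(act_dim))
        self.optimizer = Adam(self.parameters(), lr=3e-4)

actor = TinyActor()
# Sanity check: the optimizer tracks all three parameter tensors
# (linear weight, linear bias, and log_std_dev).
tracked = sum(len(group["params"]) for group in actor.optimizer.param_groups)
assert tracked == 3
```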

There are a few things I've found that help speed up training a lot:

  1. Using a higher initial alpha: I did some digging and found that the original implementation of SAC uses a starting value of 1.0 for alpha. Setting starting_alpha to 1.0 instead of 0.01 improves initial exploration a lot and also improves the stability of the agent. Over a long training run, it doesn't matter all that much, because alpha adjusts automatically through gradient updates toward the task-specific and state-specific "optimal" value.
  2. Not training until the buffer is full: Initially, you set the TD3 agent so that it wouldn't train until the buffer was at least partially full, but I removed that and let the agent train immediately, since sample generation happens so fast anyway. I realized the agent would occasionally get trapped in local optima (especially on Ant) or train unstably as a result of learning on highly correlated samples. So instead, I set the agent to start training only once the buffer is completely full, which really helps reduce correlations between samples and improves stability at little cost - the buffer fills completely in under 10 seconds on my machine, so there's hardly any downside.
  3. Using a higher mini-batch size: Setting a much higher mini-batch size (100 -> num_envs) dramatically speeds up learning without introducing much instability from training on highly correlated samples. This seemed bizarre to me at first, but it makes sense once you think through the sample generation process: in a non-parallel version of SAC, you gain only one replay sample per step, so it takes 1M steps for the replay buffer to refresh completely. In our parallel version running on Ant, we collect 4096 samples per step, so the buffer refreshes in only 245 steps - 99.98% fewer. With such a high refresh rate, there's dramatically less correlation between gradient update steps, which allows a much larger mini-batch size.
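The refresh-rate arithmetic in point 3 can be sketched directly (the 1M buffer size is an assumption; num_envs = 4096 matches the Ant runs above):

```python
import math

def refresh_steps(buffer_size: int, samples_per_step: int) -> int:
    # Number of environment steps needed to replace every sample in the buffer.
    return math.ceil(buffer_size / samples_per_step)

serial = refresh_steps(1_000_000, 1)       # one sample per step -> 1,000,000 steps
parallel = refresh_steps(1_000_000, 4096)  # 4096 samples per step -> 245 steps
print(f"{parallel} steps ({1 - parallel / serial:.2%} fewer)")  # 245 steps (99.98% fewer)
```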

Managed to get Ant to train in ~30 minutes using these tricks, which is pretty good for SAC in my opinion (full trial below).

If I can get Humanoid to 6000, I'll consider the implementation a complete success. However, I'm currently running into some exploding-gradient issues that I need to sort out (I ran into them with PPO as well...).

Importing module 'gym_37' (/home/momin/Documents/isaacgym/python/isaacgym/_bindings/linux-x86_64/gym_37.so)
Setting GYM_USD_PLUG_INFO_PATH to /home/momin/Documents/isaacgym/python/isaacgym/_bindings/linux-x86_64/usd/plugInfo.json
PyTorch version 1.10.2+cu113
Device count 1
/home/momin/Documents/isaacgym/python/isaacgym/_bindings/src/gymtorch
Using /home/momin/.cache/torch_extensions/py37_cu113 as PyTorch extensions root...
Emitting ninja build file /home/momin/.cache/torch_extensions/py37_cu113/gymtorch/build.ninja...
Building extension module gymtorch...
Allowing ninja to set a default number of workers... (overridable by setting the environment variable MAX_JOBS=N)
ninja: no work to do.
Loading extension module gymtorch...
/home/momin/anaconda3/envs/rlgpu/lib/python3.7/site-packages/gym/spaces/box.py:112: UserWarning: WARN: Box bound precision lowered by casting to float32
  logger.warn(f"Box bound precision lowered by casting to {self.dtype}")
Not connected to PVD
+++ Using GPU PhysX
Physics Engine: PhysX
Physics Device: cuda:0
GPU Pipeline: enabled
num envs 4096 env spacing 5
num samples: 4087808 - evaluation return: 677.719421 - mean training return: 108.282974 - std dev training return: 106.971123
num samples: 8183808 - evaluation return: 789.079346 - mean training return: 179.700043 - std dev training return: 165.438812
num samples: 12279808 - evaluation return: 1028.539917 - mean training return: 334.614044 - std dev training return: 184.693054
num samples: 16375808 - evaluation return: 1205.474121 - mean training return: 523.513794 - std dev training return: 281.925293
num samples: 20471808 - evaluation return: 665.859619 - mean training return: 357.599701 - std dev training return: 247.289398
num samples: 24567808 - evaluation return: 881.567993 - mean training return: 458.887512 - std dev training return: 276.614594
num samples: 25501696 - evaluation return: 441.128204 - mean training return: 409.857483 - std dev training return: 246.448059
num samples: 29597696 - evaluation return: 1158.728882 - mean training return: 453.000549 - std dev training return: 277.082886
num samples: 29917184 - evaluation return: 140.455643 - mean training return: 629.697510 - std dev training return: 358.086639
num samples: 31358976 - evaluation return: 366.491394 - mean training return: 610.980591 - std dev training return: 398.083069
num samples: 33140736 - evaluation return: 584.198303 - mean training return: 458.447571 - std dev training return: 339.154205
num samples: 34721792 - evaluation return: 492.577423 - mean training return: 429.734039 - std dev training return: 296.116455
num samples: 35328000 - evaluation return: 262.866852 - mean training return: 428.451721 - std dev training return: 285.255035
num samples: 39424000 - evaluation return: 645.036743 - mean training return: 428.689423 - std dev training return: 281.813263
num samples: 39821312 - evaluation return: 127.884354 - mean training return: 412.150421 - std dev training return: 252.523285
num samples: 43061248 - evaluation return: 634.150208 - mean training return: 474.250946 - std dev training return: 277.810669
num samples: 44929024 - evaluation return: 589.246094 - mean training return: 526.096069 - std dev training return: 335.864899
num samples: 46227456 - evaluation return: 204.700211 - mean training return: 557.104675 - std dev training return: 374.545349
num samples: 49397760 - evaluation return: 874.659973 - mean training return: 537.133545 - std dev training return: 357.290436
num samples: 50651136 - evaluation return: 261.055634 - mean training return: 550.914673 - std dev training return: 358.997681
num samples: 51580928 - evaluation return: 463.084778 - mean training return: 573.476746 - std dev training return: 373.658203
num samples: 53256192 - evaluation return: 713.588684 - mean training return: 655.041016 - std dev training return: 425.228210
num samples: 54824960 - evaluation return: 393.018433 - mean training return: 712.034851 - std dev training return: 482.058350
num samples: 55824384 - evaluation return: 379.053101 - mean training return: 793.615051 - std dev training return: 544.913025
num samples: 56836096 - evaluation return: 448.237885 - mean training return: 859.322021 - std dev training return: 563.902344
num samples: 58114048 - evaluation return: 301.747406 - mean training return: 896.646973 - std dev training return: 579.185364
num samples: 60084224 - evaluation return: 523.140808 - mean training return: 863.888611 - std dev training return: 568.861816
num samples: 61128704 - evaluation return: 503.267395 - mean training return: 844.890686 - std dev training return: 519.866394
num samples: 63447040 - evaluation return: 1101.335815 - mean training return: 819.425049 - std dev training return: 519.510498
num samples: 65310720 - evaluation return: 154.423325 - mean training return: 907.024414 - std dev training return: 638.676941
num samples: 67002368 - evaluation return: 699.005310 - mean training return: 998.205444 - std dev training return: 682.265137
num samples: 67751936 - evaluation return: 381.027740 - mean training return: 868.862488 - std dev training return: 630.095825
num samples: 68161536 - evaluation return: 199.398499 - mean training return: 756.130798 - std dev training return: 585.665955
num samples: 69894144 - evaluation return: 738.585266 - mean training return: 774.185852 - std dev training return: 554.226257
num samples: 70762496 - evaluation return: 388.188721 - mean training return: 718.754944 - std dev training return: 492.637421
num samples: 71983104 - evaluation return: 708.120605 - mean training return: 654.484924 - std dev training return: 464.252533
num samples: 72785920 - evaluation return: 400.345581 - mean training return: 614.598022 - std dev training return: 426.549500
num samples: 74338304 - evaluation return: 675.634277 - mean training return: 582.915771 - std dev training return: 396.218384
num samples: 75218944 - evaluation return: 536.874939 - mean training return: 563.286194 - std dev training return: 363.038635
num samples: 76165120 - evaluation return: 542.242432 - mean training return: 578.333008 - std dev training return: 375.922241
num samples: 77660160 - evaluation return: 834.953613 - mean training return: 550.629761 - std dev training return: 367.979462
num samples: 78299136 - evaluation return: 93.367989 - mean training return: 524.972473 - std dev training return: 348.997345
num samples: 79966208 - evaluation return: 921.588989 - mean training return: 495.132294 - std dev training return: 316.236755
num samples: 80621568 - evaluation return: 342.091736 - mean training return: 506.530914 - std dev training return: 292.019684
num samples: 81174528 - evaluation return: 314.797028 - mean training return: 476.778381 - std dev training return: 272.112518
num samples: 85270528 - evaluation return: -60.832760 - mean training return: 494.840668 - std dev training return: 283.807770
num samples: 85884928 - evaluation return: 213.353012 - mean training return: 518.540039 - std dev training return: 292.368713
num samples: 89980928 - evaluation return: 727.482849 - mean training return: 543.992981 - std dev training return: 331.421173
num samples: 91271168 - evaluation return: 1007.757019 - mean training return: 615.713135 - std dev training return: 385.041809
num samples: 93409280 - evaluation return: 1652.958252 - mean training return: 643.083435 - std dev training return: 425.476318
num samples: 94072832 - evaluation return: 310.712860 - mean training return: 674.953186 - std dev training return: 441.406158
num samples: 95756288 - evaluation return: 1156.840332 - mean training return: 716.996765 - std dev training return: 469.175262
num samples: 97566720 - evaluation return: 925.261597 - mean training return: 800.349243 - std dev training return: 513.760193
num samples: 99741696 - evaluation return: 1145.262329 - mean training return: 880.760010 - std dev training return: 564.748596
num samples: 100380672 - evaluation return: 223.715012 - mean training return: 892.114990 - std dev training return: 579.333069
num samples: 101158912 - evaluation return: 132.711700 - mean training return: 894.674377 - std dev training return: 577.448181
num samples: 102498304 - evaluation return: 591.050110 - mean training return: 935.081482 - std dev training return: 581.939941
num samples: 103665664 - evaluation return: 705.299011 - mean training return: 1023.103882 - std dev training return: 639.797852
num samples: 104714240 - evaluation return: 768.803223 - mean training return: 1121.698486 - std dev training return: 716.492798
num samples: 106942464 - evaluation return: 1877.770996 - mean training return: 1371.732544 - std dev training return: 848.357910
num samples: 108924928 - evaluation return: 1357.859741 - mean training return: 1657.555542 - std dev training return: 973.079895
num samples: 110358528 - evaluation return: 774.611511 - mean training return: 1674.525757 - std dev training return: 995.656006
num samples: 111403008 - evaluation return: 728.872620 - mean training return: 1757.331787 - std dev training return: 1038.269165
num samples: 112234496 - evaluation return: 615.456604 - mean training return: 1860.955200 - std dev training return: 1054.218628
num samples: 113680384 - evaluation return: 265.972687 - mean training return: 1869.435303 - std dev training return: 1069.396729
num samples: 117563392 - evaluation return: 2264.908447 - mean training return: 2004.195435 - std dev training return: 1094.594727
num samples: 120258560 - evaluation return: 1990.499268 - mean training return: 2073.335693 - std dev training return: 1124.207275
num samples: 124354560 - evaluation return: 1031.377197 - mean training return: 2197.282471 - std dev training return: 1152.165649
num samples: 125595648 - evaluation return: 612.634949 - mean training return: 2415.687012 - std dev training return: 1165.463135
num samples: 129421312 - evaluation return: 2713.983154 - mean training return: 2342.296875 - std dev training return: 1215.573730
num samples: 133406720 - evaluation return: 2600.252441 - mean training return: 2588.665039 - std dev training return: 1251.421631
num samples: 135946240 - evaluation return: 2095.953369 - mean training return: 2625.864502 - std dev training return: 1284.564087
num samples: 138936320 - evaluation return: 2374.656250 - mean training return: 2733.497314 - std dev training return: 1290.049927
num samples: 140304384 - evaluation return: 1215.307007 - mean training return: 2706.850830 - std dev training return: 1317.229858
num samples: 144400384 - evaluation return: 3494.433594 - mean training return: 2800.481934 - std dev training return: 1311.507935
num samples: 146083840 - evaluation return: 1087.415894 - mean training return: 2779.315430 - std dev training return: 1339.602783
num samples: 147005440 - evaluation return: 418.684418 - mean training return: 2699.041748 - std dev training return: 1326.160522
num samples: 147922944 - evaluation return: 750.355347 - mean training return: 2796.516113 - std dev training return: 1339.269409
num samples: 149774336 - evaluation return: 1236.436768 - mean training return: 2871.976074 - std dev training return: 1312.185913
num samples: 150851584 - evaluation return: 718.608521 - mean training return: 2893.704834 - std dev training return: 1311.588989
num samples: 152666112 - evaluation return: 1188.342896 - mean training return: 2929.615723 - std dev training return: 1324.273315
num samples: 156762112 - evaluation return: 3470.165283 - mean training return: 3027.097412 - std dev training return: 1297.322998
num samples: 158978048 - evaluation return: 2052.671143 - mean training return: 3038.439941 - std dev training return: 1283.560425
num samples: 160718848 - evaluation return: 1030.257202 - mean training return: 3029.867188 - std dev training return: 1352.703369
num samples: 162091008 - evaluation return: 1218.868408 - mean training return: 3081.661133 - std dev training return: 1264.801025
num samples: 164012032 - evaluation return: 1337.219971 - mean training return: 3120.406494 - std dev training return: 1269.239746
num samples: 168108032 - evaluation return: 3335.211914 - mean training return: 3098.553223 - std dev training return: 1281.134888
num samples: 172204032 - evaluation return: 739.573975 - mean training return: 3227.940186 - std dev training return: 1265.144653
num samples: 175243264 - evaluation return: 3055.040039 - mean training return: 3203.157715 - std dev training return: 1275.690918
num samples: 179339264 - evaluation return: 2308.020020 - mean training return: 3382.665527 - std dev training return: 1239.221191
num samples: 180199424 - evaluation return: 566.332581 - mean training return: 3294.579102 - std dev training return: 1295.597168
num samples: 181452800 - evaluation return: 1216.080811 - mean training return: 3426.002686 - std dev training return: 1250.474976
num samples: 184442880 - evaluation return: 2121.725586 - mean training return: 3400.754150 - std dev training return: 1257.068848
num samples: 188538880 - evaluation return: 3022.372314 - mean training return: 3477.381592 - std dev training return: 1277.482544
num samples: 192634880 - evaluation return: 2213.867432 - mean training return: 3489.497070 - std dev training return: 1281.137085
num samples: 196730880 - evaluation return: 3999.681152 - mean training return: 3620.264404 - std dev training return: 1248.197388
num samples: 200826880 - evaluation return: 3808.437012 - mean training return: 3658.710693 - std dev training return: 1281.802246
num samples: 202801152 - evaluation return: 1742.295898 - mean training return: 3660.126953 - std dev training return: 1305.203491
num samples: 204673024 - evaluation return: 2002.726440 - mean training return: 3783.060547 - std dev training return: 1265.210815
num samples: 208769024 - evaluation return: 4151.474609 - mean training return: 3757.763916 - std dev training return: 1278.100708
num samples: 212865024 - evaluation return: 4753.059570 - mean training return: 3825.936768 - std dev training return: 1242.946289
num samples: 216961024 - evaluation return: 884.362122 - mean training return: 3871.710205 - std dev training return: 1262.270508
num samples: 221057024 - evaluation return: 4365.353516 - mean training return: 4005.335205 - std dev training return: 1244.638550
num samples: 225153024 - evaluation return: 3253.089844 - mean training return: 4054.728271 - std dev training return: 1249.697388
num samples: 229249024 - evaluation return: 3921.890625 - mean training return: 4149.657227 - std dev training return: 1178.521240
num samples: 230572032 - evaluation return: 741.822876 - mean training return: 4119.970703 - std dev training return: 1248.752075
num samples: 234668032 - evaluation return: 4669.076660 - mean training return: 4269.319824 - std dev training return: 1190.458740
num samples: 238764032 - evaluation return: 4912.197266 - mean training return: 4284.246582 - std dev training return: 1185.154541
num samples: 242860032 - evaluation return: 4467.602539 - mean training return: 4351.229980 - std dev training return: 1201.437622
num samples: 246956032 - evaluation return: 4714.000488 - mean training return: 4386.395020 - std dev training return: 1213.113037
num samples: 249167872 - evaluation return: 2174.789062 - mean training return: 4498.969727 - std dev training return: 1108.780029
num samples: 253263872 - evaluation return: 5140.258789 - mean training return: 4488.983398 - std dev training return: 1145.733398
num samples: 257359872 - evaluation return: 4709.086914 - mean training return: 4562.039551 - std dev training return: 1143.631714
num samples: 261455872 - evaluation return: 4810.796387 - mean training return: 4672.280273 - std dev training return: 1047.610352
num samples: 265302016 - evaluation return: 4089.748291 - mean training return: 4682.460938 - std dev training return: 1091.500610
num samples: 269398016 - evaluation return: 4635.677246 - mean training return: 4748.047852 - std dev training return: 1031.729614
num samples: 273494016 - evaluation return: 4546.480469 - mean training return: 4809.504883 - std dev training return: 1005.152832
num samples: 277590016 - evaluation return: 4904.815430 - mean training return: 4849.750000 - std dev training return: 987.033875
num samples: 281686016 - evaluation return: 4522.263184 - mean training return: 4919.270996 - std dev training return: 965.424072
num samples: 283664384 - evaluation return: 2223.982666 - mean training return: 4889.103516 - std dev training return: 1030.603149
num samples: 287760384 - evaluation return: 2973.659180 - mean training return: 4927.033691 - std dev training return: 973.088745
num samples: 291856384 - evaluation return: 4207.409180 - mean training return: 4955.288086 - std dev training return: 984.219849
num samples: 295952384 - evaluation return: 4997.489258 - mean training return: 5004.883789 - std dev training return: 928.310486
num samples: 300048384 - evaluation return: 1291.066650 - mean training return: 4993.019531 - std dev training return: 933.257446
num samples: 301060096 - evaluation return: 1067.592407 - mean training return: 5015.382324 - std dev training return: 914.977051
num samples: 305156096 - evaluation return: 5024.103516 - mean training return: 4970.977051 - std dev training return: 995.527405
num samples: 309252096 - evaluation return: 4349.481445 - mean training return: 4972.365723 - std dev training return: 1000.720154
num samples: 310697984 - evaluation return: 1303.066162 - mean training return: 5070.601074 - std dev training return: 856.453491
num samples: 314793984 - evaluation return: 5362.983887 - mean training return: 5120.913574 - std dev training return: 888.620544
num samples: 315969536 - evaluation return: 579.818359 - mean training return: 5106.833984 - std dev training return: 961.476257
num samples: 319864832 - evaluation return: 4390.357422 - mean training return: 5139.199219 - std dev training return: 866.727783
num samples: 323960832 - evaluation return: 4720.991211 - mean training return: 5048.111328 - std dev training return: 958.304260
num samples: 326348800 - evaluation return: 2642.534424 - mean training return: 5052.557617 - std dev training return: 1051.609741
num samples: 330444800 - evaluation return: 4479.364258 - mean training return: 5098.503906 - std dev training return: 972.859741
num samples: 334540800 - evaluation return: 4732.688477 - mean training return: 5129.225098 - std dev training return: 1000.643005
num samples: 338636800 - evaluation return: 5076.015137 - mean training return: 5101.492676 - std dev training return: 925.866394
num samples: 342732800 - evaluation return: 4792.084473 - mean training return: 5073.970703 - std dev training return: 990.441772
num samples: 345890816 - evaluation return: 3071.876221 - mean training return: 5068.729980 - std dev training return: 1007.909363
num samples: 349986816 - evaluation return: 5125.055664 - mean training return: 5064.096680 - std dev training return: 1017.940247
num samples: 354082816 - evaluation return: 4990.968262 - mean training return: 5100.190430 - std dev training return: 1057.222290
num samples: 356442112 - evaluation return: 2588.724854 - mean training return: 5148.245605 - std dev training return: 972.368591
num samples: 360538112 - evaluation return: 4681.024902 - mean training return: 5140.180664 - std dev training return: 981.377930
num samples: 364634112 - evaluation return: 5262.228027 - mean training return: 5256.445312 - std dev training return: 853.666077
num samples: 368730112 - evaluation return: 4944.168945 - mean training return: 5330.930664 - std dev training return: 817.912842
num samples: 372826112 - evaluation return: 4614.281250 - mean training return: 5275.549316 - std dev training return: 941.166992
num samples: 376922112 - evaluation return: 4893.793945 - mean training return: 5279.234863 - std dev training return: 864.030151
num samples: 381018112 - evaluation return: 4860.990723 - mean training return: 5286.876465 - std dev training return: 981.783997
num samples: 385114112 - evaluation return: 4797.515137 - mean training return: 5310.090332 - std dev training return: 939.185425
num samples: 389210112 - evaluation return: 4707.235840 - mean training return: 5309.575195 - std dev training return: 879.500000
num samples: 393306112 - evaluation return: 5271.142578 - mean training return: 5229.105957 - std dev training return: 946.041382
num samples: 395751424 - evaluation return: 2910.239990 - mean training return: 5196.331543 - std dev training return: 926.110413
num samples: 399847424 - evaluation return: 5198.004883 - mean training return: 5205.377441 - std dev training return: 992.374512
num samples: 401543168 - evaluation return: 1850.524414 - mean training return: 5135.352051 - std dev training return: 1073.026978
num samples: 405639168 - evaluation return: 5470.232422 - mean training return: 5152.245605 - std dev training return: 1042.800903
num samples: 409735168 - evaluation return: 5019.185547 - mean training return: 5224.198730 - std dev training return: 967.428345
num samples: 413831168 - evaluation return: 5613.448730 - mean training return: 5188.541992 - std dev training return: 1021.655762
num samples: 417927168 - evaluation return: 5251.003418 - mean training return: 5281.265625 - std dev training return: 918.070129
num samples: 422023168 - evaluation return: 5098.303223 - mean training return: 5150.244629 - std dev training return: 971.772949
num samples: 426119168 - evaluation return: 5165.926758 - mean training return: 5106.055176 - std dev training return: 1058.684570
num samples: 430215168 - evaluation return: 5072.031250 - mean training return: 5225.807617 - std dev training return: 950.585083
num samples: 434311168 - evaluation return: 4731.968750 - mean training return: 5342.452148 - std dev training return: 916.584961
num samples: 438407168 - evaluation return: 5480.619141 - mean training return: 5389.568359 - std dev training return: 934.963379
num samples: 442503168 - evaluation return: 4748.313477 - mean training return: 5398.486816 - std dev training return: 897.014587
num samples: 446599168 - evaluation return: 2642.171631 - mean training return: 5435.519043 - std dev training return: 971.012512
num samples: 450695168 - evaluation return: 5281.966309 - mean training return: 5523.281738 - std dev training return: 865.167786
num samples: 454791168 - evaluation return: 5099.273438 - mean training return: 5547.996094 - std dev training return: 826.935181
num samples: 458887168 - evaluation return: 5634.799805 - mean training return: 5543.472168 - std dev training return: 849.902954
num samples: 462983168 - evaluation return: 4827.682617 - mean training return: 5556.342773 - std dev training return: 809.752747
num samples: 467079168 - evaluation return: 5211.116211 - mean training return: 5620.545410 - std dev training return: 697.883423
num samples: 471175168 - evaluation return: 5521.529297 - mean training return: 5615.448730 - std dev training return: 746.056641
num samples: 475271168 - evaluation return: 5456.325195 - mean training return: 5580.168945 - std dev training return: 874.432312
num samples: 479367168 - evaluation return: 5652.470703 - mean training return: 5676.817871 - std dev training return: 703.073425
num samples: 483463168 - evaluation return: 5270.827637 - mean training return: 5626.043945 - std dev training return: 872.524109
num samples: 487559168 - evaluation return: 6089.966797 - mean training return: 5759.302246 - std dev training return: 698.500366

real    32m44.662s
user    39m30.624s
sys 1m21.430s
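On the memory leak: if the issue really is the undetached `means = self.forward(states)` mentioned above, the usual cause is that every stored output keeps its whole autograd graph alive, so memory grows with the number of samples until the allocator gives out around the ~2.2M-sample mark. A minimal sketch of the pattern and the fix (tiny stand-in network and hypothetical shapes, not the actual actor code):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the actor network (hypothetical sizes).
actor = nn.Linear(4, 2)
states = torch.randn(8, 4)

# Leaky pattern: the output stays attached to the autograd graph,
# so storing it (e.g. in a replay buffer or as a target) keeps the
# entire graph for this forward pass alive.
means_attached = actor(states)
assert means_attached.requires_grad  # still part of the graph

# Fix: detach before storing or using as a target, so the graph
# can be freed once the update step is done.
means = actor(states).detach()
assert not means.requires_grad  # safe to store
```

The same applies to anything else held across update steps (target values, logged tensors): call `.detach()` (or compute inside `torch.no_grad()`) unless gradients are actually needed through that tensor.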