Mohamedellebody opened this issue 1 year ago
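The dump below is the per-parameter overview table that `parameter_overview.py` (from the `clu` package, going by the log prefix) prints for the model: each row is the parameter name, shape, element count, mean, and standard deviation. For reference, here is a minimal sketch of how such an overview is typically produced; `TinyModel` is a hypothetical stand-in, not the model from this issue, and the use of `clu` is an assumption based on the log prefix.

```python
import jax
import jax.numpy as jnp
import flax.linen as nn
from clu import parameter_overview  # assumed source of the log below


class TinyModel(nn.Module):
    # Hypothetical two-layer model, only to illustrate the overview output.
    @nn.compact
    def __call__(self, x):
        x = nn.Dense(features=32)(x)
        x = nn.relu(x)
        return nn.Dense(features=4)(x)


params = TinyModel().init(jax.random.PRNGKey(0), jnp.ones((1, 8)))
# Logs one "| name | shape | size | mean | std |" row per parameter,
# in the same format as the table in this issue.
parameter_overview.log_parameter_overview(params)
print(parameter_overview.get_parameter_overview(params))
```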
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 1.69e-05 | 0.0312 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.32e-05 | 0.0313 |
| SpatialTransformer/encoderblock_5/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_5/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_5/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -8.86e-09 | 1e-06 |
| SpatialTransformer/encoderblock_5/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -5.36e-06 | 0.0198 |
| SpatialTransformer/encoderblock_5/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -4.11e-08 | 9.63e-07 |
| SpatialTransformer/encoderblock_5/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -1.34e-05 | 0.0198 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.98e-05 | 0.0313 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.29e-05 | 0.0313 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 9.04e-06 | 0.0313 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_5/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.05e-05 | 0.0312 |
| SpatialTransformer/encoderblock_6/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_6/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_6/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -8.36e-09 | 9.86e-07 |
| SpatialTransformer/encoderblock_6/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -2.5e-05 | 0.0198 |
| SpatialTransformer/encoderblock_6/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 4.78e-08 | 9.82e-07 |
| SpatialTransformer/encoderblock_6/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.99e-05 | 0.0198 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.12e-05 | 0.0313 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.97e-05 | 0.0313 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.2e-06 | 0.0313 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_6/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.18e-05 | 0.0312 |
| SpatialTransformer/encoderblock_7/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_7/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_7/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.53e-08 | 9.9e-07 |
| SpatialTransformer/encoderblock_7/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.42e-05 | 0.0198 |
| SpatialTransformer/encoderblock_7/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.85e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_7/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -6.61e-06 | 0.0198 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.04e-05 | 0.0313 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -4.6e-05 | 0.0312 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -3.17e-05 | 0.0313 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_7/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.33e-05 | 0.0313 |
| SpatialTransformer/encoderblock_8/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_8/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_8/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.66e-08 | 9.97e-07 |
| SpatialTransformer/encoderblock_8/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -6.57e-06 | 0.0198 |
| SpatialTransformer/encoderblock_8/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.72e-08 | 9.71e-07 |
| SpatialTransformer/encoderblock_8/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -3.2e-06 | 0.0198 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.75e-05 | 0.0313 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.36e-06 | 0.0313 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -7.52e-05 | 0.0312 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_8/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.33e-05 | 0.0313 |
| SpatialTransformer/encoderblock_9/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_9/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_9/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -3.11e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_9/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -7.71e-06 | 0.0198 |
| SpatialTransformer/encoderblock_9/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.28e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_9/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.79e-05 | 0.0198 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.36e-05 | 0.0312 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.6e-05 | 0.0313 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.29e-05 | 0.0312 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_9/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.43e-06 | 0.0312 |
| SpatialTransformer/posembed_input/pos_embedding | (1, 197, 1024) | 201,728 | 3.66e-05 | 0.02 |
| TemporalTransformer/encoder_norm/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoder_norm/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_0/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_0/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_0/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.94e-08 | 1.04e-06 |
| TemporalTransformer/encoderblock_0/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.91e-06 | 0.0198 |
| TemporalTransformer/encoderblock_0/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -4.25e-08 | 1.01e-06 |
| TemporalTransformer/encoderblock_0/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -8.06e-06 | 0.0198 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 3.36e-05 | 0.0313 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 6.52e-06 | 0.0313 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 2.04e-05 | 0.0312 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_0/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -5.26e-06 | 0.0313 |
| TemporalTransformer/encoderblock_1/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_1/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_1/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -7.33e-09 | 1.01e-06 |
| TemporalTransformer/encoderblock_1/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 3.12e-06 | 0.0198 |
| TemporalTransformer/encoderblock_1/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.25e-08 | 1e-06 |
| TemporalTransformer/encoderblock_1/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 6.45e-06 | 0.0198 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.46e-05 | 0.0312 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.67e-05 | 0.0313 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.57e-05 | 0.0312 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_1/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -1.53e-05 | 0.0313 |
| TemporalTransformer/encoderblock_10/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_10/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_10/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.34e-08 | 1.01e-06 |
| TemporalTransformer/encoderblock_10/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.48e-05 | 0.0198 |
| TemporalTransformer/encoderblock_10/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -4.13e-10 | 9.86e-07 |
| TemporalTransformer/encoderblock_10/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.53e-05 | 0.0198 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.26e-05 | 0.0312 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.99e-06 | 0.0312 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 1.6e-05 | 0.0312 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_10/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -3.33e-05 | 0.0313 |
| TemporalTransformer/encoderblock_11/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_11/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_11/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -6.22e-09 | 9.71e-07 |
| TemporalTransformer/encoderblock_11/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -7.41e-06 | 0.0198 |
| TemporalTransformer/encoderblock_11/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.51e-08 | 9.93e-07 |
| TemporalTransformer/encoderblock_11/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.76e-06 | 0.0198 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.08e-05 | 0.0312 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -9.87e-06 | 0.0312 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -1.37e-05 | 0.0312 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_11/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 6.88e-05 | 0.0313 |
| TemporalTransformer/encoderblock_12/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_12/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_12/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -5.46e-09 | 9.93e-07 |
| TemporalTransformer/encoderblock_12/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -5.17e-06 | 0.0198 |
| TemporalTransformer/encoderblock_12/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.17e-08 | 1.02e-06 |
| TemporalTransformer/encoderblock_12/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 7.39e-06 | 0.0198 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.77e-06 | 0.0313 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.21e-06 | 0.0313 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -2.34e-05 | 0.0312 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_12/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 8.18e-06 | 0.0313 |
| TemporalTransformer/encoderblock_13/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_13/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_13/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.51e-08 | 1.02e-06 |
| TemporalTransformer/encoderblock_13/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -7.84e-06 | 0.0198 |
| TemporalTransformer/encoderblock_13/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -5.03e-08 | 1.02e-06 |
| TemporalTransformer/encoderblock_13/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.9e-06 | 0.0198 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -7.1e-06 | 0.0312 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.81e-05 | 0.0313 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 1.74e-05 | 0.0313 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_13/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -9.11e-06 | 0.0313 |
| TemporalTransformer/encoderblock_14/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_14/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_14/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 3.9e-08 | 9.86e-07 |
| TemporalTransformer/encoderblock_14/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.78e-05 | 0.0198 |
| TemporalTransformer/encoderblock_14/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 2.79e-08 | 1.05e-06 |
| TemporalTransformer/encoderblock_14/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -5.18e-06 | 0.0198 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.56e-05 | 0.0312 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 6.94e-05 | 0.0312 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 7.14e-06 | 0.0312 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_14/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -1.56e-05 | 0.0313 |
| TemporalTransformer/encoderblock_15/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_15/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_15/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 6.92e-09 | 1.01e-06 |
| TemporalTransformer/encoderblock_15/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -2.17e-06 | 0.0198 |
| TemporalTransformer/encoderblock_15/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.28e-08 | 1e-06 |
| TemporalTransformer/encoderblock_15/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -1.67e-05 | 0.0198 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.48e-05 | 0.0313 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.53e-05 | 0.0312 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.25e-05 | 0.0312 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_15/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -1.68e-05 | 0.0312 |
| TemporalTransformer/encoderblock_16/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_16/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_16/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.46e-09 | 9.85e-07 |
| TemporalTransformer/encoderblock_16/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.86e-07 | 0.0198 |
| TemporalTransformer/encoderblock_16/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -5.2e-08 | 1.04e-06 |
| TemporalTransformer/encoderblock_16/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.29e-05 | 0.0198 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 3.56e-05 | 0.0313 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.2e-06 | 0.0313 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -2.09e-05 | 0.0312 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_16/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 4.96e-05 | 0.0312 |
| TemporalTransformer/encoderblock_17/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_17/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_17/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -5.94e-09 | 9.97e-07 |
| TemporalTransformer/encoderblock_17/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 6.44e-06 | 0.0198 |
| TemporalTransformer/encoderblock_17/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 2.81e-08 | 9.97e-07 |
| TemporalTransformer/encoderblock_17/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -5.5e-06 | 0.0198 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.27e-05 | 0.0313 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.62e-06 | 0.0312 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -3.45e-05 | 0.0313 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_17/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.54e-05 | 0.0312 |
| TemporalTransformer/encoderblock_18/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_18/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_18/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -1.13e-08 | 1e-06 |
| TemporalTransformer/encoderblock_18/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.21e-05 | 0.0198 |
| TemporalTransformer/encoderblock_18/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -2.84e-08 | 9.99e-07 |
| TemporalTransformer/encoderblock_18/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.11e-06 | 0.0198 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.08e-05 | 0.0312 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.84e-05 | 0.0313 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -6.71e-05 | 0.0312 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_18/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -4.25e-06 | 0.0312 |
| TemporalTransformer/encoderblock_19/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_19/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_19/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 7.61e-09 | 1.01e-06 |
| TemporalTransformer/encoderblock_19/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.25e-05 | 0.0198 |
| TemporalTransformer/encoderblock_19/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 6e-08 | 1e-06 |
| TemporalTransformer/encoderblock_19/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 3.62e-06 | 0.0198 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.17e-06 | 0.0313 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.66e-05 | 0.0313 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 1.26e-05 | 0.0313 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_19/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 1.32e-06 | 0.0313 |
| TemporalTransformer/encoderblock_2/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_2/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_2/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 5.39e-09 | 9.96e-07 |
| TemporalTransformer/encoderblock_2/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 3.44e-06 | 0.0198 |
| TemporalTransformer/encoderblock_2/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.77e-09 | 9.79e-07 |
| TemporalTransformer/encoderblock_2/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 2.01e-05 | 0.0198 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -3.33e-05 | 0.0313 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -3.32e-05 | 0.0312 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -3.32e-05 | 0.0312 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_2/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -3.38e-05 | 0.0313 |
| TemporalTransformer/encoderblock_20/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_20/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_20/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.17e-09 | 1.01e-06 |
| TemporalTransformer/encoderblock_20/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.5e-06 | 0.0198 |
| TemporalTransformer/encoderblock_20/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 4.63e-08 | 9.78e-07 |
| TemporalTransformer/encoderblock_20/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 8.51e-06 | 0.0198 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -8.95e-06 | 0.0312 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.07e-05 | 0.0312 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -2.07e-05 | 0.0312 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_20/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 5.07e-05 | 0.0313 |
| TemporalTransformer/encoderblock_21/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_21/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_21/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.45e-09 | 9.88e-07 |
| TemporalTransformer/encoderblock_21/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -7.46e-07 | 0.0198 |
| TemporalTransformer/encoderblock_21/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.64e-08 | 1.01e-06 |
| TemporalTransformer/encoderblock_21/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.79e-06 | 0.0198 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.4e-05 | 0.0312 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.6e-05 | 0.0312 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -6.45e-06 | 0.0313 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_21/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.01e-05 | 0.0313 |
| TemporalTransformer/encoderblock_22/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_22/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_22/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.98e-08 | 9.91e-07 |
| TemporalTransformer/encoderblock_22/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 4.29e-06 | 0.0198 |
| TemporalTransformer/encoderblock_22/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -9.6e-09 | 9.92e-07 |
| TemporalTransformer/encoderblock_22/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 9.66e-06 | 0.0198 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.58e-06 | 0.0313 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -3.32e-05 | 0.0313 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 5.26e-05 | 0.0312 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_22/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 4.11e-05 | 0.0312 |
| TemporalTransformer/encoderblock_23/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_23/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_23/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -5.97e-09 | 9.91e-07 |
| TemporalTransformer/encoderblock_23/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.22e-05 | 0.0198 |
| TemporalTransformer/encoderblock_23/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 4.51e-08 | 9.91e-07 |
| TemporalTransformer/encoderblock_23/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -7.94e-07 | 0.0198 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.1e-05 | 0.0313 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -1.39e-05 | 0.0313 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.7e-05 | 0.0313 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_23/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -1.84e-05 | 0.0312 |
| TemporalTransformer/encoderblock_3/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_3/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_3/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.1e-08 | 9.77e-07 |
| TemporalTransformer/encoderblock_3/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 2.02e-06 | 0.0198 |
| TemporalTransformer/encoderblock_3/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -3.75e-08 | 9.98e-07 |
| TemporalTransformer/encoderblock_3/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 2.32e-06 | 0.0198 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 5.95e-05 | 0.0313 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 7.09e-05 | 0.0313 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 2.24e-06 | 0.0312 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_3/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 1.47e-05 | 0.0313 |
| TemporalTransformer/encoderblock_4/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_4/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_4/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -5.93e-09 | 9.94e-07 |
| TemporalTransformer/encoderblock_4/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -9.09e-06 | 0.0198 |
| TemporalTransformer/encoderblock_4/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.95e-08 | 1.04e-06 |
| TemporalTransformer/encoderblock_4/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -1.53e-05 | 0.0198 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 6.41e-05 | 0.0313 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.32e-05 | 0.0313 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 4.69e-06 | 0.0312 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_4/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.26e-05 | 0.0312 |
| TemporalTransformer/encoderblock_5/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_5/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_5/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 8.54e-09 | 1e-06 |
| TemporalTransformer/encoderblock_5/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 7.96e-06 | 0.0198 |
| TemporalTransformer/encoderblock_5/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.81e-08 | 1.02e-06 |
| TemporalTransformer/encoderblock_5/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -7.36e-06 | 0.0198 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.01e-06 | 0.0313 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.83e-05 | 0.0313 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -3.1e-06 | 0.0313 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_5/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.83e-05 | 0.0313 |
| TemporalTransformer/encoderblock_6/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_6/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_6/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -6.25e-09 | 9.95e-07 |
| TemporalTransformer/encoderblock_6/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.23e-05 | 0.0198 |
| TemporalTransformer/encoderblock_6/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.29e-08 | 1.04e-06 |
| TemporalTransformer/encoderblock_6/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -2.87e-06 | 0.0198 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 3.29e-05 | 0.0312 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -8.77e-06 | 0.0313 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -1.35e-05 | 0.0313 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_6/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 0.000102 | 0.0312 |
| TemporalTransformer/encoderblock_7/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_7/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_7/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2e-08 | 1.02e-06 |
| TemporalTransformer/encoderblock_7/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -9.33e-06 | 0.0198 |
| TemporalTransformer/encoderblock_7/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.35e-08 | 1.01e-06 |
| TemporalTransformer/encoderblock_7/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -6.26e-06 | 0.0198 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.94e-05 | 0.0313 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.91e-05 | 0.0313 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.36e-05 | 0.0312 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_7/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.41e-05 | 0.0313 |
| TemporalTransformer/encoderblock_8/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_8/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_8/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -7.67e-09 | 9.92e-07 |
| TemporalTransformer/encoderblock_8/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.5e-05 | 0.0198 |
| TemporalTransformer/encoderblock_8/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -2.75e-08 | 9.85e-07 |
| TemporalTransformer/encoderblock_8/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -2.23e-06 | 0.0198 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.11e-05 | 0.0313 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.09e-05 | 0.0312 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -7.13e-06 | 0.0313 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_8/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 1.62e-05 | 0.0313 |
| TemporalTransformer/encoderblock_9/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_9/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| TemporalTransformer/encoderblock_9/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -1.28e-08 | 1e-06 |
| TemporalTransformer/encoderblock_9/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 9.11e-06 | 0.0198 |
| TemporalTransformer/encoderblock_9/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 8.95e-09 | 1.01e-06 |
| TemporalTransformer/encoderblock_9/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.39e-05 | 0.0198 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.4e-05 | 0.0312 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.98e-05 | 0.0313 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.23e-05 | 0.0313 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| TemporalTransformer/encoderblock_9/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -3e-05 | 0.0312 |
| TemporalTransformer/posembed_input/pos_embedding | (1, 17, 1024) | 17,408 | -0.000141 | 0.02 |
| cls_SpatialTransformer | (1, 1, 1024) | 1,024 | 0.0 | 0.0 |
| cls_TemporalTransformer | (1, 1, 1024) | 1,024 | 0.0 | 0.0 |
| embedding/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| embedding/kernel | (2, 16, 16, 3, 1024) | 1,572,864 | -1.88e-06 | 0.00638 |
| output_projection/bias | (2000,) | 2,000 | 0.0 | 0.0 |
| output_projection/kernel | (1024, 2000) | 2,048,000 | 0.0 | 0.0 |
+---------------------------------------------------------------------------------+----------------------+-----------+-----------+----------+
Total: 608,467,920
I0206 02:33:57.041780 140652483282752 debug_utils.py:73] Total params: 608467920
I0206 02:33:57.157953 140652483282752 model.py:115] Using central frame initializer for input embedding
I0206 02:35:10.476510 140652483282752 debug_utils.py:129] GFLOPs 994.912 for input spec: [((-1, 32, 224, 224, 3), <class 'jax.numpy.float32'>)]
/myapp/tfenv/lib/python3.10/site-packages/flax/optim/base.py:49: DeprecationWarning: Use optax instead of flax.optim. Refer to the update guide https://flax.readthedocs.io/en/latest/howtos/optax_update_guide.html for detailed instructions.
warnings.warn(
I0206 02:35:20.597260 140652483282752 checkpoints.py:429] Restoring checkpoint from asl_vivit_large_factorised_encoder/checkpoint_vivit_classification
I0206 02:35:32.671247 140652483282752 trainer.py:204] Starting training loop at step 330751.
(tfenv) root@e862ed6d06ec:/myapp/scenic#
`r"""ViViT Factorised Encoder model."""

import ml_collections

ASL_2000_TRAIN_SIZE = 14298
ASL_2000_VAL_SIZE = 3916
ASL_2000_TEST_SIZE = 2878


def get_config():
  """Returns the base experiment configuration."""
  config = ml_collections.ConfigDict()
  config.experiment_name = 'vivit_large_factorised_encoder'

  # Dataset.
  config.dataset_name = 'video_tfrecord_dataset'
  config.dataset_configs = ml_collections.ConfigDict()
  config.data_dtype_str = 'float32'
  config.dataset_configs.base_dir = '/myapp/asl_shards'
  config.dataset_configs.tables = {
      'train': 'train.tfrecord@1024',
      'validation': 'validation.tfrecord@1024',
      'test': 'test.tfrecord@1024'
  }
  config.dataset_configs.examples_per_subset = {
      'train': ASL_2000_TRAIN_SIZE,
      'validation': ASL_2000_VAL_SIZE,
      'test': ASL_2000_TEST_SIZE
  }
  config.dataset_configs.num_classes = 2000

  config.dataset_configs.num_frames = 32
  config.dataset_configs.stride = 2
  config.dataset_configs.min_resize = 256
  config.dataset_configs.crop_size = 224
  config.dataset_configs.one_hot_labels = True
  config.dataset_configs.zero_centering = True

  # Multicrop evaluation.
  config.dataset_configs.do_multicrop_test = True  # Do during training.
  config.dataset_configs.log_test_epochs = 5
  config.dataset_configs.num_test_clips = 4
  config.dataset_configs.test_batch_size = 2  # Must equal num_local_devices.
  config.dataset_configs.do_three_spatial_crops = True
  config.multicrop_clips_per_device = 2

  # Augmentation.
  config.dataset_configs.augmentation_params = ml_collections.ConfigDict()
  config.dataset_configs.augmentation_params.do_jitter_scale = True
  config.dataset_configs.augmentation_params.scale_min_factor = 0.9
  config.dataset_configs.augmentation_params.scale_max_factor = 1.33
  config.dataset_configs.augmentation_params.prob_scale_jitter = 1.0
  config.dataset_configs.augmentation_params.do_color_augment = True
  config.dataset_configs.augmentation_params.prob_color_augment = 0.8
  config.dataset_configs.augmentation_params.prob_color_drop = 0.1
  config.dataset_configs.prefetch_to_device = 2

  # Model.
  config.model_name = 'vivit_classification'
  config.model = ml_collections.ConfigDict()
  config.model.hidden_size = 1024
  config.model.attention_config = ml_collections.ConfigDict()
  config.model.attention_config.type = 'factorized_encoder'
  config.model.spatial_transformer = ml_collections.ConfigDict()
  config.model.spatial_transformer.num_heads = 16
  config.model.spatial_transformer.mlp_dim = 4096
  config.model.spatial_transformer.num_layers = 24
  config.model.temporal_transformer = ml_collections.ConfigDict()
  config.model.temporal_transformer.num_heads = 16
  config.model.temporal_transformer.mlp_dim = 4096
  config.model.temporal_transformer.num_layers = 24
  config.model.representation_size = None
  config.model.classifier = 'token'
  config.model.attention_dropout_rate = 0.
  config.model.dropout_rate = 0.
  config.model_dtype_str = 'float32'
  config.model.temporal_encoding_config = ml_collections.ConfigDict()
  config.model.temporal_encoding_config.method = '3d_conv'
  config.model.patches = ml_collections.ConfigDict()
  config.model.patches.size = [16, 16, 2]
  config.model.temporal_encoding_config.kernel_init_method = 'central_frame_initializer'

  # Training.
  config.trainer_name = 'vivit_trainer'
  config.optimizer = 'momentum'
  config.optimizer_configs = ml_collections.ConfigDict()
  config.l2_decay_factor = 0
  config.max_grad_norm = 1
  config.label_smoothing = None
  config.num_training_epochs = 30
  config.batch_size = 64
  config.rng_seed = 0

  # Initialisation from a pretrained checkpoint.
  config.init_from = ml_collections.ConfigDict()
  config.init_from.checkpoint_path = 'path_to_checkpoint_of_ViViT-L/16x2 FE'
  config.init_from.checkpoint_format = 'scenic'
  config.init_from.model_config = ml_collections.ConfigDict()
  config.init_from.model_config.model = ml_collections.ConfigDict()
  config.init_from.model_config.model.classifier = 'token'  # Specify if this is 'token' or 'gap'.  pylint: disable=line-too-long
  config.init_from.restore_positional_embedding = True
  config.init_from.restore_input_embedding = True
  config.init_from.positional_embed_size_change = 'tile'

  # Learning-rate schedule.
  steps_per_epoch = ASL_2000_TRAIN_SIZE // config.batch_size
  total_steps = config.num_training_epochs * steps_per_epoch
  config.lr_configs = ml_collections.ConfigDict()
  config.lr_configs.learning_rate_schedule = 'compound'
  config.lr_configs.factors = 'constant * cosine_decay * linear_warmup'
  config.lr_configs.warmup_steps = 2.5 * steps_per_epoch
  config.lr_configs.steps_per_cycle = total_steps
  config.lr_configs.base_learning_rate = 5e-2

  # Logging and checkpointing.
  config.write_summary = True
  config.checkpoint = True  # Do checkpointing.
  config.debug_train = False  # Debug mode during training.
  config.debug_eval = False  # Debug mode during eval.
  config.checkpoint_steps = 500  # Checkpoint more frequently than a val epoch.
  config.log_summary_steps = 100
  return config`
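As a side note, here is a quick sketch (plain Python, reusing only values from the config above; not part of the Scenic code) of the step counts this learning-rate schedule works out to. For comparison, the log above reports the restored checkpoint already at step 330751.

```python
# Schedule arithmetic implied by the config above (values copied from it).
ASL_2000_TRAIN_SIZE = 14298
batch_size = 64
num_training_epochs = 30

steps_per_epoch = ASL_2000_TRAIN_SIZE // batch_size   # 223
total_steps = num_training_epochs * steps_per_epoch   # 6690
warmup_steps = 2.5 * steps_per_epoch                  # 557.5

print(steps_per_epoch, total_steps, warmup_steps)
# The log above shows "Starting training loop at step 330751", i.e. the
# restored step is far beyond total_steps for this schedule.
```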
These libs need to be inspected in case of failure: nvidia-cuda-runtime-cu11==11.7.99 (reason: the system CUDA is 11.5; maybe they can differ).
pip install git+https://github.com/deepmind/dmvr.git
then check TensorFlow usability with CUDA; install TensorRT by:
pip install pycuda
pip install onnx==1.12
pip install tensorrt
link libnvinfer.so.7 to v8 as in https://forums.developer.nvidia.com/t/could-not-load-dynamic-library-libnvinfer-so-7/231606 :
ln -s /myapp/tfenv/lib/python3.10/site-packages/tensorrt/libnvinfer.so.8 /myapp/tfenv/lib/python3.10/site-packages/tensorrt/libnvinfer.so.7
ln -s /myapp/tfenv/lib/python3.10/site-packages/tensorrt/libnvinfer_plugin.so.8 /myapp/tfenv/lib/python3.10/site-packages/tensorrt/libnvinfer_plugin.so.7
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/myapp/tfenv/lib/python3.10/site-packages/tensorrt/
check TF with test_tf.py
pip install git+https://github.com/openai/CLIP.git
pip install "jax[cuda11_cudnn86]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
pip install git+https://github.com/google-research/scenic.git
pip install tf-models-official
pip uninstall flax
pip install flax==0.5.3
pip install -r requirements in the ViViT project`
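The test_tf.py mentioned above is not included in this issue; a minimal sketch of such a sanity check, assuming only stock TensorFlow and JAX APIs, could look like this:

```python
# Hypothetical test_tf.py: confirm TensorFlow and JAX both see the GPUs
# and that the CUDA libraries actually load.
import jax
import jax.numpy as jnp
import tensorflow as tf

print("TF version:", tf.__version__)
build_info = tf.sysconfig.get_build_info()
print("TF built against CUDA:", build_info.get("cuda_version"),
      "cuDNN:", build_info.get("cudnn_version"))
print("TF GPUs:", tf.config.list_physical_devices("GPU"))
print("JAX devices:", jax.devices())

# Run a tiny op on each framework so missing CUDA libs surface immediately.
print(tf.reduce_sum(tf.random.normal((1024, 1024))).numpy())
print(jnp.ones((1024, 1024)).sum())
```

If either framework reports no GPUs or fails while running the tiny ops, the libnvinfer symlink and LD_LIBRARY_PATH steps above are the first things to re-check.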
I'm trying to fine-tune the ViViT project on a custom dataset. I edited the provided Kinetics-600 config file, prepared the dataset according to the provided instructions using DMVR, and installed all the needed libraries. I made a working folder and put the checkpoint in it. Unfortunately, training doesn't begin, for an unclear reason. This is the log:
`(tfenv) root@e862ed6d06ec:/myapp/scenic# python -m scenic.projects.vivit.main --config=scenic/projects/vivit/configs/kinetics600/vivit_large_factorised_encoder.py --workdir=asl_vivit_large_factorised_encoder/
I0206 02:31:41.029487 140652483282752 xla_bridge.py:355] Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker:
I0206 02:31:41.249347 140652483282752 xla_bridge.py:355] Unable to initialize backend 'rocm': NOT_FOUND: Could not find registered platform with name: "rocm". Available platform names are: Host CUDA Interpreter
I0206 02:31:41.250045 140652483282752 xla_bridge.py:355] Unable to initialize backend 'tpu': module 'jaxlib.xla_extension' has no attribute 'get_tpu_client'
I0206 02:31:41.250326 140652483282752 xla_bridge.py:355] Unable to initialize backend 'plugin': xla_extension has no attributes named get_plugin_device_client. Compile TensorFlow with //tensorflow/compiler/xla/python:enable_plugin_device set to true (defaults to false) to enable this.
I0206 02:31:41.250571 140652483282752 app.py:84] JAX host: 0 / 1
I0206 02:31:41.250730 140652483282752 app.py:85] JAX devices: [StreamExecutorGpuDevice(id=0, process_index=0, slice_index=0), StreamExecutorGpuDevice(id=1, process_index=0, slice_index=0)]
I0206 02:31:41.251355 140652483282752 local.py:45] Setting task status: host_id: 0, host_count: 1
I0206 02:31:41.251779 140652483282752 local.py:50] Created artifact Workdir of type ArtifactType.DIRECTORY and value asl_vivit_large_factorised_encoder/.
I0206 02:31:41.897853 140652483282752 app.py:96] RNG: [0 0]
I0206 02:31:42.903113 140652483282752 train_utils.py:275] device_count: 2
I0206 02:31:42.903555 140652483282752 train_utils.py:276] num_hosts : 1
I0206 02:31:42.903628 140652483282752 train_utils.py:277] host_id : 0
I0206 02:31:44.272073 140652483282752 datasets.py:108] On-demand import of dataset (video_tfrecord_dataset) from module (scenic.projects.vivit.data.video_tfrecord_dataset).
I0206 02:31:44.272513 140652483282752 train_utils.py:295] local_batch_size : 64
I0206 02:31:44.272591 140652483282752 train_utils.py:296] device_batch_size : 32
I0206 02:31:44.273579 140652483282752 video_tfrecord_dataset.py:416] Loading split train
I0206 02:31:44.274933 140652483282752 video_ops.py:560] Adding color_augment
I0206 02:31:44.275032 140652483282752 video_tfrecord_dataset.py:313] Preprocessing graph: [FunctionDescription(fn_name='image_resize_smallest', fn=<function add_image.. at 0x7fe9d40568c0>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_jitter_scale', fn=functools.partial(<function scale_jitter_augm at 0x7fe9d4449240>, min_scale_factor=0.9, max_scale_factor=1.33, prob=1.0), feature_name='image', stateful=False), FunctionDescription(fn_name='image_random_crop', fn=<function add_image.. at 0x7fe9d4056950>, feature_name='image', stateful=True), FunctionDescription(fn_name='image_random_flip', fn=<function add_image.. at 0x7fe9d40569e0>, feature_name='image', stateful=True), FunctionDescription(fn_name='image_normalize', fn=<function add_image..
at 0x7fe9d4056a70>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_subtract_given_mean', fn=<function add_image.. at 0x7fe9d4056b00>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_divide_by_given_std', fn=<function add_image.. at 0x7fe9d4056b90>, feature_name='image', stateful=False), FunctionDescription(fn_name='label_one_hot', fn=<function add_label.. at 0x7fe9d4056c20>, feature_name='label', stateful=False), FunctionDescription(fn_name='fn_0', fn=functools.partial(<function color_default_augm at 0x7fe9d44492d0>, zero_centering_image=True, prob_color_augment=0.8, prob_color_drop=0.1), feature_name='image', stateful=False)]
I0206 02:31:44.275162 140652483282752 video_tfrecord_dataset.py:315] Postprocessing graph: []
WARNING:tensorflow:From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
W0206 02:31:44.329456 140652483282752 deprecation.py:350] From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.
Instructions for updating:
Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089
WARNING:tensorflow:From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py:458: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with back_prop=False is deprecated and will be removed in a future version.
Instructions for updating:
back_prop=False is deprecated. Consider using tf.stop_gradient instead.
Instead of:
results = tf.map_fn(fn, elems, back_prop=False)
Use:
results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems))
W0206 02:31:45.435705 140652483282752 deprecation.py:623] From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/autograph/impl/api.py:458: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with back_prop=False is deprecated and will be removed in a future version.
Instructions for updating:
back_prop=False is deprecated. Consider using tf.stop_gradient instead.
Instead of:
results = tf.map_fn(fn, elems, back_prop=False)
Use:
results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems))
WARNING:tensorflow:From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
W0206 02:31:45.435978 140652483282752 deprecation.py:554] From /myapp/tfenv/lib/python3.10/site-packages/tensorflow/python/util/deprecation.py:629: calling map_fn_v2 (from tensorflow.python.ops.map_fn) with dtype is deprecated and will be removed in a future version.
Instructions for updating:
Use fn_output_signature instead
I0206 02:31:46.841617 140652483282752 video_dataset.py:487] Dataset created successfully
I0206 02:31:47.118823 140652483282752 video_tfrecord_dataset.py:416] Loading split validation
I0206 02:31:47.120168 140652483282752 video_tfrecord_dataset.py:313] Preprocessing graph: [FunctionDescription(fn_name='image_resize_smallest', fn=<function add_image.. at 0x7fe9d4057b50>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_central_crop', fn=<function add_image.. at 0x7fe9d4057c70>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_normalize', fn=<function add_image.. at 0x7fe9844b0040>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_subtract_given_mean', fn=<function add_image.. at 0x7fe9844b16c0>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_divide_by_given_std', fn=<function add_image.. at 0x7fe9844b27a0>, feature_name='image', stateful=False), FunctionDescription(fn_name='label_one_hot', fn=<function add_label.. at 0x7fe9844b3250>, feature_name='label', stateful=False)]
I0206 02:31:47.120327 140652483282752 video_tfrecord_dataset.py:315] Postprocessing graph: []
I0206 02:31:47.476291 140652483282752 video_dataset.py:487] Dataset created successfully
I0206 02:31:47.704233 140652483282752 video_tfrecord_dataset.py:416] Loading split test
I0206 02:31:47.705418 140652483282752 video_tfrecord_dataset.py:313] Preprocessing graph: [FunctionDescription(fn_name='image_resize_smallest', fn=<function add_image.. at 0x7fe98427a680>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_central_crop', fn=functools.partial(<function three_spatial_crops at 0x7fe9d4055d80>, crop_size=224), feature_name='image', stateful=False), FunctionDescription(fn_name='image_normalize', fn=<function add_image.. at 0x7fe984278dc0>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_subtract_given_mean', fn=<function add_image.. at 0x7fe98427bbe0>, feature_name='image', stateful=False), FunctionDescription(fn_name='image_divide_by_given_std', fn=<function add_image.. at 0x7fe98427b880>, feature_name='image', stateful=False), FunctionDescription(fn_name='label_one_hot', fn=<function add_label.. at 0x7fe98427b400>, feature_name='label', stateful=False)]
I0206 02:31:47.705570 140652483282752 video_tfrecord_dataset.py:315] Postprocessing graph: [FunctionDescription(fn_name='image_reshape', fn=<function add_image.. at 0x7fe98427a950>, feature_name='image', stateful=False)]
I0206 02:31:48.637670 140652483282752 video_dataset.py:487] Dataset created successfully
I0206 02:31:48.880880 140652483282752 video_tfrecord_dataset.py:485] Dataset metadata:
{'num_classes': 2000, 'input_shape': (-1, 32, 224, 224, 3), 'num_train_examples': 14298, 'num_eval_examples': 3916, 'num_test_examples': 34536, 'input_dtype': <class 'jax.numpy.float32'>, 'target_is_onehot': True}
/myapp/tfenv/lib/python3.10/site-packages/flax/core/lift.py:112: FutureWarning: jax.tree_flatten is deprecated, and will be removed in a future release. Use jax.tree_util.tree_flatten instead.
scopes, treedef = jax.tree_flatten(scope_tree)
I0206 02:31:49.771279 140652483282752 model.py:115] Using central frame initializer for input embedding
/myapp/tfenv/lib/python3.10/site-packages/flax/linen/transforms.py:249: FutureWarning: jax.tree_leaves is deprecated, and will be removed in a future release. Use jax.tree_util.tree_leaves instead.
jax.tree_leaves(tree)))
I0206 02:33:57.037426 140652483282752 parameter_overview.py:264]
+---------------------------------------------------------------------------------+----------------------+-----------+-----------+----------+
| Name | Shape | Size | Mean | Std |
+---------------------------------------------------------------------------------+----------------------+-----------+-----------+----------+
| SpatialTransformer/encoder_norm/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoder_norm/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_0/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_0/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_0/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -9.5e-09 | 1.01e-06 |
| SpatialTransformer/encoderblock_0/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 2.03e-06 | 0.0198 |
| SpatialTransformer/encoderblock_0/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -7.65e-09 | 9.93e-07 |
| SpatialTransformer/encoderblock_0/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.92e-05 | 0.0198 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 6.2e-06 | 0.0313 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.46e-05 | 0.0313 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 4.32e-06 | 0.0313 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_0/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -1.16e-05 | 0.0312 |
| SpatialTransformer/encoderblock_1/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_1/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_1/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -2.47e-10 | 1e-06 |
| SpatialTransformer/encoderblock_1/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 2.59e-06 | 0.0198 |
| SpatialTransformer/encoderblock_1/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.04e-08 | 1e-06 |
| SpatialTransformer/encoderblock_1/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.85e-06 | 0.0198 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 3.08e-05 | 0.0313 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.29e-06 | 0.0312 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 1.46e-05 | 0.0313 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_1/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.5e-05 | 0.0313 |
| SpatialTransformer/encoderblock_10/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_10/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_10/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -2.59e-09 | 1.01e-06 |
| SpatialTransformer/encoderblock_10/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.06e-07 | 0.0198 |
| SpatialTransformer/encoderblock_10/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -3.16e-09 | 9.91e-07 |
| SpatialTransformer/encoderblock_10/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1e-05 | 0.0198 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -4.55e-06 | 0.0313 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -3.99e-05 | 0.0312 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.74e-06 | 0.0312 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_10/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.11e-05 | 0.0313 |
| SpatialTransformer/encoderblock_11/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_11/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_11/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.63e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_11/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.43e-07 | 0.0198 |
| SpatialTransformer/encoderblock_11/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 6.67e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_11/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.6e-06 | 0.0198 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.01e-05 | 0.0312 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 3.36e-06 | 0.0313 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 4.15e-06 | 0.0313 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_11/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.2e-05 | 0.0312 |
| SpatialTransformer/encoderblock_12/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_12/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_12/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_12/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_12/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -6.68e-09 | 9.89e-07 |
| SpatialTransformer/encoderblock_12/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.51e-06 | 0.0198 |
| SpatialTransformer/encoderblock_12/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.46e-08 | 9.87e-07 |
| SpatialTransformer/encoderblock_12/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 8.32e-06 | 0.0198 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.16e-06 | 0.0313 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
I0206 02:33:57.037870 140652483282752 parameter_overview.py:264]
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -4.34e-05 | 0.0312 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 8.6e-05 | 0.0312 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_12/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 4.15e-05 | 0.0313 |
| SpatialTransformer/encoderblock_13/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_13/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_13/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.4e-08 | 1.02e-06 |
| SpatialTransformer/encoderblock_13/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.3e-05 | 0.0198 |
| SpatialTransformer/encoderblock_13/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.08e-08 | 9.92e-07 |
| SpatialTransformer/encoderblock_13/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -1.15e-06 | 0.0198 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.52e-05 | 0.0312 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.64e-05 | 0.0313 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.29e-05 | 0.0313 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_13/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 9.05e-06 | 0.0312 |
| SpatialTransformer/encoderblock_14/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_14/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_14/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -1.17e-08 | 9.8e-07 |
| SpatialTransformer/encoderblock_14/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.53e-05 | 0.0198 |
| SpatialTransformer/encoderblock_14/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.43e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_14/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 3.93e-07 | 0.0198 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -3.32e-05 | 0.0312 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.41e-05 | 0.0313 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 8.35e-06 | 0.0312 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_14/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -4.13e-05 | 0.0313 |
| SpatialTransformer/encoderblock_15/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_15/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_15/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.24e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_15/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.05e-06 | 0.0198 |
| SpatialTransformer/encoderblock_15/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -5.74e-09 | 1e-06 |
| SpatialTransformer/encoderblock_15/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.38e-06 | 0.0198 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 5.14e-05 | 0.0312 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -1.3e-05 | 0.0313 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -1.92e-05 | 0.0313 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_15/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 3.26e-05 | 0.0312 |
| SpatialTransformer/encoderblock_16/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_16/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_16/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -7.32e-09 | 9.91e-07 |
| SpatialTransformer/encoderblock_16/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -8.94e-06 | 0.0198 |
| SpatialTransformer/encoderblock_16/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -5.17e-08 | 1.03e-06 |
| SpatialTransformer/encoderblock_16/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -6.51e-06 | 0.0198 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -9.56e-06 | 0.0312 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.55e-05 | 0.0312 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.48e-06 | 0.0312 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_16/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 3.04e-05 | 0.0313 |
| SpatialTransformer/encoderblock_17/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_17/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_17/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_17/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_17/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -1.41e-08 | 1.02e-06 |
| SpatialTransformer/encoderblock_17/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.23e-05 | 0.0198 |
| SpatialTransformer/encoderblock_17/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 3.57e-08 | 9.86e-07 |
| SpatialTransformer/encoderblock_17/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 4.52e-06 | 0.0198 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.32e-05 | 0.0313 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
I0206 02:33:57.038076 140652483282752 parameter_overview.py:264]
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.73e-05 | 0.0313 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -7.43e-06 | 0.0312 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_17/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.87e-05 | 0.0312 |
| SpatialTransformer/encoderblock_18/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_18/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_18/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -1.98e-09 | 1e-06 |
| SpatialTransformer/encoderblock_18/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.32e-05 | 0.0198 |
| SpatialTransformer/encoderblock_18/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -1.52e-09 | 1.05e-06 |
| SpatialTransformer/encoderblock_18/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -9.56e-06 | 0.0198 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.64e-06 | 0.0312 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -3.55e-05 | 0.0313 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.05e-06 | 0.0312 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_18/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 2.59e-05 | 0.0312 |
| SpatialTransformer/encoderblock_19/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_19/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_19/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 8.31e-09 | 9.85e-07 |
| SpatialTransformer/encoderblock_19/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.07e-05 | 0.0198 |
| SpatialTransformer/encoderblock_19/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -6.75e-09 | 9.93e-07 |
| SpatialTransformer/encoderblock_19/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -3.38e-06 | 0.0198 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 3.16e-05 | 0.0313 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.94e-05 | 0.0312 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -4.17e-05 | 0.0312 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_19/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 3.03e-05 | 0.0313 |
| SpatialTransformer/encoderblock_2/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_2/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_2/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 2.71e-08 | 1.01e-06 |
| SpatialTransformer/encoderblock_2/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -6.13e-07 | 0.0198 |
| SpatialTransformer/encoderblock_2/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -3.18e-10 | 1.02e-06 |
| SpatialTransformer/encoderblock_2/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 7.18e-06 | 0.0198 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -3.75e-05 | 0.0313 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 4.68e-06 | 0.0313 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 2.38e-05 | 0.0312 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_2/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.01e-05 | 0.0313 |
| SpatialTransformer/encoderblock_20/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_20/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_20/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.75e-08 | 1e-06 |
| SpatialTransformer/encoderblock_20/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.22e-05 | 0.0198 |
| SpatialTransformer/encoderblock_20/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -2.08e-08 | 1e-06 |
| SpatialTransformer/encoderblock_20/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -2.91e-06 | 0.0198 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -1.31e-05 | 0.0312 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -2.72e-05 | 0.0313 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.39e-05 | 0.0312 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_20/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -6.06e-05 | 0.0313 |
| SpatialTransformer/encoderblock_21/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_21/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_21/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_21/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_21/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -3.73e-08 | 9.98e-07 |
| SpatialTransformer/encoderblock_21/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -3.71e-06 | 0.0198 |
| SpatialTransformer/encoderblock_21/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 2.04e-08 | 9.91e-07 |
| SpatialTransformer/encoderblock_21/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -4.6e-06 | 0.0198 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 1.99e-05 | 0.0313 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
I0206 02:33:57.038257 140652483282752 parameter_overview.py:264]
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -9.41e-07 | 0.0313 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 5.98e-05 | 0.0312 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_21/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -2.96e-05 | 0.0312 |
| SpatialTransformer/encoderblock_22/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_22/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_22/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 5.28e-09 | 1e-06 |
| SpatialTransformer/encoderblock_22/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 1.75e-05 | 0.0198 |
| SpatialTransformer/encoderblock_22/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -2.15e-08 | 1.02e-06 |
| SpatialTransformer/encoderblock_22/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1.23e-05 | 0.0198 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -4.67e-06 | 0.0312 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.73e-05 | 0.0312 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | 3.72e-05 | 0.0312 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_22/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | 3.85e-05 | 0.0313 |
| SpatialTransformer/encoderblock_23/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_23/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_23/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | -6.35e-09 | 1e-06 |
| SpatialTransformer/encoderblock_23/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -1.28e-06 | 0.0198 |
| SpatialTransformer/encoderblock_23/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 2.84e-08 | 9.66e-07 |
| SpatialTransformer/encoderblock_23/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 8.74e-06 | 0.0198 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | 2.94e-05 | 0.0313 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 1.05e-05 | 0.0313 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -3.21e-05 | 0.0312 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_23/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -7.37e-06 | 0.0312 |
| SpatialTransformer/encoderblock_3/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_3/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_3/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.43e-08 | 9.88e-07 |
| SpatialTransformer/encoderblock_3/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | -7.27e-06 | 0.0198 |
| SpatialTransformer/encoderblock_3/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | -2.03e-08 | 9.65e-07 |
| SpatialTransformer/encoderblock_3/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | 1e-05 | 0.0198 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -2.56e-05 | 0.0313 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | 2.62e-05 | 0.0313 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/query/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/query/kernel | (1024, 16, 64) | 1,048,576 | -2.53e-05 | 0.0312 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/value/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_3/MultiHeadDotProductAttention_0/value/kernel | (1024, 16, 64) | 1,048,576 | -3.13e-05 | 0.0313 |
| SpatialTransformer/encoderblock_4/LayerNorm_0/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/LayerNorm_0/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_4/LayerNorm_1/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/LayerNorm_1/scale | (1024,) | 1,024 | 1.0 | 0.0 |
| SpatialTransformer/encoderblock_4/MlpBlock_0/Dense_0/bias | (4096,) | 4,096 | 1.44e-08 | 9.92e-07 |
| SpatialTransformer/encoderblock_4/MlpBlock_0/Dense_0/kernel | (1024, 4096) | 4,194,304 | 5.82e-06 | 0.0198 |
| SpatialTransformer/encoderblock_4/MlpBlock_0/Dense_1/bias | (1024,) | 1,024 | 1.97e-08 | 9.86e-07 |
| SpatialTransformer/encoderblock_4/MlpBlock_0/Dense_1/kernel | (4096, 1024) | 4,194,304 | -3e-06 | 0.0198 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/key/bias | (16, 64) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/key/kernel | (1024, 16, 64) | 1,048,576 | -5.21e-05 | 0.0312 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/out/bias | (1024,) | 1,024 | 0.0 | 0.0 |
| SpatialTransformer/encoderblock_4/MultiHeadDotProductAttention_0/out/kernel | (16, 64, 1024) | 1,048,576 | -1.36e-05 | 0.0313 |