irllabs / ml-lib

A machine learning library for Max and Pure Data

ml.mlp nan and inf output #73

Closed batchku closed 10 years ago

batchku commented 10 years ago

hello, i'm working with the latest ml.mlp (alpha 10) and trying to confirm that the previous issues are worked out. i'm not able to get good values out of ml.mlp. i'm training with 3 inputs (iphone accel) and 6 outputs (synthesis params). i use:

mode 1 num_outputs 6 training_rate 0.05 rand_training_iterations 100

the rest are default.

i send in training data like this: the first three numbers after "add" are the iphone accels, and the next 6 numbers are the synth params.

when i train, i get "train 1" out, so no failure.

below is a max dump window that shows everything.

the helpfile in the repo is updated to include all of this.


ml.mlp: Multilayer Perceptron based on the GRT library version 0.1 revision: 301
to-ml.mlp: mode 1
to-ml.mlp: num_outputs 6
to-ml.mlp: num_hidden 2
to-ml.mlp: training_rate 0.05
to-ml.mlp: rand_training_iterations 100
to-ml.mlp: min_epochs 100
to-ml.mlp: max_epochs 150
to-ml.mlp: add 0.082352 0.025208 -1.007431 60. 0.5 60. 1.8 0.
new input vector size, adjusting num_inputs to 3
to-ml.mlp: add 0.112686 0.03125 -1.017441 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.11348 0.024277 -1.004974 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.083817 0.03891 -1.007278 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.077454 0.043839 -1.0242 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.105026 0.038055 -1.020813 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.093765 0.029541 -1.02829 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.08609 0.045166 -0.999588 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.08522 0.042862 -1.017075 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.086273 0.04071 -1.027664 60. 0.5 60. 1.8 0.
to-ml.mlp: add 0.045807 -0.981293 -0.107758 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.049728 -1.00441 -0.074341 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.056046 -1.004349 -0.069397 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.057892 -0.99617 -0.086761 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.055847 -0.986969 -0.097473 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.051224 -0.985046 -0.111237 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.025604 -0.978058 -0.118286 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.058334 -0.992477 -0.087234 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.032028 -0.987396 -0.101913 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.037827 -0.98938 -0.084488 60. 0.5 81. 0. 40.
to-ml.mlp: add 0.826202 -0.381836 -0.447617 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.84137 -0.386139 -0.377335 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.840149 -0.379242 -0.408234 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.846375 -0.390503 -0.396576 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.83815 -0.37674 -0.410904 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.833054 -0.377396 -0.406006 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.838348 -0.375885 -0.397385 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.846619 -0.388992 -0.401947 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.833862 -0.384827 -0.398941 60. 0.5 26. 15. 7.
to-ml.mlp: add 0.833145 -0.383057 -0.404099 60. 0.5 26. 15. 7.
to-ml.mlp: add -0.797806 -0.355637 -0.494141 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.79483 -0.364777 -0.462524 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.780258 -0.371994 -0.490402 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.810181 -0.352997 -0.517731 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.842087 -0.342224 -0.490311 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.822052 -0.345306 -0.496735 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.844452 -0.334167 -0.480133 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.824112 -0.353073 -0.499283 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.798355 -0.367661 -0.490845 11. 0.5 33. 44. 55.
to-ml.mlp: add -0.798141 -0.356125 -0.464157 11. 0.5 33. 44. 55.
to-ml.mlp: train
info: train 1
to-ml.mlp: map -0.509003 -0.473755 -0.749725 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.5439 -0.467422 -0.744736 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.520035 -0.46666 -0.688644 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.460999 -0.497375 -0.650162 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.509476 -0.419449 -0.736786 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.485611 -0.555893 -0.433945 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.32515 -0.632629 -0.987106 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.065811 -0.527817 -1.417633 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.011887 -0.547241 -1.13974 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.016357 -0.441116 -1.100143 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.124344 -0.716003 -0.619598 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.309525 -0.795105 -0.384888 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.152802 -0.759995 -0.339279 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.142532 -0.750275 -0.302277 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.208557 -0.821472 -0.123322 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.223526 -0.770767 -0.132568 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.171234 -0.766754 -0.217377 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.091339 -0.737198 -0.204758 map: nan inf nan -inf nan -inf
to-ml.mlp: map 0.018433 -0.702362 -0.598328 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.014084 -0.588715 -0.754883 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.122711 -0.500763 -0.80011 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.402344 -0.439133 -1.078094 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.319717 -0.270157 -1.0466 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.391083 -0.242554 -1.088608 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.467758 -0.194473 -1.103302 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.498886 -0.187012 -1.005371 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.488907 -0.197067 -1.028824 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.39711 -0.208176 -0.927139 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.393661 -0.229782 -0.900085 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.40654 -0.278427 -0.879745 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.359726 -0.282059 -0.886902 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.361603 -0.276794 -0.865204 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.371506 -0.273376 -0.838364 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.384003 -0.278122 -0.854904 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.371277 -0.283539 -0.915939 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.375366 -0.270401 -0.924652 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.399384 -0.270782 -0.872269 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.403336 -0.280945 -0.879776 map: nan inf nan -inf nan -inf
to-ml.mlp: map -0.383255 -0.274796 -0.87381 map: nan inf nan -inf nan -inf

jamiebullock commented 10 years ago

Hi,

Would you mind saving out your data with "save" and attaching it here, so I can test by loading it in?

Thanks,

batchku commented 10 years ago

here it is: https://www.dropbox.com/s/t18wyjobcz3k5d8/mlp-test.txt

i'm not sure what the format of the saved data should be, but it seems weird that the text file has 9 columns of numbers and the MIDDLE three are the INPUTS, while the FIRST three and LAST three are the outputs. i didn't expect the vectors to be broken up.
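the layout i'm describing can be stated as a little python sketch (hypothetical helper, just formalizing my reading of the file; the actual save format may well differ):

```python
def split_row(row):
    """Split one 9-column row of the saved file, assuming the layout
    observed above: columns 0-2 and 6-8 are outputs (targets),
    columns 3-5 are inputs (accel). This split is an assumption."""
    if len(row) != 9:
        raise ValueError("expected 9 columns, got %d" % len(row))
    inputs = row[3:6]
    outputs = row[0:3] + row[6:9]
    return inputs, outputs

# example with column indices standing in for values:
ins, outs = split_row(list(range(9)))
print(ins, outs)  # -> [3, 4, 5] [0, 1, 2, 6, 7, 8]
```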

a

jamiebullock commented 10 years ago

Thanks @batchku, this is super-helpful and highlights where the bug is. Expect a fix today.

Jamie

batchku commented 10 years ago

how exciting! send it over!


jamiebullock commented 10 years ago

Hi Ali,

This isn't actually a bug! Just a lack of documentation...

The problem is that the input and output are in the wrong order.

It should be:

add target[0] target[1] target[2] ... target[n] source[0] source[1] source[2]

This follows the same logic as when the mlp is in classification mode, where we have:

add label in[0] in[1] in[2]

That is, the thing we are mapping to always goes first.

The number of dimensions in the target corresponds to the num_outputs attribute, which must be set to the correct value before sending any add messages. The object should post a meaningful error to the Max console if you've done something wrong.

I've updated the Max help file in the develop branch with some crude working examples, but I think you are best placed to reverse the order of source / target with [zl join] in your test patch.

Let me know how it goes.
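For this thread's 3-input / 6-output case, the reordering Jamie describes can be sketched in Python (this is just what [zl join] accomplishes in the patch; the function name is illustrative, not part of ml-lib):

```python
def build_add_message(source, target):
    """Build an ml.mlp 'add' message list for regression mode:
    target values go FIRST, then the source (input) values,
    mirroring 'add label in[0] in[1] in[2]' in classification mode."""
    return ["add"] + list(target) + list(source)

# first training pair from the dump above, reordered correctly:
accel = [0.082352, 0.025208, -1.007431]   # iphone accel (source)
synth = [60.0, 0.5, 60.0, 1.8, 0.0]       # synth params (target)

print(build_add_message(accel, synth))
# -> ['add', 60.0, 0.5, 60.0, 1.8, 0.0, 0.082352, 0.025208, -1.007431]
```

Note that num_outputs must match the target's length and be set before any add messages are sent, per Jamie's explanation above.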

batchku commented 10 years ago

that explains the out of order formatting of the text file too!


batchku commented 10 years ago

ok, i just tried it. partial success: the training works and i don't get infs and nans. yay.

however, now i'm back to trying to fine-tune params to get good training. i've added a "preset" object to the helpfile; preset 1 shows which params i'm using. i'm not getting very good results with the trained mapping.

any advice?

also a request: please set all the other exposed params in the helpfile, connect the preset object to them, resave preset one (shift-click on the preset one circle), and then resave the patch and upload it to git.

ali

ps i synced with master again, just helpfile changes, but i promise to change to the develop branch :)


jamiebullock commented 10 years ago

@batchku can you clarify what you mean by the shift-click preset thing? Do you mean that once we have a good set of settings for the accel -> paf~ mapping I should save those settings as a preset?

Also, I'm closing this issue now and have raised a separate one #76 for improving the mapping performance.