peterwittek / somoclu

Massively parallel self-organizing maps: accelerate training on multicore CPUs, GPUs, and clusters
https://peterwittek.github.io/somoclu/
MIT License

Batch mode and learning rate #168

Open jtpesone opened 1 year ago

jtpesone commented 1 year ago

If somoclu always uses the batch training mode, how is the learning rate used? If the update is done according to the batch training equation given in Wittek et al., 2017 (Somoclu: An Efficient Parallel Library for Self-Organizing Maps), the learning rate does not appear in it at all.
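For reference, the batch update in that formulation is roughly the following (a sketch of the standard batch SOM rule in my own notation; note that no learning rate appears):

```latex
% w_k(t+1): weight vector of node k after the batch step
% x_j: j-th training sample, b_j: its best-matching unit
% h_{b_j,k}(t): neighbourhood function between b_j and node k
w_k(t+1) = \frac{\sum_{j=1}^{N} h_{b_j,k}(t)\, x_j}{\sum_{j=1}^{N} h_{b_j,k}(t)}
```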

xgdgsc commented 1 year ago

You can search for "scale" in the code to see how it is applied.

jtpesone commented 1 year ago

If you cannot provide this as an equation, I will try to dig into the code. It would be good to document it, though: it is important for users to know which equations the software uses.

xgdgsc commented 1 year ago

https://stats.stackexchange.com/a/402360 ?
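
One possible reading of that answer and of the "scale" factor is that the learning rate blends the previous codebook with the batch estimate. Below is a minimal sketch under that assumption; the function and variable names are hypothetical and this is not somoclu's actual implementation:

```python
import numpy as np

def batch_update_with_scale(codebook, data, bmu_indices, neighborhood, scale):
    """Blend the previous codebook with the batch estimate using `scale`.

    codebook     : (n_nodes, dim) current weight vectors
    data         : (n_samples, dim) input vectors
    bmu_indices  : (n_samples,) best-matching unit index for each sample
    neighborhood : (n_nodes, n_nodes) neighbourhood values h[b, k]
    scale        : learning-rate-like factor in (0, 1]
    """
    # h[b_j, k] weights each sample's contribution to node k
    h = neighborhood[bmu_indices]                  # (n_samples, n_nodes)
    numer = h.T @ data                             # (n_nodes, dim)
    denom = h.sum(axis=0)[:, None]                 # (n_nodes, 1)
    # Keep the old weight vector wherever a node received no contribution
    batch_estimate = np.divide(numer, denom, out=codebook.copy(),
                               where=denom > 0)
    # Hypothetical role of the learning rate: interpolate toward the batch estimate
    return (1.0 - scale) * codebook + scale * batch_estimate
```

With `scale = 1.0` this reduces to the plain batch rule quoted above, which would be consistent with the paper's equation showing no learning rate.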