arassadin opened this issue 7 years ago
Hi, the simplest way is to do data parallelism, i.e. just split the batch over several GPUs.
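For reference, here is a minimal sketch of what "split the batch over several GPUs" could look like with the TF 1.x graph API (the same style tffm uses). It is not tffm's code: `fm_loss` and `data_parallel_train_op` are hypothetical names, and the FM forward pass is the standard order-2 formulation. Each GPU gets one shard of the batch, computes gradients against shared parameters, and the averaged gradients are applied in a single update.

```python
import tensorflow as tf  # assumes the TF 1.x graph API, as used by tffm


def fm_loss(x, y, n_features, rank):
    # Order-2 FM forward pass (Rendle's O(k*n) formulation) plus an MSE loss.
    # Variables are created via tf.get_variable so all GPU towers share them.
    w0 = tf.get_variable('w0', shape=[1], initializer=tf.zeros_initializer())
    w = tf.get_variable('w', shape=[n_features, 1],
                        initializer=tf.zeros_initializer())
    v = tf.get_variable('v', shape=[n_features, rank],
                        initializer=tf.random_normal_initializer(stddev=0.01))
    linear = tf.matmul(x, w)
    pairwise = 0.5 * tf.reduce_sum(
        tf.square(tf.matmul(x, v)) - tf.matmul(tf.square(x), tf.square(v)),
        axis=1, keepdims=True)
    pred = w0 + linear + pairwise
    return tf.reduce_mean(tf.square(pred - y))


def data_parallel_train_op(x, y, n_features, rank, n_gpus, optimizer):
    # Split the batch sample-wise (assumes batch size divisible by n_gpus),
    # let each GPU compute gradients on its shard, then apply the averaged
    # gradients in a single update.
    x_shards = tf.split(x, n_gpus, axis=0)
    y_shards = tf.split(y, n_gpus, axis=0)
    tower_grads = []
    for i in range(n_gpus):
        with tf.device('/gpu:%d' % i), tf.variable_scope('fm', reuse=(i > 0)):
            loss = fm_loss(x_shards[i], y_shards[i], n_features, rank)
            tower_grads.append(optimizer.compute_gradients(loss))
    averaged = []
    for grads_and_vars in zip(*tower_grads):  # group per-variable across towers
        grads = tf.stack([g for g, _ in grads_and_vars])
        averaged.append((tf.reduce_mean(grads, axis=0), grads_and_vars[0][1]))
    return optimizer.apply_gradients(averaged)
```

On newer TF versions, `tf.distribute.MirroredStrategy` handles this gradient averaging automatically; the manual tower pattern above just makes the mechanics explicit.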
Hi,
Thanks for the response. If I'm right, in FMs there are no explicit batches as in NNs. With sample-wise splitting, at least two questions come to mind:
Best Regards, Alexandr
> in FMs there are no explicit batches as in NNs
You need to solve an optimization task. While it's common to use sample-wise updates in such settings (for example, in libFFM), mini-batch learning also works.
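To make that concrete, sample-wise and mini-batch learning are the same gradient step applied to the FM loss, differing only in the batch $B$:

$$
\theta \leftarrow \theta - \eta \, \frac{1}{|B|} \sum_{i \in B} \nabla_\theta \, \ell\big(\hat{y}(x_i; \theta),\, y_i\big)
$$

where $|B| = 1$ gives the sample-wise (libFFM-style) update and $|B| > 1$ gives a mini-batch update that maps naturally onto GPU batching.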
This implementation uses batches exactly as in NNs.
You can see it in the batch dimension of the placeholders (https://github.com/geffy/tffm/blob/master/tffm/core.py#L129) and in the batch_size parameter
of TFFMBaseModel (https://github.com/geffy/tffm/blob/master/tffm/base.py#L160).
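As a minimal usage sketch of that batch_size parameter (constructor arguments follow the repo's README; exact names and defaults should be checked against the installed version, and the expected label encoding against the README):

```python
import numpy as np
import tensorflow as tf
from tffm import TFFMClassifier

# Toy dense data; tffm also supports sparse input via input_type='sparse'.
X = np.random.rand(1000, 30).astype(np.float32)
y = np.random.randint(0, 2, size=1000)  # binary labels; verify the expected encoding

model = TFFMClassifier(
    order=2,                 # second-order FM
    rank=10,                 # latent factor dimensionality
    optimizer=tf.train.AdamOptimizer(learning_rate=0.01),
    n_epochs=50,
    batch_size=256,          # mini-batch size; -1 would use the full dataset
    init_std=0.001,
    input_type='dense',
)
model.fit(X, y, show_progress=True)
predictions = model.predict(X)
model.destroy()              # close the TF session and free resources
```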
Hi, any news on this?
Hi,
Unfortunately, priorities changed rapidly and I've hardly had a chance to work on this issue.
Hi,
I was just wondering how FMs can be parallelized effectively across multiple GPUs. I'm somewhat familiar with TF but not really with FMs. If you could provide me with ideas or any pointers, I would make the necessary modifications and subsequently open a PR, since I'm currently interested in a multi-GPU FM implementation and your code seems to be a good base for it.
Best Regards, Alexandr