maciejkula / spotlight

Deep recommender models using PyTorch.
MIT License
2.99k stars 428 forks

How to implement other optimization function e.g. SGD #146

Open TedSIWEILIU opened 5 years ago

TedSIWEILIU commented 5 years ago

Hi, I read through issues #22 and #23 but still couldn't figure out how to change the default optimizer from Adam to torch.optim.SGD. I tried

emodel = ExplicitFactorizationModel(n_iter=15,
                                    embedding_dim=32, #Spotlight default is 32
                                    use_cuda=False,
                                    loss='regression',
                                    l2=0.00005,
                                    optimizer_func=optim.SGD(lr=0.001, momentum=0.9))

but it returns TypeError: __init__() missing 1 required positional argument: 'params'. I know it might be because I'm not passing self._net.parameters() to the optimizer. Could you suggest how to do it?

EthanRosenthal commented 5 years ago

Yeah, you have to create an explicit function that receives parameters() as the first argument and returns an instantiated PyTorch optimizer object. This test here shows an example of how to use Adagrad instead of Adam.

    def adagrad_optimizer(model_params,
                          lr=1e-2,
                          weight_decay=1e-6):

        return torch.optim.Adagrad(model_params,
                                   lr=lr,
                                   weight_decay=weight_decay)
maciejkula commented 5 years ago

Thanks, Ethan!