igolan / bgd

Implementation of Bayesian Gradient Descent
MIT License

How to call the BGD class. #4

Open AdaUchendu opened 4 years ago

AdaUchendu commented 4 years ago
bgd_params = [{ "params": [1, 0.02, 10] }]

bgd_optimizer = BGD(params=bgd_params, std_init=0.02)

This returns the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-27-8e4fa4b1fc3c> in <module>()
      7 
      8 
----> 9 bgd_optimizer = BGD(params=bgd_params, std_init=0.02)

<ipython-input-14-d5e1f88e5f35> in __init__(self, params, std_init, mean_eta, mc_iters)
     27                          Use None to disable the check.
     28         """
---> 29         super(BGD, self).__init__(params, defaults={})
     30         assert mc_iters is None or (type(mc_iters) == int and mc_iters > 0), "mc_iters should be positive int or None."
     31         self.std_init = std_init

/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in __init__(self, params, defaults)
     49 
     50         for param_group in param_groups:
---> 51             self.add_param_group(param_group)
     52 
     53     def __getstate__(self):

/usr/local/lib/python3.6/dist-packages/torch/optim/optimizer.py in add_param_group(self, param_group)
    202             if not isinstance(param, torch.Tensor):
    203                 raise TypeError("optimizer can only optimize Tensors, "
--> 204                                 "but one of the params is " + torch.typename(param))
    205             if not param.is_leaf:
    206                 raise ValueError("can't optimize a non-leaf Tensor")

TypeError: optimizer can only optimize Tensors, but one of the params is int

I am not sure why, though. I need to instantiate the BGD class so I can use it in the example below. I have tried different variations of bgd_params and still get errors. How exactly should I call the constructor?
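The TypeError in the traceback is the key: torch.optim.Optimizer, which BGD subclasses via the super().__init__(params, defaults={}) call, requires every entry under a group's "params" key to be a torch.Tensor, i.e. the model's parameters. The values 1, 0.02, and 10 look like hyperparameters (mean_eta, std_init, mc_iters) and belong as keyword arguments instead. A minimal sketch of a valid call, assuming a standard nn.Module (the model here is only illustrative); the loop below can then use bgd_optimizer:

import torch.nn as nn

model = nn.Linear(10, 2)  # placeholder model; use your own nn.Module

# Pass the model's parameters (Tensors); hyperparameters are keyword arguments:
bgd_optimizer = BGD(model.parameters(), std_init=0.02)

# Equivalent param-group form; the "params" key must hold Tensors:
bgd_params = [{"params": list(model.parameters())}]
bgd_optimizer = BGD(params=bgd_params, std_init=0.02)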

for samples, labels in data:
    for mc_iter in range(mc_iters):
        bgd_optimizer.randomize_weights()          # sample weights from the current posterior
        output = model.forward(samples)
        loss = criterion(output, labels)
        bgd_optimizer.zero_grad()
        loss.backward()
        bgd_optimizer.aggregate_grads(batch_size)  # accumulate this MC iteration's gradients
    bgd_optimizer.step()                           # update the posterior mean and std
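For the loop above to run end to end, the surrounding setup would look roughly like this; the model, loss, data, and hyperparameter values are illustrative assumptions, and mc_iters should match the value given to the optimizer (the constructor asserts it is a positive int or None):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)           # placeholder model
criterion = nn.CrossEntropyLoss()  # placeholder loss
mc_iters = 10                      # assumed value; also passed to the optimizer
batch_size = 32
bgd_optimizer = BGD(model.parameters(), std_init=0.02, mc_iters=mc_iters)

# one dummy batch, just to make the loop executable
data = [(torch.randn(batch_size, 10), torch.randint(0, 2, (batch_size,)))]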

Thank you