vicFigure / EAutoDet

Implementation of EAutoDet
GNU General Public License v3.0

[Question] How does 'alpha' get updated when using register_buffer()? #3

Open · jsrimr opened 2 years ago

jsrimr commented 2 years ago

As far as I know, variables added to a module via register_buffer() do not get updated by the optimizer.

So I wonder how the alphas registered as buffers can be updated 🤔

For example, in Conv_search,

self.register_buffer('alphas', torch.autograd.Variable(1e-3*torch.randn(len(kd)), requires_grad=True))
vicFigure commented 2 years ago

I am afraid this is a misunderstanding of register_buffer(). Once you register a variable with register_buffer(), it no longer shows up in model.named_parameters(), but you can still update it with an optimizer, as long as you can index that Variable and pass it to the optimizer explicitly.

In architect.py, we use model.arch_parameters() to index the alphas and pass them to the optimizer.
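
To make the mechanism concrete, here is a minimal sketch (not the repo's actual code; TinySearch and its loss are made up for illustration): a buffer is invisible to named_parameters(), yet it still gets updated once you hand it to an optimizer directly.

    import torch
    import torch.nn as nn

    class TinySearch(nn.Module):  # hypothetical module, for illustration only
        def __init__(self, n_ops=4):
            super().__init__()
            # a buffer: in the state_dict, moved by .to(device), but not a Parameter
            self.register_buffer('alphas', 1e-3 * torch.randn(n_ops))
            self.alphas.requires_grad_(True)

        def arch_parameters(self):
            # explicit handle to the architecture parameters, DARTS-style
            return [self.alphas]

    m = TinySearch()
    print(list(m.named_parameters()))  # [] -- alphas is hidden from parameter lists
    opt = torch.optim.Adam(m.arch_parameters(), lr=5e-4)

    # any loss depending on softmax(alphas) produces gradients for the buffer
    loss = (torch.softmax(m.alphas, dim=0) * torch.arange(4.0)).sum()
    loss.backward()
    opt.step()  # alphas changes even though it was registered as a buffer

A plausible reason for using a buffer here is that alphas then gets saved in the state_dict and moved by model.to(device), without being picked up by the weight optimizer.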

jsrimr commented 2 years ago

So the alphas are updated only on the validation data, by the code below!

            # architect
            if epoch >= opt.search_warmup:
#              input_valid = imgs
#              target_valid = targets
              input_valid, target_valid, _, _ = next(valid_gen)
              input_valid = input_valid.to(device, non_blocking=True).float() / 255.0  # uint8 to float32, 0-255 to 0.0-1.0
              # Multi-scale
              if opt.multi_scale:
                  sz = random.randrange(imgsz * 0.5, imgsz * 1.5 + gs) // gs * gs  # size
                  sf = sz / max(input_valid.shape[2:])  # scale factor
                  if sf != 1:
                      ns = [math.ceil(x * sf / gs) * gs for x in input_valid.shape[2:]]  # new shape (stretched to gs-multiple)
                      input_valid = F.interpolate(input_valid, size=ns, mode='bilinear', align_corners=False)
              architect.step(input_valid, target_valid)
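
For context, a first-order DARTS-style architect.step boils down to roughly the sketch below (an assumption about architect.py, not its verbatim code; compute_loss is a stand-in for the detection loss):

    import torch

    def compute_loss(pred, target):
        # stand-in for the real detection loss, for illustration only
        return ((pred - target) ** 2).mean()

    class ArchitectSketch:
        def __init__(self, model, arch_lr=5e-4):
            self.model = model
            # optimizes ONLY the alphas obtained via model.arch_parameters()
            self.optimizer = torch.optim.Adam(model.arch_parameters(), lr=arch_lr)

        def step(self, input_valid, target_valid):
            self.optimizer.zero_grad()
            loss = compute_loss(self.model(input_valid), target_valid)
            loss.backward()
            self.optimizer.step()  # weights stay put; only the alphas move

So the network weights are trained on the training batches, while the alphas are nudged only by these validation batches.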

Now I'm curious whether the arch_parameters get updated enough. As shown in the picture below, the alpha values didn't change much during the search epochs (=50). Did you also see only a slight change in the alphas in your experiments?

[image: alpha values logged over the 50 search epochs, showing almost no change]

Thanks for the quick answers, as always!

vicFigure commented 2 years ago

Since we normalize alpha by softmax, even if the absolute values differ only a little, softmax(alpha) can be quite different.

BTW, our code is based on DARTS (DARTS: Differentiable Architecture Search), where the alphas are initialized at a scale of 1e-3 and the learning rate for alpha is set to 5e-4, so the absolute values of alpha change only slightly. But softmax(alpha) can change quite drastically.
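
A quick numerical check of that point (a toy example, not from the repo): softmax(alpha) is driven by the gaps between the alphas, not by their absolute size, so the same values scaled up give a very different distribution.

    import torch
    import torch.nn.functional as F

    a = torch.tensor([0.000, 0.001, 0.002, 0.003])
    print(F.softmax(a, dim=0))         # tensor([0.2496, 0.2499, 0.2501, 0.2504]) -- near uniform
    print(F.softmax(1000 * a, dim=0))  # tensor([0.0321, 0.0871, 0.2369, 0.6439]) -- sharply peaked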

jsrimr commented 2 years ago

The softmax value for model.1.alphas at epoch 0 is

F.softmax(torch.tensor([-0.00035720536834560335, 0.0007502037915401161, -0.00034166971454396844, 0.0004349834634922445]))
tensor([0.2499, 0.2502, 0.2499, 0.2501])

and the softmax value for model.1.alphas at epoch 49 is

F.softmax(torch.tensor([-0.0003571510314941406, 0.0007500648498535156, -0.00034165382385253906, 0.00043487548828125]))
tensor([0.2499, 0.2502, 0.2499, 0.2501])

Softmax(alpha) also seems to show only a slight change. Am I missing something?

Maybe we could increase the lr for alpha?