Hi!
I wonder whether it is possible to change the dataset settings and add an optimizer over parameters other than the model parameters when using Runner. For example:
# My custom dataset
class CustomDataset(BaseDataset):
    def load_data_list(self):
        ...
        return [dict(inputs=torch.rand(1, 3, 224, 224).numpy(),
                     gt_label=torch.rand(1).numpy())
                for i in range(bs)]
# Configs for Runner -> cfg
train_pipeline_cfg = [dict(type=...)]
ds = dict(type='CustomDataset', pipeline=train_pipeline_cfg)
train_dataloader_cfg = dict(
    batch_size=32,
    num_workers=8,
    persistent_workers=True,
    sampler=dict(type='DefaultSampler', shuffle=True),
    dataset=ds)
...
runner = Runner.from_cfg(cfg)
# After the runner is created, I want to initialize an optimizer for the
# custom dataset and set requires_grad=True on the data in the custom dataset.
(????) = (????).requires_grad_(True)  # What should I put in the (????)?
optimizer_data = torch.optim.SGD([????])  # What should I pass to the optimizer to get the gradients of the data in CustomDataset?
runner.train()
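For reference, here is a minimal plain-PyTorch sketch (outside the Runner, independent of MMEngine) of the behavior I am after: the tensors to be optimized must be leaf tensors with requires_grad=True and must be passed to the optimizer directly. All names here are illustrative, not part of any library API.

```python
import torch

# Hypothetical standalone example: optimize the data itself instead of
# model weights. `data` stands in for the dataset inputs.
torch.manual_seed(0)
data = torch.rand(4, 3, 8, 8, requires_grad=True)  # learnable "inputs"
optimizer_data = torch.optim.SGD([data], lr=0.1)

target = torch.zeros_like(data)
for _ in range(5):
    optimizer_data.zero_grad()
    loss = torch.nn.functional.mse_loss(data, target)
    loss.backward()   # gradients flow into `data`, not into a model
    optimizer_data.step()
```

After a few steps the loss decreases, confirming that gradients reach `data`. My question is where the equivalent of `data` lives when the dataset is built by `Runner.from_cfg(cfg)`.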
Branch
main branch (1.x version, such as v1.0.0 or dev-1.x branch)
Prerequisite
Environment
sys.platform: linux
Python: 3.9.18 | packaged by conda-forge | (main, Aug 30 2023, 03:49:32) [GCC 12.3.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /usr/local/cuda-11.7
NVCC: Cuda compilation tools, release 11.7, V11.7.64
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 2.0.1+cu117
PyTorch compiling details: PyTorch built with:
TorchVision: 0.15.2+cu117
OpenCV: 4.8.1
MMEngine: 0.9.0
MMAction2: 1.2.0+4d6c934
MMCV: 2.1.0
Describe the bug
Hi! I wonder whether it is possible to change the dataset settings and add an optimizer over parameters other than the model parameters when using Runner, as in the example above.
Can anyone enlighten me on this? Thanks.
Reproduces the problem - code sample
No response
Reproduces the problem - command or script
No response
Reproduces the problem - error message
No response
Additional information
No response