cha15yq / CUT

Segmentation assisted U-shaped multi-scale transformer for crowd counting

Code issue #6

Open sulaimanvesal opened 11 months ago

sulaimanvesal commented 11 months ago

Hi,

I am getting the error below and I don't know why. The data preprocessing and other steps completed without problems. Is it something related to different timm versions?

  File "C:/scripts/train.py", line 74, in <module>
    trainer.setup()
  File "C:\HanWha\scripts\utils\regression_trainer.py", line 56, in setup
    self.model = Count(args)
  File "C:\scripts\model\model.py", line 40, in __init__
    self.LA_end1 = SAAM(256, 4, 4, 1)
  File "C:\scripts\model\pvt.py", line 311, in __init__
    self._block = GroupBlock(dim, num_heads, mlp_ratio=4., qkv_bias=False, qk_scale=None, drop=0., attn_drop=0.,
  File "C:\scripts\model\pvt.py", line 73, in __init__
    super(GroupBlock, self).__init__(dim, num_heads, mlp_ratio, qkv_bias, drop, attn_drop,
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\timm\models\vision_transformer.py", line 141, in __init__
    self.ls1 = LayerScale(dim, init_values=init_values) if init_values else nn.Identity()
  File "C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\timm\models\vision_transformer.py", line 107, in __init__
    self.gamma = nn.Parameter(init_values * torch.ones(dim))
TypeError: unsupported operand type(s) for *: 'type' and 'Tensor'
cha15yq commented 11 months ago

Yes, I think so. The timm package has been updated several times since I finished this project; I believe I listed the timm version I used in the README.
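
For anyone hitting the same TypeError: newer timm releases added an init_values parameter to Block.__init__, so GroupBlock's positional arguments shift by one slot and a layer class ends up multiplied by a tensor inside LayerScale. Pinning timm to the version in the README is the intended fix; alternatively, a keyword-argument call like the hedged sketch below (untested against this repo; parameter names match timm 0.6.x) avoids the shift:

```python
# Hypothetical patch for GroupBlock.__init__ in model/pvt.py: pass the
# trailing arguments by keyword so they land in the correct slots of
# timm's Block no matter where init_values sits in the signature.
super(GroupBlock, self).__init__(
    dim, num_heads,
    mlp_ratio=mlp_ratio,
    qkv_bias=qkv_bias,
    drop=drop,            # renamed to proj_drop in timm >= 0.9
    attn_drop=attn_drop,
    drop_path=drop_path,
    act_layer=act_layer,
    norm_layer=norm_layer,
)
```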

sulaimanvesal commented 11 months ago

Thanks for the prompt response. I just downgraded timm and it worked well.

However, when validation starts after 200 epochs, the metrics come back as nan/inf. Any experience with this error:

11-08 14:11:59, ----------------------------------------Epoch:200/999----------------------------------------
11-08 14:12:11, Epoch 200 Train, Loss: 0.24, MSE: 20.62, MAE: 7.66, 
level2: ssim: 0.1003 seg: 0.0011 tv:0.3910;
level3: ssim: 0.1111 seg: 0.0011 tv:0.4105;
level4: ssim: 0.1425 seg: 0.0014 tv:0.4735;  Cost: 12.7 sec 
C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\numpy\core\fromnumeric.py:3432: RuntimeWarning: Mean of empty slice.
  return _methods._mean(a, axis=axis, dtype=dtype,
C:\Users\sulai\anaconda3\envs\pytorch_env\lib\site-packages\numpy\core\_methods.py:190: RuntimeWarning: invalid value encountered in double_scalars
  ret = ret.dtype.type(ret / rcount)
11-08 14:12:13, Epoch 200 Val, MAE: nan, MSE: nan,  Cost 0.0 sec
11-08 14:12:13, ----------------------------------------Epoch:201/999----------------------------------------
Best Result: MAE: inf MSE:inf
11-08 14:12:25, Epoch 201 Train, Loss: 0.22, MSE: 12.60, MAE: 6.40, 
level2: ssim: 0.0941 seg: 0.0011 tv:0.3994;
level3: ssim: 0.1040 seg: 0.0011 tv:0.4190;
level4: ssim: 0.1330 seg: 0.0013 tv:0.4797;  Cost: 12.8 sec 
cha15yq commented 11 months ago

It looks like the test_data_loader contains no samples, which is why the validation MAE/MSE come out as nan and the best result stays at inf.
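
To make the failure concrete: both RuntimeWarnings in the log come from averaging an empty result array, which yields nan, so the best MAE/MSE never improve from their inf initialization. A minimal guard you could drop into the validation epoch (variable names hypothetical, adapt to regression_trainer.py):

```python
import numpy as np

epoch_res = []  # per-image count errors collected over the val loader

# np.mean([]) triggers "Mean of empty slice" and returns nan -- exactly
# the warnings above -- so fail fast when the loader yields nothing:
if not epoch_res:
    raise RuntimeError("Validation loader produced no samples; "
                       "check the val data directory and file list.")

epoch_res = np.array(epoch_res)
mse = np.sqrt(np.mean(np.square(epoch_res)))
mae = np.mean(np.abs(epoch_res))
```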

sulaimanvesal commented 11 months ago

Thanks, that fixed the issue.

I could train CUT on both SHHA and SHHB. However, I couldn't reproduce the results in the paper.

For SHHA, I trained twice, and the lowest MAE and MSE I got were 58.1 and 95.2. For SHHB, the MAE and MSE were 8.0 and 12.0.

Any suggestions?

cha15yq commented 11 months ago

You may need to start the evaluation earlier. You could also try training the model on Google Colab, since I ran the SHHA experiment there. At the time I uploaded the code I was able to reproduce the results on Colab, but a year has passed, so I am not sure it still gives a similar result.

[two screenshots attached]
sulaimanvesal commented 10 months ago

Thanks, I was able to reproduce results close to yours, though not exactly. That should be fine.

One last question: during inference with `result, _, _, _, _, _ = model(input_data)`, why is the shape of `result` 1x1x128x128 when we test and want to plot the density map, rather than the actual input shape?

cha15yq commented 10 months ago

When I designed this model, I followed the downsampling ratio common in most crowd counting research, which is 8; the predicted density map is therefore 1/8 of the input resolution in each dimension.
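
To plot the prediction at the original resolution, one option is to upsample the map and rescale it so the total count is preserved. A minimal sketch (not part of the repo; the helper name is made up):

```python
import torch
import torch.nn.functional as F

def to_input_resolution(density_map: torch.Tensor, ratio: int = 8) -> torch.Tensor:
    """Upsample a 1/ratio-resolution density map back to input size."""
    up = F.interpolate(density_map, scale_factor=ratio,
                       mode="bilinear", align_corners=False)
    # Bilinear upsampling roughly preserves the mean, and each source
    # pixel becomes ratio**2 pixels, so divide to keep the sum (count):
    return up / (ratio ** 2)

# usage, following the call from the question above:
# result, _, _, _, _, _ = model(input_data)   # 1x1x128x128
# full = to_input_resolution(result)          # back at input resolution
# plt.imshow(full[0, 0].cpu().numpy())
```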

sulaimanvesal commented 10 months ago

Hi @cha15yq

You can close this issue, but could you please share your preprocessing script for the NWPU dataset? I found many variations for the point-map version but not for density maps. I know you mentioned it's very similar to the QNRF code, but the results I get using that code don't match your paper's results, so I suspect something is wrong with my data preprocessing for NWPU.
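
(For context, the step these preprocessing scripts differ on is converting point annotations into a density map, typically by Gaussian-smoothing a delta map. A generic fixed-sigma sketch, not the repo's actual NWPU script:)

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density(points, height, width, sigma=4.0):
    """Generic point-map -> density-map conversion (fixed sigma)."""
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= xi < width and 0 <= yi < height:
            density[yi, xi] += 1.0
    # Smoothing spreads each unit of mass without changing the total,
    # so density.sum() still equals the number of annotated heads.
    return gaussian_filter(density, sigma=sigma)
```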

yangtle commented 8 months ago

Hello, have you run experiments on the JHU-Crowd and NWPU datasets? Could you please provide the preprocessing code for these two datasets? I would greatly appreciate it!