clemsgrs / hipt

Re-implementation of HIPT

for classification training #11

Closed AlexNmSED closed 9 months ago

AlexNmSED commented 1 year ago

[screenshot of the error]

Hello, when I was doing feature extraction, I encountered the problem shown in the screenshot above. Do you have a solution? Have you faced the same problem?

AlexNmSED commented 1 year ago

I'm using PyTorch 1.8.

clemsgrs commented 1 year ago

hey, could you tell me which python version you're using?

AlexNmSED commented 1 year ago

Yes, Python 3.8.16. Should I be using a higher base Python version?

clemsgrs commented 1 year ago

try upgrading to python 3.9

AlexNmSED commented 1 year ago

In order to avoid other conflicts, could you share the conda environment configuration file? That would reduce the hassle. Thank you for your help.


clemsgrs commented 1 year ago

I don't use conda; I use Docker instead. But you should be good to go in a conda environment with Python 3.9.10 and torch 1.10.2. Then just `pip install -r requirements.txt`.
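For reference, a minimal sketch of that setup (the environment name `hipt` is arbitrary, and if conda can't resolve Python 3.9.10 exactly, any 3.9.x should be equivalent here):

```bash
# Create and activate a Python 3.9 environment
conda create -n hipt python=3.9.10
conda activate hipt

# Install the suggested torch version, then the repo's requirements
pip install torch==1.10.2
pip install -r requirements.txt
```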

clemsgrs commented 1 year ago

If you still run into distributed-related issues, you can try upgrading to torch 1.12.
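The upgrade itself is a one-liner (1.12.1 is the latest 1.12.x patch release):

```bash
pip install --upgrade torch==1.12.1
```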

AlexNmSED commented 1 year ago

Thank you.

I will do it.


AlexNmSED commented 1 year ago

Thank you for your help. The feature extraction is going fine.

AlexNmSED commented 1 year ago

Hello, when I try `agg_method: 'self_att'`, I encounter the following problems:

- with `slide_pos_embed: use: False, type: '1d'` → `AssertionError: was expecting embedding dimension of 192, but got 1`
- with `slide_pos_embed: use: True, type: '1d'` → `AssertionError: query should be unbatched 2D or batched 3D tensor but received 4-D query tensor`
- with `slide_pos_embed: use: True, type: '2d'` → `TypeError: forward() missing 1 required positional argument: 'coords'`

How should I use self-attention? Can you give me some advice? Thank you for your help.

clemsgrs commented 9 months ago

hi, `agg_method: 'self_att'` is not fully supported yet. My idea was to add a 4th level of self-attention on top of the slide-level Transformer to handle cases where the label is assigned to a group of slides (rather than a single slide), but I haven't had time to thoroughly test it.
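For illustration only, here is a rough sketch of that idea, i.e. self-attention over slide-level embeddings so that a group of slides maps to a single label. This is not the repository's code: the class and its names are made up, and the embedding dimension of 192 is taken from the error message above.

```python
import torch
import torch.nn as nn

class CaseLevelAggregator(nn.Module):
    """Hypothetical 4th-level aggregator: attends over slide embeddings."""

    def __init__(self, embed_dim: int = 192, num_heads: int = 3):
        super().__init__()
        # batch_first=True expects a batched 3D (batch, seq, dim) input,
        # which matches the "4-D query tensor" error above (one dim too many)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, slide_embeddings: torch.Tensor) -> torch.Tensor:
        # slide_embeddings: (num_cases, num_slides, embed_dim)
        x, _ = self.attn(slide_embeddings, slide_embeddings, slide_embeddings)
        x = self.norm(x + slide_embeddings)
        return x.mean(dim=1)  # one embedding per case (group of slides)

# Example: 1 case made of 4 slides, each a 192-dim embedding
case_embedding = CaseLevelAggregator()(torch.randn(1, 4, 192))  # -> (1, 192)
```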

I'd suggest using `agg_method: 'concat'` instead, which simply concatenates the regions extracted from multiple slides into a single, longer sequence.
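As a sketch, the relevant part of the config would then look something like this (key names are taken from the error messages earlier in this thread; check the repo's default config files for the exact structure):

```yaml
agg_method: 'concat'   # concatenate regions from all slides into one sequence
slide_pos_embed:
  use: False
  type: '1d'
```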