Zhengyushan / kat

The code for Kernel attention transformer (KAT)
MIT License

Questions about the model FLOP, GPU memory cost, and speed #1

Closed pzSuen closed 1 year ago

pzSuen commented 1 year ago

Hello, I am very impressed with your work.

But I have a few small questions about Table 1 and Table 2, specifically about the model FLOPs, GPU memory cost, and speed.

As far as I know, the size of each slide is different, so the computational cost differs from image to image.

So my question is: what slide size are you computing on, and how many patches does it contain?

Looking forward to your reply.

Yours, pzSuen.

Zhengyushan commented 1 year ago

Hi,

Many thanks for your attention to our work.

These results were calculated with a consistent patch count per dataset, because we need to pad the data to a fixed tensor size for batch training. We set the maximum patch number to 2048 for the gastric dataset, since we found that the patch count per slide in this dataset rarely exceeds 2048. Similarly, we set the maximum patch number to 4096 for our endometrial dataset. So, the GPU memory, speed, and FLOPs reported for the gastric dataset are the results for 2048 patches, and those for the endometrial dataset are for 4096 patches.
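The padding step described above can be sketched as follows. This is a minimal illustration, not code from the KAT repository: the function name, the feature dimension (384), and the use of NumPy are assumptions for the example; the fixed sizes 2048 and 4096 are the ones stated above.

```python
import numpy as np

def pad_patch_features(features: np.ndarray, max_patches: int):
    """Pad one slide's patch-feature matrix to a fixed patch count.

    features:    (n_patches, dim) array of per-patch embeddings
    max_patches: fixed size to pad to (e.g. 2048 for the gastric
                 dataset, 4096 for the endometrial dataset)

    Returns the zero-padded (max_patches, dim) array and a boolean
    mask marking which rows are real patches (so padded rows can be
    ignored by the attention computation).
    """
    n, dim = features.shape
    assert n <= max_patches, "slide exceeds the assigned max patch number"
    padded = np.zeros((max_patches, dim), dtype=features.dtype)
    padded[:n] = features
    mask = np.zeros(max_patches, dtype=bool)
    mask[:n] = True
    return padded, mask

# Hypothetical example: a slide with 1500 patches of 384-dim features,
# padded to the gastric-dataset setting of 2048 patches.
feats = np.random.rand(1500, 384).astype(np.float32)
padded, mask = pad_patch_features(feats, 2048)
print(padded.shape, int(mask.sum()))  # (2048, 384) 1500
```

Because every slide in a dataset is padded to the same tensor size, the reported FLOPs, memory, and speed are constant per dataset rather than varying with the true slide size.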

Please feel free to contact me if you have other questions.

Sincerely,

Yushan Zheng
