vvthai10 opened 1 month ago
My loss is even smaller than the image loss.
Did you train on the Brain dataset? Which dataset did you use to train the model? And can you share the hyperparameters: batch size, epochs, learning rate?
And how do you show the table after training for 15 epochs? I can't find a way to display it.
You need to save that to results yourself. I used the Head-CT dataset. You can also visualize it.
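A minimal sketch of how such a loss curve could be visualized with matplotlib (this is my own example, not the repo's actual code; the function name, file layout, and output path are assumptions):

```python
import os

import matplotlib
matplotlib.use("Agg")  # render without a display (e.g. on a headless server)
import matplotlib.pyplot as plt


def plot_loss_curves(losses, image_losses, out_path="results/loss_curve.png"):
    """Plot per-epoch loss and image_loss and save the figure under results/.

    Sketch only: the lists are assumed to hold one value per epoch.
    """
    os.makedirs(os.path.dirname(out_path), exist_ok=True)
    epochs = range(1, len(losses) + 1)
    plt.figure()
    plt.plot(epochs, losses, label="loss")
    plt.plot(epochs, image_losses, label="image_loss")
    plt.xlabel("epoch")
    plt.ylabel("value")
    plt.legend()
    plt.savefig(out_path)
    plt.close()
    return out_path


# Example: three epochs of (loss, image_loss) values
plot_loss_curves([3.9, 3.6, 3.4], [0.6, 0.4, 0.3])
```

One could collect the two loss values each epoch during training and call this once at the end, or after every epoch to overwrite the figure.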
You trained on the Brain dataset, not MVTec?
Can you share the code you use to visualize the graph when training the model?
Yes, I'm doing medical image anomaly detection.
sure
I'm also studying the medical side, like you. Is your learning rate still 0.001?
yes
Can you share a link to the dataset you use?
I tried training with MVTec; after 15 epochs, loss and image_loss are 3.4 and 0.3, but the test results are still okay.
I suspect there's a bug in my training.
The original paper's results weren't that good, hahaha.
```python
parser.add_argument("--train_data_path", type=str, default="/root/Downloads/Untitled Folder/AnomalyCLIP-main", help="train dataset path")
parser.add_argument("--save_path", type=str, default='/root/Downloads/Untitled Folder/AnomalyCLIP-main/results', help='path to save results')
```

This is the saving code. Actually, the code the author provides is quite detailed; once you save to that location, you can see the results.
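As a hedged sketch of the saving step (the helper name and JSON layout are my own assumptions, not AnomalyCLIP's actual code), one could append each epoch's metrics under `--save_path` like this:

```python
import json
import os


def save_epoch_metrics(metrics, save_path):
    """Append one epoch's metrics to metrics.json under --save_path.

    Sketch only: `metrics` is assumed to be a flat dict like
    {"epoch": 15, "loss": 3.4, "image_loss": 0.3}.
    """
    os.makedirs(save_path, exist_ok=True)
    out = os.path.join(save_path, "metrics.json")
    history = []
    if os.path.exists(out):
        with open(out) as f:
            history = json.load(f)
    history.append(metrics)
    with open(out, "w") as f:
        json.dump(history, f, indent=2)
    return out


# e.g. called once per epoch inside the training loop
save_epoch_metrics({"epoch": 15, "loss": 3.4, "image_loss": 0.3}, "./results")
```

The accumulated JSON history can then be loaded later to print a table or plot curves.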
And now I want to add an adapter to the image encoder. Do you want to try it with me?
Regarding "the original paper's results weren't that good": I don't understand what you mean.
A bug appeared during training, which made the results better.
You mean your results are much higher than the paper's, right?
Right.
It looks like in the paper they trained on MVTec and evaluated on medical datasets, but if you train on the medical data, I think the results will be different.
I ran 100 epochs.
Why are the results after training for 100 epochs lower than after 15 epochs?
In the dataset file, I don't see the generate_class_info function returning an obj_list for the brain dataset.
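For reference, a hedged sketch of what extending `generate_class_info` for a brain dataset might look like (the real AnomalyCLIP helper covers more datasets and may be structured differently; the "brain" branch here is an assumed addition, and only the MVTec category names are the standard ones):

```python
def generate_class_info(dataset_name):
    """Return (obj_list, class_name_map_class_id) for a dataset name.

    Sketch only: the "brain" branch is a hypothetical extension for a
    single-object medical dataset.
    """
    if dataset_name == "mvtec":
        obj_list = ["carpet", "bottle", "hazelnut", "leather", "cable",
                    "capsule", "grid", "pill", "transistor", "metal_nut",
                    "screw", "toothbrush", "zipper", "tile", "wood"]
    elif dataset_name == "brain":
        obj_list = ["brain"]  # assumed: one object class for Head-CT-style data
    else:
        raise ValueError(f"unknown dataset: {dataset_name}")
    class_name_map_class_id = {obj: k for k, obj in enumerate(obj_list)}
    return obj_list, class_name_map_class_id
```

With a branch like this, the training script could look up the brain dataset's object list the same way it does for MVTec.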
For 100 epochs, since there's more training, I lowered the learning rate to 0.000001; for 15 epochs the learning rate was 0.001.
I've noticed that in this kind of work they only train for 15 or 50 epochs with an lr of 0.001 or 0.0001, because the training data is small and training longer risks overfitting.
yes
But your loss is really low, and I don't understand why, because when I train with the MVTec files it's 3.x, hahaha.
Would you and your team like to collaborate on this topic? I'm trying to update the image encoder part to merge it into this paper.
Maybe you're right. But if the loss doesn't decrease, it might be underfitting; you could try increasing the number of epochs or the learning rate.
Sure, I'd be honored.
You can contact me by email: @.***
My email address isn't shown here. My Gmail is vuvanthai1410.
ok
Please send me an email and I'll tell you in detail what I'm working on :)
Yeah, I see.
I trained with the VisA dataset for 11 epochs, but loss and image_loss are 3.7960 and 0.5325. I feel something is wrong. I used your settings. Can you share what loss and image_loss you get when you train?