**Closed** — @Always-Stude closed this issue 5 days ago.
Please format your code; it's hard to figure out what you're doing, and it's hard to tell from your post what you think is going wrong. Furthermore, there's no call to `minimize` here, so you're not actually demonstrating an issue.
I apologize for not being clear. What I mean is that I have two functions, `S1fun_pm1_pytorch` and `S1fun`, which produce the same results for the same data. However, `result = minimize(S1fun_pm1_pytorch, np.log([lain, gain, vin]), args=(tdata, tt), method='nelder-mead')` and `result = minimize(S1fun, np.log([lain, gain, vin]), args=(tdata, tt), method='nelder-mead')` yield different results.
@Always-Stude Please spend some effort on making clear issues. We cannot guess what you are trying to do based on your copy/pasted code, and we don't use PyTorch. At the very least, make sure that you are formatting your code properly in the GitHub issue. Otherwise we don't have much choice other than closing this issue.
Will there be a significant difference in the final results of `minimize` when using double precision versus single precision?
Double and single precision can have an effect; it depends on the problem. When you say there's a big difference in the result, what do you classify as a big difference?
When using single precision, the optimal parameters are [2.78, 2.54, 3.55], but when using double precision, the optimal parameters are [8.69, 2.78, 4.55].
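For what it's worth, this kind of sensitivity can be reproduced on a toy problem. The objective below is a made-up stand-in (it is not the original `S1fun`): a least-squares fit of `exp(b)` to fixed targets, evaluated at a caller-chosen precision.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for S1fun: fit exp(b) to targets, so the optimum
# is at b = log(targets). The dtype argument sets the precision at which
# every intermediate value is computed.
def objective(b, dtype):
    b = np.asarray(b, dtype=dtype)
    targets = np.asarray([1.0, 2.0, 3.0], dtype=dtype)
    return float(np.sum((np.exp(b) - targets) ** 2))

x0 = np.log([1.5, 1.5, 1.5])
res64 = minimize(objective, x0, args=(np.float64,), method='nelder-mead')
res32 = minimize(objective, x0, args=(np.float32,), method='nelder-mead')
# Nelder-Mead only compares function values, so float32 rounding (~1e-7
# relative) acts as noise once the simplex shrinks below that resolution.
```

On a well-conditioned problem like this one both runs still land near `log([1, 2, 3])`; on a flat or multimodal surface the float32 noise floor can steer the simplex to a different basin entirely.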
Because my calculations involve a large number of exponentials and logarithms.
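One way to quantify how much room that leaves: evaluate the same exp/log-heavy expression in both precisions and compare. The function here is a made-up stand-in, not the actual `S1fun`; only the size of the discrepancy matters.

```python
import numpy as np

# Made-up exp/log-heavy expression standing in for the real objective.
def f(b, dtype):
    b = np.asarray(b, dtype=dtype)
    return float(np.sum(np.log(np.exp(b) + 1.0)))

b = np.array([2.78, 2.54, 3.55])
rel_err = abs(f(b, np.float32) - f(b, np.float64)) / abs(f(b, np.float64))
# rel_err is bounded by a few float32 ulps (float32 eps is ~1.2e-7);
# an optimizer comparing function values cannot resolve anything smaller.
```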
Why do functions with the same functionality produce different results when computed on a CPU versus a GPU? I defined a function using NumPy arrays and rewrote it in PyTorch. The test data confirmed that the results were consistent, but there is a significant difference in the results when optimizing with `minimize`. This is my function:
```python
def S1fun_pm1(b, tdata, tt):
    # rho = np.exp(b[2])
    # d = 1
    # sigma1 = np.exp(b[0])
    # sigma0 = np.exp(b[1])
    ...
```
```python
def S1fun_pm1_pytorch(params, tdata, tt):
    dtype = torch.float32
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    b = torch.tensor(params, dtype=dtype, device=device)
    tt_tensor = torch.tensor(tt, dtype=dtype, device=device)
    ...
```
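If the goal is for the PyTorch objective to match the float64 NumPy version, one option is to build every tensor as `torch.float64` rather than `torch.float32`. The sketch below is a placeholder (the real body of `S1fun_pm1_pytorch` is not shown above, so `objective_f64` uses a dummy computation); only the dtype handling is the point.

```python
import numpy as np
import torch

# Sketch: same structure as S1fun_pm1_pytorch, but every tensor is float64
# so the result tracks the NumPy/float64 objective. The body is a dummy
# expression standing in for the real computation.
def objective_f64(params, tt):
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    b = torch.tensor(np.asarray(params), dtype=torch.float64, device=device)
    tt_t = torch.tensor(np.asarray(tt), dtype=torch.float64, device=device)
    return torch.sum(torch.log(torch.exp(b) + 1.0)) * tt_t.sum()

val = objective_f64([2.78, 2.54, 3.55], [1.0, 2.0]).item()
```

Note that most consumer GPUs execute float64 far slower than float32, so matching precision this way trades speed for reproducibility against the CPU/NumPy path.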