Lee-Carl opened this issue 11 months ago
Our method is described in the paper; the endmembers are obtained through an optimization-based approach.
Lee-Carl wrote on Friday, December 1, 2023, at 22:30:
Prof. Hong, hello. I would like to know how the endmembers used for the SAD comparison in your paper were extracted?
In other words, are the endmembers generated by the optimized VCA treated as the predicted endmembers, that is, the endmembers shown by the red lines in this figure? (image: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net/assets/118745813/8d217c19-06be-446c-b997-b0542f593e10)
No. There is a formula in the paper; the endmembers are obtained by solving an optimization problem.
Is it this optimization problem? (image: https://github.com/danfenghong/IEEE_TNNLS_EGU-Net/assets/118745813/9c10e320-e1c6-470d-bab2-dbf0a1555f64)
Yes.
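Judging from the PyTorch code Lee-Carl shares further down in this thread, the optimization problem being confirmed here is presumably a non-negative least-squares fit of the endmembers given fixed abundances, something like

$$\min_{\mathbf{E} \ge \mathbf{0}} \; \lVert \mathbf{Y} - \mathbf{E}\mathbf{A} \rVert_F^2,$$

where $\mathbf{Y} \in \mathbb{R}^{L \times N}$ holds the mixed pixels, $\mathbf{A} \in \mathbb{R}^{P \times N}$ the estimated abundances, and $\mathbf{E} \in \mathbb{R}^{L \times P}$ the endmembers to be recovered; the exact formulation is the one given in the paper.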
Thank you for your reply! After trying several optimization methods, I obtained the endmembers by optimizing the objective function with PyTorch, and the metric computation also works. This is the first time I have come across this indirect way of extracting endmembers (previously they were always produced directly by a decoder or by the method itself). Thanks again for your patient replies!
@Lee-Carl
Would you mind sharing the code for extracting the endmembers?
The endmembers are extracted by VCA. The code also includes the endmember extraction.
OK. However, I saw that Prof. Hong replied to you, and I suggest you use his method. Currently, my code has only been tested on my own experiments, not yet on the EGU-Net. Here is my code:
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np


def extract_edm(y, a):
    """
    Args:
        y (numpy.ndarray): Mixed pixels (L, N).
        a (numpy.ndarray): Estimated abundances (P, N).
    Returns:
        E_solution (numpy.ndarray): Estimated endmembers (L, P).
    """
    # Check if GPU is available
    device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

    # Move data to GPU
    Y = torch.from_numpy(y.copy().astype(np.float32)).to(device)
    A = torch.from_numpy(a.copy().astype(np.float32)).to(device)

    # Initialize endmembers using Xavier initialization and move parameters to GPU
    E = nn.Parameter(torch.empty(Y.shape[0], A.shape[0]).to(device))
    nn.init.xavier_uniform_(E)

    # Define optimizer
    optimizer = torch.optim.Adam([E], lr=0.01)

    # Perform optimization
    for epoch in range(1000):
        optimizer.zero_grad()  # Clear gradients
        # Calculate the mean squared error loss as the objective function
        loss = F.mse_loss(Y, torch.matmul(E, A))
        loss.backward()   # Backpropagation
        optimizer.step()  # Update parameters
        E.data = torch.clamp(E.data, min=0)  # Force E to be non-negative

    # Get the final estimated endmembers
    E_solution = E.data.cpu().numpy()  # Move the result back to CPU
    return E_solution
```
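A minimal usage sketch of extract_edm on synthetic data, together with the spectral angle distance (SAD) used as the metric in this thread. Everything below (the shapes, the variable names, the sad helper) is illustrative only and assumes the extract_edm function above is in scope; it is not code from this repository.

```python
# Illustrative usage of extract_edm on synthetic data (my sketch, not repo code).
import numpy as np


def sad(e_est, e_ref):
    """Spectral angle (radians) between an estimated and a reference spectrum."""
    cos = np.dot(e_est, e_ref) / (np.linalg.norm(e_est) * np.linalg.norm(e_ref) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))


L, P, N = 100, 4, 5000                    # bands, endmembers, pixels
rng = np.random.default_rng(0)
E_true = rng.random((L, P))               # synthetic reference endmembers (L, P)
A = rng.dirichlet(np.ones(P), size=N).T   # abundances (P, N), sum-to-one per pixel
Y = E_true @ A                            # noiseless mixed pixels (L, N)

E_est = extract_edm(Y, A)                 # estimated endmembers (L, P)
per_endmember_sad = [sad(E_est[:, k], E_true[:, k]) for k in range(P)]
print("mean SAD (rad):", float(np.mean(per_endmember_sad)))
```

Because the abundances A fix which column of E corresponds to which material, the estimated and reference endmembers can be compared index by index without any permutation matching.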
Thank you for your reply. Is it the EM or the sub_EM?
I'm trying to compare your work with my thesis, but I have a few questions, if you don't mind.
If so, it doesn't work, since the Trlabel uses the actual number of endmembers from M.
Would appreciate your replies.
Thank you very much! Appreciate it. May I ask what you passed as the abundance a? Is it generated from Pseudo_endmembers_generation.m?
@atheeraa No, the abundance a is from the results of unmixing methods.
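For anyone reproducing this, here is a hedged sketch of how an abundance map produced by an unmixing method, assumed here to be an (H, W, P) array alongside an (H, W, L) image cube, could be reshaped into the (L, N) and (P, N) matrices that extract_edm expects. The names and shapes are assumptions, not code from this repository.

```python
# Hedged sketch (not EGU-Net code): reshape an image cube and an abundance map
# into the (L, N) and (P, N) matrices expected by extract_edm above.
import numpy as np


def to_matrices(cube, abundance_map):
    """cube: (H, W, L) image cube; abundance_map: (H, W, P) abundances from an unmixing method."""
    H, W, L = cube.shape
    P = abundance_map.shape[-1]
    Y = cube.reshape(H * W, L).T            # (L, N) mixed pixels, N = H * W
    A = abundance_map.reshape(H * W, P).T   # (P, N) abundances, pixel order matching Y
    return Y, A

# Example: Y, A = to_matrices(cube, abundance_map); E_est = extract_edm(Y, A)
```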
Prof. Hong, hello. I would like to know how the predicted endmembers used to calculate SAD in your paper were extracted?