boathit / deepst


Doubt in Learning Destination Proxies #13

Open · jayantjain100 opened this issue 3 years ago

jayantjain100 commented 3 years ago

Hi,

I was trying to understand your model better. In an experiment, I removed the traffic component and used a dummy linear graph, i.e. nodes 1, 2, 3, ..., n+1 with a single edge between each pair of consecutive nodes (n edges in total). The trajectories I trained on were all possible trajectories on this linear graph, i.e. the paths between all ordered pairs of nodes (n*(n+1) paths in total). I set K > n.
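For concreteness, the trajectories can be enumerated roughly like this (a sketch, not my exact code):

# All paths between ordered node pairs on a linear graph with nodes 1..n+1
# and edges (i, i+1); there are n*(n+1) such trajectories.
def linear_graph_trajectories(n):
    nodes = range(1, n + 2)
    trajectories = []
    for s in nodes:
        for t in nodes:
            if s == t:
                continue
            step = 1 if t > s else -1
            trajectories.append(list(range(s, t + step, step)))
    return trajectories

trajectories = linear_graph_trajectories(100)
assert len(trajectories) == 100 * 101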

But for n = 100 I found that it learned only 3 unique proxies, i.e. many nodes were assigned the same proxy, which caused confusion at inference time and low accuracy. I expected each node to be assigned a different proxy.

It may be a problem with τ (the Gumbel-softmax temperature) or some other hyperparameter. Could you tell me what value of τ worked for you, or did you have to implement annealing to make it work?

Thanks, Jayant

boathit commented 3 years ago

Maybe you can try the deterministic method first, that is, generating a soft assignment using a neural net that directly maps the coordinates to an assignment. Actually, I find it works very well on the Chengdu dataset.

Besides, remember to normalize your coordinates (making them start from point (0, 0)) before feeding them into the neural net.
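
A minimal sketch of that normalization (assuming the coordinates are stored as a torch tensor of shape (N, 2)):

import torch

def normalize_coords(xy):
    """Translate coordinates so they start from (0, 0); optionally rescale into [0, 1]."""
    xy = xy - xy.min(dim=0).values   # shift the minimum corner to the origin
    xy = xy / xy.max()               # optional: rescale to [0, 1] to match the μ prior a=0, b=1
    return xy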

import torch
import torch.nn as nn
from torch.distributions import Uniform, Normal

## MLP(input_size, hidden_size, output_size, dropout) is a small feed-forward
## network assumed to be defined elsewhere in the repo.

class GMMC(nn.Module):
    def __init__(self, D, K, hidden_size, a=0.0, b=1.0, dropout=0.1, device=torch.device('cpu')):
        """
        a, b are the prior knowledge of μ range such that μ ∈ [a, b].
        """
        super(GMMC, self).__init__()
        self.D = D
        self.K = K
        ## todo: maybe use the prior knowledge to initialize M
        ## such that μs are uniformly distributed in the data space
        #self.M = nn.Parameter(torch.randn(K, D)) # μs
        a, b = torch.tensor(a).to(device), torch.tensor(b).to(device)
        self.M = nn.Parameter(Uniform(a, b).sample((K, D))) #μs
        self.logS = nn.Parameter(torch.randn(K, D)) # logσs
        ## π is the parameter of the Categorical distribution
        self.x2logπ = nn.Sequential(MLP(D, hidden_size, K, dropout),
                                    nn.LogSoftmax(dim=1))
        ## only needed for Gumbel-softmax sampling; unused by the deterministic encoder below
        self.d_uniform = Uniform(torch.tensor(0.0).to(device),
                                 torch.tensor(1.0).to(device))

    def forward(self, x, τ=1.0):
        """
        Input:
          x (batch, D)
        Output:
          z (batch, K)
          l (scalar): negative log-likelihood
        """
        z = self.encoder(x, τ)
        μ, σ = self.decoder(z)
        l = NLLGauss(μ, σ, x)
        return z, l

    def encoder(self, x, τ):
        """
        Input:
          x (batch, D)
        Output:
          z (batch, K): soft assignment probabilities (deterministic softmax over components).
        """
        logπ = self.x2logπ(x)
        ## deterministic soft assignment: softmax over the K components (τ is not used here)
        z = torch.exp(logπ)
        return z

    def decoder(self, z):
        """
        Input:
          z (batch, K)
        Output:
          μ (batch, D)
          σ (batch, D)
        """
        ## reconstruct x as the z-weighted combination of the component means and stds
        μ = torch.mm(z, self.M)
        σ = torch.exp(torch.mm(z, self.logS))
        return μ, σ

def NLLGauss(μ, σ, x):
    """
    μ (batch, D)
    σ (batch, D)
    x (batch, D)
    """
    return -torch.mean(Normal(μ, σ).log_prob(x))
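
A minimal usage sketch (assuming normalized 2-D coordinates and a hypothetical stand-in for the repo's MLP module):

# Stand-in MLP (hypothetical; use the repo's own MLP in practice).
class MLP(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, dropout):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_size, hidden_size),
                                 nn.ReLU(),
                                 nn.Dropout(dropout),
                                 nn.Linear(hidden_size, output_size))

    def forward(self, x):
        return self.net(x)

# Toy training loop on coordinates x normalized into [0, 1].
x = torch.rand(1024, 2)                       # placeholder data
model = GMMC(D=2, K=128, hidden_size=64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(100):
    optimizer.zero_grad()
    z, loss = model(x)                        # z: soft assignments, loss: Gaussian NLL
    loss.backward()
    optimizer.step()

# Hard proxy assignment for each coordinate after training.
proxy = z.detach().argmax(dim=1)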