YuejiangLIU / social-nce-trajectron-plus-plus

[ICCV'21] PyTorch implementation of the Social-NCE applied to Trajectron++.

Is there an implementation with social-LSTM? #3

Closed TimHo0331 closed 2 years ago

TimHo0331 commented 2 years ago

Dear authors, in the paper the social-nce loss is also applied to the Social-LSTM model. Will this code be released? The implementations for trajectron++ and stgcnn are relatively complex; it would be better to see how the social-nce loss works on a simple model.

Thanks!

YuejiangLIU commented 2 years ago

Our implementation of the social-nce loss applied to Social-LSTM is quite similar to those applied to trajectron++ and crowdnav. We'll be glad to release it once our model falls off the Trajnet++ leaderboard.

TimHo0331 commented 2 years ago

> Our implementation of the social-nce loss applied to Social-LSTM is quite similar to those applied to trajectron++ and crowdnav. We'll be glad to release it once our model falls off the Trajnet++ leaderboard.

Thank you for your reply! Is it possible to release an implementation of the social-nce loss applied to a simple model, for example a very simple LSTM or MLP network without interaction modelling? In trajectron++, the code is built on a CVAE-based model output, which makes it somewhat hard to understand the meaning of the different variables...

YuejiangLIU commented 2 years ago

The policy network for imitation learning in the CrowdNav task is fairly lightweight. The network has a simple encoder and a simple interaction module. Here's the code.
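
For readers who just want the gist, here is a minimal sketch of what such a lightweight policy could look like; the class name, layer sizes, and pooling choice are illustrative assumptions, not the actual CrowdNav code:

```python
import torch
import torch.nn as nn

class SimplePolicy(nn.Module):
    """Toy policy network: encode the self state, pool neighbor states, predict an action."""
    def __init__(self, self_dim=6, neighbor_dim=7, hidden_dim=64, action_dim=2):
        super().__init__()
        self.self_encoder = nn.Sequential(nn.Linear(self_dim, hidden_dim), nn.ReLU())
        self.neighbor_encoder = nn.Sequential(nn.Linear(neighbor_dim, hidden_dim), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, action_dim),
        )

    def forward(self, self_state, neighbor_states):
        # self_state: (batch, self_dim); neighbor_states: (batch, num_neighbors, neighbor_dim)
        h_self = self.self_encoder(self_state)
        h_inter = self.neighbor_encoder(neighbor_states).max(dim=1).values  # crude interaction pooling
        return self.head(torch.cat([h_self, h_inter], dim=-1))
```

The point is only that the encoder/interaction/head split is small enough to read in one sitting, which is what makes the CrowdNav variant a good entry point.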

TimHo0331 commented 2 years ago

May I ask a question about the implementation of social-nce applied to trajectron++? In https://github.com/YuejiangLIU/social-nce-trajectron-plus-plus/blob/fb36669c52af964a8afa58304b9b407e394a3105/trajectron/snce/contrastive.py#L76, the shape of the variable sim_pos is (batch_size, num_samples, 3). What is the meaning of the samples here? Does it mean sampling 25 latent variables from the latent space? And why average over the positive samples, but not the negative samples?

TimHo0331 commented 2 years ago

Another question: is the temporal embedding (time_pos & time_neg) necessary? It does not seem to be described in the paper.

YuejiangLIU commented 2 years ago

> May I ask a question about the implementation of social-nce applied to trajectron++? In https://github.com/YuejiangLIU/social-nce-trajectron-plus-plus/blob/fb36669c52af964a8afa58304b9b407e394a3105/trajectron/snce/contrastive.py#L76, the shape of the variable sim_pos is (batch_size, num_samples, 3). What is the meaning of the samples here? Does it mean sampling 25 latent variables from the latent space? And why average over the positive samples, but not the negative samples?

As far as I remember, Trajectron++ samples 25 latent codes for its predictions. These latent codes should all stay away from the negative samples. However, given the uncertainty of the future, they do not all have to hit the positive sample; we therefore take the average over the positives.
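
To make the asymmetry concrete, here is a simplified sketch of an InfoNCE-style loss in which the positive similarities are averaged over the latent samples while the negative similarities are kept separate; the tensor names and shapes are assumptions for illustration, not the exact variables in contrastive.py:

```python
import torch
import torch.nn.functional as F

def snce_loss_sketch(query, key_pos, key_neg, temperature=0.1):
    """
    query:   (batch, num_samples, dim) -- one embedding per sampled latent code
    key_pos: (batch, dim)              -- embedding of the positive (ground-truth) event
    key_neg: (batch, num_neg, dim)     -- embeddings of negative (unsafe) events
    """
    query = F.normalize(query, dim=-1)
    key_pos = F.normalize(key_pos, dim=-1)
    key_neg = F.normalize(key_neg, dim=-1)

    # Positive similarity: averaged over the latent samples -> (batch, 1).
    sim_pos = (query * key_pos.unsqueeze(1)).sum(dim=-1).mean(dim=1, keepdim=True)
    # Negative similarity: kept per sample and per negative -> (batch, num_samples * num_neg).
    sim_neg = torch.einsum('bsd,bnd->bsn', query, key_neg).flatten(1)

    # The positive sits at index 0 of the logits, so the target label is 0 for every row.
    logits = torch.cat([sim_pos, sim_neg], dim=1) / temperature
    labels = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, labels)
```

Averaging the positives lets any subset of the 25 predictions cover the ground truth, while keeping every (sample, negative) pair in the denominator still pushes all predictions away from unsafe regions.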

YuejiangLIU commented 2 years ago

> Another question: is the temporal embedding (time_pos & time_neg) necessary? It does not seem to be described in the paper.

The <state, time> pair forms the event described in our paper. The method was still effective when we dropped the time part, i.e., only drew samples at a fixed time step, but sampling future events from a temporal window led to better results in our earlier experiments.
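
As a rough illustration (not the actual repo code), an event embedding could combine a state encoder and a time embedding like this; the module names and dimensions are made up:

```python
import torch
import torch.nn as nn

class EventEncoder(nn.Module):
    """Illustrative encoder for an event <state, time>: embed both parts and combine them."""
    def __init__(self, state_dim=2, hidden_dim=8, out_dim=8, max_horizon=12):
        super().__init__()
        self.state_embed = nn.Sequential(
            nn.Linear(state_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )
        self.time_embed = nn.Embedding(max_horizon, out_dim)

    def forward(self, state, t):
        # state: (batch, num_events, state_dim); t: (batch, num_events) integer step offsets
        # Positive and negative events drawn from a temporal window each carry their own
        # time index, which is how time_pos / time_neg would enter the embedding.
        return self.state_embed(state) + self.time_embed(t)
```

Dropping the time part corresponds to sampling all events at one fixed step and omitting time_embed; the window-based variant simply tags each sampled event with its step within the horizon.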