decisionforce / CoPO

[NeurIPS 2021] Official implementation of paper "Learning to Simulate Self-driven Particles System with Coordinated Policy Optimization".
Apache License 2.0

Visualize PGMap #27

Closed: shile1998 closed this issue 8 months ago

shile1998 commented 1 year ago

Hello, I reproduced your code. In addition to the five scenarios in the paper, there is also a PGMap scenario. In terms of training success rate, PGMap's is very high while the other scenarios' are very low, so I want to visualize PGMap. I converted my trained model into .npz as required, and the five scenarios from the paper visualize normally (apart from their low success rates, which may just mean they are not trained well). But when visualizing PGMap, the success rate is 0! The agents collide halfway every time, which does not match the training success rate of 0.8.

[Screenshot: 2022-10-16 14-13-57]

I added the PGMap scene to the vis.py file
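For context, adding a scenario to the visualization script typically means mapping a new name to its environment class. The sketch below is only illustrative: the dict name, class name, and helper are assumptions, not CoPO's actual identifiers.

```python
# Sketch: mapping a scenario name to an environment class, as one might
# do when adding PGMap to vis.py. All names here are hypothetical.

class MultiAgentPGMapEnv:
    """Stand-in for the MetaDrive PGMap-based multi-agent environment."""
    pass

ENVIRONMENTS = {
    # ...the five scenarios from the paper would also be registered here...
    "pgmap": MultiAgentPGMapEnv,
}

def get_env_class(name):
    """Look up the environment class for a scenario name."""
    if name not in ENVIRONMENTS:
        raise ValueError(f"Unknown scenario: {name!r}")
    return ENVIRONMENTS[name]
```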

It also reported that the variable meta_svo_lookup_table is required. Noticing that it holds a mean and a std, I found those values in progress.csv and added the two variables. I would like to ask which step is wrong, or what needs to be added to make the success rate normal.
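If the lookup table really is just a (mean, std) pair, a plausible use is standardizing or de-standardizing the SVO values. This is a guess at the intent, not CoPO's confirmed implementation:

```python
# Sketch: using a (mean, std) pair -- e.g. values read from progress.csv --
# to convert between a normalized SVO and its raw value.
# The formulas are standard z-score (de)normalization, assumed here.

def denormalize_svo(z, mean, std):
    """Recover a raw SVO from a standardized value: raw = z * std + mean."""
    return z * std + mean

def normalize_svo(raw, mean, std):
    """Standardize a raw SVO: z = (raw - mean) / std."""
    return (raw - mean) / std
```

If the visualization script applies the wrong direction (or reuses stats from a different run), the agents would act on badly scaled SVOs, which could explain collisions despite a good training success rate.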

[Screenshots: 2022-10-16 14-24-29, 2022-10-16 14-24-39, 2022-10-16 14-27-33]

pengzhenghao commented 1 year ago

Hi @shile1998 !!

Sorry for the late reply! I believe the performance discrepancy is due to changes in the MetaDrive environment. I therefore launched a new project to rerun all the experiments, and here is the outcome:

I finished benchmarking various MARL algorithms in the MetaDrive MARL environments. Please refer to this page:

https://github.com/metadriverse/metadrive-benchmark/tree/main/MARL

I also uploaded the latest trained models, so you can run them to visualize the behaviors: https://github.com/decisionforce/CoPO#visualization

Thanks!