google-research / robotics_transformer


Need Universal Sentence Encoder model for natural language embedding #19

Open destroy314 opened 1 year ago

destroy314 commented 1 year ago

The policy takes a 512-d natural_language_embedding as input. Can I just load the Universal Sentence Encoder from TF Hub (https://tfhub.dev/google/universal-sentence-encoder/4) and embed my sentence myself, or could you share the encoder checkpoint you used?
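
For concreteness, this is what I had in mind (just a sketch; the instruction string is a placeholder):

```python
import tensorflow_hub as hub

# Load the Universal Sentence Encoder (v4) from TF Hub and embed an instruction.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
natural_language_embedding = embed(["pick up the coke can"])  # shape: (1, 512)
```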

Asad-Shahid commented 10 months ago

Hi, did you find a workaround for this?

ckennedy2050 commented 10 months ago

I've confirmed that the USE encoder on TF Hub (https://tfhub.dev/google/universal-sentence-encoder/4) does not generate the same natural_language_embedding values as those stored in the datasets used (https://docs.google.com/spreadsheets/d/1rPBD77tk60AEIGZrGSODwyyzs5FgCU9Uz3h-3_t2A9g/edit#gid=0). Can anyone share how to generate these embeddings for use with this project? Thank you!
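
For reference, this is roughly how I compared them; the dataset path and observation field names below are my assumptions based on the public RT-1 (fractal) release and may need adjusting:

```python
import numpy as np
import tensorflow_datasets as tfds
import tensorflow_hub as hub

# Assumed location/version of the public RT-1 (fractal) dataset.
builder = tfds.builder_from_directory(
    "gs://gresearch/robotics/fractal20220817_data/0.1.0")
ds = builder.as_dataset(split="train[:1]")

embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

for episode in ds.take(1):
    for step in episode["steps"].take(1):
        obs = step["observation"]
        # Assumed field names: the stored instruction text and its 512-d embedding.
        instruction = obs["natural_language_instruction"].numpy().decode("utf-8")
        stored = obs["natural_language_embedding"].numpy()
        recomputed = embed([instruction]).numpy()[0]
        print(instruction)
        print("max abs diff:", np.abs(stored - recomputed).max())  # large -> mismatch
```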

safsin commented 6 months ago

After checking the example for RT1-X, I found that it uses the large USE encoder (https://tfhub.dev/google/universal-sentence-encoder-large/5), which generates the same embeddings as those in the RT1 dataset.
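
A minimal sketch of generating the embedding with that model (the instruction string is just an example):

```python
import tensorflow_hub as hub

# The large USE variant, which matched the dataset embeddings in my check.
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder-large/5")

# Produces a (1, 512) float32 tensor, i.e. the natural_language_embedding
# the RT-1 policy expects as input.
natural_language_embedding = embed(["pick up the coke can"])
print(natural_language_embedding.shape)  # (1, 512)
```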