-
System Info
GPU: NVIDIA RTX 4090
TensorRT-LLM 0.13
root@docker-desktop:/llm/tensorrt-llm-0.13.0/examples/chatglm# python3 convert_checkpoint.py --chatglm_version glm4 --model_dir "/llm/other/mode…
-
python predict_downstream_condition.py --ckpt_path model_name_roberta-base_taskname_qqp_lr_3e-05_seed_42_numsteps_2000_sample_Categorical_schedule_mutual_hybridlambda_0.0003_wordfreqlambda_0.0_fromscr…
-
I only have GloVe 300d vectors.
Please change the code [here](https://github.com/bedapudi6788/deepsegment/blob/59568edba59d08849a15ee82a90abb98eb6cd944/deepsegment/train.py#L109) and [here](https://github.…
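For reference, loading GloVe-format text vectors into a lookup table is straightforward. This is a minimal sketch using a tiny inline sample (the words and values are made up); a real file such as `glove.6B.300d.txt` has one such line per word with 300 components:

```python
import numpy as np

# Hypothetical sample in GloVe text format: "word v1 v2 ... vD".
sample = """the 0.1 0.2 0.3
cat 0.4 0.5 0.6
sat 0.7 0.8 0.9"""

def load_glove(lines):
    """Parse GloVe-format lines into a {word: vector} dict."""
    vectors = {}
    for line in lines:
        parts = line.rstrip().split(" ")
        vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

glove = load_glove(sample.splitlines())
dim = len(next(iter(glove.values())))
print(dim)  # 3 for this toy sample; 300 for a real 300d file
```

The same loop works unchanged for the real 300d file by iterating over its lines instead of the sample.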
-
Hi,
I finally managed to use `get_sequence_output` to get word embeddings after dealing with random embeddings due to dropout, random seed, etc.
However, `get_sequence_output()` doesn't seem to …
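The dropout effect mentioned above can be illustrated with a toy encoder in plain NumPy (this is not the real BERT API, just a stand-in): with dropout active, repeated forward passes on the same input generally differ; with dropout disabled, as in inference mode, the output is deterministic.

```python
import numpy as np

def embed(x, dropout_rate, rng, train=True):
    """Toy 'encoder': a fixed linear map plus optional dropout.
    Stands in for get_sequence_output(); hypothetical, not the real API."""
    w = np.arange(12.0).reshape(3, 4)  # fixed weights
    h = x @ w
    if train and dropout_rate > 0:
        # Inverted dropout: random mask, rescaled to keep the expectation.
        mask = rng.random(h.shape) >= dropout_rate
        h = h * mask / (1.0 - dropout_rate)
    return h

x = np.ones((2, 3))
rng = np.random.default_rng()

# Training mode: two calls usually disagree because the dropout mask is random.
a = embed(x, 0.1, rng, train=True)
b = embed(x, 0.1, rng, train=True)

# Inference mode (dropout off): repeated calls are identical.
c = embed(x, 0.1, rng, train=False)
d = embed(x, 0.1, rng, train=False)
print((c == d).all())  # True
```

This is why putting the model in evaluation/inference mode (and fixing random seeds for anything else stochastic) is the usual fix for non-reproducible embeddings.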
-
Thanks for this package; very useful.
Would it make sense to include simple multi-word distance metrics like MOWE (mean/median of word embeddings) etc. in this package, or is that already available i…
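For concreteness, MOWE reduces a bag of word vectors to a single vector with an elementwise mean (or median). A minimal sketch with made-up 2d vectors:

```python
import numpy as np

# Toy vectors standing in for a trained embedding table (hypothetical values).
emb = {"the": np.array([0.0, 2.0]), "cat": np.array([2.0, 4.0])}

def mowe(words, emb, reduce=np.mean):
    """Mean of word embeddings; pass reduce=np.median for the median variant.
    Words missing from the table are skipped."""
    vecs = np.stack([emb[w] for w in words if w in emb])
    return reduce(vecs, axis=0)

v = mowe(["the", "cat"], emb)
print(v)  # [1. 3.]
```

Distances between two texts are then just vector distances (cosine, Euclidean) between their MOWE vectors.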
-
Hi, I'm using sentence-transformer models like RoBERTa to embed the text before running the text segmentation tool, which seems to work fine. However, in the tutorial and description, word embeddings ar…
-
How can I get the file referenced by "word_embedding": "./embeddings/giga.100.txt"?
Thanks!
-
Hello, thanks for your wonderful work! I would like to ask how to generate the word embeddings in your paper; could you provide some instructions or code for that?
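While the paper's exact procedure isn't stated in this thread, one classic count-based way to obtain dense word embeddings is a co-occurrence matrix followed by truncated SVD. This sketch uses a tiny made-up corpus and is only an illustration, not the authors' method:

```python
import numpy as np

# Tiny toy corpus; the paper's training corpus is not specified here.
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-1 token window.
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in (i - 1, i + 1):
            if 0 <= j < len(sent):
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD: keep the top `dim` singular directions as embeddings.
U, S, _ = np.linalg.svd(C)
dim = 2
emb = U[:, :dim] * S[:dim]  # one row per vocabulary word
print(emb.shape)  # (4, 2)
```

Prediction-based alternatives (e.g. word2vec or GloVe training on a large corpus) are the more common route in practice; the right choice depends on what the paper actually used.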
-
## 0. Paper
@inproceedings{athiwaratkun-wilson-2017-multimodal,
title = "Multimodal Word Distributions",
author = "Athiwaratkun, Ben and
Wilson, Andrew",
booktitle = "Proceedin…