-
### Description
This is a question, not an issue.
I want to set up a language model using the 1B words. These are my training parameters:
```
PROBLEM=languagemodel_lm1b32k
USR_DIR=/home/.../t…
```
-
Use this script: https://github.com/hassonlab/247-pickling/blob/dev/scripts/tfsemb_perplexity.py
and this command:
```
perp-embeddings:
	mkdir -p logs
	for conv_id in $(CONV_IDS); do \
		python scr…
```
-
```
Traceback (most recent call last):
  File "/home/runner/gp/main.py", line 3, in <module>
    perplexity = Perplexity()
  File "/home/runner/gp/.pythonlibs/lib/python3.10/site-packages/perplexity/perplexity…
```
-
# Problem
Since `token.js` currently integrates with every LLM provider through their JavaScript SDKs, the package size is much larger than necessary. This can impact the performance of backend services…
-
Hi,
Thank you for sharing your work.
The perplexity I reproduced for the PTB dataset using your code does not match the paper: I get 27.8, while the paper reports around 9. Please c…
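Without more context it is hard to say what causes the gap, but one common source of mismatches of roughly this size is the base used when exponentiating the average cross-entropy loss. A minimal sketch (the loss value 3.325 is illustrative, not taken from the repo):

```python
import math

# Hypothetical average per-token cross-entropy in nats (natural log),
# chosen only to show how the exponentiation base changes the result.
avg_loss_nats = 3.325

ppl_base_e = math.exp(avg_loss_nats)  # correct if the loss is in nats
ppl_base_2 = 2 ** avg_loss_nats       # what you get if you exponentiate with base 2 instead

print(round(ppl_base_e, 1))  # 27.8
print(round(ppl_base_2, 1))  # 10.0
```

Note that the two values land close to the two numbers above, so it may be worth checking which base the paper and the code each use before assuming a modeling bug.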
-
### 🚀 The feature, motivation and pitch
The idea here is to use the approach in this paper:
https://arxiv.org/pdf/2406.06443
If possible, to detect if a model has been trained on any of the bench…
-
Hello. I'm currently on Perplexity's Pro Search plan (premium). However, when I select Perplexity in ChatHub, I can only access the default free tier with ChatGPT 3.5. How can I utilize Perplex…
-
The Poe version of Perplexity has limits.
bot: dashboard.perplixity.com
-
I can't seem to find the 'run_lm_perplexity.py' file. Can someone help, please?
Also, could someone provide a good guide on how to run this model locally on PC?
-
When I try to evaluate my model's text generation with the perplexity metric, the batch_size parameter in perplexity._compute(..) is not sufficient, because it tries to tokenize and move the entire…
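One workaround sketch, assuming the truncated issue is about the whole prediction list being tokenized at once: split the inputs into chunks yourself and call the metric once per chunk, so only one chunk is processed at a time. The `chunk` helper below is plain Python; the commented `evaluate` call is an illustrative assumption, not the library's documented workaround.

```python
def chunk(items, size):
    """Yield successive slices of `items` with at most `size` elements each."""
    for start in range(0, len(items), size):
        yield items[start:start + size]


texts = [f"example sentence {i}" for i in range(10)]

per_text_ppl = []
for batch in chunk(texts, size=4):
    # Hypothetical per-chunk metric call; `evaluate`'s perplexity metric
    # returns a dict with a "perplexities" list, one value per input:
    #   out = perplexity.compute(predictions=batch, model_id="gpt2")
    #   per_text_ppl.extend(out["perplexities"])
    per_text_ppl.extend([1.0] * len(batch))  # placeholder values for the sketch

print(len(per_text_ppl))  # 10
```

Aggregating the per-text perplexities at the end (rather than averaging batch means) keeps the result independent of the chunk size you choose.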