-
### 🚀 The feature, motivation and pitch
Paper: [Harnessing Explanations: LLM-to-LM Interpreter for Enhanced Text Attributed Graph Representation Learning](https://arxiv.org/pdf/2305.19523) (ICLR 20…
-
I am running the llama2 model on the wikitext dataset. I just want to try some other metrics, so I modified the default YAML file (`lm-evaluation-harness/lm_eval/tasks/wikitext/wikitext.yaml`) to the following, just…
-
Hi,
sometimes the GSPro connection status for the LM will show "Misread", and even after hitting more balls this doesn't clear and the shots aren't showing in GSPro. I'm using the most up-to-date public f…
-
Windows 2022 Datacenter
64 GB RAM
16 cores
version 0.2.23
It ran successfully for me on three machines, but this machine was not happy after the upgrade and the llama3 download.
The model just does not see…
-
**Acceptance Criteria**
- The AC provided applies to the LM folder only.
- When a user makes an Accepted attempt in the Web Acceptance Process, the system sends an email to the Applicant.
- The email should b…
-
Hello, and thanks for your work!
While running bradley-terry-rm/llama3_rm.py, the final saved model does not have an LM head, as the script uses an AutoModelForSequenceClassification model and not …
-
`libmem.LM_EnumProcess` feels verbose for Python.
Instead, something like `libmem.enum_processes` would be more welcome.
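To illustrate the kind of renaming this suggests, here is a small sketch of a PascalCase-to-snake_case converter. The `to_snake_case` helper is hypothetical (not part of libmem); note that the suggested `enum_processes` also pluralizes the name, which a mechanical rename would not do on its own.

```python
import re

def to_snake_case(name: str) -> str:
    """Convert a libmem-style name such as 'LM_EnumProcess' to snake_case.

    Hypothetical helper for illustration: drops the 'LM_' prefix and
    inserts an underscore before each interior capital letter.
    """
    if name.startswith("LM_"):
        name = name[len("LM_"):]
    # Insert '_' before every capital that is not at the start, then lowercase
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

print(to_snake_case("LM_EnumProcess"))  # enum_process
```

A binding could expose both spellings for a deprecation period by aliasing the old names to the new ones.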
-
Looking for a way to convert model weights between huggingface and Megatron-LM:
(1) Continual pretraining from pretrained huggingface weights
(2) Convert Megatron-LM model weights to huggingf…
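At its core, a checkpoint conversion in either direction is a state-dict key remapping (plus any tensor reshaping a given model needs). The sketch below shows only the renaming step; the mapping table is a made-up example, not the real Megatron-LM or Hugging Face layout, which varies by model and version.

```python
# Illustrative sketch: renaming state-dict keys between two naming schemes.
# EXAMPLE_KEY_MAP is invented for demonstration and does NOT reflect the
# actual Megatron-LM <-> huggingface checkpoint layouts.
from typing import Dict

EXAMPLE_KEY_MAP = {
    "embedding.word_embeddings.weight": "model.embed_tokens.weight",
    "final_layernorm.weight": "model.norm.weight",
}

def remap_state_dict(state_dict: Dict[str, object],
                     key_map: Dict[str, str]) -> Dict[str, object]:
    """Return a new state dict with keys renamed via key_map.

    Keys without an entry in key_map are kept unchanged, so no tensor
    is silently dropped during conversion.
    """
    return {key_map.get(k, k): v for k, v in state_dict.items()}

src = {"embedding.word_embeddings.weight": "W", "unmapped.bias": "b"}
dst = remap_state_dict(src, EXAMPLE_KEY_MAP)
print(sorted(dst))  # ['model.embed_tokens.weight', 'unmapped.bias']
```

Real conversions additionally have to split or merge tensors when the tensor-parallel layout differs, which this sketch does not attempt.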
-
Hi there. I am in the initial setup phase, but the LM will not connect. What am I doing wrong?
-
@ilonah22 @COBrogan @mwdunlap2004 I found out that we can create a normal plot output from an `lm` by including an argument indicating the number of the plot that we want to save:
```
jun_model