-
OS: Windows
I think my environment is ready.
I use Jupyter Notebook locally.
When I run the following:
"from unsloth import FastLanguageModel
import torch
max_seq_length = 8192 # Choose any! We auto sup…
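For reference, a typical Unsloth loading call looks roughly like the sketch below; the model name is only a placeholder chosen for illustration:
```
from unsloth import FastLanguageModel

max_seq_length = 8192

# Placeholder checkpoint name, used here purely for illustration.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    dtype=None,         # let Unsloth pick float16/bfloat16 automatically
    load_in_4bit=True,  # 4-bit quantization to reduce VRAM usage
)
```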
-
Thank you for writing such a wonderful little summary about working with PyTorch. Even with plenty of PyTorch experience already, I enjoyed reading it very much.
In the Jupyter NB `08_02_Py…
-
These are our v0.1 terms for the general vocabulary:
- funder
- institution
- model (alias: algorithm)
- licence_category
- instrument_type
- instrument
- variable
- platform_type (e.g.: "sate…
-
In the admin system, update to the current version as at http://vocabulary.odm2.org/
-
Hello,
I have some questions about the bert_qg vocab. You have provided the preprocessed vocab file, but the code that builds the vocab and the related embeddings is not in preprocess.py. Would you like to release …
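For clarity, the kind of preprocessing I mean is roughly the sketch below; the file paths, the whitespace tokenization, and the `min_freq` threshold are my own assumptions rather than anything taken from this repo:
```
from collections import Counter
import numpy as np

def build_vocab(corpus_path, min_freq=2):
    # Count whitespace-split tokens and keep the frequent ones.
    counter = Counter()
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            counter.update(line.strip().split())
    words = ["[PAD]", "[UNK]"] + [w for w, c in counter.most_common() if c >= min_freq]
    return {w: i for i, w in enumerate(words)}

def build_embeddings(vocab, vectors_path, dim=300):
    # Start from small random vectors, then overwrite rows for words that
    # appear in a pretrained vector file (one word plus `dim` floats per line).
    emb = np.random.normal(scale=0.1, size=(len(vocab), dim)).astype("float32")
    with open(vectors_path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if parts[0] in vocab and len(parts) == dim + 1:
                emb[vocab[parts[0]]] = np.asarray(parts[1:], dtype="float32")
    return emb
```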
-
import math
import os
import random

import matplotlib.pyplot as plt
import torch
from d2l import torch as d2l

# Work around the duplicate-OpenMP-runtime crash some Windows setups hit.
os.environ["KMP_DUPLICATE_LIB_OK"] = "TRUE"

#@save
d2l.DATA_HUB['ptb'] …
-
Thank you guys for the amazing effort!
I am trying to use `gpt2-xl` (listed in the Supported Models) with multiple GPUs.
However, when I use 2 GPUs, I get:
`ValueError: Total number of a…`
-
I need some clarity regarding using a language model's word vocabulary from its training data. Is it essential to stick to the exact vocabulary during usage? Your insights would be much appreciated.
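For concreteness, subword tokenizers split a word that is not in the vocabulary into smaller known pieces rather than rejecting it; the snippet below illustrates this with a Hugging Face BERT tokenizer chosen purely as an example:
```
from transformers import AutoTokenizer

# Illustrative checkpoint; any subword tokenizer behaves similarly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# A word that is not a single vocabulary entry is broken into known subwords,
# so usage-time text is not limited to whole words seen during training.
print(tokenizer.tokenize("electroencephalography"))
```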
-
Hi Zhenghong,
Thanks for providing a useful tool; I have another usage question.
A generation example is provided in the README:
```
python GRA.py --data_path='./Data/MIRA/MIRA.csv' --tc…
```
-
Hi Wei,
Thanks for making this public. However, I cannot find the code that writes vocab.txt, or the vocab.txt data itself. Could you provide either of those?
Thanks!