-
Thanks for your impressive work. However, the dataset provided on the Baidu disk doesn't fit the finetuning process. If I want to reproduce the results in your paper, should I download the official SUN RGB-D d…
-
# Title of the Talk: No-Code SLM Finetuning with MonsterAPI
## Abstract of the Talk:
Dive into the world of no-code large language model (LLM) finetuning in this informative talk presented by Mons…
-
```
🦥 Unsloth: Will patch your computer to enable 2x faster free finetuning.
==((====))==  Unsloth: Fast Llama patching release 2024.6
   \\   /|    GPU: NVIDIA A100 80GB PCIe MIG 7g.80gb. Max memory: 7…
```
-
lightning-cli getinfo
```
{
"id": "035aef5661e1a6e370db60dc0455796800afd5b51fbc12a0a8b34836b15f5d7ef6",
"alias": "TWronald✅",
"color": "15c315",
"num_peers": 9,
"num_pending_chann…
```
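For scripting against output like the above, a small helper can reduce the `getinfo` JSON to the fields of interest. This is a hypothetical sketch, not part of the `lightning-cli` tooling itself; the field names (`id`, `alias`, `num_peers`) are taken from the output shown.

```python
# Hypothetical helper (not part of Core Lightning itself): reduce the
# JSON printed by `lightning-cli getinfo` to a few fields of interest.
import json


def get_node_summary(getinfo_json: str) -> dict:
    """Parse a getinfo JSON document and keep id, alias and peer count."""
    info = json.loads(getinfo_json)
    return {
        "id": info["id"],
        "alias": info.get("alias", ""),
        "num_peers": info.get("num_peers", 0),
    }
```

In practice one would feed it the command's stdout, e.g. `subprocess.run(["lightning-cli", "getinfo"], capture_output=True, text=True).stdout`.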
-
That sounds massively interesting, and while we try to run inference and read the paper, should we expect the release of the finetuning code?
-
While executing the file in the folder `Olive/examples/llama2`, I got the error:
`TypeError: LlamaForCausalLM.forward() got an unexpected keyword argument 'past_key_values.0.key'`
while executing:
`py…
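Errors of this shape typically mean that ONNX-style flattened cache inputs (`past_key_values.0.key`, `past_key_values.0.value`, …) were passed directly to a PyTorch `forward()` that expects a single nested `past_key_values` argument. As an illustration only (this is not Olive's actual fix, and `regroup_past_key_values` is a hypothetical helper), the flattened names can be regrouped like this:

```python
# Illustrative sketch (not Olive's actual code): regroup flattened
# ONNX-style cache inputs such as "past_key_values.0.key" into the
# nested per-layer (key, value) structure a PyTorch forward() expects.
import re

_CACHE_NAME = re.compile(r"^past_key_values\.(\d+)\.(key|value)$")


def regroup_past_key_values(flat_inputs: dict) -> dict:
    """Collect 'past_key_values.<layer>.<key|value>' entries into a
    tuple of (key, value) pairs ordered by layer index; pass every
    other input through unchanged."""
    layers: dict = {}
    remaining: dict = {}
    for name, tensor in flat_inputs.items():
        match = _CACHE_NAME.match(name)
        if match:
            layer, kind = int(match.group(1)), match.group(2)
            layers.setdefault(layer, {})[kind] = tensor
        else:
            remaining[name] = tensor
    if layers:
        remaining["past_key_values"] = tuple(
            (layers[i]["key"], layers[i]["value"]) for i in sorted(layers)
        )
    return remaining
```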
-
Hi there,
First of all, thanks for such awesome work. I tried it on my custom use case, and it gave me excellent results.
However, two classes are missing. I also tried using different custom labels, as t…
-
Finetuning of the Hugging Face models is supported within MindsDB - as [seen in the code here](https://github.com/mindsdb/mindsdb/blob/main/mindsdb/integrations/handlers/huggingface_handler/settings.p…
-
In the finetuning code, a loss is also computed over part of the sequence. May I ask what particular benefit this has compared to a conditioning language modeling loss?
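To make the contrast concrete, here is a toy sketch (plain Python, made-up probabilities, no framework) of the difference between a full-sequence LM loss and a conditional one that only counts the response tokens after the prompt:

```python
# Toy sketch: full-sequence negative log-likelihood vs. a "conditional"
# loss that masks out the prompt tokens. Probabilities are invented.
import math


def nll_loss(token_probs, loss_mask):
    """Mean negative log-likelihood over positions where loss_mask is 1."""
    losses = [-math.log(p) for p, m in zip(token_probs, loss_mask) if m]
    return sum(losses) / len(losses)


# Model probability of each gold token: 3 prompt tokens, 2 response tokens.
probs = [0.5, 0.5, 0.5, 0.9, 0.9]

full_loss = nll_loss(probs, [1, 1, 1, 1, 1])  # loss over every token
cond_loss = nll_loss(probs, [0, 0, 0, 1, 1])  # loss over the response only
```

The conditional variant ignores how well the model predicts the prompt itself, so the gradient signal comes only from the tokens being generated.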
-
Hello Xintao,
We found that direct inference with GFPGAN v1.4 performs quite well on our own datasets, whereas GFPGAN v1 inference is not high-quality.
However, when we tried to fin…