-
### What happened?
Hi guys. I have a problem after compiling Llama on my machine. It built properly, but when I try to run it, it looks for a file that doesn't even exist (a model).
Is it normal…
-
Hi there,
I have a very interesting problem when testing the model on my data.
The original dataset has 5835 rows, i.e., 5835 time series, and each includes 39 timesteps. I understand…
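A minimal sketch of the shape that description implies, using a placeholder array (the variable names and the zero-filled data are purely illustrative, not the poster's actual dataset):

```python
import numpy as np

# Hypothetical stand-in for the dataset described above:
# 5835 rows, one time series per row, each with 39 timesteps.
data = np.zeros((5835, 39))

n_series, n_timesteps = data.shape
print(n_series, n_timesteps)  # 5835 39

# A single series is then a 1-D array of 39 values.
first_series = data[0]
print(first_series.shape)  # (39,)
```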
-
So I fine-tuned a model on a custom dataset. The output should be in JSON format. All the keys are the same for each output, i.e., the structure of the response JSON is the same while the values need to be e…
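Since the response structure is fixed while only the values vary, one common pattern is to validate each model output against the expected key set before using it. A minimal sketch (the key names here are hypothetical, not from the original post):

```python
import json

# Hypothetical expected key set for the fine-tuned model's JSON output.
EXPECTED_KEYS = {"label", "score"}

def has_expected_structure(raw: str) -> bool:
    """Return True if raw parses as a JSON object with exactly the expected keys."""
    try:
        obj = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(obj, dict) and set(obj.keys()) == EXPECTED_KEYS

print(has_expected_structure('{"label": "spam", "score": 0.9}'))  # True
print(has_expected_structure('{"label": "spam"}'))                # False
```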
-
1. Create a Clothing Brand pre-sale application
Very essential, but it probably won't leverage the Filecoin Virtual Machine's advantage, which is combining storage and smart contracts.
2. Create a Zoom bot…
-
It takes almost 10 minutes to get an answer. How can I speed it up? I am using the US Constitution file as a demo.
boral updated 6 months ago
-
### Your current environment
```text
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
…
-
I would like to request a feature enhancement for the vectra CLI. Specifically, I would like the option to use local Large Language Models (LLMs) instead of relying solely on Ope…
-
I'm still not 100% sure whether to call it llava.cpp or choose another name that indicates future support for other multimodal generation models -- maybe multimodal.cpp or lmm.cpp (large mul…
-
Would it be possible to support i-quants in AutoQuant, or are they more demanding to quantize?
-
Part of the [Github Vectorize (Summary)](https://www.notion.so/rndadocs/Github-Vectorize-Summary-fe006094d382427eb1daf746a9055849). **Please read this document before starting.**
The following chan…