-
### Prerequisites
* [X] Put an X between the brackets on this line if you have done all of the following:
* Read about bug reporting in general: https://rspamd.com/doc/faq.html#how-to-report-b…
-
For p and q, there are only two possible cases:
1. p and q are on the same side
Pass the first of p or q that is found upward; that node itself is the lowest common ancestor of the other node.
2. p and q are on different sides
Once p and q are found, we just pass them upward; the first node that sees p and q on its left and right sides is the lowest common ancestor.
A sketch of the full recursion is included after the definition stub below.
```py
# Definition for a binary tree no…
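#     (stub truncated above; the standard LeetCode definition is assumed:)
# class TreeNode:
#     def __init__(self, x):
#         self.val = x
#         self.left = None
#         self.right = None

# A minimal sketch of the recursion described above, assuming the usual
# LeetCode 236 signature. Each call passes up whichever of p / q it has found;
# the first node whose left and right calls both return something is the LCA.
class Solution:
    def lowestCommonAncestor(self, root: 'TreeNode', p: 'TreeNode', q: 'TreeNode') -> 'TreeNode':
        # Found p or q (or ran off the end of a branch): pass it upward.
        if root is None or root is p or root is q:
            return root
        left = self.lowestCommonAncestor(root.left, p, q)
        right = self.lowestCommonAncestor(root.right, p, q)
        # Case 2: p and q came from different sides, so this node is the answer.
        if left and right:
            return root
        # Case 1: both are on the same side (or only one found so far); pass it up.
        return left if left else right
```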
-
I don't want to use OpenAI. I already have Ollama installed.
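A minimal sketch, assuming the project accepts any OpenAI-compatible endpoint: Ollama exposes one at http://localhost:11434/v1, so the official `openai` client can simply be pointed at it. The model name below is only illustrative; use whatever model you have pulled locally.
```py
# Sketch only: point the OpenAI client at a locally running Ollama server.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; Ollama ignores it
)

resp = client.chat.completions.create(
    model="llama3",  # assumption: a model already pulled with `ollama pull`
    messages=[{"role": "user", "content": "Hello from a local model"}],
)
print(resp.choices[0].message.content)
```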
-
I noticed that this project uses both models, Davinci-003 and GPT-3.5 Turbo. What are their respective roles? Since GPT-3.5 Turbo is a general-purpose model, can we use only it to do the work? Bec…
-
Hey, I read your paper and this project looks awesome! I really appreciate you working to build such a thoughtful baseline for LLM memory apps. My team and I are interested in using your repository to ben…
-
https://www.infzm.com/contents/267963?source=131
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4637354
-
**Description**
Ragchat currently can't process information from pictures.
**Workflow**
1. Implement image-to-text functionality so information can be extracted from pictures.
2. Handle the sentences extracted from the picture.
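A minimal sketch of step 1, assuming Tesseract OCR via `pytesseract` as the image-to-text backend; the helper name `extract_text_from_image` is hypothetical and not an existing Ragchat function.
```py
# Hypothetical step-1 helper: turn a picture into raw text with Tesseract OCR.
# Assumes `pillow` and `pytesseract` are installed and a Tesseract binary is on PATH.
from PIL import Image
import pytesseract

def extract_text_from_image(path: str) -> str:
    """Return the text recognized in the image at `path`."""
    return pytesseract.image_to_string(Image.open(path))

# Step 2 would split the returned text into sentences and feed them into the
# existing ingestion pipeline.
```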
**Acceptance…
-
Error at line 200, in from_pretrained: `assert len(keys) == len(sd)` fails, with `len(keys)` == 581 and `len(sd)` == 629.
![image](https://github.com/karpathy/minGPT/assets/11426119/213e704f-e387-4cee-b0d0-2434036a86c9)
…
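For debugging, a small sketch (not part of minGPT) that compares the two collections named in the failing assert, assuming `sd` is the minGPT model's state dict and `keys` the filtered Hugging Face checkpoint keys as used in `from_pretrained`:
```py
# Diagnostic sketch: list the parameter names that appear on only one side,
# to see where the 581 vs. 629 mismatch comes from.
mingpt_keys = set(sd.keys())   # state dict of the minGPT model being loaded into
hf_keys = set(keys)            # filtered keys from the Hugging Face checkpoint
print("only in the minGPT model:", sorted(mingpt_keys - hf_keys))
print("only in the HF checkpoint:", sorted(hf_keys - mingpt_keys))
```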
-
First of all, thank you for the great work.
## System info
autoawq==0.1.8
## Details
While trying to quantize a GPT-NeoX model, I encountered the error below.
```
>>> from awq import AutoAWQForCa…
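>>> # (traceback truncated above)
>>> # For context, a minimal sketch of the call path that hits the error,
>>> # assuming the AutoAWQ 0.1.x README flow; the model id and quant_config
>>> # values below are illustrative, not the exact ones used.
>>> from awq import AutoAWQForCausalLM
>>> from transformers import AutoTokenizer
>>> model = AutoAWQForCausalLM.from_pretrained("EleutherAI/gpt-neox-20b")
>>> tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")
>>> model.quantize(tokenizer, quant_config={"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"})
```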
-
I received the following error when trying to compile an engine for an AutoGPTQ-quantized Llama-2-13b-chat.
```
Traceback (most recent call last):
File "/root/TensorRT-LLM/examples/llama/build.p…