-
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain.js documentation with the integrated search.
- [X] I used the GitHub search to find a …
-
#vagas (job openings)
IT Jobs
We are accepting applicants from the following countries: #Argentina, #Brazil, #Colombia and #Mexico.
Senior Full-stack/Back-end Engineer - Node.js, React.js (C1/C2)
💰USD 7k…
-
I tried to run the CUDA server from within a container, but a thread panics:
```
running /workspace/aici/target/release/rllm-cuda --verbose --aicirt /workspace/aici/target/release/aicirt -m micros…
-
Two new models released by Microsoft:
https://huggingface.co/microsoft/Phi-3-medium-4k-instruct/
https://huggingface.co/microsoft/Phi-3-small-8k-instruct/
Medium uses Phi3ForCausalLM and conv…
-
## Feature request
Request the implementation of the following ONNX operators:
* LogSoftmax
* Softmax
* ReduceMax
## Motivation
These operators are common in neural networks of many types;…
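For reference, the three requested operators reduce to simple element-wise math. Below is a minimal single-axis sketch in plain Python, an illustration only: the real ONNX operators additionally take an `axis` attribute and operate on N-dimensional tensors, and `Softmax`/`LogSoftmax` are normally computed with the max-subtraction (log-sum-exp) trick for numerical stability, as shown here.

```python
import math

def softmax(xs):
    # Numerically stable Softmax: subtract the max before exponentiating
    # so that exp() never overflows for large inputs.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def log_softmax(xs):
    # LogSoftmax via the log-sum-exp trick: log(softmax(x)) computed
    # directly, avoiding the underflow of log(small number).
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

def reduce_max(xs):
    # ReduceMax over a flat list (the ONNX op reduces over chosen axes).
    return max(xs)
```

In a real kernel the same max-subtraction structure applies per reduction axis; the sketch above only fixes the 1-D case.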
-
### Your current environment
I used version 0.4.3, installed via pip, CUDA version 12.0, on an A100 GPU.
RuntimeError: t == DeviceType::CUDA INTERNAL ASSERT FAILED
### 🐛 Describe the bug
```
INFO 06-02 03…
-
### My environment setup
1st environment (running on ec2 `g6.4xlarge`)
```
[2024-06-01T10:14:23Z] Collecting environment information...
[2024-06-01T10:14:26Z] PyTorch version: 2.3.0+cu121
[2024-0…
khluu updated 1 month ago
-
Users are seeking assistance or guidance on how to properly set up and configure the LLM function to run on Mac systems. They may be facing difficulties in installing dependencies, configuring environ…
-
Hello. Thanks for providing vLLM as a great open-source tool for inference and model serving! I was able to build vLLM on a cluster I maintain, but it only appears to work on a single MI210 GPU. Can so…
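For context on questions like this: vLLM spreads a model across multiple GPUs via tensor parallelism, controlled by the `tensor_parallel_size` engine argument (`--tensor-parallel-size` on the CLI). A minimal launch sketch, assuming 4 GPUs and a placeholder model name (whether this resolves the ROCm/MI210-specific issue above is a separate question):

```shell
# Config fragment, not a verified fix: start the OpenAI-compatible server
# sharded across 4 GPUs. Replace the model name with your own.
python -m vllm.entrypoints.openai.api_server \
    --model facebook/opt-6.7b \
    --tensor-parallel-size 4
```

The same setting is available from Python as `LLM(model=..., tensor_parallel_size=4)`.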
-
### Your current environment
```
Collecting environment information...
PyTorch version: 2.2.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A
OS: …