-
### Before submitting your bug report
- [X] I believe this is a bug. I'll try to join the [Continue Discord](https://discord.gg/NWtdYexhMs) for questions
- [X] I'm not able to find an [open issue]…
-
**URL**: https://www.phind.com/
**Browser / Version**: Firefox Mobile 125.0
**Operating System**: Android 10
**Tested Another Browser**: Yes, Opera
**Problem type**: Site is not usable
**Descrip…
-
### Bug Description
```python
llm = Vllm(
    model="/hy-tmp/hub/models--Phind--Phind-CodeLlama-34B-v2/snapshots/949f61e203f91b412efe8f679c798f09f0ff4b0c",
    dtype="float16",
    tensor_parallel_size=1,
    …
```
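A rough sizing check (the parameter count and byte width are my assumptions, not from the report) shows why a 34B model in float16 is tight on a single GPU: the weights alone need on the order of 68 GB, before any KV cache or activations.

```python
# Back-of-envelope GPU memory estimate for a 34B-parameter model in float16.
# Assumptions (mine, not from the report): ~34e9 parameters, 2 bytes per weight.
params = 34e9
bytes_per_param = 2  # float16
weight_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weight_gb:.0f} GB")  # ~68 GB before KV cache/activations
```

With `tensor_parallel_size=1`, all of that must fit on one device, which commodity GPUs cannot hold.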
-
## What do you do?
Read Chapter 6
## TODO
- [x] 6.1 Partitioning and Replication
- [x] #32
- [x] 6.3 Partitioning and Secondary Indexes
- [x] 6.4 Rebalancing Partitions
- [x] 6.5 Request Routing
-
Here's a problem described to Phind:
https://www.phind.com/search?cache=o31uled8qv0ayvnbv5qqk9iy
We tested like this:
Create a new (arbitrary) repo
Start a codespace
``` shell
gh ext…
-
Hello,
Founder of Phind here. I'm kindly requesting that you once again remove Phind from your providers list @xtekky. We've previously asked to be removed, and we appreciated your cooperation with…
-
Hi team,
I am trying out tgpt on the CLI using phind as the provider.
Is there a way, or some advice on how, to keep a conversation flowing with follow-up questions based on the previous interaction (as …
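Whatever tgpt itself exposes, the general pattern behind follow-up questions is to resend the prior turns with every new request, so the model sees the whole exchange. A minimal sketch of that pattern (`call_model` is a hypothetical stand-in, not tgpt's API):

```python
# Minimal conversation-history pattern: each request carries all prior turns.
def call_model(prompt: str) -> str:
    # Placeholder: a real client would send `prompt` to the provider here.
    return f"(answer to: {prompt.splitlines()[-1]})"

history: list[str] = []

def ask(question: str) -> str:
    history.append(f"User: {question}")
    prompt = "\n".join(history)  # prior turns give the model context
    answer = call_model(prompt)
    history.append(f"Assistant: {answer}")
    return answer

ask("What is partitioning?")
ask("And how does that relate to replication?")  # this turn sees the first exchange
print(len(history))  # 4 entries: two questions, two answers
```

The trade-off is that the prompt grows with every turn, so long conversations eventually need truncation or summarization of older turns.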
-
Hello,
I'm having an issue running inference with an AWQ model, which gives me a CUDA OOM error at load time when using vLLM:
```python
llm = LLM(model="/root/Thot/llama_model_weights/quantized/Q4/Phind-CodeLlama-34B-v2-A…
```
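A quantized model can still OOM because weights are only part of the budget. Assuming roughly 34e9 parameters at about 0.5 bytes each after 4-bit quantization (my estimate, not from the report), the weights are around 17 GB, and vLLM additionally pre-allocates KV-cache space up to its `gpu_memory_utilization` fraction of the GPU:

```python
# Why an AWQ 4-bit 34B model can still OOM: weights are only part of the story.
# Assumptions (mine): ~34e9 parameters, ~0.5 bytes per weight after 4-bit quantization.
params = 34e9
weight_gb = params * 0.5 / 1e9
print(f"quantized weights: ~{weight_gb:.0f} GB")  # ~17 GB
# On top of this, vLLM reserves KV-cache memory up to `gpu_memory_utilization`
# (default 0.9) of the device; lowering that value or `max_model_len` is a
# common first mitigation to try when loading fails with CUDA OOM.
```

If the GPU has much less than ~20 GB free, even the quantized weights plus the reserved cache will not fit.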
-
### `brew config`
```shell
HOMEBREW_VERSION: 4.3.9-307-g4aae003
ORIGIN: https://github.com/Homebrew/brew
HEAD: 4aae003a1a9caa4a5b22c3c9c23afd6b1ef58a0f
Last commit: 10 hours ago
Core tap JSON:…
-
Description:
Implement unit tests and integration tests to ensure the reliability and correctness of the atomic swap process and treasury bill management system with the new interoperability layer …
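As a starting point, a unit test for an atomic swap usually asserts the all-or-nothing property: either both legs settle or neither does. A toy in-memory sketch of that shape (every name here is hypothetical, not from this repo's code):

```python
import unittest

def atomic_swap(alice: dict, bob: dict, amt_x: int, amt_y: int) -> None:
    """Alice gives amt_x of asset X for Bob's amt_y of asset Y, all-or-nothing."""
    if alice["X"] < amt_x or bob["Y"] < amt_y:
        raise ValueError("a leg cannot settle")  # checked before any mutation
    alice["X"] -= amt_x
    bob["X"] = bob.get("X", 0) + amt_x
    bob["Y"] -= amt_y
    alice["Y"] = alice.get("Y", 0) + amt_y

class AtomicSwapTest(unittest.TestCase):
    def test_both_legs_settle(self):
        alice, bob = {"X": 100, "Y": 0}, {"X": 0, "Y": 50}
        atomic_swap(alice, bob, 30, 20)
        self.assertEqual(alice, {"X": 70, "Y": 20})
        self.assertEqual(bob, {"X": 30, "Y": 30})

    def test_failed_swap_leaves_state_untouched(self):
        alice, bob = {"X": 5, "Y": 0}, {"X": 0, "Y": 50}
        with self.assertRaises(ValueError):
            atomic_swap(alice, bob, 30, 20)
        self.assertEqual(alice, {"X": 5, "Y": 0})  # no partial settlement
        self.assertEqual(bob, {"X": 0, "Y": 50})

if __name__ == "__main__":
    unittest.main()
```

Integration tests would then exercise the same property across the real interoperability layer, where a leg can fail mid-flight rather than at a pre-check.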