-
### What happened?
When I run `llama-cli -m ./Phi-3.5-mini-instruct-Q6_K_L.gguf -p "I believe the meaning of life is" -n 128` I get an error
```
build: 3829 (44f59b43) with Apple clang version 15…
-
An inductor defined as
```
`include "constants.vams"
`include "disciplines.vams"
module inductor(A,B);
inout A, B;
electrical A, B;
branch (A,B) br;
(*desc= "Inductance", type="instance…
-
Kudos for the wonderful work... This looks amazing!
As a newbie, I wonder whether some of the models, like Qwen or llava, support images as input. How useful would it be to develop such a feature for this applicati…
-
**Describe the bug**
Currently, I use onnxgenai==0.4.0 with a converted phi_3_5_mini_instruct model (fp16, CUDA) and run inference with onnxgenai on an A100 80G.
I observed that for some input lengths around 3000 (800…
-
While I had already pulled llama2:7b, I wanted to install llama2 (without the 7b tag). My understanding was that it is the exact same model (same hash), so maybe ollama would install only the metadata f…
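
The expectation here, that two tags pointing at the same layers should only re-fetch metadata, is how a content-addressed store behaves. A minimal sketch of that idea (the names and structure are illustrative, not Ollama's actual implementation):

```python
import hashlib

# Toy content-addressed blob store: blobs are keyed by their SHA-256
# digest, so two tags referencing the same digest share one copy.
# This illustrates the expectation above; it is not Ollama's code.
store: dict[str, bytes] = {}

def push_blob(data: bytes) -> str:
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)      # no-op if the blob already exists
    return digest

weights = b"model weights ..."          # stand-in for the 7b layer
d1 = push_blob(weights)                 # pulled via the llama2:7b tag
d2 = push_blob(weights)                 # pulled again via the llama2 tag

assert d1 == d2 and len(store) == 1     # same hash: only one copy stored
```

Under this model, pulling the second tag only needs to write the new tag's metadata, since every layer digest already resolves in the store.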
-
```cpp
int array[3];

int main() {
    int i = 0;
    int a = array[1];
    int b = array[0];
    int c = array[2];
    int d = 1919810;
    int x = 0;
    while(i
-
I have read #16 and followed the equation `x = ARAx' + c`, but I still can't align the point clouds to the EXR points. Here is my code
```
# back-projection
depth = depth / 512
n,m = np.meshgrid(np…
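
# --- illustrative sketch (not the poster's code): pinhole back-projection ---
# fx, fy, cx, cy are assumed example intrinsics; a real alignment would
# use the camera parameters that produced the EXR depth map.
import numpy as np
fx = fy = 500.0
cx, cy = 320.0, 240.0
depth = np.ones((480, 640), dtype=np.float32)        # toy depth map
u, v = np.meshgrid(np.arange(640), np.arange(480))   # pixel coordinates
X = (u - cx) * depth / fx                            # camera-space x
Y = (v - cy) * depth / fy                            # camera-space y
points = np.dstack([X, Y, depth])                    # (H, W, 3) point cloud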
-
### Your current environment
vllm ver: 0.6.3.post1
### Model Input Dumps
_No response_
### 🐛 Describe the bug
As suggested by the vLLM team, I set `tokenizer_mode= "mistral" if llm_name.startswith('…
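
The conditional in that line can be factored into a small helper. Since the original condition is truncated, the exact prefix and the `"auto"` fallback below are assumptions for illustration:

```python
def pick_tokenizer_mode(llm_name: str) -> str:
    # Mistral-family checkpoints use vLLM's "mistral" tokenizer mode;
    # the prefix test and the "auto" fallback here are illustrative,
    # not taken from the truncated snippet above.
    return "mistral" if llm_name.startswith("mistral") else "auto"

print(pick_tokenizer_mode("mistralai/Mistral-7B-Instruct-v0.3"))  # mistral
print(pick_tokenizer_mode("meta-llama/Llama-3.1-8B"))             # auto
```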
-
We have the following bril program:
```
@main(v0: int) {
v1_: int = const 0;
v2_: int = const 3;
v3_: int = const 5;
v4_: int = const 1;
v5_: int = id v1_;
v6_: int = id v2_;
v7…
-
This example is taken from the baseball case study in `pymc-examples`. We fit a beta-binomial model to some baseball batting data:
```python
data = pd.read_csv(pm.get_data("efron-morris-75-data.t…
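
# --- illustrative sketch (not from the example): the beta-binomial pmf ---
# scipy.stats.betabinom gives the likelihood the model above fits; the
# counts and the (a, b) pseudo-counts here are toy values, not numbers
# estimated from the Efron-Morris data.
from scipy import stats
at_bats, hits = 45, 18                       # hypothetical player record
p = stats.betabinom.pmf(hits, at_bats, 50.0, 150.0)
assert 0.0 < p < 1.0                         # a proper probability mass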