containers / ramalama
The goal of RamaLama is to make working with AI boring.
MIT License · 280 stars · 48 forks
Issues
#441 · Only do dnf install for cuda images · ericcurtin · closed 2 weeks ago · 2 comments
#440 · We can now run models via Kompute in podman-machine · ericcurtin · closed 2 weeks ago · 2 comments
#439 · Bump to v0.0.23 · rhatdan · closed 2 weeks ago · 1 comment
#438 · Closing stderr on podman command is blocking progress information and… · rhatdan · closed 2 weeks ago · 1 comment
#437 · Bump to v0.0.23 · rhatdan · closed 2 weeks ago · 1 comment
#436 · Run the command by default without stderr · rhatdan · closed 2 weeks ago · 4 comments
#435 · Make it easier to test-run manually · rhatdan · closed 2 weeks ago · 4 comments
#434 · Run does not have generate, so remove it · rhatdan · closed 2 weeks ago · 1 comment
#433 · When running on macOS via ramalama run we were throwing exceptions · ericcurtin · closed 2 weeks ago · 2 comments
#432 · Attempt to remove OCI Image if removing as Ollama or Huggingface fails · rhatdan · closed 2 weeks ago · 1 comment
#431 · Ramalama Isnt Directing Stderr correctly · bmahabirbu · closed 2 weeks ago · 7 comments
#430 · Install llama-cpp-python[server] · ericcurtin · closed 2 weeks ago · 3 comments
#429 · Fix podman run oci://... · rhatdan · closed 2 weeks ago · 0 comments
#428 · Remove omlmd as a dependency · ericcurtin · closed 2 weeks ago · 2 comments
#427 · Check versions match in CI · ericcurtin · closed 2 weeks ago · 1 comment
#426 · Made run and serve consistent with model exec path. Fixes issue #413 · bmahabirbu · closed 2 weeks ago · 0 comments
#425 · Update ggerganov/whisper.cpp digest to 31aea56 · renovate[bot] · closed 2 weeks ago · 0 comments
#424 · Allow default port to be specified in ramalama.conf file · rhatdan · closed 2 weeks ago · 0 comments
#423 · Add --generate quadlet/kube to create quadlet and kube.yaml · rhatdan · closed 2 weeks ago · 0 comments
#422 · ❇ Is there a way to define its port and host? · bentito · closed 2 weeks ago · 1 comment
#421 · Bugfix comma · ericcurtin · closed 2 weeks ago · 1 comment
#420 · 🐛 install script failing on my M2 MacBook Pro · bentito · closed 2 weeks ago · 4 comments
#419 · Fix nocontainer mode · rhatdan · closed 2 weeks ago · 0 comments
#418 · Update fedora Docker tag to v42 · renovate[bot] · closed 2 weeks ago · 2 comments
#417 · Fix nocontainer mode · ericcurtin · closed 2 weeks ago · 2 comments
#416 · Generate MODEL.yaml file locally rather then just to stdout · rhatdan · closed 2 weeks ago · 0 comments
#415 · Bump to v0.0.22 · rhatdan · closed 2 weeks ago · 1 comment
#414 · Fix mounting of Ollama AI Images into containers. · rhatdan · closed 2 weeks ago · 0 comments
#413 · Podman Path Mounting Error and Llama-cli Incorrect Path · bmahabirbu · closed 2 weeks ago · 4 comments
#412 · Split out kube.py from model.py · rhatdan · closed 2 weeks ago · 2 comments
#411 · Use subpath for OCI Models · rhatdan · closed 2 weeks ago · 0 comments
#410 · Bump to v0.0.21 · rhatdan · closed 3 weeks ago · 0 comments
#409 · Update ggerganov/whisper.cpp digest to 0377596 · renovate[bot] · closed 3 weeks ago · 0 comments
#408 · Consistency changes · ericcurtin · closed 2 weeks ago · 11 comments
#407 · reduced the size of the nvidia containerfile · bmahabirbu · closed 3 weeks ago · 1 comment
#406 · Make quadlets work with OCI images · rhatdan · closed 3 weeks ago · 0 comments
#405 · Verify pyproject.py and setup.py have same version · rhatdan · closed 3 weeks ago · 1 comment
#404 · Make minimal change to allow for ramalama to build on EL9 · smooge · closed 3 weeks ago · 2 comments
#403 · Packit: disable osh diff scan · lsm5 · closed 3 weeks ago · 3 comments
#402 · Move /run/model to /mnt/models to match k8s model.car definiton · rhatdan · closed 3 weeks ago · 0 comments
#401 · Remove huggingface-hub references from spec file · ericcurtin · closed 3 weeks ago · 1 comment
#400 · Time for removal of huggingface_hub dependancy · ericcurtin · closed 3 weeks ago · 3 comments
#399 · chore(deps): update ggerganov/whisper.cpp digest to 0377596 · renovate[bot] · closed 3 weeks ago · 1 comment
#398 · chore(deps): update ggerganov/whisper.cpp digest to 4e10afb · renovate[bot] · closed 3 weeks ago · 0 comments
#397 · Enable containers on macOS to use the GPU · slp · closed 3 weeks ago · 2 comments
#396 · Mount model.car volumes into container · rhatdan · closed 3 weeks ago · 0 comments
#395 · Make transport use config · rhatdan · closed 3 weeks ago · 0 comments
#394 · More debug info · ericcurtin · closed 3 weeks ago · 1 comment
#393 · Share these paths if they exist · ericcurtin · closed 3 weeks ago · 4 comments
#392 · Update ggerganov/whisper.cpp digest to 19dca2b · renovate[bot] · closed 3 weeks ago · 0 comments