-
logs:
PS C:\Users\60461> ollama serve
2024/08/07 14:40:41 routes.go:1011: INFO server config env="map[OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST: OLLAMA_KEEP_ALIVE: OLLAMA_LLM_LIBRA…
-
Hi dusty. I use the GStreamer Python API to decode an H.264 RTSP camera stream (I built the pipeline with nvv4l2decoder and nvvidconv). Here are the details:
1. The pipeline converts images from …
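For reference, a minimal sketch of a decode pipeline along those lines (the RTSP URL, latency, and output caps are placeholder assumptions, not the poster's exact settings):

```python
# Sketch: RTSP H.264 -> nvv4l2decoder (NVMM) -> nvvidconv -> BGRx in system memory -> appsink
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

pipeline_str = (
    "rtspsrc location=rtsp://<camera-ip>/stream latency=200 ! "  # placeholder URL
    "rtph264depay ! h264parse ! nvv4l2decoder ! "
    "nvvidconv ! video/x-raw,format=BGRx ! "
    "appsink name=sink emit-signals=true max-buffers=1 drop=true"
)

pipeline = Gst.parse_launch(pipeline_str)
pipeline.set_state(Gst.State.PLAYING)
```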
-
## Goal
The goal of this issue is to lock down how we want `EpiAware` to ingest observed data, especially in light of #107.
## Current API
The data `y_t` is an argument to `make_epi_aware` co…
-
### Prerequisite
- [X] I have searched [Issues](https://github.com/open-mmlab/mmpose/issues) and [Discussions](https://github.com/open-mmlab/mmpose/discussions) but cannot get the expected help.
- [X…
-
Hi community,
I have subscribed to a 7-day free trial of the Startup Plan, and I wish to test the CPU-optimized Inference API on this model: https://huggingface.co/Matthieu/stsb-xlm-r-multilingual-custom
…
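For reference, a hedged sketch of calling the hosted Inference API for that model, assuming it is exposed under the sentence-similarity task (the token and example sentences are placeholders):

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Matthieu/stsb-xlm-r-multilingual-custom"
headers = {"Authorization": "Bearer <HF_API_TOKEN>"}  # placeholder token

# Sentence-similarity payload: one source sentence scored against candidates
payload = {
    "inputs": {
        "source_sentence": "The cat sits on the mat.",
        "sentences": ["A cat is sitting on a mat.", "The weather is nice today."],
    }
}

response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())  # expected: a list of similarity scores, one per candidate
```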
-
### Product
Hot Chocolate
### Is your feature request related to a problem?
Consider, for example, this error:
https://github.com/ChilliCream/graphql-platform/blob/7d21adab765fffad291f22255db1102f…
-
### Describe the issue
Let's say I have an ONNX model that takes an input of shape 1x3x224x224. I want to change the model so that I can do batch inference. The two ways I could do it are setting the first …
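One of those ways is presumably to make the first dimension symbolic; a minimal sketch of that, using the `onnx` Python package (the file names and the dimension name `batch` are placeholders):

```python
import onnx

# Rewrite the fixed batch dimension (1) of every graph input/output to a
# symbolic name so the model accepts arbitrary batch sizes at inference time.
model = onnx.load("model.onnx")
for value_info in list(model.graph.input) + list(model.graph.output):
    dim0 = value_info.type.tensor_type.shape.dim[0]
    dim0.dim_param = "batch"  # replaces dim_value = 1

onnx.checker.check_model(model)
onnx.save(model, "model_dynamic_batch.onnx")
```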
-
## Background
**[Neural Sparse](https://opensearch.org/docs/latest/search-plugins/neural-sparse-search/)** is a semantic search method built on the native Lucene inverted index. The documents…
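As a conceptual illustration only (plain Python, not OpenSearch code), a sparse encoder produces `{token: weight}` vectors that can be stored and scored with an ordinary inverted index, which is what lets the method reuse Lucene:

```python
from collections import defaultdict

index = defaultdict(dict)  # token -> {doc_id: weight}, i.e. weighted postings

def index_doc(doc_id, sparse_vector):
    # sparse_vector is the encoder output: {token: weight}
    for token, weight in sparse_vector.items():
        index[token][doc_id] = weight

def search(query_vector):
    # Score = dot product over tokens shared by query and document
    scores = defaultdict(float)
    for token, q_weight in query_vector.items():
        for doc_id, d_weight in index[token].items():
            scores[doc_id] += q_weight * d_weight
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

index_doc("d1", {"search": 1.2, "semantic": 0.8})
index_doc("d2", {"weather": 1.5})
print(search({"semantic": 1.0, "search": 0.5}))  # "d1" ranks first
```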
-
**Describe the bug**
When I use `code_interpreter`, the error occurs; the details are in **`Screenshots`**. If I do not use `code_interpreter`, this error does not happen.
I cannot fi…
-
(mimicmotion) PS D:\profile_me\BUPT\baoyan\THUSZ\MimicMotion-main> python inference.py --inference_config configs/test.yaml
Cannot initialize model with low cpu memory usage because `accelerate` was …