-
Hi,
I'm using your LMIC library on my node, a Luigino board with an ATmega328P (8 MHz, 3.3 V) and an RFM95W (SX1276): I can send a message to my gateway and see it in the TTN payload log, bu…
-
I was testing the LoraSendAndReceive sketch with an MKR WAN 1300 as the node and a PyCom LoPy as a "nano gateway" with The Things Network.
The PyCom "nanogateway" / TTN combination can run into "join accept" downlink timing prob…
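For context, the timing constraint here is fixed by the LoRaWAN 1.0.x specification: the node opens its join-accept receive windows exactly 5 s and 6 s after the end of the Join Request uplink, so a single-channel nanogateway that forwards the accept late misses both windows. A minimal Python sketch of those windows (illustrative only, not node firmware; the constant names mirror the spec):

```python
# LoRaWAN 1.0.x join-accept receive-window delays, per the specification.
JOIN_ACCEPT_DELAY1 = 5.0  # seconds after the uplink ends (RX1)
JOIN_ACCEPT_DELAY2 = 6.0  # seconds after the uplink ends (RX2)

def join_rx_windows(uplink_end_s: float) -> tuple[float, float]:
    """Times (in seconds) at which RX1 and RX2 open for a join accept."""
    return (uplink_end_s + JOIN_ACCEPT_DELAY1,
            uplink_end_s + JOIN_ACCEPT_DELAY2)

print(join_rx_windows(0.0))  # (5.0, 6.0)
```

Any gateway-side forwarding latency has to fit inside those fixed offsets, which is why a nanogateway on a slow backhaul tends to fail joins while regular uplinks still appear in the console.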
-
### Your current environment
The output of `python collect_env.py`
```text
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A…
-
When I click the "Stable images settings" button, nothing happens. I'm running in a world without any other modules enabled.
The error I'm seeing in the console is:
foundry.js:753 TypeError: An …
-
I presume there is a minimum CPU requirement, like needing AVX2, AVX-512, F16C, or something similar?
Could you document the minimum required instruction set and extensions?
root@1d1c4289f303:/llm-api# p…
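While waiting for that to be documented, one way to see which of these extensions a Linux host actually has, without any project-specific tooling, is to parse the "flags" line of /proc/cpuinfo (flag names there are lowercase: avx2, avx512f, f16c). A small sketch:

```python
# Parse the "flags" line of a /proc/cpuinfo dump into a set of extensions.
def flags_from_cpuinfo(text: str) -> set[str]:
    for line in text.splitlines():
        if line.startswith("flags"):
            return set(line.split(":", 1)[1].split())
    return set()

def host_has(ext: str) -> bool:
    """Check whether the current Linux host reports a given CPU flag."""
    with open("/proc/cpuinfo") as f:
        return ext in flags_from_cpuinfo(f.read())

# e.g. host_has("avx2"), host_has("avx512f"), host_has("f16c")
```

Running that inside the same container as the failing command shows immediately whether the host CPU is missing one of the suspected extensions.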
-
### Describe the issue as clearly as possible:
When running the provided arithmetic grammar example with vLLM, I get the error `TypeError: Error in model execution: argument 'ids': 'list' object cannot …
-
Hi @fox27374,
I have the same problem with my own certificates. After trying it your way, I still get a "Token exchange refused" error. I'm attaching my curl output, and also what happens …
-
### Your question
The debug logs show this for many of my models: "Not loading metadata for MODEL_FILENAME as it lacks a proper header (path='PATH_TO_MODEL')", and also "did not match any of 64 optio…
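In case it helps triage: if the affected files are `.safetensors` (an assumption; the quoted log line doesn't say which format), a "proper header" concretely means the first 8 bytes are a little-endian uint64 giving the length of a JSON header that immediately follows. A hedged sketch that checks that layout without any library:

```python
import json
import struct

def read_safetensors_header(raw: bytes) -> dict:
    """Parse a .safetensors header: 8-byte LE length, then that many JSON bytes."""
    (n,) = struct.unpack("<Q", raw[:8])
    return json.loads(raw[8:8 + n].decode("utf-8"))

# Round-trip a minimal valid header to show the expected layout:
meta = {"__metadata__": {"format": "pt"}}
blob = json.dumps(meta).encode("utf-8")
raw = struct.pack("<Q", len(blob)) + blob
print(read_safetensors_header(raw))  # {'__metadata__': {'format': 'pt'}}
```

A file whose first 8 bytes decode to a nonsensical length, or whose header bytes aren't valid JSON, would trip exactly this kind of "lacks a proper header" check.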
-
I cannot join our SX1262-based devices to the Helium network. The same devices join without issue to TTN and ChirpStack, and the same devices configured for the EU868 region join without issue t…
-
### Your current environment
docker image: vllm/vllm-openai:v0.6.2 and vllm/vllm-openai:v0.6.3
command: docker run --runtime nvidia --gpus '"device=0,1"' -d -v /data/model/llama:/data/model/llama -p 8…