-
- [x] German translation
- [x] support for forcing the language via the userlang parameter: http://localhost/wlm/#!?userlang=de&c=50.0648:19.9424:13
- [x] default taken from the browser (fallback to en)
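The resolution order in the checklist above (explicit `userlang` parameter, then the browser language, then English as the fallback) can be sketched as follows. This is a hypothetical illustration in Python, not the app's actual code; the supported-language set and function name are assumptions.

```python
# Illustrative language resolution: userlang param > browser language > "en".
# SUPPORTED is an assumed set; the real app's list may differ.
SUPPORTED = {"en", "de", "pl"}

def resolve_language(userlang=None, browser_lang=None):
    """Pick the UI language, falling back to English."""
    for candidate in (userlang, browser_lang):
        if candidate:
            code = candidate.split("-")[0].lower()  # "de-AT" -> "de"
            if code in SUPPORTED:
                return code
    return "en"
```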
-
When uploading an image for the UK version of WLM2024, the country code used is just the invalid "gb", rather than the more specific "gb-eng" for England (not sure what the codes are for Wales, Scotla…
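For reference, ISO 3166-2:GB defines subdivision codes for all four home nations: GB-ENG (England), GB-SCT (Scotland), GB-WLS (Wales), and GB-NIR (Northern Ireland). A minimal sketch of the mapping the uploader would need (the function and dictionary here are illustrative, not WLM code):

```python
# ISO 3166-2:GB subdivision codes, lowercased as the campaign uses them.
UK_SUBDIVISIONS = {
    "england": "gb-eng",
    "scotland": "gb-sct",
    "wales": "gb-wls",
    "northern ireland": "gb-nir",
}

def wlm_country_code(nation):
    """Map a UK home nation to its subdivision code, else fall back to 'gb'."""
    return UK_SUBDIVISIONS.get(nation.strip().lower(), "gb")
```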
-
### Summary
The Wiki Loves Monuments image request is still displayed unchanged after uploading, so I might forget I already uploaded and go back there for nothing.
### Steps to reproduce
1. Upload an image
2…
-
I am using the following simple configuration
```json
# wlm.json
[
{
"query_concurrency": 15,
"query_group": [],
"query_group_wild_card": 0,
"user_group": [],
"user_group_wild_ca…
-
I am working with version 0.4.0a0 and the WLM test is failing in the repackage_hidden() function. The error message is copied below. I think it has something to do with the fact that the Variable API has …
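Since PyTorch 0.4 merged the Variable API into Tensor, the old `Variable(h.data)` idiom for detaching hidden state no longer applies; `detach()` is the replacement. A sketch of `repackage_hidden()` in the post-0.4 style (this mirrors the pattern in PyTorch's word_language_model example, the failing test's actual code may differ):

```python
import torch

def repackage_hidden(h):
    """Detach hidden states from their history so backprop stops here."""
    if isinstance(h, torch.Tensor):
        return h.detach()
    # LSTM hidden state is a tuple (h, c); recurse into it.
    return tuple(repackage_hidden(v) for v in h)
```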
-
### 🐛 Describe the bug
2024-06-22T03:41:52,860 [ERROR] W-9000-bloom7b1_1.0 org.pytorch.serve.wlm.WorkerThread - Number or consecutive unsuccessful inference 2
2024-06-22T03:41:52,861 [ERROR] W-9000-…
-
### 🐛 Describe the bug
About 30~40 seconds after `torchserve --start ...`, it prints some error messages and stops my model.
The client side has not been involved yet.
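If the worker process dies because the model takes longer than the default timeout to load or respond, raising the timeout in `config.properties` is a common first thing to try. A hedged sketch (`default_response_timeout` is a documented TorchServe property; the value below is arbitrary and may not be the cause here):

```
# config.properties -- illustrative value; tune for your model's load time
default_response_timeout=300
```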
### Error logs
###…
-
**Describe the bug**
The PyTorch SageMaker endpoint's CloudWatch log level is fixed at INFO and cannot be changed without building a BYO container.
Hence all the access logs, including /ping besides the /i…
-
When I directly run the following command:
**python3 main.py -onnx_model_path ./onnx_model/conv_model.onnx --ifmsize 1 3 32 32 --arch_config_module configs.example_config**
the code returns an erro…
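For context, the flags in the command above could be declared with `argparse` roughly as follows. This is a hypothetical reconstruction of `main.py`'s CLI (including the single-dash `-onnx_model_path` spelling shown in the command); the project's real parser may differ.

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="Run an ONNX model")
    # Single-dash long option, as written in the reported command.
    parser.add_argument("-onnx_model_path", required=True,
                        help="Path to the .onnx model file")
    parser.add_argument("--ifmsize", type=int, nargs=4,
                        metavar=("N", "C", "H", "W"),
                        help="Input feature-map size, e.g. 1 3 32 32")
    parser.add_argument("--arch_config_module",
                        help="Dotted path to the architecture config module")
    return parser
```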
-
Good morning,
I'm using the MMS server to host a single model in AWS Sagemaker. The model is loaded on MMS startup with `sagemaker_inference.model_server.start_model_server` with a custom handler_…