-
I am trying to deploy a PyTorch model on Minikube. My manifest is like:
```
kubectl apply -f -
```
-
First, I created the Docker container by following https://github.com/pytorch/serve/tree/master/docker#create-torchserve-docker-image; I left all configs at their defaults except for removing `--rm` from `docker run ...` …
-
[TorchServe](https://github.com/pytorch/serve) will soon support the KServe v2 inference protocol and can support additional types of PyTorch models that Triton does not currently support.
Create …
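For reference, the KServe v2 REST protocol carries tensors as JSON over `POST /v2/models/{model_name}/infer`. A minimal sketch of such a request in Python, assuming a hypothetical model named `mnist` served on `localhost:8080` (both the model name and the address are placeholders):
```python
import requests

# KServe v2 inference request: each input carries a name, shape, datatype, and a flat data list.
payload = {
    "inputs": [
        {
            "name": "input-0",      # input tensor name (model-specific)
            "shape": [1, 784],      # batch of one flattened 28x28 image
            "datatype": "FP32",
            "data": [0.0] * 784,    # dummy pixel values
        }
    ]
}

# Placeholder address and model name; adjust to the actual deployment.
resp = requests.post("http://localhost:8080/v2/models/mnist/infer", json=payload)
resp.raise_for_status()
print(resp.json()["outputs"])
```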
-
Frameworks: Optimizations to reduce memory used by models.
-
### Describe the feature
Currently the boto3 SageMaker SDK supports requests only to the main TorchServe inference API via [invoke_endpoint](https://boto3.amazonaws.com/v1/documentation/api/latest/referenc…
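For context, invoking a deployed endpoint goes through the `sagemaker-runtime` client. A minimal sketch, assuming a hypothetical endpoint name `torchserve-endpoint` and a JSON payload whose format matches the model's handler:
```python
import json
import boto3

# The runtime client handles inference calls; the plain "sagemaker" client is for management APIs.
runtime = boto3.client("sagemaker-runtime")

response = runtime.invoke_endpoint(
    EndpointName="torchserve-endpoint",                   # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps({"instances": [[0.1, 0.2, 0.3]]}),    # dummy payload; the handler defines the format
)

print(json.loads(response["Body"].read()))
```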
-
## 🐛 Bug
Following the tracker example app, in fsspec_backend.conf we can specify:
```
protocol=s3
root_path=s3://my-bucket
key=***
secret=**
```
In order to use Minio, one needs to pass…
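For reference, s3fs (which backs fsspec's `s3` protocol) accepts a non-AWS endpoint through `client_kwargs`. A minimal sketch, assuming a MinIO server at `http://minio:9000` (hypothetical address) and placeholder credentials; whether fsspec_backend.conf forwards such an option under its own key is not confirmed here:
```python
import s3fs

# s3fs/fsspec can target a MinIO deployment by overriding the S3 endpoint URL.
fs = s3fs.S3FileSystem(
    key="minio-access-key",                                # placeholder credentials
    secret="minio-secret-key",
    client_kwargs={"endpoint_url": "http://minio:9000"},   # hypothetical MinIO address
)

# Bucket paths look the same as with plain S3.
print(fs.ls("my-bucket"))
```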
-
Just realized this assert exists in `BaseSegmentor`:
https://github.com/open-mmlab/mmsegmentation/blob/master/mmseg/models/segmentors/base.py#L85
I am trying to figure out why this is needed ther…
-
### Description
I'm running an ECS cluster on `c6i.xlarge` and `inf1.xlarge` instances.
I noticed there's often a huge difference between the memory reported by the ECS agent and by `htop` or `free -m`. For e…
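One common source of such gaps is whether page cache and buffers are counted as used; different tools make different choices. A minimal sketch for comparing the two raw accountings straight from `/proc/meminfo` (Linux only; which convention the ECS agent follows is not established here):
```python
# Compare two common ways of counting "used" memory from /proc/meminfo (Linux only).
def read_meminfo():
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            name, value = line.split(":", 1)
            info[name] = int(value.strip().split()[0])   # values are reported in kB
    return info

m = read_meminfo()

# Strict view: anything not completely free counts as used (page cache and buffers included).
used_strict = m["MemTotal"] - m["MemFree"]

# Lenient view: memory the kernel can reclaim (cache, buffers) is treated as available.
used_lenient = m["MemTotal"] - m["MemAvailable"]

print(f"MemTotal - MemFree:      {used_strict // 1024} MiB")
print(f"MemTotal - MemAvailable: {used_lenient // 1024} MiB")
```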
-
### Bug description
## Issue Summary:
A segmentation fault (core dumped) error occurs when importing the `lightning` module.
### File: test.py
```python
import torch
import lightning as L
from …
```
-
Hello,
I tried to compile a multilingual RoBERTa model with Neuron using:
```
# Number of Cores in the Pipeline Mode
neuroncore_pipeline_cores = 8
# Compiling for neuroncore-pipeline-cores=…
```
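For comparison, with torch-neuron the pipeline core count is typically forwarded to the compiler through `compiler_args` at trace time. A minimal sketch, assuming `xlm-roberta-base` as a placeholder for the actual multilingual checkpoint and a dummy fixed-length input:
```python
import torch
import torch_neuron  # torch-neuron (Inf1) toolchain; registers torch.neuron.trace
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder model; substitute the actual multilingual RoBERTa checkpoint.
model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, torchscript=True)
model.eval()

# Dummy example input with a fixed sequence length for tracing.
encoded = tokenizer("hello world", padding="max_length", max_length=128, return_tensors="pt")
example = (encoded["input_ids"], encoded["attention_mask"])

# Number of cores in pipeline mode, forwarded to the Neuron compiler.
neuroncore_pipeline_cores = 8
model_neuron = torch.neuron.trace(
    model,
    example,
    compiler_args=["--neuroncore-pipeline-cores", str(neuroncore_pipeline_cores)],
)
model_neuron.save("roberta_neuron_pipeline.pt")
```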