-
# Summary
Provide a short summary of the issue. Sections below provide guidance on what
factors are considered important to reproduce an issue.
# Version
3.3.0
# Environment
VS2019
# Step…
-
Currently, Bessel's correction is applied to the running variance estimator in BatchNorm on [this line](https://github.com/pytorch/pytorch/blob/master/torch/lib/THCUNN/BatchNormalization.cu#L196) and …
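To make the distinction concrete, here is a small sketch (not PyTorch's actual implementation) of a BatchNorm-style running-variance update where the batch variance uses Bessel's correction, i.e. the sum of squared deviations is divided by `n - 1` rather than `n`; `update_running_var` and its `momentum` parameter are illustrative names, not the library's API:

```python
import numpy as np

def update_running_var(running_var, batch, momentum=0.1):
    """Exponential-moving-average update of a running variance estimate.

    Illustrative only: the batch variance is computed with Bessel's
    correction (ddof=1), dividing by n - 1 instead of n, which makes
    the per-batch estimate unbiased.
    """
    batch_var = batch.var(ddof=1)  # Bessel-corrected sample variance
    return (1 - momentum) * running_var + momentum * batch_var

batch = np.array([1.0, 2.0, 3.0, 4.0])
# Biased variance (ddof=0) is 1.25; Bessel-corrected (ddof=1) is 5/3.
new_var = update_running_var(0.0, batch, momentum=1.0)
```

With `momentum=1.0` the running estimate simply becomes the current batch's corrected variance, which makes the effect of the `n/(n-1)` factor easy to check by hand.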
-
What is the default batch size? Is there a way to change it while running inference?
-
@philschmid
Another one - I followed your guide to deploy Llama 3 70B using AWS SageMaker on inf2.48xlarge with the following properties as suggested in yo…
-
## Description:
The API conformance suite is an OV validation tool that checks a plugin's conformance from the API implementation perspective.
`ov_plugin/OVCheckSetSupportedRWMetricsPropsTests.ChangeCorrectProperties/…
-
Currently this is the slowest part of the process, taking up the majority of the time per tile. However, CPU utilization bounces around and never reaches 100% across all cores, and GPU utilization goes be…
-
## Description
Token streaming not working with rolling batch
### Expected Behavior
(what's the expected behavior?)
### Error Message
## How to Reproduce?
(If you developed your own…
-
Hello,
I'm running DeepEtho on Colab and encountered an issue when running inference with the feature extractor. The flow generator and feature extractor trained just fine. Here's the output:
`[2022…
-
I am trying to run inference of the model for the infographic VQA task. The instructions mention the CLI command for a dummy task, as follows:
python -m pix2struct.example_inference \
--gin_…
-
It was trained on 3D runs at 100x100x100. It would be good to work out how to do inference on 3D volumes bigger than this. We could process large 3D runs in batches and stitch the results together. Th…
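The batch-and-stitch idea above can be sketched as follows. This is a hypothetical illustration, not the project's code: `infer_tiled` and its `model` callable are assumed names, the tiles are non-overlapping, and the volume dimensions are assumed to be multiples of the tile size (a real implementation would also need padding and overlap blending to avoid seam artifacts):

```python
import numpy as np

def infer_tiled(volume, model, tile=100):
    """Run `model` on each 100^3 tile of a larger 3D volume and
    stitch the per-tile outputs back into one array.

    Assumptions (illustrative): `model` maps a (tile, tile, tile)
    array to an array of the same shape, and every dimension of
    `volume` is an exact multiple of `tile`.
    """
    out = np.empty_like(volume)
    for x in range(0, volume.shape[0], tile):
        for y in range(0, volume.shape[1], tile):
            for z in range(0, volume.shape[2], tile):
                block = volume[x:x + tile, y:y + tile, z:z + tile]
                out[x:x + tile, y:y + tile, z:z + tile] = model(block)
    return out
```

Because the network was trained at 100x100x100, keeping the tile size at 100 means each forward pass sees exactly the input shape it was trained on; handling edges and seams is the part that still needs working out.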