-
### Describe the issue
Hello,
I have been using onnxruntime on both Linux and Windows. What I observe is that when I run a BERT model for inference, there is a huge performance difference. On a Li…
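Since the issue is truncated here, one concrete way to make such a cross-platform comparison is a small, framework-agnostic latency harness run with identical inputs on both machines; `run_inference` below is a hypothetical stand-in for the actual ONNX Runtime session call.

```python
import statistics
import time

def benchmark(run_inference, warmup=5, iters=50):
    """Time a single-inference callable; returns (mean_ms, p50_ms, p95_ms)."""
    for _ in range(warmup):  # discard warm-up runs (thread pools, caches)
        run_inference()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return (
        statistics.fmean(samples),
        samples[len(samples) // 2],          # median
        samples[int(len(samples) * 0.95) - 1],  # ~95th percentile
    )
```

Running the same harness on Linux and Windows isolates the runtime/platform difference from input or model variation.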
-
Implement pre-built statistical models that can analyze match, player, and team-level data.
-
- **Objective:** Improve the Streamlit web application to provide a better user experience and additional functionalities.
- **Tasks:**
- **User Interface Improvements:**
- Redesign the layou…
-
### Because
With the additional data (https://github.com/clamsproject/aapb-annotations/pull/98) and new `data_loader.py` code (#115), I'd like to conduct experiments with new models and see if (and…
-
by running the following command:
```shell
python train_net.py --num-gpus 6 --config-file configs/Detic_LbaseCCimg_CLIP_R5021k_640b64_4x_ft4x_max-size.yaml --eval-only MODEL.WEIGHTS trainedmodels…
```
-
A starting point for consideration is at https://github.com/rgrumbine/ice_scoring
This is organized largely by the parameter to be examined -- the concentration field, ice edge, and ice drift being the most …
-
I have noticed that there is a marked drop in performance somewhere between thresholds of 3000 and 5000 in the following model.
Perhaps it is due to insufficient memory and thrashing? I am using a MacBo…
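One way to test the thrashing hypothesis is to log peak resident memory alongside wall time as the threshold grows: if runtime rises sharply while peak RSS approaches physical RAM, swapping is the likely culprit. A minimal sketch, assuming a hypothetical `run_model(threshold)` callable; the `resource` module is POSIX-only (available on macOS and Linux).

```python
import resource
import time

def profile_run(run_model, threshold):
    """Run the model once at `threshold`; report wall seconds and peak RSS."""
    start = time.perf_counter()
    run_model(threshold)
    elapsed = time.perf_counter() - start
    # ru_maxrss is reported in kilobytes on Linux, bytes on macOS.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return elapsed, peak
```

Calling this at, say, thresholds 1000, 3000, and 5000 and comparing the two columns should show whether the slowdown tracks memory pressure.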
-
### Describe the issue
Background: We use the Java API of onnxruntime for model inference. After each update of the model, we need to reload the model and related files, so we define a method to re…
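The snippet concerns the Java API, but the reload pattern itself is language-agnostic: build the new session first, swap it in atomically, then close the old one so in-flight calls never see a closed handle. A minimal Python sketch of that pattern, with `load_session` standing in for session creation (e.g. `OrtEnvironment.createSession` in the Java API) and `close()` for releasing the native session:

```python
import threading

class ReloadableModel:
    """Holds the current session; reload() swaps atomically, then closes the old one."""

    def __init__(self, load_session, model_path):
        self._load = load_session
        self._lock = threading.Lock()
        self._session = load_session(model_path)

    def reload(self, model_path):
        new_session = self._load(model_path)  # build BEFORE taking the lock
        with self._lock:
            old, self._session = self._session, new_session
        old.close()  # release native resources only after the swap

    def run(self, *inputs):
        with self._lock:
            return self._session.run(*inputs)
```

Constructing the new session outside the lock keeps inference blocked only for the brief pointer swap, not for the whole model load.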
-
### User Story
As a developer working with the OSCAL server,
I want to refactor the implementation to use the native metaschema-java APIs directly instead of CLI commands,
So that we can improv…
-
### 🚀 The feature, motivation and pitch
**Overview**
The goal of this RFC is to discuss the integration of distributed inference into TorchChat. Distributed inference leverages tensor parallelism …
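The RFC text is truncated here, but the core idea of tensor parallelism can be illustrated without a framework: split a linear layer's weight matrix column-wise across ranks, let each rank compute its shard of the output, and concatenate. A toy pure-Python sketch (real TorchChat code would use `torch.distributed`, and the shapes here are hypothetical):

```python
def matmul(x, w):
    """Naive matrix multiply: x is (m, k), w is (k, n), as nested lists."""
    return [[sum(x[i][t] * w[t][j] for t in range(len(w)))
             for j in range(len(w[0]))] for i in range(len(x))]

def split_columns(w, parts):
    """Split weight matrix w column-wise into `parts` contiguous shards."""
    step = len(w[0]) // parts
    return [[row[p * step:(p + 1) * step] for row in w] for p in range(parts)]

def parallel_linear(x, w, parts):
    """Each 'rank' multiplies by its shard; concatenating the shard
    outputs reproduces the full x @ w."""
    outputs = [matmul(x, shard) for shard in split_columns(w, parts)]
    return [sum((out[i] for out in outputs), []) for i in range(len(x))]
```

The concatenation step corresponds to the all-gather a real tensor-parallel implementation performs across devices.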