-
**Is your feature request related to a problem? Please describe.**
Authorization is the process through which an entity is granted permission to access resources or make decisions within a system. …
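To make the definition concrete, here is a minimal, self-contained sketch of a role-based permission check; the `Role`/`User` classes and the `is_authorized` helper are illustrative assumptions, not part of this project's code:

```python
# Minimal role-based authorization sketch (illustrative names only).
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    permissions: set = field(default_factory=set)

@dataclass
class User:
    name: str
    roles: list = field(default_factory=list)

def is_authorized(user: User, permission: str) -> bool:
    """Grant access only if some role of the user carries the permission."""
    return any(permission in role.permissions for role in user.roles)

# Example: an "editor" may update a resource but not delete it.
editor = Role("editor", {"resource:read", "resource:update"})
alice = User("alice", [editor])
assert is_authorized(alice, "resource:update")
assert not is_authorized(alice, "resource:delete")
```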
-
I attempted to run the file "/llm2vec/experiments/run_mntp.py", but it failed. I can get a response from the URL in a browser.
![image](https://github.com/user-attachments/assets/89f3cceb-b794-4649-…
-
Hi,
I'm learning to work with argo+optuna, and your repository is a **gem**.
Right now I have multiple models being trained (in parallel, using argo workflows) that are later combined and evaluate…
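For context, here is a minimal sketch of the kind of setup being described, with several workers (e.g. Argo pods) attaching to one shared Optuna study; the objective, study name, and storage URL are assumptions, not taken from this repository:

```python
# Parallel Optuna workers sharing one study via a common storage backend.
# The objective, study name, and storage URL are placeholders.
import optuna

def train_and_score(lr: float, depth: int) -> float:
    # Stand-in for the real training job launched by the workflow.
    return 1.0 / (1.0 + abs(lr - 1e-3)) + 0.01 * depth

def objective(trial: optuna.Trial) -> float:
    lr = trial.suggest_float("lr", 1e-5, 1e-1, log=True)
    depth = trial.suggest_int("depth", 2, 8)
    return train_and_score(lr=lr, depth=depth)

# Every worker runs this same code; trials are coordinated through storage.
study = optuna.create_study(
    study_name="parallel-models",
    storage="postgresql://optuna@db/optuna",  # shared RDB reachable by all pods
    direction="maximize",
    load_if_exists=True,
)
study.optimize(objective, n_trials=20)
```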
-
Hello Tomas,
On the corner plot of `plot_posterior()`, it might be practical to mark with points the parameter values of the models that were available during the fit, to help evaluate easily, visua…
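A possible sketch of the idea, using the `corner` package and plain matplotlib; the ndim × ndim axes layout is assumed, and the sample and grid-point values below are placeholders (`plot_posterior()` may build its figure differently):

```python
# Overlay model-grid parameter values as points on an existing corner plot.
# Assumes an ndim x ndim axes grid as produced by the `corner` package.
import numpy as np
import matplotlib.pyplot as plt
import corner

samples = np.random.randn(5000, 3)            # placeholder posterior samples
grid_points = np.array([[0.5, -0.2, 1.0],     # placeholder model-grid values
                        [-1.0, 0.3, 0.8]])

fig = corner.corner(samples, labels=["a", "b", "c"])
ndim = samples.shape[1]
axes = np.array(fig.axes).reshape((ndim, ndim))

# Put a marker at each grid point on every 2-D (lower-triangle) panel.
for yi in range(ndim):
    for xi in range(yi):
        axes[yi, xi].scatter(grid_points[:, xi], grid_points[:, yi],
                             marker="o", s=15, color="tab:red", zorder=5)

plt.savefig("corner_with_grid_points.png", dpi=150)
```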
-
### Open Task RFP for Privacy preserving machine learning inference using MPC
#### Executive Summary
- Project Overview: In this project, we want to assess the current state of privacy-preserving m…
-
MNIST Results:
We used a standard variational autoencoder. We trained with inlier digits 1 and 3. Each model was trained with a batch size of 128, where each image is an unnormalized grayscale image of…
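For anyone reproducing the data setup, a sketch of the inlier filtering described above, assuming torchvision's MNIST and that "unnormalized" means no mean/std scaling beyond the raw-to-tensor conversion (the VAE itself is not shown):

```python
# MNIST restricted to inlier digits 1 and 3, batch size 128,
# grayscale images without mean/std normalization.
import torch
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # only converts pixels to a [0, 1] float tensor
mnist = datasets.MNIST("data", train=True, download=True, transform=transform)

inlier_digits = {1, 3}
inlier_idx = [i for i, y in enumerate(mnist.targets.tolist()) if y in inlier_digits]
train_loader = DataLoader(Subset(mnist, inlier_idx), batch_size=128, shuffle=True)

images, labels = next(iter(train_loader))
print(images.shape)  # torch.Size([128, 1, 28, 28])
```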
-
At the current stage, we have added regression models. However, the user interface is hard-coded for classification model evaluation in the `result` section. There is a need to redesign this part to make…
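One way the redesign could look (a sketch only; the `evaluate` function and the task-type field are hypothetical, not the current interface): dispatch the metrics on the task type instead of assuming classification.

```python
# Sketch: choose evaluation metrics by task type rather than hard-coding
# classification; names here are hypothetical.
from sklearn import metrics

def evaluate(task_type: str, y_true, y_pred) -> dict:
    if task_type == "classification":
        return {
            "accuracy": metrics.accuracy_score(y_true, y_pred),
            "f1_macro": metrics.f1_score(y_true, y_pred, average="macro"),
        }
    if task_type == "regression":
        return {
            "mae": metrics.mean_absolute_error(y_true, y_pred),
            "rmse": metrics.mean_squared_error(y_true, y_pred) ** 0.5,
            "r2": metrics.r2_score(y_true, y_pred),
        }
    raise ValueError(f"unsupported task type: {task_type}")

print(evaluate("regression", [1.0, 2.0, 3.0], [1.1, 1.9, 3.2]))
```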
-
[WARNING]your gpu arch (8, 6) isn't compiled in prebuilt, may cause invalid device function. available: {(6, 1), (3, 7), (7, 0), (5, 0), (6, 0), (7, 5), (5, 2)}
[Exception|indice_conv|subm]feat=torch…
-
Thanks for releasing the repo, as well as the trajectories for swebench-lite! I am trying to reproduce the results with gpt-4o, but am seeing a fix rate of 59/300, as opposed to the 27.33% reported.
…
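(As a quick check on the numbers: 59/300 ≈ 19.7 %, whereas the reported 27.33 % on 300 instances would correspond to roughly 82 resolved tasks.)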
-
How do I set the model precision when evaluating, such as "fp16" or "load_in_8bit"?
And what is the default precision?
It seems to be "fp16"? Because around 14 GB of GPU memory is occupied when LLaVA-1.5-7B is …
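(A rough sanity check on the fp16 guess: 7B parameters × 2 bytes ≈ 14 GB for the weights alone, which matches the memory observed; 8-bit loading would be closer to ~7 GB.)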