-
When I run the following command in the 'experiment' folder:
`python evaluate_model.py ../../pmlb/datasets/503_wind/503_wind.tsv.gz -ml myalg -results_path ../results_blackbox/503_wind/ -seed 16850 …
-
I installed the following packages.
I am using Python 3.8 and CUDA 11.7, installing from a requirements file:
```
mmcv == 1.4.0
mmcv-full == 1.4.0
mmdet == 2.20.0
```
Then when I run
```python
…
-
@taylor13 here is a placeholder issue to capture this idea; obviously, we'll need to pad out the table below considerably.
Under https://pcmdi.llnl.gov/CMIP6/Guide/dataUsers.html#8-cmip6-organization-…
-
## Is your feature request related to a problem? Please describe
A growing generative AI use case is related to drafting text for a clinician to review and edit prior to saving to the patient's…
-
Hi,
**Why does it take almost twice as long to evaluate the model on 2k samples as it does to train on 17k samples?**
I'm using TuriCreate 5.4 (Python 3.6) on Ubuntu 16.04 with CUDA 8.0 (410.79)…
-
TensorFlow and Keras have metrics that can be used to evaluate how a model is performing. In our application, the metric should evaluate how good the generated captions are.
More info on TensorFlow m…
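To make the requirement concrete, here is a minimal sketch of the kind of score such a metric would produce. It is plain Python (not the actual TensorFlow/Keras metric API) and the `unigram_precision` helper is a hypothetical stand-in for real caption metrics such as BLEU:

```python
from collections import Counter

def unigram_precision(generated: str, reference: str) -> float:
    """Fraction of generated tokens that also appear in the reference caption.

    A deliberately simple stand-in for caption metrics such as BLEU:
    higher means the generated caption shares more words with the reference.
    """
    gen = generated.lower().split()
    ref = Counter(reference.lower().split())
    if not gen:
        return 0.0
    # Count each generated token at most as often as it occurs in the
    # reference (clipped counts, as in BLEU's modified precision).
    matched = sum(min(c, ref[w]) for w, c in Counter(gen).items())
    return matched / len(gen)

# 5 of the 6 generated tokens occur in the reference -> 5/6
print(unigram_precision("a dog runs in the park", "the dog is running in a park"))
```

A real implementation would wrap this logic in a `tf.keras.metrics.Metric` subclass so it can accumulate over batches, but the scoring idea is the same.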
-
@LeeTL1220 has already made substantial progress on this front, but I think we can improve ground truth sets and expand the number and type of evaluation metrics calculated. To start, this will inclu…
-
Hello, I reproduced your mlm_run.py code and encountered some problems:
1. the loss drops to 0 very early in training (actually in epoch 1);
2. during the evaluation, the metrics of…
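For context on problem 1, here is a small sketch (plain NumPy, not the repo's mlm_run.py) of how masked-LM loss is commonly computed with an ignore label of -100: only masked positions contribute, so if the masking step accidentally selects no tokens, the reported loss collapses to 0, which is one common cause of this symptom.

```python
import numpy as np

IGNORE = -100  # conventional label for positions that are not masked

def mlm_loss(logits: np.ndarray, labels: np.ndarray) -> float:
    """Mean cross-entropy over positions whose label is not IGNORE."""
    mask = labels != IGNORE
    if not mask.any():
        # No masked tokens -> nothing to predict; many scripts report 0 here.
        return 0.0
    # log-softmax for numerical stability
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    picked = log_probs[mask, labels[mask]]  # log-prob of each true masked token
    return float(-picked.mean())

logits = np.zeros((4, 10))      # 4 tokens, vocab of 10, uniform predictions
labels = np.full(4, IGNORE)
labels[1] = 3                   # only one masked position
print(mlm_loss(logits, labels))                  # log(10) for uniform predictions
print(mlm_loss(logits, np.full(4, IGNORE)))      # 0.0 when nothing is masked
```

So a loss of exactly 0 in epoch 1 is worth checking against the masking step before suspecting the model itself.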
-
Hi, it's great work! And thanks for releasing the code! But I have a question: how do I evaluate on ImageNet? In other words, should I get the FID scores on the whole ImageNet validation set (totally …
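For reference, FID itself is just the Fréchet distance between two Gaussians fitted to Inception activations. Here is a minimal NumPy/SciPy sketch, assuming you already have the two activation arrays (extracting them from an Inception network is the part the repo's evaluation script would handle):

```python
import numpy as np
from scipy import linalg

def fid(acts1: np.ndarray, acts2: np.ndarray) -> float:
    """Frechet Inception Distance between two sets of activations.

    acts1, acts2: arrays of shape (n_samples, feature_dim), e.g. Inception
    pool features of real and generated images.
    """
    mu1, mu2 = acts1.mean(axis=0), acts2.mean(axis=0)
    c1 = np.cov(acts1, rowvar=False)
    c2 = np.cov(acts2, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)
    if np.iscomplexobj(covmean):   # numerical noise can leave tiny
        covmean = covmean.real     # imaginary parts; drop them
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(c1 + c2 - 2.0 * covmean))

rng = np.random.default_rng(0)
a = rng.normal(size=(2048, 8))
print(fid(a, a))  # identical sets -> ~0 up to numerical noise
```

Whether to compute it against the full 50k-image validation set or a fixed subset is exactly the question above; the formula itself is the same either way.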
-
I found you used different evaluation metrics on the 'MuPoTS-3D' and 'Panoptic' datasets, but you didn't release the evaluation code for the 'Panoptic' dataset. Is it convenient for you to release the evaluation cod…