-
The optimal training dataset won't load into a data frame if it is too large. My dataset had 20 variables and 400k rows; I had to reduce it to 200k rows.
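One common workaround, assuming the data comes from a CSV-like file, is to read it in chunks and downcast the column dtypes so the full 400k rows fit in memory. A minimal pandas sketch (the in-memory CSV and column names here are placeholders, not the actual dataset):

```python
import io
import pandas as pd

# Hypothetical small CSV standing in for the large training file.
csv_data = "a,b\n" + "\n".join(f"{i},{i * 0.5}" for i in range(1000))

chunks = []
# Read the file in manageable pieces instead of one giant frame.
for chunk in pd.read_csv(io.StringIO(csv_data), chunksize=250):
    # Downcast to smaller dtypes to cut memory per row.
    chunk["a"] = pd.to_numeric(chunk["a"], downcast="integer")
    chunk["b"] = pd.to_numeric(chunk["b"], downcast="float")
    chunks.append(chunk)

df = pd.concat(chunks, ignore_index=True)
```

With 20 mostly-numeric columns, downcasting from 64-bit to 16- or 32-bit dtypes alone can roughly halve the frame's footprint.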
-
@piergiaj: Thanks for sharing the PyTorch implementation. It seems the only normalisation step in your code is center_crop (224px). Don't we need the images to be mean-subtracted by …
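For reference, models pretrained on ImageNet usually expect the input scaled to [0, 1], then per-channel mean subtraction and division by the std. A NumPy sketch of that step (the constants below are the commonly used ImageNet statistics, not necessarily what this particular repo assumes):

```python
import numpy as np

# Commonly used ImageNet channel statistics (RGB order).
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def normalize(img_uint8):
    """Scale an HxWx3 uint8 image to [0, 1], then mean-subtract and
    divide by the per-channel std, as most pretrained models expect."""
    img = img_uint8.astype(np.float32) / 255.0
    return (img - IMAGENET_MEAN) / IMAGENET_STD

# Example: a dummy gray 224x224 crop.
crop = np.full((224, 224, 3), 128, dtype=np.uint8)
out = normalize(crop)
```

Whether this matters depends on how the weights were trained; if the original model was trained without it, adding it would actually hurt.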
-
Hello, in _3_label_images.py, does using the number keys [0-9] on the keyboard to label images mean that the number of label categories is limited to 10 or fewer? I also tried using the arrow keys to go…
-
Hi!
In the function named `adapt`, which implements the inner loop of MAML, GPU memory progressively increases as `num_adaptation_steps` increases.
Finally, it mak…
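A likely cause, assuming the inner loop calls `torch.autograd.grad` with `create_graph=True` (as second-order MAML requires): every adaptation step's computation graph is kept alive until the outer backward pass, so memory grows linearly with `num_adaptation_steps`. A toy sketch of the pattern and of the first-order workaround (the names and the linear model here are illustrative, not this repo's actual API):

```python
import torch

torch.manual_seed(0)

def adapt(params, x, y, num_steps, lr=0.01, first_order=True):
    """Toy MAML inner loop on a linear model. With first_order=False,
    create_graph=True keeps every step's graph alive for the outer
    gradient (memory grows with num_steps); first_order=True detaches
    the adapted params after each step, freeing those graphs."""
    for _ in range(num_steps):
        loss = ((x @ params - y) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, params,
                                      create_graph=not first_order)
        params = params - lr * grad
        if first_order:
            params = params.detach().requires_grad_(True)
    return params

x = torch.randn(32, 4)
y = x @ torch.randn(4, 1)
w0 = torch.zeros(4, 1, requires_grad=True)
w_adapted = adapt(w0, x, y, num_steps=5)
final_loss = ((x @ w_adapted - y) ** 2).mean().item()
```

If second-order gradients are required, the usual alternatives are fewer adaptation steps, gradient checkpointing, or smaller inner batches.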
-
It seems that the method doesn't use the BN-layer information in the BATS code; maybe I'm missing something. I'm also curious about the choice of hyperparameters, such as lam = 1.05 for Ima…
-
These are the results of the current SegNet implementation:
![horse](https://cloud.githubusercontent.com/assets/1780466/25223935/2d0c605e-25bd-11e7-8a0a-cd23f793f32e.png)
![horse-segnetfix](https://…
-
### Search before asking
- [X] I have searched the YOLOv8 [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and fou…
-
I have a new dataset of 128x128 images. Can you provide README instructions on how to preprocess it?
-
Hey! I really love your work, and I'm wondering whether you could provide the training data you synthesized from the _LibriSpeech clean-360_ dataset? That would help a lot!
-
I have some feature-request ideas that could greatly improve the usefulness of your tool.
1. Enable GPU mode (it seems to run only on the CPU at present)
2. Enable batch mode (it seems to assume that the…