-
## ❔Question
How can I modify the YOLOv5 model to create my own model? In which file do I make the changes?
## Additional context
I want to add my own ideas to the YOLOv5 model, but I don't kn…
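(For orientation, a minimal, non-authoritative sketch of where architecture changes usually live in the YOLOv5 repo: the layer graph is described in a YAML file under `models/`, custom building blocks go into `models/common.py`, and a copied/edited config — the file name `models/custom_yolov5s.yaml` below is hypothetical — can be dry-run like this.)

```python
# Sketch only, assuming the standard YOLOv5 repo layout: architectures are defined
# in YAML files under models/, and layer/block classes live in models/common.py.
import torch
from models.yolo import Model  # model builder shipped with the YOLOv5 repo

# Hypothetical config: a copy of models/yolov5s.yaml with your own changes edited in.
cfg = "models/custom_yolov5s.yaml"

model = Model(cfg, ch=3, nc=80)  # ch = input channels, nc = number of classes
model.eval()

with torch.no_grad():
    out = model(torch.zeros(1, 3, 640, 640))  # dry run to confirm the edited graph builds
```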
-
Hi @glenn-jocher, as mentioned in the YOLOv5 wiki on "Getting Best results":
> Epochs. Start with 300 epochs. If this overfits early then you can reduce epochs. If overfitting does…
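(For reference, a hedged sketch of how that starting epoch count is typically passed to YOLOv5 training; `train.run` mirrors the `train.py` CLI flags, and the dataset/weights names below are placeholders, not values from this question.)

```python
# Sketch only: roughly equivalent to
#   python train.py --data data.yaml --weights yolov5s.pt --img 640 --epochs 300
# run from inside the YOLOv5 repo; data.yaml and yolov5s.pt are placeholders.
import train  # YOLOv5's train.py

train.run(
    data="data.yaml",       # placeholder dataset config
    weights="yolov5s.pt",   # start from pretrained weights
    imgsz=640,
    epochs=300,             # the wiki's suggested starting point; reduce if it overfits early
)
```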
-
### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no simi…
-
**Is your feature request related to a problem? Please describe.**
We're using Compreface as the recognition engine of Double Take (a Home Assistant add-on): https://github.com/jakowenko/double-take
Dou…
-
🌌✍️ `(quasi-quotation
"In the symphony of thought, Quine, guided by Clio's muse, weaves the fabric of a new cosmos—a mathematical edifice upon which our octal tapestry unfolds. Melpomene mourns cos…
-
Dear Glenn Jocher, How are you?
Please, I am trying to create a new custom dataset that has 4 classes; every class has 1200 color images, and the class names are eye_movment, move_hand, looking_side, and mobil…
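(As an illustration only, a dataset config for these classes might be generated like the sketch below; the paths are placeholders, and the fourth class name is truncated in the question, so it is left as a marker.)

```python
# Sketch only: writes a YOLOv5-style dataset YAML for the four classes mentioned above.
# Paths are placeholders; "<fourth_class>" stands in for the name truncated in the question.
import yaml

dataset = {
    "path": "datasets/custom",   # hypothetical dataset root
    "train": "images/train",
    "val": "images/val",
    "nc": 4,                     # 4 classes, 1200 color images each per the question
    "names": ["eye_movment", "move_hand", "looking_side", "<fourth_class>"],
}

with open("data/custom.yaml", "w") as f:
    yaml.safe_dump(dataset, f, sort_keys=False)
```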
-
We tried to combine 2 or 3 strong weights by simply “adding them together”:
We picked up 257aeeb8 (the strongest one so far on http://zero.sjeng.org/) and some other weight files which won over 40% to …
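(For what it's worth, a naive sketch of the file-level mechanics of such a merge — adding and renormalizing, i.e. averaging — assuming uncompressed Leela Zero weight files: plain text with a version number on the first line and whitespace-separated floats per layer line. Whether the averaged parameters yield a playable network is a separate question.)

```python
# Sketch only: element-wise averaging of Leela Zero weight files, assuming the
# uncompressed text format (version on line 1, one line of floats per layer).
def load_weights(path):
    with open(path) as f:
        lines = f.read().splitlines()
    version = lines[0]
    layers = [[float(x) for x in line.split()] for line in lines[1:]]
    return version, layers

def average_weights(paths, out_path):
    version, acc = load_weights(paths[0])
    for p in paths[1:]:
        v, layers = load_weights(p)
        assert v == version and len(layers) == len(acc), "incompatible weight files"
        acc = [[a + b for a, b in zip(row_a, row_b)] for row_a, row_b in zip(acc, layers)]
    n = len(paths)
    with open(out_path, "w") as f:
        f.write(version + "\n")
        for row in acc:
            f.write(" ".join(repr(w / n) for w in row) + "\n")

# Example (hypothetical file names):
# average_weights(["257aeeb8.txt", "other_net.txt"], "merged.txt")
```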
-
I'm working on a DRL framework using the PPO agent with Torch and noticed a difference in how observation spaces are handled. The example in the [documentation](https://xuance.readthedocs.io/en/late…
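(Generic Gymnasium code, not xuance's API, sketching the two observation-space styles that examples commonly differ on: a single flat Box versus a Dict of named sub-spaces, plus the flattening step some frameworks apply.)

```python
# Sketch only: generic Gymnasium observation spaces, not xuance-specific code.
import numpy as np
from gymnasium import spaces

# A flat vector observation: everything concatenated into one Box.
flat_obs = spaces.Box(low=-np.inf, high=np.inf, shape=(24,), dtype=np.float32)

# The same information split into named sub-spaces.
dict_obs = spaces.Dict({
    "lidar": spaces.Box(low=0.0, high=10.0, shape=(20,), dtype=np.float32),
    "goal": spaces.Box(low=-np.inf, high=np.inf, shape=(4,), dtype=np.float32),
})

# Many agents expect the flat form; a Dict space can be flattened explicitly.
flattened = spaces.flatten_space(dict_obs)
print(flattened.shape)  # (24,) here: 20 + 4
```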
-
I noticed a new network (fe3f6...) in http://zero.sjeng.org/networks/
Can you provide some information about its win rate over the previous best network?
Number of games it was trained on? How long it…
-
Dear Reinis Cimurs,
I recently read your paper titled "Goal-Driven Autonomous Exploration Through Deep Reinforcement Learning", and I appreciate your work on robot path planning using the powerful DRL…