PraveenMNaik closed this issue 1 year ago.
@PraveenMNaik hi,
It looks like you have encountered an error while using the Conv2 module in your configuration file. The error message indicates that the Conv2 object does not have an attribute called bn.
Based on the code you provided, you are trying to fuse convolutions using the Conv2 module. The error occurs because the forward_fuse method still references the bn attribute, which is removed from the layer when the model is fused.
To resolve this issue, modify the forward_fuse method so that it applies only the fused convolution and activation, like this:
def forward_fuse(self, x):
    """Apply fused convolution and activation to input tensor."""
    return self.act(self.conv(x))
By making this change, the error should be resolved, and you should be able to proceed with your program.
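For reference, below is a minimal sketch of how such a Conv2 variant might be structured around the YOLOv5 Conv block. This is only an illustration of the RepConv-style fusing discussed above; the constructor arguments and the Conv/autopad imports assume the standard models/common.py layout and may need adjusting to your actual module.

import torch
import torch.nn as nn

from models.common import Conv, autopad  # assumes the standard YOLOv5 layout


class Conv2(Conv):
    """Conv block with an extra parallel 1x1 branch (RepConv-style), fusable at inference."""

    def __init__(self, c1, c2, k=3, s=1, p=None, g=1, act=True):
        super().__init__(c1, c2, k, s, p, g=g, act=act)
        # Parallel 1x1 convolution; folded into self.conv by fuse_convs()
        self.cv2 = nn.Conv2d(c1, c2, 1, s, autopad(1, p), groups=g, bias=False)

    def forward(self, x):
        """Training/eval path: sum of both branches, then BN and activation."""
        return self.act(self.bn(self.conv(x) + self.cv2(x)))

    def forward_fuse(self, x):
        """Fused path: convolution and activation only, no bn reference."""
        return self.act(self.conv(x))

    def fuse_convs(self):
        """Fold the 1x1 branch into the centre of the main kxk convolution weights."""
        w = torch.zeros_like(self.conv.weight.data)
        i = [x // 2 for x in w.shape[2:]]  # centre of the kxk kernel
        w[:, :, i[0]:i[0] + 1, i[1]:i[1] + 1] += self.cv2.weight.data.clone()
        self.conv.weight.data += w
        self.__delattr__("cv2")
        # Note: the model-level fuse() is still expected to fold bn into conv and remove it
        self.forward = self.forward_fuse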
I hope this helps! Let me know if you have any further questions.
@glenn-jocher Thank you for the suggestion. I have modified the code as per your suggestion. However, the following error occurred even though Conv2 was defined in common.py and yolo.py.
Error occurred:
Traceback (most recent call last):
File "/home/hello/yolov5/train.py", line 642, in
@PraveenMNaik thanks for providing the information.
The error message suggests that the program was not able to find the file called "Conv2".
If you have created a separate file called "Conv2.py" where the Conv2 class is defined, make sure that the file is included in your project directory and accessible by the program.
Also, verify that the file name is spelled correctly and matches the one referenced in your code. Sometimes, typos or incorrect capitalization can lead to such errors.
If you have already checked these possibilities, please provide more details about the file structure and how you are importing and using the Conv2 class, so that I can assist you further.
Let me know if you have any other questions.
Hi @glenn-jocher, thank you so much, it worked.
@PraveenMNaik hi there,
You're welcome! I'm glad to hear that the suggestion worked for you. If you have any further questions or issues, feel free to ask.
The YOLOv5 community and the Ultralytics team are always here to help.
Have a great day!
Hi @glenn-jocher, I have used a genetic algorithm for hyperparameter optimization (for the augmentation-related hyperparameters) and evolved for 1500 generations. I got better results. Furthermore, I ran training again using the evolved hyperparameters and, surprisingly, the model size was reduced from 15 MB (before training) to 12.2 MB (after training). Why is that so?
@PraveenMNaik that's great to hear that you used the genetic algorithm for hyperparameter optimization and achieved better results! As for the reduction in model size from 15MB to 12.2MB, there could be several factors contributing to this change.
During the training process the model only updates the values of its existing parameters, and the evolved augmentation hyperparameters do not change the network architecture, so the number of parameters itself stays the same.
The most common reason for the smaller file is how YOLOv5 saves checkpoints: the final weights (best.pt / last.pt) are stripped of the optimizer state and stored in half precision, which typically produces a smaller file than the checkpoint training started from.
It's important to note that model size alone doesn't necessarily correlate with performance. A smaller model file can still achieve high accuracy if the weights effectively capture the necessary information from the data.
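For anyone reproducing this workflow, the usual pattern is roughly the following. Here data.yaml is a placeholder for your dataset config, and the evolved-hyperparameter output path can differ between YOLOv5 versions, so check your runs/ directory for the actual file.

# evolve hyperparameters (long-running; each generation is a short training run)
python train.py --data data.yaml --weights yolov5s.pt --epochs 10 --evolve 1500

# re-train with the evolved hyperparameters found by evolution
python train.py --data data.yaml --weights yolov5s.pt --epochs 300 --hyp runs/evolve/exp/hyp_evolve.yaml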
Overall, the reduction in model size is an expected side effect of how the final checkpoint is saved rather than a sign that anything went wrong. Congrats on your results, and feel free to reach out if you have any further questions!
@glenn-jocher, I have come across a loss curve that shows large ups and downs for validation, but not for train. Why is that so? My validation dataset has 200 images and the train set has 700 images, with 3 classes in total. Is a separate runtime augmentation applied for validation and train, or is it the same for both?
@PraveenMNaik hi,
The behavior you described with the loss curve showing large ups and downs for validation but not for the train set can be attributed to several factors.
Firstly, the difference in dataset size between the train set (700 images) and the validation set (200 images) can cause variations in the loss curve. With a smaller sample size, the validation set may not fully capture the overall distribution of the data, resulting in more fluctuation in the loss values.
Secondly, it's important to consider whether there is a separate runtime augmentation applied for the validation and train sets. By default, the YOLOv5 implementation applies data augmentation during training, but not during validation. This difference in augmentation can also contribute to variations in the loss curve.
It's worth noting that such fluctuations in the loss curve are not necessarily a cause for concern. As long as the overall trend of the loss curve is descending, the model is still learning and improving. However, if you observe consistently high losses or other performance issues, further investigation might be needed.
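Concretely, in YOLOv5's train.py the training dataloader is built with augment=True while the validation dataloader is built without augmentation, so only the training images are randomly transformed each epoch. As a generic illustration of that pattern (not YOLOv5's actual dataloader code, and using simple classification-style transforms rather than detection-aware ones):

from torchvision import transforms

IMG_SIZE = 640  # illustrative image size

# Training split: random augmentation applied on the fly each epoch
train_tf = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, saturation=0.5, hue=0.015),
    transforms.ToTensor(),
])

# Validation split: deterministic preprocessing only, no random augmentation
val_tf = transforms.Compose([
    transforms.Resize((IMG_SIZE, IMG_SIZE)),
    transforms.ToTensor(),
])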
If you have any further questions or concerns, please don't hesitate to ask. We're here to assist you.
Thanks!
@glenn-jocher Referring to this: "Secondly, it's important to consider whether there is a separate runtime augmentation applied for the validation and train sets. By default, the YOLOv5 implementation applies data augmentation during training, but not during validation. This difference in augmentation can also contribute to variations in the loss curve." My concern is: when we train our dataset, why do both the training and validation loss curves appear upon completion of training? Why not a training loss curve only?
@PraveenMNaik hi there,
When training a model, both the training and validation loss curves are typically displayed to provide insights into the model's performance. The training loss curve depicts how well the model is fitting the training data over time, while the validation loss curve shows how the model generalizes to unseen validation data.
The inclusion of the validation loss curve is crucial for monitoring the model's ability to generalize beyond the training data. It helps identify overfitting, where the model becomes too specialized to the training data and performs poorly on unseen examples.
While the training loss curve primarily indicates how well the model is learning the training data, it can also indirectly reflect the model's generalization capabilities. If the training loss continues to decrease while the validation loss starts to increase or stagnate, this may indicate overfitting.
Considering both the training and validation loss curves during training helps in understanding the model's performance and highlights the need for potential adjustments to improve generalization.
If you have any further questions or concerns, please don't hesitate to ask. We're here to assist you.
Thanks!
@glenn-jocher, thank you for the reply. So you mean to say that while training, both the train and validation datasets are considered?
@PraveenMNaik hi there,
Yes, during training, both the train and validation datasets are considered. The model is trained using the train dataset, and the training loss is computed based on the model's performance on this dataset. The purpose of the validation dataset is to evaluate the model's performance on unseen data and assess its generalization ability. The validation loss is calculated using the validation dataset to provide insights into how well the model is performing on data it has not been trained on.
By considering both the train and valid datasets during training, we can monitor the model's progress, identify overfitting, and make necessary adjustments to improve its generalization capabilities.
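As a rough sketch of why both curves exist, a standard per-epoch loop looks something like the following (simplified generic PyTorch code, not the actual YOLOv5 train.py; the model, loaders, criterion and optimizer are assumed to be defined elsewhere):

import torch

def run_training(model, train_loader, val_loader, criterion, optimizer, epochs, device):
    """Generic per-epoch loop: backprop on the train set, measurement only on the val set."""
    for epoch in range(epochs):
        # Training pass: gradients are computed and weights are updated
        model.train()
        train_loss = 0.0
        for imgs, targets in train_loader:
            imgs, targets = imgs.to(device), targets.to(device)
            optimizer.zero_grad()
            loss = criterion(model(imgs), targets)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

        # Validation pass: no gradients, no weight updates, just evaluation
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for imgs, targets in val_loader:
                imgs, targets = imgs.to(device), targets.to(device)
                val_loss += criterion(model(imgs), targets).item()

        # Both numbers are logged each epoch, which is why both curves appear in the results
        print(f"epoch {epoch}: train_loss={train_loss / len(train_loader):.4f}, "
              f"val_loss={val_loss / len(val_loader):.4f}")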
Let me know if you have any more questions.
Thanks!
@glenn-jocher, thank you for the clear explanation.
@PraveenMNaik hi there,
You're welcome! I'm glad to hear that the explanation was clear. If you have any further questions or need any more assistance, feel free to ask.
Thanks!
Hi @glenn-jocher, I have come across the genetic algorithm for hyperparameter optimization. How can variable-length chromosomes (VLCs) be represented in a GA-based optimization in your code? Please suggest your ideas.
👋 Hello there! We wanted to give you a friendly reminder that this issue has not had any recent activity and may be closed soon, but don't worry - you can always reopen it if needed. If you still have any questions or concerns, please feel free to let us know how we can help.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLO 🚀 and Vision AI ⭐
@PraveenMNaik,
In the context of YOLOv5 codebase, representing variable length chromosomes (VLCs) for genetic algorithm (GA)-based hyperparameter optimization can be accomplished by encoding the different hyperparameters into the chromosomes. These chromosomes can then undergo genetic operations such as mutation and crossover to optimize the hyperparameters.
One straightforward option is to represent each chromosome as a variable-length list or dictionary of genes, where each gene holds one hyperparameter name and its value, so that mutation and crossover can add, remove, or perturb genes (a minimal sketch is included below). These approaches can be implemented alongside your hyperparameter optimization algorithm and integrated into the YOLOv5 codebase to enable GA-based hyperparameter optimization with variable-length chromosomes.
Feel free to experiment with these ideas and adapt them to your specific requirements for hyperparameter optimization using genetic algorithms.
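As a concrete starting point, here is a minimal, self-contained sketch of one possible variable-length chromosome encoding. It is not part of the YOLOv5 codebase: the hyperparameter names and ranges are placeholders, and the fitness function is a dummy objective you would replace with something like a short YOLOv5 training run returning mAP.

import random

# Hypothetical search space: hyperparameter name -> (low, high) range (placeholders)
SEARCH_SPACE = {
    "hsv_h": (0.0, 0.1),
    "hsv_s": (0.0, 0.9),
    "fliplr": (0.0, 1.0),
    "mosaic": (0.0, 1.0),
    "mixup": (0.0, 1.0),
}

def random_chromosome():
    """A variable-length chromosome: a dict holding a random subset of the genes."""
    n = random.randint(1, len(SEARCH_SPACE))
    keys = random.sample(list(SEARCH_SPACE), n)
    return {k: random.uniform(*SEARCH_SPACE[k]) for k in keys}

def mutate(chrom, p=0.3):
    """Perturb existing genes, and occasionally add or drop a gene (length changes)."""
    child = dict(chrom)
    for k in list(child):
        if random.random() < p:
            lo, hi = SEARCH_SPACE[k]
            child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1) * (hi - lo)))
    if random.random() < p:  # add a gene not yet present
        missing = [k for k in SEARCH_SPACE if k not in child]
        if missing:
            k = random.choice(missing)
            child[k] = random.uniform(*SEARCH_SPACE[k])
    if random.random() < p and len(child) > 1:  # drop a gene
        del child[random.choice(list(child))]
    return child

def crossover(a, b):
    """Gene-wise crossover over the union of keys; offspring length can differ from parents."""
    return {k: random.choice([p[k] for p in (a, b) if k in p]) for k in set(a) | set(b)}

def fitness(chrom):
    """Placeholder fitness; in practice this would train/evaluate a model with these hyps."""
    return -sum((v - 0.5) ** 2 for v in chrom.values())  # dummy objective

def evolve(pop_size=20, generations=50):
    """Simple elitist GA loop over variable-length chromosomes."""
    pop = [random_chromosome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2))) for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)

if __name__ == "__main__":
    print(evolve())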
Let me know if you have further questions or need additional insights.
Thanks!
File "C:\Ombrulla\vision\vision-dashboard\src\vision_engine\predictor_v2.py", line 72, in detection results = self.detector_model.track(frame, verbose=False, tracker="bytetrack.yaml", persist=True, conf=0.3, iou=0.5) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\ultralytics\engine\model.py", line 439, in track return self.predict(source=source, stream=stream, *kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\ultralytics\engine\model.py", line 399, in predict self.predictor.setup_model(model=self.model, verbose=is_cli) File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\ultralytics\engine\predictor.py", line 341, in setup_model self.model = AutoBackend( ^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context return func(args, **kwargs) ^^^^^^^^^^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\ultralytics\nn\autobackend.py", line 141, in init model = model.fuse(verbose=verbose) if fuse else model ^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\ultralytics\nn\tasks.py", line 176, in fuse delattr(m, "bn") # remove batchnorm ^^^^^^^^^^^^^^^^ File "C:\Ombrulla\vision\vision-dashboard\venv_gpu\Lib\site-packages\torch\nn\modules\module.py", line 1765, in delattr super().delattr(name) AttributeError: 'Conv' object has no attribute 'bn'
Model summary (fused): 168 layers, 3006038 parameters, 0 gradients, 8.1 GFLOPs
@Ambarish-Ombrulla hi there,
It seems like you're encountering an issue where the model is attempting to delete a batch normalization (bn) attribute from a 'Conv' object that does not have this attribute. This error typically occurs during the model fusion process, which is designed to speed up inference by merging convolutional and batch normalization layers.
To resolve this issue, please ensure that the .fuse() method is being called correctly and that the layers you're trying to fuse actually have batch normalization layers associated with them.
If you've made custom modifications to the model architecture, double-check that each 'Conv' layer that is supposed to be fused has a corresponding 'bn' attribute. If a 'Conv' layer does not have a 'bn' attribute, the fusion process should not attempt to delete it.
If you're using a standard model from the YOLOv5 repository and haven't made any modifications that would affect the fusion process, consider checking for updates or changes in the repository that might address this issue.
For further assistance, please provide more context or consider opening an issue on the YOLOv5 GitHub repository with the details of the error and the steps to reproduce it.
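As an illustration of the point about missing 'bn' attributes, a fusion loop can guard the removal with hasattr so that layers which were already fused, or which never had a batch norm, are simply skipped. This is a minimal sketch of the general pattern rather than the exact Ultralytics fuse() implementation; it assumes the fuse_conv_and_bn helper from ultralytics.utils.torch_utils.

import torch.nn as nn
from ultralytics.utils.torch_utils import fuse_conv_and_bn  # folds BN statistics into the conv weights

def fuse_model(model):
    """Fuse Conv+BN pairs for inference, skipping modules that have no bn attribute."""
    for m in model.modules():
        # Only touch modules that expose both a conv and a real BatchNorm layer
        if hasattr(m, "conv") and hasattr(m, "bn") and isinstance(m.bn, nn.BatchNorm2d):
            m.conv = fuse_conv_and_bn(m.conv, m.bn)  # merge bn into the conv weights and bias
            delattr(m, "bn")  # safe here: we just checked that bn exists
            if hasattr(m, "forward_fuse"):
                m.forward = m.forward_fuse  # switch to the bn-free forward path
    return model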
Thanks!
Question
I have included the following module in my new configuration file (yaml) and replaced Conv with Conv2 in the backbone. However, the program runs for 150 epochs but produced an error during fusion of layers. Kindly help. The following is the code added:

class Conv2(Conv):
    """Simplified RepConv module with Conv fusing."""
The following is the error that occurred:

Conv2 summary: 165 layers, 1807969 parameters, 0 gradients
                 Class     Images  Instances          P          R      mAP50   mAP50-95:   0%|          | 0/3 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/hello/yolov5/train.py", line 642, in <module>
    main(opt)
  File "/home/hello/yolov5/train.py", line 531, in main
    train(opt.hyp, opt, device, callbacks)
  File "/home/hello/yolov5/train.py", line 414, in train
    results, _, _ = validate.run(
  File "/home/hello/.local/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/hello/yolov5/val.py", line 210, in run
    preds, train_out = model(im) if compute_loss else (model(im, augment=augment), None)
  File "/home/hello/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hello/yolov5/models/yolo.py", line 273, in forward
    return self._forward_once(x, profile, visualize)  # single-scale inference, train
  File "/home/hello/yolov5/models/yolo.py", line 185, in _forward_once
    x = m(x)  # run
  File "/home/hello/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/hello/yolov5/models/common.py", line 103, in forward_fuse
    return self.act(self.bn(self.conv(x)))
  File "/home/hello/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'Conv2' object has no attribute 'bn'