Closed: EXJUSTICE closed this issue 3 years ago.
👋 Hello @EXJUSTICE, thank you for your interest in 🚀 YOLOv5! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.
If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we cannot help you.
If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.
For business inquiries or professional support requests please visit https://www.ultralytics.com or email Glenn Jocher at glenn.jocher@ultralytics.com.
Python 3.8 or later with all requirements.txt dependencies installed, including torch>=1.7. To install run:
$ pip install -r requirements.txt
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):
If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
@EXJUSTICE I'm sorry, I don't understand your question. An image will always have 2 or 3 dimensions; it can never have 4 dimensions, otherwise it would be called a video or a gif.
@glenn-jocher My apologies, I meant 4-channel data. My late-night spelling failed me; I have amended the original post. I also noted that yolo.py pulls in the channel count through the .yaml file:
ch = self.yaml['ch'] = self.yaml.get('ch', ch) # input channels
You mentioned in another thread for YOLOv3 (link) that "cfg settings are ignored, the files are only used for architecture definition." Should I still set a "ch" parameter in the corresponding .yaml to 4?
Thank you in advance.
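For context, here is a minimal sketch of how that yolo.py line resolves the channel count (the dict below is a stand-in for a parsed model yaml, not a real config): the yaml 'ch' key, if present, takes precedence over the ch argument passed to Model().

yaml_cfg = {'nc': 1}          # hypothetical yaml contents without a 'ch' entry
ch = 4                        # value that would be passed as Model(..., ch=4)
ch = yaml_cfg['ch'] = yaml_cfg.get('ch', ch)
print(ch)  # 4, because no 'ch' key was present; a yaml 'ch: 4' entry would have the same effect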
@EXJUSTICE oh ok. Typically 4 channel images like PNGs with alpha are simply read in with cv2.imread() as 3 channel images, so no changes are required.
You can use ch: 4 in your model.yaml to create a 4-channel model, though you will also need to modify the dataloader in a few places.
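As a rough illustration of the loading behaviour described above (a sketch only; the file name is a placeholder, not repo code):

import cv2

# Default imread drops the alpha/extra channel; IMREAD_UNCHANGED keeps it.
# 'sample_rgba.png' is a placeholder path for any 4-channel PNG.
img_default = cv2.imread('sample_rgba.png')                     # -> (H, W, 3)
img_full = cv2.imread('sample_rgba.png', cv2.IMREAD_UNCHANGED)  # -> (H, W, 4)
print(img_default.shape, img_full.shape)

# A 4-channel model then needs a top-level `ch: 4` entry in model.yaml,
# which models/yolo.py reads via self.yaml.get('ch', ch).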
@glenn-jocher Excuse me sir, may I ask exactly which places I should modify? I have a 4-channel PNG dataset, and I would like to train on it and run detection on images (or maybe 4-channel video? Still working on that).
Before training, I got:
File "D:\Anaconda\yolov3-master\utils\plots.py", line 164, in plot_images
mosaic[block_y:block_y + h, block_x:block_x + w, :] = img
ValueError: could not broadcast input array from shape (640,640,4) into shape (640,640,3)
After epoch 299/299 it still returned ValueError: could not broadcast input array from shape (366,640,4) into shape (366,640,3), though training did finish all 300 epochs and saved the .pt weights successfully.
However, when I tried to detect on a 4-channel image, I got:
File "D:\Users\Ping\anaconda3\envs\YOLOv3\lib\site-packages\torch\nn\modules\conv.py", line 395, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [32, 4, 3, 3], expected input[1, 3, 640, 640] to have 4 channels, but got 3 channels instead
Below are the changes I made:
1. Added ch: 4 in my custom.yaml;
2. Changed https://github.com/ultralytics/yolov3/blob/1be31704c9c690929e4f6e6d950f40755ef2dcdc/utils/datasets.py#L184 and https://github.com/ultralytics/yolov3/blob/1be31704c9c690929e4f6e6d950f40755ef2dcdc/utils/datasets.py#L641 to cv2.IMREAD_UNCHANGED;
3. Changed https://github.com/ultralytics/yolov3/blob/1be31704c9c690929e4f6e6d950f40755ef2dcdc/models/yolo.py#L77 to ch=4;
4. Changed the hyperparameters in train.py to hsv_h, hsv_s, hsv_v = (0, 0, 0).
I found that model.py (the whole Python file) is missing from ultralytics/yolov3-master (it also seems to be missing in yolov5?), so I can't change output_filters = [3] # input channels to 4.
Btw, I'm not sure if using YOLOv5 will fix my problem. Thank you very much!
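One way to work around the plot_images broadcast error above, assuming the goal is only to visualise training batches, is to drop the extra channel just before plotting. This is a sketch, not an official fix; to_plot_channels is a hypothetical helper.

import numpy as np

def to_plot_channels(img: np.ndarray) -> np.ndarray:
    """Keep only the first three channels of an image for visualisation."""
    return img[:, :, :3] if img.ndim == 3 and img.shape[2] > 3 else img

# Hypothetical use inside utils/plots.py plot_images():
#   mosaic[block_y:block_y + h, block_x:block_x + w, :] = to_plot_channels(img)
# The detect.py RuntimeError is the opposite problem: its LoadImages loader
# still reads 3 channels, so it would also need cv2.IMREAD_UNCHANGED.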
@EXJUSTICE the YOLOv5 dataloader will load 4 ch PNGs as 3 ch, so no modifications are required to train on 4 ch data. I would simply train normally with all default settings.
Thanks for responding. However, my 4th channel is depth data, not an alpha channel. It contains some IR information.
@Hao-Ping well, if you'd really like to use 4 channel inputs then you simply need to work your way through the dataloader updating every instance of 3 channel operations like the HSV augmentation and any other areas you find.
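For example, the stock HSV augmentation assumes a 3-channel BGR image. Below is a sketch of one possible 4-channel-aware variant (not the YOLOv5 implementation) that augments only the first three channels and leaves the extra channel (e.g. depth/IR) untouched, assuming uint8 input:

import cv2
import numpy as np

def augment_hsv_4ch(img, hgain=0.015, sgain=0.7, vgain=0.4):
    """HSV-augment the first 3 channels of a (H, W, 4) uint8 image; keep channel 4 as-is."""
    rgb, extra = img[..., :3], img[..., 3:]
    r = np.random.uniform(-1, 1, 3) * [hgain, sgain, vgain] + 1  # random gains
    hue, sat, val = cv2.split(cv2.cvtColor(rgb, cv2.COLOR_BGR2HSV))
    dtype = rgb.dtype
    x = np.arange(0, 256, dtype=r.dtype)
    lut_hue = ((x * r[0]) % 180).astype(dtype)
    lut_sat = np.clip(x * r[1], 0, 255).astype(dtype)
    lut_val = np.clip(x * r[2], 0, 255).astype(dtype)
    hsv_aug = cv2.merge((cv2.LUT(hue, lut_hue), cv2.LUT(sat, lut_sat), cv2.LUT(val, lut_val)))
    rgb_aug = cv2.cvtColor(hsv_aug, cv2.COLOR_HSV2BGR)
    return np.concatenate((rgb_aug, extra), axis=2)  # reattach the untouched 4th channel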
@glenn-jocher Okay I see. Thank you very much!
@Hao-Ping train.py and test.py use the same dataloader, so once you modify it you should be able to train and test correctly.
export.py and detect.py would also need their own updates for 4 ch imagery currently. Unfortunately the channel count is not parameterized in the dataloaders, or generally throughout the YOLOv5 repository. It would be great to make these changes, but it just has not been a priority since market demand for single and n channel imagery is not very large.
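As a rough idea of the kind of change detect.py's image loading would need (a sketch only; letterboxing and resizing are omitted, and load_4ch_image is a hypothetical helper, not part of the repo):

import cv2
import numpy as np
import torch

def load_4ch_image(path):
    """Read a 4-channel PNG and return a (1, 4, H, W) float tensor in [0, 1]."""
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)  # keeps all 4 channels
    assert img is not None and img.ndim == 3 and img.shape[2] == 4, f'expected a 4-channel image: {path}'
    img = np.ascontiguousarray(img.transpose(2, 0, 1))  # HWC -> CHW
    return torch.from_numpy(img).float().div(255.0).unsqueeze(0)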
@Hao-Ping if you manage to get your changes working well and you've implemented them in a clean parameterized way it might help future users if you could submit a PR.
@Hao-Ping I faced a similar error to yours from both train.py and detect.py:
RuntimeError: Given groups=1, weight of size [32, 12, 3, 3], expected input[32, 16, 128, 128] to have 12 channels, but got 16 channels instead
Have you had any progress?
@EXJUSTICE Unfortunately no. I switched to the approach in https://github.com/AlexeyAB/darknet/issues/2094# and used YOLOv4 from AlexeyAB/darknet, modifying src/data.c and src/image_opencv.cpp. It works reasonably well on my 4-channel .png files, which contain R, G, B and Depth (data from an Intel RealSense D435).
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Access additional YOLOv5 🚀 resources:
Access additional Ultralytics ⚡ resources:
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
@EXJUSTICE I am also planning to try to work through RGB + Thermal 4-channel images. These are saved as PNG or TIF files and have 4 bands of data (RGB + Thermal). Any chance you have worked through the dataloader modifications? Would you be willing to share where you had to update things to get it to work?
❓ Question
I've read a few threads dealing with the same problem of 4-channel data. I would like to apply this to YOLOv5. (link)
I would like to confirm whether the way to approach this in YOLOv5 is similar, by modifying line 79 of models/yolo.py:
def __init__(self, cfg='yolov5s.yaml', ch=3, nc=None, anchors=None):
Similarly, must I also modify train.py's lines 90 and 97 to initialize the correct number of channels?
model = Model(opt.cfg or ckpt['model'].yaml, ch=3, nc=nc, anchors=hyp.get('anchors')).to(device)
Are there any other lines I should change?
With regard to the load_mosaic method changes in datasets.py, I notice that there is now native 4-channel support, but is there any specific reason why the line at 531 in the yolov3 repository was moved to line 665?
img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8) # base image with 4 tiles
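A quick toy check (not repo code) of why this allocation generalises: because the channel count comes from img.shape[2], the same line works for 3- or 4-channel inputs.

import numpy as np

s = 320
for c in (3, 4):
    img = np.zeros((s, s, c), dtype=np.uint8)
    img4 = np.full((s * 2, s * 2, img.shape[2]), 114, dtype=np.uint8)  # channel count follows the input
    img4[:s, :s] = img  # pasting a tile works for either channel count
    print(img4.shape)   # (640, 640, 3) then (640, 640, 4)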