jbwang1997 / OBBDetection

OBBDetection is an oriented object detection library based on MMDetection.
Apache License 2.0

About dataset #26

Open ZacharySha opened 3 years ago

ZacharySha commented 3 years ago

Hi, thanks for your work! I didn't find detailed instructions for dataset preparation in the README. If I want to train on my own dataset (PNG images and JSON annotations), is the data preparation the same as in MMDetection?

jbwang1997 commented 3 years ago

The data preparation is almost the same as in MMDetection. You can refer to custom.py for the data structure.

However, you need to pay attention to some details:

  • The form of data['ann']['bboxes'] and data['ann']['bboxes_ignore'] should be one of the bbox types defined in BboxToolkit; you can find the bbox definitions in Usage.md (note: the angle of an OBB is counterclockwise).
  • The pipelines of oriented detectors differ from the original ones; you can refer to datasets for details. RandomRotate needs a cls key in results, so you may need to add your classes to results (see the sketch below).

In future updates, I will write a new obb_custom.py for personal datasets.
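
For concreteness, here is a minimal sketch of what such a custom dataset could look like, assuming the standard MMDetection CustomDataset interface and the 'obb' format (cx, cy, w, h, theta) described in BboxToolkit's Usage.md. The class name MyObbDataset, the class list, and the _parse_my_annotations helper are hypothetical placeholders, not part of the repository.

```python
# Hypothetical sketch, not part of OBBDetection: a custom dataset whose
# annotations follow the dict layout expected by CustomDataset, with oriented
# boxes stored as (cx, cy, w, h, theta), theta measured counterclockwise.
import numpy as np
from mmdet.datasets import DATASETS, CustomDataset


@DATASETS.register_module()
class MyObbDataset(CustomDataset):
    CLASSES = ('car', 'plane')  # replace with your own class names

    def load_annotations(self, ann_file):
        data_infos = []
        # _parse_my_annotations is a placeholder for your own annotation reader
        # (e.g. one record per image parsed from JSON or XML).
        for record in self._parse_my_annotations(ann_file):
            data_infos.append(dict(
                filename=record['filename'],
                width=record['width'],
                height=record['height'],
                ann=dict(
                    # (N, 5) float32 array: cx, cy, w, h, theta (counterclockwise)
                    bboxes=np.array(record['obbs'], dtype=np.float32).reshape(-1, 5),
                    labels=np.array(record['labels'], dtype=np.int64),
                    bboxes_ignore=np.zeros((0, 5), dtype=np.float32),
                    labels_ignore=np.zeros((0,), dtype=np.int64),
                ),
            ))
        return data_infos

    def pre_pipeline(self, results):
        super().pre_pipeline(results)
        # RandomRotate in this repo looks up the class names under a 'cls' key.
        results['cls'] = self.CLASSES
```

The pre_pipeline override is where the cls key mentioned above is added, so that RandomRotate can find the class names.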

hust-lidelong commented 3 years ago

I am eager to study the new obb_custom.py; thanks for publishing it as soon as possible. I want to train on my own dataset (JPG images and XML annotations). Could you give more specific guidance? Thanks!

jbwang1997 commented 3 years ago

Do your images need to be split like the DOTA dataset? Could you provide the structure of your XML files?

jbwang1997 commented 3 years ago

@ZacharySha Could you provide the structure of your JSON files and tell me whether your images need to be split like the DOTA dataset?

hust-lidelong commented 3 years ago

My XML annotation looks like this: xml.txt, and my images do not need to be split.

hust-lidelong commented 3 years ago

Could you give some advice? Thanks!

jbwang1997 commented 3 years ago

Your annotations are quite similar to the VOC dataset's. I recommend referring to xml_style.py and loading the rotated box data in data_info.
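
As a rough illustration of that suggestion, the sketch below parses one VOC-style XML file into the rotated-box annotation dict, in the spirit of xml_style.py. Since the actual xml.txt layout is not shown here, the tag names (object, robndbox, cx/cy/w/h/angle) and the class list are assumptions to adapt to your own files.

```python
# Hypothetical sketch: parse a single VOC-style XML file with rotated boxes
# into the annotation dict format used above. Tag names are assumptions.
import xml.etree.ElementTree as ET

import numpy as np

CLASSES = ('car', 'plane')  # illustrative class list


def parse_obb_xml(xml_path):
    tree = ET.parse(xml_path)
    root = tree.getroot()
    bboxes, labels = [], []
    for obj in root.findall('object'):
        name = obj.find('name').text
        if name not in CLASSES:
            continue
        rbox = obj.find('robndbox')  # assumed rotated-box tag
        bboxes.append([
            float(rbox.find('cx').text),
            float(rbox.find('cy').text),
            float(rbox.find('w').text),
            float(rbox.find('h').text),
            # make sure the sign matches BboxToolkit's counterclockwise convention
            float(rbox.find('angle').text),
        ])
        labels.append(CLASSES.index(name))
    bboxes = np.array(bboxes, dtype=np.float32).reshape(-1, 5)
    labels = np.array(labels, dtype=np.int64)
    return dict(
        bboxes=bboxes,
        labels=labels,
        bboxes_ignore=np.zeros((0, 5), dtype=np.float32),
        labels_ignore=np.zeros((0,), dtype=np.int64),
    )
```

The returned dict could then be served from a get_ann_info-style method of an XMLDataset-like class, as in xml_style.py.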

hust-lidelong commented 3 years ago

Thanks very much!

ccccwb commented 2 years ago

Has this problem been solved? I am also planning to train on my own XML dataset. Could you explain in detail how to do it?

EagleHong commented 2 years ago

How should the xml_style.py file be used?