SIAT-INVS / CarlaFLCAV

Federated learning for autonomous driving in CARLA simulation

question about FLYolo #1

Open atusi-nakajima opened 2 years ago

atusi-nakajima commented 2 years ago

I have a question about FLYolo. I have verified that it runs with the provided example raw_data. Next, we would like to try training on a new dataset. In which folder should we place it? (raw_data/pretrain, raw_data/town03, or another location?)

Also, what are the roles of the pretrain, town03, and town05 directories under the raw_data directory? (Are you using data from all of them for training?)

bearswang commented 2 years ago

Hi Atusi, thank you very much for your interest. In our experiment we consider a multi-stage federated learning procedure, so we use the data from all locations; this can be seen in the script sim_main.py. (If you only want single-stage training, it is fine to use data from a single folder via python carla_pretrain_yolo.py pretrain.)

  1. We use the dataset in folder pretrain for generating a pretrained model:

    python carla_pretrain_yolo.py pretrain
  2. We use the dataset in folders town03 and town05 for federated learning:

    python carla_main_FLCAV_yolo.py

    In particular, this script will first use the datasets in folders town03 and town05 to train two edge federated models (see lines 52 to 58 of carla_main_FLCAV_yolo.py), and then aggregate the edge federated models into a cloud federated model (see lines 60 to 91 of carla_main_FLCAV_yolo.py). A minimal sketch of this aggregation step is given after this list.

  3. Note that the folder test contains the dataset for testing. The provided datasets are only example data for running the program. You can generate your own dataset using FLDatasetTool.
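
For intuition, the cloud aggregation step is essentially federated averaging of the edge model weights. Here is a minimal sketch of that idea (NOT the repository's actual code; the file paths and function names are only illustrative assumptions):

    # FedAvg-style sketch: element-wise averaging of model weights
    # (assumes each file stores a plain PyTorch state_dict).
    import torch

    def average_state_dicts(state_dicts, weights=None):
        """Element-wise (optionally weighted) average of model state_dicts."""
        if weights is None:
            weights = [1.0 / len(state_dicts)] * len(state_dicts)
        return {k: sum(w * sd[k].float() for w, sd in zip(weights, state_dicts))
                for k in state_dicts[0]}

    # Hypothetical usage: aggregate two edge models into one cloud model
    edge_paths = ["edge_town03.pt", "edge_town05.pt"]  # assumed paths
    edge_states = [torch.load(p, map_location="cpu") for p in edge_paths]
    torch.save(average_state_dicts(edge_states), "global.pt")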

atusi-nakajima commented 2 years ago

Thank you for your clear explanation.

Just to confirm: running carla_pretrain_yolo.py pretrain pre-trains on the dataset in the raw_data/pretrain folder, and running carla_main_FLCAV_yolo.py performs federated learning with 2 cars placed in town03 and town05. Is that correct?

bearswang commented 2 years ago

> Thank you for your clear explanation.

My pleasure.

> Running carla_pretrain_yolo.py pretrain pre-trains on the dataset in the raw_data/pretrain folder.

This is correct.

> Running carla_main_FLCAV_yolo.py performs federated learning with 2 cars placed in town03 and town05. Is that correct?

Federated learning is correct, but not with 2 cars in town03 and town05. As I recall, there are 3 cars in Town03 and 4 cars in Town05.

atusi-nakajima commented 2 years ago

Ah, I see, so in this case, the two towns together have a total of 7 vehicles for federated learning.

atusi-nakajima commented 2 years ago

I have now run carla_main_FLCAV_yolo.py to complete federated learning. How can I run experiments on the dataset in the test folder using the model generated here (./fedmodels/cloud/weights/global.pt)?

bearswang commented 2 years ago

> How can I run experiments on the dataset in the test folder using the model generated here (./fedmodels/cloud/weights/global.pt)?

You could use this command

python3 yolov5/val.py --data raw_data/test/vehicle.tesla.model3_173/yolo_coco_carla.yaml --weights fedmodels/cloud/weights/global.pt

Note that a model trained on the current example dataset may not perform well (the number of samples is too small). We will upload a larger dataset soon. You can also generate your own datasets.
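
If you want to visualize the detections instead of computing metrics, you can also pass the same weights to detect.py (this adapts the quick-start command; the image glob below points to the example test set):

    python3 yolov5/detect.py --source 'raw_data/test/vehicle.tesla.model3_173/yolo_dataset/images/train/*.jpg' --weights fedmodels/cloud/weights/global.pt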

If you find it useful, you are welcome to star this repo. Thanks!

atusi-nakajima commented 2 years ago

Thank you for telling us about it. I have added a star.

If I run the experiment again with a dataset I created, do I need to repeat quick-start steps 1 and 2?

  1. Train YOLOv5:

    python3 yolov5/train.py --img 640 --batch 8 --epochs 5 --data raw_data/pretrain/vehicle.tesla.model3_135/yolo_coco_carla.yaml --cfg yolov5/models/yolov5s.yaml --weights yolov5s.pt

  2. Test the result (test dataset = test/vehicle.tesla.model3_173):

    python3 yolov5/detect.py --source 'raw_data/test/vehicle.tesla.model3_173/yolo_dataset/images/train/*.jpg' --weights yolov5s.pt

If I want to run the experiment again with my own dataset, my plan is:

  1. Create my own dataset and place it in raw_data/town03 or raw_data/town05.

  2. Generate a pre-trained model:

    python carla_pretrain_yolo.py pretrain

  3. Perform federated learning on the home-grown dataset:

    python carla_main_FLCAV_yolo.py

  4. Evaluate the results:

    python3 yolov5/val.py --data raw_data/test/vehicle.tesla.model3_173/yolo_coco_carla.yaml --weights fedmodels/cloud/weights/global.pt

Is this procedure correct? I would appreciate it if you could tell me.

bearswang commented 2 years ago

> Thank you for telling us about it. I have added a star.

Thank you very much :)

> If I run the experiment again with a dataset I created, do I need to repeat quick-start steps 1 and 2?

That quick-start procedure is just for testing; you can skip it.

> If I want to run the experiment again with my own dataset, my plan is: generate a pre-trained model, perform federated learning on my own dataset, and evaluate the results. Is this procedure correct?

Somewhat correct. You need to create three datasets and place them in raw_data/pretrain, raw_data/town03, and raw_data/town05. Then run python3 sim_main.py, which performs two steps: (1) python carla_pretrain_yolo.py pretrain; (2) python carla_main_FLCAV_yolo.py. If you rename the town03 and town05 folders, you also need to change edge_list in line 22 of carla_main_FLCAV_yolo.py. Finally, evaluate the results by running python3 yolov5/val.py --data raw_data/test/vehicle.tesla.model3_173/yolo_coco_carla.yaml --weights fedmodels/cloud/weights/global.pt
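
Put together, the whole workflow looks roughly like this (assuming the default folder names; this is just a summary of the commands above):

    # place your datasets in raw_data/pretrain, raw_data/town03, raw_data/town05
    # pretraining + federated learning (runs carla_pretrain_yolo.py and carla_main_FLCAV_yolo.py)
    python3 sim_main.py
    # evaluate the aggregated cloud model on the example test set
    python3 yolov5/val.py --data raw_data/test/vehicle.tesla.model3_173/yolo_coco_carla.yaml --weights fedmodels/cloud/weights/global.pt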

atusi-nakajima commented 2 years ago

I have an additional question about dataset generation.

I have now executed the following commands in the FLDatasetTool/ directory.

Data recording (which I terminated with Ctrl+C after a certain amount of time):

    python3 data_recorder.py

Data labeling (KITTI Objects):

    python label_tools/kitti_objects_label.py -r record_(info in my file)

After executing these commands, how do I place the resulting data under town03, etc. in the raw_data folder distributed with FLYolo? Below is the directory structure under FLDatasetTool.


FLDatasetTool
├── __pycache__
├── config
│   └── kitti_object
├── dataset
│   └── record_2022_0801_0801
│       ├── vehicle.tesla.model3.master
│       │   └── kitti_object
│       │       ├── ImageSets
│       │       └── training
│       │           ├── calib
│       │           ├── image_2
│       │           ├── label_2
│       │           └── velodyne
│       ├── vehicle.tesla.model3_13
│       │   └── kitti_object
│       │       ├── ImageSets
│       │       └── training
│       │           ├── calib
│       │           ├── image_2
│       │           ├── label_2
│       │           └── velodyne
│       ├── vehicle.tesla.model3_19
│       │   └── kitti_object
│       │       ├── ImageSets
│       │       └── training
│       │           ├── calib
│       │           ├── image_2
│       │           ├── label_2
│       │           └── velodyne
│       ├── vehicle.tesla.model3_25
│       │   └── kitti_object
│       │       ├── ImageSets
│       │       └── training
│       │           ├── calib
│       │           ├── image_2
│       │           ├── label_2
│       │           └── velodyne
│       └── vehicle.tesla.model3_7
│           └── kitti_object
│               ├── ImageSets
│               └── training
│                   ├── calib
│                   ├── image_2
│                   ├── label_2
│                   └── velodyne
├── label_tools
│   ├── kitti_object
│   │   └── __pycache__
│   └── yolov5
├── raw_data
│   ├── record_2022_0801_0753
│   │   ├── infra_t_junction
│   │   │   ├── image_2
│   │   │   ├── velodyne_16
│   │   │   ├── velodyne_32
│   │   │   └── velodyne_64
│   │   ├── others.world_0
│   │   ├── vehicle.tesla.model3.master
│   │   │   ├── image_2
│   │   │   ├── image_2_semantic
│   │   │   ├── radar_front
│   │   │   ├── velodyne
│   │   │   └── velodyne_semantic
│   │   ├── vehicle.tesla.model3_13
│   │   │   ├── image_2
│   │   │   ├── image_2_semantic
│   │   │   ├── radar_front
│   │   │   ├── velodyne
│   │   │   └── velodyne_semantic
│   │   ├── vehicle.tesla.model3_19
│   │   │   ├── image_2
│   │   │   ├── image_2_semantic
│   │   │   ├── radar_front
│   │   │   ├── velodyne
│   │   │   └── velodyne_semantic
│   │   ├── vehicle.tesla.model3_25
│   │   │   ├── image_2
│   │   │   ├── image_2_semantic
│   │   │   ├── radar_front
│   │   │   ├── velodyne
│   │   │   └── velodyne_semantic
│   │   └── vehicle.tesla.model3_7
│   │       ├── image_2
│   │       ├── image_2_semantic
│   │       ├── radar_front
│   │       ├── velodyne
│   │       └── velodyne_semantic
│   └── record_2022_0801_0801
│       ├── infra_t_junction
│       │   ├── image_2
│       │   ├── velodyne_16
│       │   ├── velodyne_32
│       │   └── velodyne_64
│       ├── others.world_0
│       ├── vehicle.tesla.model3.master
│       │   ├── image_2
│       │   ├── image_2_semantic
│       │   ├── radar_front
│       │   ├── velodyne
│       │   └── velodyne_semantic
│       ├── vehicle.tesla.model3_13
│       │   ├── image_2
│       │   ├── image_2_semantic
│       │   ├── radar_front
│       │   ├── velodyne
│       │   └── velodyne_semantic
│       ├── vehicle.tesla.model3_19
│       │   ├── image_2
│       │   ├── image_2_semantic
│       │   ├── radar_front
│       │   ├── velodyne
│       │   └── velodyne_semantic
│       ├── vehicle.tesla.model3_25
│       │   ├── image_2
│       │   ├── image_2_semantic
│       │   ├── radar_front
│       │   ├── velodyne
│       │   └── velodyne_semantic
│       └── vehicle.tesla.model3_7
│           ├── image_2
│           ├── image_2_semantic
│           ├── radar_front
│           ├── velodyne
│           └── velodyne_semantic
├── recorder
│   ├── __pycache__
│   └── agents
│       ├── navigation
│       │   └── __pycache__
│       └── tools
│           └── __pycache__
├── test_code
└── utils
    └── __pycache__

134 directories

Any help would be appreciated.

dalqattan commented 1 year ago

> We use the dataset in folder pretrain for generating a pretrained model: python carla_pretrain_yolo.py pretrain [...] We use the dataset in folders town03 and town05 for federated learning: python carla_main_FLCAV_yolo.py

Hi,

Where can I find these two files: carla_pretrain_yolo.py and carla_main_FLCAV_yolo.py?

Thanks,
Duaa