lucastabelini / PolyLaneNet

Code for the paper entitled "PolyLaneNet: Lane Estimation via Deep Polynomial Regression" (ICPR 2020)
https://arxiv.org/abs/2004.10924
MIT License

How to test my own dataset? #17

Closed bulutenesemre closed 4 years ago

bulutenesemre commented 4 years ago

Hello.

I have my own dataset with a video. There is no clear solution for testing on a different dataset. How can I test with my video? Do I have to convert it to the .pt format?

Also, when I test a .pt file with the --view argument, it only shows the first image's result and doesn't show the others.

lucastabelini commented 4 years ago

There's an explanation on how to test the model on your own dataset in #2 .

As for the visualization: if you press any key while the image window is focused, it will show the next image.
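
This is the standard OpenCV behavior; a minimal runnable sketch, assuming the viewer is built on `cv2.imshow`/`cv2.waitKey` (the dummy frames here are just placeholders):

```python
import cv2
import numpy as np

# cv2.waitKey(0) blocks until a key is pressed, so each image stays on
# screen until you press a key with the window focused.
for i in range(3):
    frame = np.zeros((360, 640, 3), dtype=np.uint8)  # placeholder image
    cv2.imshow("predictions", frame)
    cv2.waitKey(0)  # press any key to advance to the next image
cv2.destroyAllWindows()
```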

NSVR57 commented 4 years ago

Hi, I tried it like this:

```
!python test.py --exp_name tusimple --cfg /content/PolyLaneNet/cfgs/testconfig.yml --epoch 2695
```

but I'm getting the following error:

```
Traceback (most recent call last):
  File "test.py", line 141, in <module>
    test_dataset = cfg.get_dataset("test")
  File "/content/PolyLaneNet/lib/config.py", line 23, in get_dataset
    self.config['datasets'][split]['type'])(**self.config['datasets'][split]['parameters'])
KeyError: 'test'
```

lucastabelini commented 4 years ago

The testconfig.yml you created is missing the test dataset information. If you show its contents, I might be able to help.

NSVR57 commented 4 years ago

This is my testconfig.yml:

```yaml
# Training settings
seed: 0
exps_dir: 'experiments'
iter_log_interval: 1
iter_time_window: 100
model_save_interval: 1
backup:
model:
  name: PolyRegression
  parameters:
    num_outputs: 35 # (5 lanes) * (1 conf + 2 (upper & lower) + 4 poly coeffs)
    pretrained: true
    backbone: 'resnet50'
    pred_category: false
    curriculum_steps: [0, 0, 0, 0]
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
batch_size: 16
epochs: 2695
optimizer:
  name: Adam
  parameters:
    lr: 3.0e-4
lr_scheduler:
  name: CosineAnnealingLR
  parameters:
    T_max: 385

# Testing settings
test_parameters:
  conf_threshold: 0.5

# Dataset settings
datasets:
  train:
    type: LaneDataset
    parameters:
      dataset: tusimple
      split: train
      img_size: [360, 640]
      normalize: true
      aug_chance: 0.9090909090909091 # 10/11
      augmentations:
       - name: Affine
         parameters:
           rotate: !!python/tuple [-10, 10]
       - name: HorizontalFlip
         parameters:
           p: 0.5
       - name: CropToFixedSize
         parameters:
           width: 1152
           height: 648
      root: "/dados/tabelini/datasets/tusimple"

test: &test
    type: LaneDataset
    parameters:
      dataset: nolabel_dataset
      normalize: true # Whether to normalize the input data. Use the same value used in the pretrained model (all pretrained models that I provided used normalization, so you should leave it as it is)
      augmentations: [] # List of augmentations. You probably want to leave this empty for testing
      img_h: 360 # The height of your test images (they should all have the same size)
      img_w: 640 # The width of your test images
      img_size: [360, 640] # Yeah, this parameter is duplicated for some reason, will fix this when I get time (feel free to open a pull request :))
      max_lanes: 5 # Same number used in the pretrained model. If you use a model pretrained on TuSimple (most likely case), you'll use 5 here
      root:  "/home/ubuntu/" # Path to the directory containing your test images. The loader will look recursively for image files in this directory
      img_ext: ".jpeg" # Test images extension (e.g., .png, .jpg)

  # val = test
val:
  <<: *test
```

lucastabelini commented 4 years ago

The `test` key seems to be at the same level of indentation as `datasets`, but it should be at the same level as `train` (`train` and `test` are keys inside `datasets`).
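
For reference, a minimal sketch of the corrected layout (other keys omitted), with `test` and `val` nested under `datasets` at the same depth as `train`:

```yaml
datasets:
  train:
    type: LaneDataset
    # ...
  test: &test
    type: LaneDataset
    # ...
  # val = test
  val:
    <<: *test
```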

NSVR57 commented 4 years ago

Thanks, that worked. Do we have any config example where we can pass a video instead of pictures?

lucastabelini commented 4 years ago

No. What you can do is extract the frames from the video and pass them to the network, using a tool such as ffmpeg.
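
For example, something along these lines should work (a sketch; `input.mp4` and the `frames/` directory are placeholders, and the `.jpeg` extension matches the `img_ext` in the config above):

```bash
mkdir -p frames
# Extract every frame as a numbered JPEG; -qscale:v 2 keeps the quality high
ffmpeg -i input.mp4 -qscale:v 2 frames/%06d.jpeg
```

Then point `root` in the test dataset config to the `frames/` directory.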