jamesjg / FoodSAM

FoodSAM: Any Food Segmentation
Apache License 2.0

how to train the dataset #2

Closed snamper closed 1 year ago

snamper commented 1 year ago

Can I upload my own photos and labels to the dataset? How do I do that? Thanks.

jamesjg commented 1 year ago

Hi, thanks for your interest in our work!

Our FoodSAM model utilizes the existing SAM and semantic segmentation models in a zero-shot manner, without additional training. You can directly use the demo shell scripts to run inference on a single image, or on your own dataset by preparing it in the format described in the Installation.md guide.
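As a rough sketch, you could batch your own photos through the single-image demo like this. The script path `FoodSAM/semantic.py` and the `--img_path`/`--output` flags are assumptions based on the demo scripts; please double-check them against the README in your checkout:

```python
# Minimal sketch: run the single-image semantic demo over a folder of your own
# images. Script path and flag names are assumptions -- confirm them against
# the demo shell scripts / README before running.
import subprocess
from pathlib import Path

IMAGE_DIR = Path("my_images")   # hypothetical folder with your photos
OUTPUT_DIR = Path("Output")     # hypothetical output root

for img in sorted(IMAGE_DIR.glob("*.jpg")):
    subprocess.run(
        [
            "python", "FoodSAM/semantic.py",
            "--img_path", str(img),
            "--output", str(OUTPUT_DIR),
        ],
        check=True,  # stop if the demo script fails on an image
    )
```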

Our model does not include a training component. If you want better semantic segmentation results, you can try training on your own dataset using the training code from the original semantic segmentation repository. Then run inference with your newly trained model and configs.
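For illustration only: if the upstream semantic segmentation code follows the mmsegmentation 0.x API (FoodSAM builds on mmseg-style configs), inference with your newly trained model might look like the sketch below. The config and checkpoint paths are placeholders, not files from this repo:

```python
# Sketch: run inference with a semantic segmentation model you trained yourself,
# assuming an mmsegmentation 0.x-style API (init_segmentor / inference_segmentor).
from mmseg.apis import init_segmentor, inference_segmentor

config_file = "configs/my_foodseg_config.py"       # hypothetical config
checkpoint_file = "work_dirs/my_model/latest.pth"  # hypothetical checkpoint

model = init_segmentor(config_file, checkpoint_file, device="cuda:0")
result = inference_segmentor(model, "my_images/dish.jpg")  # list with one HxW label map
pred_mask = result[0]  # per-pixel category ids, same size as the input image
```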

Alternatively, you could train any semantic segmentation model on your data and save its predictions in the expected format (arg.output/image_name/pred_mask.png). Then comment out the semantic segmentation section in our semantic.py or panoptic.py scripts. Finally, run the demo shell script to get segmentation results based on your own semantic predictions.
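A minimal sketch of writing an arbitrary model's output into that layout follows; `my_model` and the paths are hypothetical, and the only requirement taken from the text above is that each image gets a `pred_mask.png` of per-pixel category ids under its own subfolder of the output directory:

```python
# Sketch: save your own model's per-pixel predictions in the layout FoodSAM
# expects (arg.output/image_name/pred_mask.png). `my_model` is a stand-in for
# whatever segmentation model you trained.
import os
import numpy as np
import cv2

def save_prediction(pred_mask: np.ndarray, image_path: str, output_dir: str) -> None:
    """pred_mask: HxW array of integer category ids for one image."""
    image_name = os.path.splitext(os.path.basename(image_path))[0]
    save_dir = os.path.join(output_dir, image_name)
    os.makedirs(save_dir, exist_ok=True)
    # Store label ids as a single-channel 8-bit PNG named pred_mask.png.
    cv2.imwrite(os.path.join(save_dir, "pred_mask.png"), pred_mask.astype(np.uint8))

# Usage (hypothetical model and paths):
# mask = my_model.predict("my_images/dish.jpg")
# save_prediction(mask, "my_images/dish.jpg", "Output")
```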