AsukaCamellia opened this issue 2 months ago

Thanks for such great work! Can I get the MD and LPIPS metrics just by running run_eval.py? Could you provide a more detailed description of training and evaluation in the README?
Thank you so much for your interest in our work! You can use run_eval.py to evaluate our results. Our method is training-free and uses only a pretrained SD 1.5 model, so there is no training step to describe. To evaluate:
- Install RegionDrag and download the DragBench datasets (SR and DR) by following the instructions in our README.
- Ensure your file structure matches:
RegionDrag/
├── assets/
├── utils/
├── drag_data/
│   ├── dragbench-dr/
│   └── dragbench-sr/
├── README.md
├── UI_GUIDE.md
├── requirements.txt
├── run_eval.py
└── ui.py
Run the evaluation:
- For DragBench-SR:
python run_eval.py --data_dir drag_data/dragbench-sr/
- For DragBench-DR:
python run_eval.py --data_dir drag_data/dragbench-dr/
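If you want to sanity-check the LPIPS numbers independently, here is a minimal sketch using the off-the-shelf lpips package. This is not the actual evaluation code, and the file names edited.png / original.png are placeholders:

# Minimal LPIPS sketch; file names are placeholders.
import numpy as np
import torch
import lpips
from PIL import Image

def load_image(path):
    # Load an RGB image and scale it to [-1, 1], the range LPIPS expects.
    img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0) * 2.0 - 1.0

loss_fn = lpips.LPIPS(net="alex")  # AlexNet backbone, the common default
with torch.no_grad():
    score = loss_fn(load_image("edited.png"), load_image("original.png"))
print(f"LPIPS: {score.item():.4f}")

Mean distance (MD) is the average Euclidean distance between the user-specified target points and the positions where the corresponding handle content ends up in the edited image. Locating those positions needs a correspondence model (e.g., DIFT in DragBench-style evaluations), which this sketch does not reproduce; given tracked positions, the metric itself reduces to the illustrative helper below:

def mean_distance(tracked_pts, target_pts):
    # Both inputs are arrays of shape (N, 2) in pixel coordinates.
    diff = np.asarray(tracked_pts, dtype=np.float32) - np.asarray(target_pts, dtype=np.float32)
    return float(np.mean(np.linalg.norm(diff, axis=1)))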
Thanks!
Hi, thank you for your assistance. I've got the results, but there are several points that need further attention.
Thank you for your valuable feedback. Here are some explanations that may help: