yansir-X opened this issue 4 years ago
Hi @yansir-X,

Before using `eval.py`, you need to merge the results for each rate-distortion tradeoff with `merge_csv.py`. For example:

```
python merge_csv.py ../eval/eval_64_fused.csv -i ../eval/eval_64_00*.csv
```

Then, you can use `plot_results.py` on the fused CSV files, which contain the `csv_file` column.
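For intuition, the merge is essentially a concatenation of the per-tradeoff CSVs that records which file each row came from; a minimal sketch (file names and columns here are made up for illustration, not taken from `merge_csv.py`):

```python
import csv
import io

# Stand-ins for two per-tradeoff result files produced by eval.py
# (names and columns are assumptions for the example).
inputs = {
    "eval_64_000005.csv": io.StringIO("filename,psnr\na.ply,30.0\n"),
    "eval_64_000010.csv": io.StringIO("filename,psnr\na.ply,34.0\n"),
}

# Concatenate the rows, tagging each one with its source file --
# this is the kind of information the csv_file column encodes.
merged = []
for name, f in inputs.items():
    for row in csv.DictReader(f):
        row["csv_file"] = name
        merged.append(row)

print(len(merged), merged[0]["csv_file"])  # 2 eval_64_000005.csv
```

This is why `plot_results.py` fails with a `KeyError` on `csv_file` when given an unmerged per-tradeoff CSV: the column only appears after merging.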
Hello Mr. Quach, for `python merge_csv.py ../eval/eval_64_fused.csv -i ../eval/eval_64_00*.csv` I got it: `eval_64_00*.csv` is just one CSV file, which is the output of the `eval.py` step.
Could you please take a look at the next error, from executing `python plot_results.py ../figs/rd -i ../eval/eval_64_new2_fused.csv ../eval/eval_mpeg_fused.csv -t Proposed Anchor`:

```
File "plot_results.py", line 69, in <module>
```

This is the last obstacle towards my finishing all the steps. I would be very grateful for any hint! Thanks in advance!
If you only decompressed a few point clouds, you should adjust the code accordingly. For example, the `seqs` variable specifies the different point clouds. You can use `pdb` or `ipdb` to debug the script step by step. That way, you can find out why this error occurs in your particular situation.
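To make the step-by-step debugging concrete, here is a sketch that drives `pdb` non-interactively over a toy function failing the same way (an empty sequence where data was expected). In practice you would simply run `python -m pdb plot_results.py ...` and type the same commands (`b`, `c`, `n`, `p`) at the prompt; the toy function and variable names below are made up:

```python
import io
import pdb

def compute():
    rates = []          # imagine this stayed empty because no files matched
    return min(rates)   # raises ValueError, much like the empty-vector polyfit error

# Feed pdb a scripted session: break on compute, continue to it,
# step over one line, print `rates`, then continue to the crash.
commands = io.StringIO("b compute\nc\nn\np rates\nc\n")
output = io.StringIO()
debugger = pdb.Pdb(stdin=commands, stdout=output)
try:
    debugger.run("compute()", globals())
except ValueError:
    pass  # the toy bug fires after we have inspected the state

print(output.getvalue())  # the session log includes the printed value: []
```

Inspecting the offending variable right before the failing call usually reveals which input CSV or path pattern produced no rows.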
Hi, I didn't do the "Fusing MPEG results" step, so I think that's the problem. From the README:

```
./evaluate_compression -i ~/data/datasets/msft -o ~/data/datasets/cwi-pcl-codec/msft_9 -q 1 -b 9 -g 1 --intra_frame_quality_csv ~/data/datasets/cwi-pcl-codec/msft_9_intra.csv --predictive_quality_csv ~/data/datasets/cwi-pcl-codec/msft_9_pred.csv
python fuse_eval_mpeg.py ../eval/eval_mpeg_9.csv ~/code/cwi-pcl-codec/build/apps/evaluate_compression/msft_9_intra.csv ../eval/eval_mpeg_9_fused.csv
```

What does `./evaluate_compression` do? Is it a Python file? Because I can't find it under the `src` folder. And what does the `~/code/cwi-pcl-codec/build...` part do? And in which step is the `../eval/eval_mpeg_9.csv` file generated?

Thanks again for your help!
So in the `eval.py` step:

```
# Example with original, compressed and decompressed datasets:
python eval.py ../data/msft "**/*.ply" ../data/msft_dec_000005 ../../geo_dist/build/pc_error --decompressed_suffix .bin.ply --compressed_dir ../data/msft_bin_000005 --compressed_suffix .bin --output_file ../eval/eval_64_000005.csv

# Example with only original and decompressed datasets:
python eval.py ../data/msft "**/*.ply" ../msft_9 ../../geo_dist/build/pc_error --output_file ../eval/eval_mpeg_9.csv
```

Are both of these needed? I thought you only need to execute one of them.
`evaluate_compression` is found after building https://github.com/mauriceqch/cwi-pcl-codec. It produces a CSV containing compression results (`~/data/datasets/cwi-pcl-codec/msft_9_intra.csv`). Specifically, we are interested in the bitrate.
```
./evaluate_compression -i ~/data/datasets/msft -o ~/data/datasets/cwi-pcl-codec/msft_9 -q 1 -b 9 -g 1 --intra_frame_quality_csv ~/data/datasets/cwi-pcl-codec/msft_9_intra.csv --predictive_quality_csv ~/data/datasets/cwi-pcl-codec/msft_9_pred.csv
```
Then, we use `eval.py` to evaluate the results from the PCL codec (`../eval/eval_mpeg_9.csv` in your example).
```
python eval.py ../data/msft "**/*.ply" ../msft_9 ../../geo_dist/build/pc_error --output_file ../eval/eval_mpeg_9.csv
```
And we fuse the two resulting CSVs with `fuse_eval_mpeg.py` (`../eval/eval_mpeg_9_fused.csv` in your example).
```
python fuse_eval_mpeg.py ../eval/eval_mpeg_9.csv ~/data/datasets/cwi-pcl-codec/msft_9_intra.csv ../eval/eval_mpeg_9_fused.csv
```
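Conceptually, this fusion is a join of the two tables, attaching the bitrate measured by `evaluate_compression` to the distortion metrics measured by `eval.py`. A minimal sketch (the column names `filename`, `psnr`, and `bitrate`, and the join key, are assumptions for illustration, not read from `fuse_eval_mpeg.py`):

```python
import csv
import io

# Hypothetical distortion results from eval.py.
eval_csv = io.StringIO("filename,psnr\nframe_0000.ply,32.1\nframe_0001.ply,33.4\n")
# Hypothetical bitrate results from evaluate_compression.
mpeg_csv = io.StringIO("filename,bitrate\nframe_0000.ply,0.52\nframe_0001.ply,0.48\n")

# Index bitrates by filename, then attach them to the distortion rows.
bitrates = {row["filename"]: row["bitrate"] for row in csv.DictReader(mpeg_csv)}
fused = []
for row in csv.DictReader(eval_csv):
    row["bitrate"] = bitrates[row["filename"]]
    fused.append(row)  # each fused row now carries both distortion and bitrate
```

Skipping this step leaves the PCL curves without a rate axis, which is why the later plotting step has nothing to fit.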
After doing this for each octree level, you will have several CSV files for PCL. In the same manner, with each lambda, you will have several CSV files for the deep model. You can then produce two merged CSVs with `merge_csv.py`: one for PCL and one for your model.
```
python merge_csv.py ../eval/eval_64_fused.csv -i ../eval/eval_64_00*.csv
python merge_csv.py ../eval/eval_mpeg_fused.csv -i ../eval/eval_mpeg_*_fused.csv
```
Finally, `plot_results.py` can be used on these two final CSV files to plot the results.
```
python plot_results.py ../figs/rd -i ../eval/eval_64_fused.csv ../eval/eval_mpeg_fused.csv -t Proposed Anchor
```
Note that you may have to adapt the paths to your particular setup.
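For context, the BD-rate comparison that `plot_results.py` performs (via `metrics.bdrate`) follows the standard Bjøntegaard calculation: fit a cubic through the (PSNR, log-rate) points of each curve and compare the integrals over the shared PSNR range. A sketch of that calculation (this is the generic algorithm, not the repository's exact code):

```python
import numpy as np

def bdrate(rate1, psnr1, rate2, psnr2):
    """Average rate difference (%) of curve 2 vs curve 1 (Bjøntegaard delta rate)."""
    log1, log2 = np.log10(rate1), np.log10(rate2)
    # Cubic fit of log-rate as a function of PSNR, per curve.
    p1 = np.polyfit(psnr1, log1, 3)
    p2 = np.polyfit(psnr2, log2, 3)
    # Integrate both fits over the overlapping PSNR interval.
    lo, hi = max(psnr1.min(), psnr2.min()), min(psnr1.max(), psnr2.max())
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)
    avg_diff = (int2 - int1) / (hi - lo)
    return (10 ** avg_diff - 1) * 100

# Sanity check: identical curves give a 0% rate difference.
r = np.array([100.0, 200.0, 400.0, 800.0])
q = np.array([30.0, 33.0, 36.0, 39.0])
print(round(bdrate(r, q, r, q), 6))  # 0.0
```

This also explains why each curve needs several rate points: with fewer than four, the cubic fit is under-determined, and with none, `polyfit` fails outright.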
Dear Mr. Quach, after running `eval.py` with original, compressed and decompressed datasets, I got the error `KeyError: "['csv_file'] not in index"`. Then I manually added a `csv_file` column to the generated CSV file. Then I tried to execute `python plot_results.py ...`, but got this error:

```
plot_results.py:70: RankWarning: Polyfit may be poorly conditioned
  bdrate = metrics.bdrate(points_for_bdrate[1], points_for_bdrate[0])
Traceback (most recent call last):
  File "plot_results.py", line 70, in <module>
    bdrate = metrics.bdrate(points_for_bdrate[1], points_for_bdrate[0])
  File "/home/yang/pcc_geo_cnn-master/src/metrics.py", line 104, in bdrate
    poly2 = numpy.polyfit(psnr2, log_rate2, 3)
  File "<__array_function__ internals>", line 6, in polyfit
  File "/home/yang/miniconda3/envs/tfcpu/lib/python3.5/site-packages/numpy/lib/polynomial.py", line 601, in polyfit
    raise TypeError("expected non-empty vector for x")
TypeError: expected non-empty vector for x
```
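For reference, this `TypeError` comes straight from NumPy: `polyfit` refuses an empty x vector. In other words, `psnr2`/`log_rate2` ended up empty, meaning no data points were loaded for the second curve (for example, when the MPEG fusing step was skipped and the expected fused CSV has no usable rows). A minimal reproduction:

```python
import numpy as np

# Fitting a curve with zero data points reproduces the reported crash.
psnr, log_rate = np.array([]), np.array([])
try:
    np.polyfit(psnr, log_rate, 3)
    message = None
except TypeError as exc:
    message = str(exc)

print(message)  # expected non-empty vector for x
```

So the fix is not to patch the fit, but to make sure every input CSV actually contributes rows for each curve being plotted.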
So the question: what causes this error? Thanks in advance! Best