In the folder `./data`, run

```shell
tar xfvz sample.tgz
```

to get:
- `sample_0.npy` to `sample_7.npy`;
- `test.txt`, used for loading sample data;
- `compressed_0.npy` to `compressed_7.npy`;
- `decompressed_0.npy` to `decompressed_7.npy`;
- `sample_metric_results.csv`.

The cached network output and the expected reconstruction-error file are used for benchmarking.
The checkpoints (`.pth` files) are saved in `./checkpoints`.

To test, run

```shell
python test.py
```

The results are saved in `./results`.

To train, run

```shell
python train.py
```

One can modify the parameters inside `train.py`.
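The saved `.pth` files can be reloaded with PyTorch for further training or inspection. A toy sketch, assuming the checkpoints are saved as state dicts (the real files in `./checkpoints` may also bundle optimizer state); the tiny model here is a stand-in, not the real BCAE network:

```python
import torch

# A toy model stands in for the BCAE network; real checkpoints live in
# ./checkpoints as .pth files.
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "toy_checkpoint.pth")

# Reload the file and restore the weights
state = torch.load("toy_checkpoint.pth")
model.load_state_dict(state)
print(sorted(state.keys()))  # → ['bias', 'weight']
```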
To install `neuralcompressor`, run

```shell
python setup.py develop --user
```
To script the pretrained encoder and decoder, run

```shell
python neuralcompress/utils/bcae_scriptor.py --checkpoint_path checkpoints --epoch 2000 --save_path torchscript/
```

Parameters:
- `--checkpoint_path`: the path to the checkpoints;
- `--epoch`: the epoch of the pretrained checkpoints to load;
- `--save_path`: the path to save the scripted encoder and decoder;
- `--prefix`: prefix to the filename of the scripted encoder and decoder (default: `bcae`).

Produce compressed codes of each input TPC frame.
Usage examples:

```shell
python inference.py --data_size 8 --batch_size 4 --partition test --random --checkpoint_path ./checkpoints/ --epoch 2000 --save_path inference_results --half
```

Parameters:
- `--data_path`: the path to data;
- `--device`: the device to run the inference, chosen from {`cuda`, `cpu`} (default: `cuda`);
- `--data_size`: number of frames to load (default: 1);
- `--batch_size`: batch size (default: 1);
- `--partition`: the partition from which to load the data, chosen from {`train`, `valid`, `test`} (default: `test`);
- `--random`: whether to get a random sample;
- `--checkpoint_path`: the path to the pretrained checkpoints;
- `--epoch`: the epoch of the pretrained checkpoints to load;
- `--save_path`: the path to save the output tensor;
- `--half`: whether to save the output with half precision;
- `--prefix`: output file prefix (default: `output`).

To benchmark GPU inference, run

```shell
python benchmark/gpu_inference.py checkpoints/encoder_2000.pt
python benchmark/gpu_inference.py checkpoints/encoder_2000.pt --with_loader --data_root ./data
```

Optional flags such as `--half_precision` and `--num_workers 8` can also be used.

Parameters:
positional arguments:
- `checkpoint`: the path to the encoder `.pt` file.

optional arguments:
- `--num_runs`: number of runs to calculate the run time (default: 10);
- `--benchmark`: if used, set `torch.backends.cudnn.benchmark` to `True`;
- `--with_loader`: if used, use the TPC dataloader; otherwise, use randomly generated data;
- `--data_root`: path to data when loading data with the TPC dataloader (`--with_loader`) (default: `None`);
- `--data_size`: number of frames to load (default: 1);
- `--batch_size`: batch size (default: 1);
- `--pin_memory`: if used, the dataloader will copy tensors into CUDA pinned memory before returning them;
- `--num_workers`: number of subprocesses to use for data loading; 0 means the data will be loaded in the main process (default: 0);
- `--prefetch_factor`: number of samples loaded in advance by each worker (default: 2);
- `--result_fname`: result filename (default: `result.csv`);
- `--half_precision`: if used, run inference with half precision (`float16`).
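The `--half` and `--half_precision` flags trade a small rounding error for half the memory by using `float16` instead of `float32`. A quick illustration with NumPy, using synthetic codes rather than real BCAE output:

```python
import numpy as np

# Synthetic compressed codes standing in for real BCAE output
codes = np.random.default_rng(1).random((8, 16, 16)).astype(np.float32)
half = codes.astype(np.float16)

print(codes.nbytes, half.nbytes)          # float16 uses half the bytes
err = float(np.max(np.abs(codes - half.astype(np.float32))))
print(f"max rounding error: {err:.1e}")   # bounded by float16 precision
```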
@INPROCEEDINGS{huang2021bcae,
author={Huang, Yi and Ren, Yihui and Yoo, Shinjae and Huang, Jin},
booktitle={2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA)},
title={Efficient Data Compression for 3D Sparse TPC via Bicephalous Convolutional Autoencoder},
year={2021},
volume={},
number={},
pages={1094-1099},
doi={10.1109/ICMLA52953.2021.00179}
}