mohit1997 / DeepZip

NN based lossless compression
MIT License
146 stars · 24 forks

Can the author provide a program that runs in PyCharm? This package requires too much software to run. #11

Closed zane-star-bot closed 4 years ago

zane-star-bot commented 4 years ago

This package relies on a Makefile and shell scripts. I've been debugging for a long time and still haven't managed to run it. Can the author provide a program that runs in PyCharm? This package requires too much software to run.

mohit1997 commented 4 years ago

Which OS are you using? With the instructions provided, it should be fairly straightforward to run this on a Linux-based OS. For PyCharm, you can try installing the required libraries manually or with the following command:

pip install --force -r req.txt

Attached file req.txt

zane-star-bot commented 4 years ago

Thank you for your reply; it is very helpful. Dear author, I use the Windows 10 operating system and PyCharm. I think your paper is excellent. I'm not familiar with Makefiles or shell scripts, and I can't follow "run_experiments.sh" and the other scripts. I would be grateful if you could use Python files instead of shell scripts and Makefiles. This would also make it easier to study this direction in depth and would help many researchers reproduce your paper.

zane-star-bot commented 4 years ago

I reinstalled CentOS on my computer. There are still one or two problems when running it. Could you give me some guidance? The error message is attached: prompt.txt

mohit1997 commented 4 years ago

Please install the CUDA and cuDNN libraries on your OS. Do you have a GPU on your system? Otherwise, you can use the noGPU branch.
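If it helps, a minimal sketch of that route (the noGPU branch name matches the links later in this thread; nvidia-smi and git are just the standard tools, nothing DeepZip-specific):

```bash
# Check whether an NVIDIA GPU and driver are visible; this fails if no GPU is present.
nvidia-smi

# If there is no GPU, clone the noGPU branch of the repository instead of master.
git clone -b noGPU https://github.com/mohit1997/DeepZip.git
cd DeepZip
```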

zane-star-bot commented 4 years ago

I plan to use the noGPU branch. I changed tensorflow-gpu==1.8 to tensorflow CPU 2.2.0rc1. I failed to install nvidia-docker. Is nvidia-docker necessary?
This document contains the error information: error.txt

mohit1997 commented 4 years ago

This software is built with TensorFlow 1.8. Can you try the alternative installation (newly added on the noGPU branch)? Let me know if it doesn't work. The alternative installation doesn't require nvidia-docker.

zane-star-bot commented 4 years ago

Dear author, I ran into some errors while running the program and hope to get your help. I plan to use the noGPU branch. I am using the Ubuntu operating system. tensorflow 1.8.0 (CPU) is too old: I spent a day searching many websites (including PyPI) and couldn't find this version. Here are the versions I could find:

```
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-master$ pip install tensorflow== -i https://mirrors.aliyun.com/pypi/simple/
Looking in indexes: https://mirrors.aliyun.com/pypi/simple/
ERROR: Could not find a version that satisfies the requirement tensorflow== (from versions: 1.13.0rc1, 1.13.0rc2, 1.13.1, 1.13.2, 1.14.0rc0, 1.14.0rc1, 1.14.0, 1.15.0rc0, 1.15.0rc1, 1.15.0rc2, 1.15.0rc3, 1.15.0, 1.15.2, 2.0.0a0, 2.0.0b0, 2.0.0b1, 2.0.0rc0, 2.0.0rc1, 2.0.0rc2, 2.0.0, 2.0.1, 2.1.0rc0, 2.1.0rc1, 2.1.0rc2, 2.2.0rc0, 2.2.0rc1, 2.2.0rc2, 2.2.0rc3, 2.2.0rc4)
ERROR: No matching distribution found for tensorflow==
```

I changed tensorflow-gpu==1.8 to the CPU build tensorflow==1.13.0rc1. This is my installation file:

```bash
pip install --upgrade pip
pip install \
    tensorflow==1.13.0rc1 -i https://mirrors.aliyun.com/pypi/simple/
pip install tqdm
pip install \
    keras==2.2.2 \
    argparse \
    pandas \
    h5py \
    "numpy<1.17" \
    setuptools==41.0.0 \
    scipy \
    scikit-learn
```

The error message is as follows:

```
ERROR: keras 2.2.2 has requirement keras-applications==1.0.4, but you'll have keras-applications 1.0.8 which is incompatible.
ERROR: keras 2.2.2 has requirement keras-preprocessing==1.0.2, but you'll have keras-preprocessing 1.1.0 which is incompatible.
```
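(As a side note, one possible workaround for this particular conflict, separate from the author's suggestion below and assuming those older versions are still available on the mirror, is to pin the helper packages to exactly what keras 2.2.2 expects:)

```bash
# Pin the helper packages to the versions keras 2.2.2 declares as requirements.
pip install keras-applications==1.0.4 keras-preprocessing==1.0.2 -i https://mirrors.aliyun.com/pypi/simple/
```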

mohit1997 commented 4 years ago

I created another bash file with tensorflow 1.13 and keras 2.2.4, which seems to resolve the error. Try using the following:

pip install tensorflow==1.13.0rc1 -i https://mirrors.aliyun.com/pypi/simple/
pip install tqdm
pip install \
keras==2.2.4 \
argparse \
pandas \
h5py \
"numpy<1.17" \
setuptools==41.0.0 \
scipy \
scikit-learn
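To confirm the environment came up with compatible versions, one quick sanity check (just a suggestion, not part of the instructions above) is:

```bash
# Both imports should succeed, and the printed versions should match the pins above.
python -c "import tensorflow as tf, keras; print(tf.__version__, keras.__version__)"
```
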
zane-star-bot commented 4 years ago

Dear author, I have run the following commands. There are still some errors here. What is the filename of the compressed file, and where is it? Could you spare some time for guidance?

I used the other bash file with tensorflow 1.13 and keras 2.2.4, which seems to resolve the error:

```bash
pip install tensorflow==1.13.0rc1 -i https://mirrors.aliyun.com/pypi/simple/
pip install tqdm
pip install \
    keras==2.2.4 \
    argparse \
    pandas \
    h5py \
    "numpy<1.17" \
    setuptools==41.0.0 \
    scipy \
    scikit-learn
```

(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-master/src$ ./run_experiments.sh biLSTM 0

The error log is attached: error.txt

mohit1997 commented 4 years ago

Are you using the noGPU branch? The error is because the model is using CuDNNRNN instead of RNN, and CuDNNRNN is supported only on GPUs. You basically have to use this models.py file: https://github.com/mohit1997/DeepZip/blob/noGPU/src/models.py

The files to be compressed go in data/files_to_be_compressed. You have to create the directory data/files_to_be_compressed and put your files in there.
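For example, from the repository root, something along these lines should work (the raw.githubusercontent.com URL is simply the raw view of the models.py link above, and your_file.txt is a placeholder for whatever you want to compress):

```bash
# Replace src/models.py with the CPU-friendly version from the noGPU branch.
wget https://raw.githubusercontent.com/mohit1997/DeepZip/noGPU/src/models.py -O src/models.py

# Create the input directory and place the file to be compressed inside it.
mkdir -p data/files_to_be_compressed
cp /path/to/your_file.txt data/files_to_be_compressed/
```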

zane-star-bot commented 4 years ago

I downloaded this version (https://github.com/mohit1997/DeepZip/blob/noGPU/src/models.py). An error occurred while executing this command: `make bash BACKEND=tensorflow DATA=/path/to/data/`.

These commands are executed normally:

```bash
cd DeepZip
python3 -m venv tf
source tf/bin/activate
bash install.sh
```

```bash
cd data
./run_parser.sh

cd src
./run_experiments.sh biLSTM
```

```
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/docker$ make bash BACKEND=tensorflow DATA=/path/to/data/
docker build -t keras --build-arg python_version=3.6  -f Dockerfile .
Sending build context to Docker daemon  8.704kB
Step 1/18 : ARG cuda_version=9.0
Step 2/18 : ARG cudnn_version=7
Step 3/18 : FROM nvidia/cuda:${cuda_version}-cudnn${cudnn_version}-devel
 ---> 5aafb863776b
Step 4/18 : RUN apt-get update && apt-get install -y --no-install-recommends       bzip2       p7zip-full       g++       git       graphviz       libgl1-mesa-glx       libhdf5-dev       openmpi-bin       time       wget &&     rm -rf /var/lib/apt/lists/*
 ---> Running in be38262e83f4
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Ign:3 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64  InRelease
Get:4 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [109 kB]
Get:5 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages [1101 kB]
Get:6 http://archive.ubuntu.com/ubuntu xenial-backports InRelease [107 kB]
Get:7 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages [1558 kB]
Get:8 http://security.ubuntu.com/ubuntu xenial-security/restricted amd64 Packages [12.7 kB]
Get:9 http://security.ubuntu.com/ubuntu xenial-security/universe amd64 Packages [624 kB]
Get:10 http://security.ubuntu.com/ubuntu xenial-security/multiverse amd64 Packages [6680 B]
Get:11 http://archive.ubuntu.com/ubuntu xenial/restricted amd64 Packages [14.1 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:13 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64  InRelease [169 B]
Err:13 https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64  InRelease
  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
Get:14 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64  Release [169 B]
Get:15 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64  Release.gpg [169 B]
Get:16 https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64  Packages [254 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:12 http://archive.ubuntu.com/ubuntu xenial/universe amd64 Packages [9827 kB]
Get:17 http://archive.ubuntu.com/ubuntu xenial/multiverse amd64 Packages [176 kB]
Get:18 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages [1470 kB]
Get:19 http://archive.ubuntu.com/ubuntu xenial-updates/restricted amd64 Packages [13.1 kB]
Get:20 http://archive.ubuntu.com/ubuntu xenial-updates/universe amd64 Packages [1029 kB]
Get:21 http://archive.ubuntu.com/ubuntu xenial-updates/multiverse amd64 Packages [19.7 kB]
Get:22 http://archive.ubuntu.com/ubuntu xenial-backports/main amd64 Packages [7942 B]
Get:23 http://archive.ubuntu.com/ubuntu xenial-backports/universe amd64 Packages [8807 B]
Fetched 9302 kB in 17min 40s (8769 B/s)
Reading package lists...
E: Failed to fetch https://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/InRelease  Clearsigned file isn't valid, got 'NOSPLIT' (does the network require authentication?)
E: Some index files failed to download. They have been ignored, or old ones used instead.
The command '/bin/sh -c apt-get update && apt-get install -y --no-install-recommends       bzip2       p7zip-full       g++       git       graphviz       libgl1-mesa-glx       libhdf5-dev       openmpi-bin       time       wget &&     rm -rf /var/lib/apt/lists/*' returned a non-zero code: 100
make: *** [Makefile:17:build] Error 100
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/docker$
```

mohit1997 commented 4 years ago

Do not use Docker for now. Can you clone the noGPU branch and then replace the contents of the install.sh file inside the DeepZip directory with:

pip install tensorflow==1.13.0rc1 -i https://mirrors.aliyun.com/pypi/simple/
pip install tqdm
pip install \
keras==2.2.4 \
argparse \
pandas \
h5py \
"numpy<1.17" \
setuptools==41.0.0 \
scipy \
scikit-learn

Then execute the commands below while you are inside DeepZip:

python3 -m venv tf
source tf/bin/activate
bash install.sh

Once you can successfully run these commands, the code should work. Let me know if you get stuck.
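If it helps, one way to confirm that the virtual environment picked up the right packages before launching anything (purely a sanity check, not part of the steps above):

```bash
# With the tf virtual environment active, confirm which interpreter is used
# and which TensorFlow/Keras/NumPy versions were installed.
which python
pip list | grep -i -E "tensorflow|keras|numpy"
```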

zane-star-bot commented 4 years ago

I executed the commands below while inside DeepZip:

```bash
python3 -m venv tf
source tf/bin/activate
bash install.sh
```

Why does the prompt say "cmp: ../data/compressed/xor20/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)"?

Is it working now? How long does it take to run?

```
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU$ cd data/
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/data$ ls
compressed  gzipped_test_data  processed_files  dat_to_np.py  logs_data  run_fasta_preprocess.sh  files_to_be_compressed  parse_new.py  run_parser.sh  final_log.csv  parse_wiki.py  trained_models
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/data$ ./run_parser.sh
filename: files_to_be_compressed/xor20.txt
xor20 files_to_be_compressed/xor20.txt 118
{'\n': 0, 'b': 1, 'a': 2}
{0: '\n', 1: 'b', 2: 'a'}
[[2] [2] [2] [2] [2] [2] [2] [2] [2] [2]]
aaaaaaaaaa
```


```
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/data$ cd ../
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU$ cd src/
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/src$ ./run_experiments.sh biLSTM
Requirement already satisfied: tqdm in /home/wangyanbo/下载/yes/lib/python3.7/site-packages (4.42.1)
../data/processed_files/xor20.npy xor20 ../data/processed_files/xor20.param.json
cmp: ../data/compressed/xor20/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)
2
cmp: ../data/compressed/xor20/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)
continuing
Starting training ...
Using TensorFlow backend.
Namespace(data='../data/processed_files/xor20.npy', gpu='0', log_file='../data/logs_data/xor20/biLSTM.log.csv', model_name='biLSTM', name='../data/trained_models/xor20/biLSTM.hdf5')
Traceback (most recent call last):
  File "trainer.py", line 90, in
    X,Y = generate_single_output_data(arguments.data,batch_size, sequence_length)
  File "trainer.py", line 64, in generate_single_output_data
    Y = onehot_encoder.transform(Y)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py", line 390, in transform
    X_int, X_mask = self._transform(X, handle_unknown=self.handle_unknown)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py", line 102, in _transform
    X_list, n_samples, n_features = self._check_X(X)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/sklearn/preprocessing/_encoders.py", line 43, in _check_X
    X_temp = check_array(X, dtype=None)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/sklearn/utils/validation.py", line 586, in check_array
    context))
ValueError: Found array with 0 sample(s) (shape=(0, 1)) while a minimum of 1 is required.
Starting Compression ...
Using TensorFlow backend.
WARNING:tensorflow:From /home/wangyanbo/下载/yes/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
Traceback (most recent call last):
  File "compressor.py", line 199, in
    main()
  File "compressor.py", line 161, in main
    predict_lstm(X, Y, Y_original, timesteps, batch_size, alphabet_size, args.model_name)
  File "compressor.py", line 66, in predict_lstm
    model.load_weights(args.model_weights_file)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/keras/engine/network.py", line 1157, in load_weights
    with h5py.File(filepath, mode='r') as f:
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/h5py/_hl/files.py", line 408, in init
    swmr=swmr)
  File "/home/wangyanbo/下载/yes/lib/python3.7/site-packages/h5py/_hl/files.py", line 173, in make_fid
    fid = h5f.open(name, flags, fapl=fapl)
  File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
  File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
  File "h5py/h5f.pyx", line 88, in h5py.h5f.open
OSError: Unable to open file (unable to open file: name = '../data/trained_models/xor20/biLSTM.hdf5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)
Command exited with non-zero status 1
Command being timed: "python compressor.py -data ../data/processed_files/xor20.npy -data_params ../data/processed_files/xor20.param.json -model ../data/trained_models/xor20/biLSTM.hdf5 -model_name biLSTM -output ../data/compressed/xor20/biLSTM.compressed -batch_size 1000"
User time (seconds): 1.66
System time (seconds): 0.37
Percent of CPU this job got: 114%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.77
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 240316
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 47060
Voluntary context switches: 20
Involuntary context switches: 38
Swaps: 0
File system inputs: 0
File system outputs: 16
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 1
Using TensorFlow backend.
Traceback (most recent call last):
  File "decompressor.py", line 181, in
    main()
  File "decompressor.py", line 150, in main
    f = open(args.input_file_prefix+'.combined','rb')
FileNotFoundError: [Errno 2] No such file or directory: '../data/compressed/xor20/biLSTM.compressed.combined'
Command exited with non-zero status 1
Command being timed: "python decompressor.py -output ../data/compressed/xor20/biLSTM.reconstructed.txt -model ../data/trained_models/xor20/biLSTM.hdf5 -model_name biLSTM -input_file_prefix ../data/compressed/xor20/biLSTM.compressed -batch_size 1000"
User time (seconds): 1.05
System time (seconds): 0.38
Percent of CPU this job got: 120%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:01.19
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 226464
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 44199
Voluntary context switches: 20
Involuntary context switches: 32
Swaps: 0
File system inputs: 0
File system outputs: 8
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 1
cmp: ../data/compressed/xor20/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)
../data/processed_files/xor30.npy xor30 ../data/processed_files/xor30.param.json
cmp: ../data/compressed/xor30/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)
2
cmp: ../data/compressed/xor30/biLSTM.reconstructed.txt: 没有那个文件或目录 (No such file or directory)
continuing
Starting training ...
Using TensorFlow backend.
Namespace(data='../data/processed_files/xor30.npy', gpu='0', log_file='../data/logs_data/xor30/biLSTM.log.csv', model_name='biLSTM', name='../data/trained_models/xor30/biLSTM.hdf5')
2
WARNING:tensorflow:From /home/wangyanbo/下载/yes/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/wangyanbo/下载/yes/lib/python3.7/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Epoch 1/20
2020-05-09 14:03:12.430945: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-05-09 14:03:12.434095: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3092735000 Hz
2020-05-09 14:03:12.434361: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55f6235e2f00 executing computations on platform Host. Devices:
2020-05-09 14:03:12.434397: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
1895680/9999872 [====>.........................] - ETA: 1:19:58 - loss: 0.0278
```

After a few hours of running, the output was as follows.

```
9022720/9999872 [==========================>...] - ETA: 10:00 - loss: 1.7198e-07
9022848/9999872 [==========================>...] - ETA: 10:00 - loss: 1.7198e-07
9022976/9999872 [==========================>...] - ETA: 9:59 - loss: 1.7198e-07
9999872/9999872 [==============================] - 6166s 617us/step - loss: 1.7198e-07
```

```
Epoch 00004: loss did not improve from 0.00000
Epoch 00004: early stopping
Starting Compression ...
Using TensorFlow backend.
WARNING:tensorflow:From /home/wangyanbo/下载/yes/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2020-05-09 20:58:10.827826: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-05-09 20:58:10.830938: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3092735000 Hz
2020-05-09 20:58:10.831226: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x5595ce0add80 executing computations on platform Host. Devices:
2020-05-09 20:58:10.831260: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
Command being timed: "python compressor.py -data ../data/processed_files/xor30.npy -data_params ../data/processed_files/xor30.param.json -model ../data/trained_models/xor30/biLSTM.hdf5 -model_name biLSTM -output ../data/compressed/xor30/biLSTM.compressed -batch_size 1000"
User time (seconds): 3022.32
System time (seconds): 1441.57
Percent of CPU this job got: 329%
Elapsed (wall clock) time (h:mm:ss or m:ss): 22:34.40
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 870612
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 8
Minor (reclaiming a frame) page faults: 49935937
Voluntary context switches: 12164801
Involuntary context switches: 5195782
Swaps: 0
File system inputs: 768
File system outputs: 8048
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Using TensorFlow backend.
WARNING:tensorflow:From /home/wangyanbo/下载/yes/lib/python3.7/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
2020-05-09 21:20:44.320667: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2020-05-09 21:20:44.323857: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3092735000 Hz
2020-05-09 21:20:44.324091: I tensorflow/compiler/xla/service/service.cc:150] XLA service 0x55aa6c9daae0 executing computations on platform Host. Devices:
2020-05-09 21:20:44.324124: I tensorflow/compiler/xla/service/service.cc:158] StreamExecutor device (0): ,
{'0': 'b', '1': 'a'}
[0 0 1 1 1 0 1 1 1 0]
Command being timed: "python decompressor.py -output ../data/compressed/xor30/biLSTM.reconstructed.txt -model ../data/trained_models/xor30/biLSTM.hdf5 -model_name biLSTM -input_file_prefix ../data/compressed/xor30/biLSTM.compressed -batch_size 1000"
User time (seconds): 2604.93
System time (seconds): 1839.16
Percent of CPU this job got: 333%
Elapsed (wall clock) time (h:mm:ss or m:ss): 22:13.06
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 656676
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 2
Minor (reclaiming a frame) page faults: 57928112
Voluntary context switches: 12174593
Involuntary context switches: 4876835
Swaps: 0
File system inputs: 264
File system outputs: 27552
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
cmp: ../data/files_to_be_compressed/xor30.txt: 没有那个文件或目录 (No such file or directory)
(base) wangyanbo@wangyanbo-virtual-machine:~/下载/DeepZip-noGPU/src$
```

mohit1997 commented 4 years ago

Were you able to obtain the compressed file? Update: I read your log, and it seems the code is working. The file /data/compressed/xor30/biLSTM.reconstructed.txt is your compressed output. Please feel free to ask any questions you have over here; I am afraid it would not be possible to do this over TeamViewer.

Also, make sure the file you are trying to compress has at least around 200 characters; otherwise, the compressor is not able to create batches out of the data.
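For anyone following along, a quick way to sanity-check a finished run (the xor30 names follow the log above, and the cmp call mirrors the comparison run_experiments.sh appears to perform at the end, judging by the log):

```bash
# Check that the input is long enough to form batches (a few hundred characters or more).
wc -c data/files_to_be_compressed/xor30.txt

# Compare the sizes of the original file, the compressed stream, and the reconstruction.
ls -l data/files_to_be_compressed/xor30.txt \
      data/compressed/xor30/biLSTM.compressed* \
      data/compressed/xor30/biLSTM.reconstructed.txt

# Verify the compression was lossless: cmp prints nothing when the files are identical.
cmp data/files_to_be_compressed/xor30.txt data/compressed/xor30/biLSTM.reconstructed.txt
```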