FrankWork / fudan_mtl_reviews

TensorFlow implementation of the paper `Adversarial Multi-task Learning for Text Classification`

UnrecognizedFlagError!! #1

Open yzho0907 opened 6 years ago

yzho0907 commented 6 years ago

Traceback (most recent call last):
  File "src/main.py", line 7, in <module>
    from inputs import util
  File "/home/young/Downloads/fudan_mtl_reviews-master/src/inputs/util.py", line 50, in <module>
    def write_vocab(vocab, vocab_file=FLAGS.vocab_file):
  File "/home/young/Downloads/ENTER/envs/deeplearning/lib/python3.5/site-packages/tensorflow/python/platform/flags.py", line 84, in __getattr__
    wrapped(_sys.argv)
  File "/home/young/Downloads/ENTER/envs/deeplearning/lib/python3.5/site-packages/absl/flags/_flagvalues.py", line 630, in __call__
    name, value, suggestions=suggestions)
absl.flags._exceptions.UnrecognizedFlagError: Unknown command line flag 'build_data'

Any idea how to fix it?

wangzhihuia commented 6 years ago

python src/main.py build_data

yzho0907 commented 6 years ago

Thanks, but now it complains that embed300.trim.npy is missing?

wangzhihuia commented 6 years ago

I am also trying to find a solution to this problem...

yzho0907 commented 6 years ago

Is embed300.trim.npy a word2vec embedding file?

FrankWork commented 6 years ago

embed300.trim.npy is trimmed from the Google News word2vec embeddings. The original file is too large, so I didn't upload it.
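
For reference, a minimal sketch of how such a trimmed embedding matrix could be produced with gensim (the vocabulary file name, output path, and OOV initialization here are illustrative assumptions, not the repo's exact preprocessing):

```python
# Sketch: build a trimmed embedding matrix from the GoogleNews word2vec binary.
import numpy as np
from gensim.models import KeyedVectors

w2v = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True)

# One token per line; the repo's actual vocab file and ordering may differ.
with open("vocab.txt") as f:
    vocab = [line.strip() for line in f]

dim = 300
embed = np.zeros((len(vocab), dim), dtype=np.float32)
for i, word in enumerate(vocab):
    if word in w2v:
        embed[i] = w2v[word]                          # copy the pretrained vector
    else:
        embed[i] = np.random.uniform(-0.1, 0.1, dim)  # random init for OOV words

np.save("embed300.trim.npy", embed)
```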

FrankWork commented 6 years ago

You can use --word_dim=50 to use the pre-trained 50-dimensional SENNA embeddings.

yzho0907 commented 6 years ago

Thank you very much. I would like to try switching to a Chinese dataset; a lot of work to do!

xiaoleihuang commented 6 years ago

@wangzhihuia @FrankWork I tried both of the commands below:

  1. python src/main.py build_data --word_dim=300;
  2. python src/main.py --word_dim=300 --build_data;

I still got the following error: absl.flags._exceptions.UnrecognizedFlagError: Unknown command line flag 'word_dim'

Could you help me? Thank you!

Solved. I changed the settings in the main() code:

  1. Change all 300 dims into 50.
  2. Create the missing directories: `saved_models` in the project root folder; in the `data` folder, create `generated`; in the `saved_models` folder, create `fudan-mtl-adv`.

linxiaoby commented 6 years ago

I removed all the flags.DEFINE calls in util.py, and it works. Maybe it is because flags should only be defined in one file?
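
For reference, the traceback in the first post shows FLAGS being read at import time, inside a default argument, before all flags are defined and argv is parsed. A minimal sketch of a pattern that avoids this (the flag name vocab_file comes from the traceback; the default path and function body are illustrative):

```python
# util.py (sketch): keep the flag definition at module level,
# but only read FLAGS inside the function, after the flags are parsed.
import tensorflow as tf

flags = tf.app.flags
FLAGS = flags.FLAGS
flags.DEFINE_string("vocab_file", "data/vocab.txt", "path to the vocabulary file")

def write_vocab(vocab, vocab_file=None):
  # The original default argument `vocab_file=FLAGS.vocab_file` forces a flag
  # lookup (and argv parsing) at import time, which appears to be what triggers
  # the UnrecognizedFlagError; resolving the flag lazily here avoids that.
  if vocab_file is None:
    vocab_file = FLAGS.vocab_file
  with open(vocab_file, "w") as f:
    for token in vocab:
      f.write(token + "\n")
```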

p-null commented 5 years ago

Hi, I tried every method above and still get the Unknown command line flag error. Can someone provide a step-by-step procedure to run this program?

P.S. I have also tried exactly what the author said in the README, but it's not working.

yzho0907 commented 5 years ago

Please post the full error and traceback.

chenting0324 commented 5 years ago

Hi, how can I use a Chinese dataset? How do I build the file "embed300.trim.npy"?

yzho0907 commented 5 years ago

A .npy file is just a NumPy array file. Once you have trained a 300-dimensional word embedding, you can save it as a .npy file with np.save(); I would suggest using gensim for the training itself. For a Chinese dataset, I highly suggest you follow the author's training data format and then run the code. There is also another repo, https://github.com/andyweizhao/capsule_text_classification, which achieved a higher score.
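
For reference, a minimal sketch of that workflow (assuming gensim >= 4 and a pre-tokenized corpus in corpus.txt, one sentence per line with tokens separated by spaces; the file names are illustrative):

```python
# Sketch: train 300-d word vectors on a (Chinese) corpus and save them as .npy.
import numpy as np
from gensim.models import Word2Vec

sentences = [line.split() for line in open("corpus.txt", encoding="utf-8")]
model = Word2Vec(sentences, vector_size=300, window=5, min_count=2, workers=4)

vocab = list(model.wv.index_to_key)                  # vocabulary, most frequent first
embed = np.stack([model.wv[w] for w in vocab]).astype(np.float32)

np.save("embed300.trim.npy", embed)                  # embedding matrix, one row per word
with open("vocab.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(vocab) + "\n")
```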