Open JafferWilson opened 7 years ago
I increased the RAM to 480 GB, but pre-processing still ends with "process killed". Is it possible for you to make the pre-processed data available in the repository?
Can you please answer my queries? It would certainly help. Waiting for your reply.
I confirm the issue. @JafferWilson did you find a way to make it run?
@fievelk Yes, the way it is shown in the README file. That is exactly how I ran the code.
@JafferWilson Sorry, I did not formulate my question correctly. Running the code using the instructions in the README still produces these memory issues and the process gets killed. Did you manage to fix the problem somehow?
@fievelk Well, no... I do not understand why the process takes so much memory. I described my experiments in the issue and am still empty-handed.
@JafferWilson Please use the following code instead of the one given here for the function load_bin_vec(fname, vocab). This should resolve the issue.
    def load_bin_vec(fname, vocab):
        """
        Loads 300x1 word vecs from Google (Mikolov) word2vec
        """
        word_vecs = {}
        with open(fname, "rb") as f:
            header = f.readline()
            vocab_size, layer1_size = map(int, header.split())
            binary_len = np.dtype(theano.config.floatX).itemsize * layer1_size
            for line in xrange(vocab_size):
                word = []
                while True:  # read one word, byte by byte, up to the separating space
                    ch = f.read(1)
                    if ch == ' ':
                        word = ''.join(word)
                        break
                    if ch != '\n':
                        word.append(ch)
                if tuple(word) in vocab:
                    word_vecs[tuple(word)] = np.fromstring(f.read(binary_len), dtype=theano.config.floatX)
                else:
                    f.read(binary_len)
        return word_vecs
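For context on why this helps: much of the saving comes from reading the vectors as float32 (Theano's default floatX) instead of float64. A rough back-of-the-envelope sketch, assuming the published GoogleNews-vectors figures (3M words, 300 dimensions; these numbers are not measured from this repository):

```python
import numpy as np

# Approximate GoogleNews-vectors-negative300 dimensions (published figures).
vocab_size, dim = 3000000, 300

# Memory needed just to hold every vector at each precision.
bytes_f64 = vocab_size * dim * np.dtype(np.float64).itemsize
bytes_f32 = vocab_size * dim * np.dtype(np.float32).itemsize

print("float64: %.1f GiB" % (bytes_f64 / 2.0 ** 30))  # ~6.7 GiB
print("float32: %.1f GiB" % (bytes_f32 / 2.0 ** 30))  # ~3.4 GiB
```

Halving the per-element size halves the resident footprint of the embedding table, which is often the difference between finishing and getting killed by the OOM killer.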
Dear @JafferWilson, were you able to solve the problem using this code?
@naikzinal Sure, I will. I just have some other problems to solve first. As soon as I am free, I will try it.
" name 'load_bin_vec' is not defined" i found that error after changing code can you please help me thank you
Attached is the entire file with the changed code. Please use this and check again; you must have made a naming error.
Dear @chaisme, I solved my naming error, but I still have a memory issue, and I did not receive any attachment from you. If you were able to run the code, could you please send me your process_data.py file? Also, what system requirements are needed to run this code? Thank you.
Here is the file, attached in txt format. Please convert it to a Python script. No new system requirements are needed beyond those already mentioned in the README. process_data.txt
Dear @chaisme, thank you for the reply. I will try it as soon as I can. Here is my email ID, naikzinal69@gmail.com; you can mail me at that address. Thank you.
@naikzinal Why do you want it by email when you can always download it from here? Or you can download it now and re-upload it on your side.
@naikzinal @JafferWilson I have uploaded the txt file in the above comment. Use it as a Python script.
Dear @JafferWilson, I actually changed the code but still had a memory issue; that is why I asked for the file. Now I can run my code.
It initially showed "process killed", but it ran perfectly using the code from @chaisme. Thank you very much.
Hi there,
I am trying to run this app and I seem to get stuck at the training phase:
python conv_net_train.py -static -word2vec 2
loading data... data loaded!
model architecture: CNN-static
using: word2vec vectors
[('image shape', 153, 300), ('filter shape', [(200, 1, 1, 300), (200, 1, 2, 300), (200, 1, 3, 300)]), ('hidden_units', [200, 200, 2]), ('dropout', [0.5, 0.5, 0.5]), ('batch_size', 50), ('non_static', False), ('learn_decay', 0.95), ('conv_non_linear', 'relu'), ('non_static', False), ('sqr_norm_lim', 9), ('shuffle_batch', True)]
... training
When I interrupt the kernel I get:
Traceback (most recent call last):
File "conv_net_train.py", line 476, in <module>
activations=[Sigmoid])
File "conv_net_train.py", line 221, in train_conv_net
cost_epoch = train_model(minibatch_index)
File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/compile/function_module.py", line 903, in __call__
self.fn() if output_subset is None else\
File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 963, in rval
r = p(n, [x[0] for x in i], o)
File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 952, in p
self, node)
File "theano/scan_module/scan_perform.pyx", line 397, in theano.scan_module.scan_perform.perform (/Users/jennan/.theano/compiledir_Darwin-16.7.0-x86_64-i386-64bit-i386-2.7.15-64/scan_perform/mod.cpp:4490)
File "/anaconda3/envs/py27/lib/python2.7/site-packages/theano/scan_module/scan_op.py", line 961, in rval
def rval(p=p, i=node_input_storage, o=node_output_storage, n=node,
KeyboardInterrupt
Any help would be greatly appreciated!!
File "conv_net_train.py", line 147, in train_conv_net train_set_x = datasets[0][rand_perm] MemoryError
Can someone please help? I need a solution as soon as possible.
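One reason this line runs out of memory: fancy indexing like datasets[0][rand_perm] materializes a second full copy of the training matrix. A hedged workaround, not tested against this repository, is to permute only the index array and slice out one minibatch at a time (the array shapes below are toy stand-ins for illustration):

```python
import numpy as np

# Toy stand-in for datasets[0]; the real array is far larger.
data = np.arange(20, dtype=np.float32).reshape(10, 2)
batch_size = 4

# Instead of data[rand_perm] (which copies the whole array at once),
# shuffle only the indices and copy one minibatch per step.
rng = np.random.RandomState(0)  # arbitrary seed for reproducibility
perm = rng.permutation(len(data))

batches = []
for start in range(0, len(data), batch_size):
    batch = data[perm[start:start + batch_size]]  # only this slice is copied
    batches.append(batch)
# Every row is still visited exactly once, just in shuffled order.
```

Peak extra memory drops from the size of the whole training set to the size of a single minibatch, at the cost of many small copies instead of one big one.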
Same here. Any recommendation regarding this would be highly appreciated.
Hello, I am trying to run your repository. I tried systems with 16 GB, 32 GB, 40 GB, and 120 GB of RAM. I do not understand why pre-processing takes so much memory. Only at 120 GB did I first come across an actual MemoryError; every other time, the process simply got killed.
Kindly let me know what configuration you used to run this process, and please add your system details so they are available to anyone running the repository.