karpathy / neuraltalk2

Efficient Image Captioning code in Torch, runs on GPU

prepro.py crashes on Unicode captions #49

Open gwern opened 8 years ago

gwern commented 8 years ago

While expanding my tag/image dataset from Danbooru further, the preprocessing step began to crash with this error:

Traceback (most recent call last):
  File "prepro.py", line 241, in <module>
    main(params)
  File "prepro.py", line 162, in main
    prepro_captions(imgs)
  File "prepro.py", line 43, in prepro_captions
    txt = str(s).lower().translate(None, string.punctuation).strip().split()
UnicodeEncodeError: 'ascii' codec can't encode character u'\xd7' in position 21: ordinal not in range(128)

While no useful information is printed about which tag/JSON entry caused the problem, my guess is that one of the tags has some Unicode in it (probably a Japanese word or emoji) and neuraltalk2/prepro.py, like char-rnn, makes an ASCII-only assumption.
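For illustration, the implicit ASCII encode inside str() is enough to reproduce the crash on its own (Python 2; the tag value here is a made-up example):

# Python 2: str() on a unicode object implicitly encodes with the
# 'ascii' codec, which fails on any non-ASCII character in a tag.
s = u'1920\xd71080 wallpaper'  # hypothetical tag containing a multiplication sign
txt = str(s)                   # raises UnicodeEncodeError: 'ascii' codec can't encode character u'\xd7' ...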

Using the first suggestion I found on StackOverflow, I tried tossing in some sort of iconv-like conversion step which renders Unicode in a longer ASCII form (I think that's what it does, anyway):

@@ -34,13 +34,13 @@ import numpy as np
 from scipy.misc import imread, imresize

 def prepro_captions(imgs):
-  
+
   # preprocess all the captions
   print 'example processed tokens:'
   for i,img in enumerate(imgs):
     img['processed_tokens'] = []
     for j,s in enumerate(img['captions']):
-      txt = str(s).lower().translate(None, string.punctuation).strip().split()
+      txt = s.encode('ascii', errors='backslashreplace').lower().translate(None, string.punctuation).strip().split()
       img['processed_tokens'].append(txt)
       if i < 10 and j == 0: print txt

Seems to work. Maybe some version of that could be added?
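For reference, here is roughly what that encoding step does, as a standalone Python 2 sketch (using the positional errors argument):

# Sketch of the conversion: non-ASCII characters are rewritten as backslash
# escape sequences, so the result is plain ASCII and nothing downstream crashes.
s = u'1920\xd71080'
print s.encode('ascii', 'backslashreplace')   # prints: 1920\xd71080

One caveat worth noting: the later translate(None, string.punctuation) call also deletes the backslash itself, so the escape collapses to something like 1920xd71080; the crash is gone, but rare non-ASCII tokens come out mangled.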

karpathy commented 8 years ago

ahhh, unicode hassles... thanks for reporting this. I'd rather find which part of the code assumes ASCII and fix it to work with unicode. I'm on vacation right now, but when I get back I can try to hunt around a bit. The problem we had with this strategy (and the thing to watch out for) in char-rnn is that unicode support made the code much slower and the memory footprint much larger. It's not clear if we'd run into the same problems here.
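For reference, a unicode-native version of that tokenizer line might look something like this (a hypothetical sketch, not code from the repo; in Python 2, unicode.translate() takes an {ordinal: None} mapping rather than the (table, deletechars) pair that str.translate() takes):

# -*- coding: utf-8 -*-
import string

# Hypothetical unicode-safe tokenizer (Python 2): strip punctuation via a
# {codepoint: None} table instead of encoding to ASCII first.
PUNCT_TABLE = {ord(c): None for c in string.punctuation}

def tokenize_unicode(s):
    if not isinstance(s, unicode):
        s = s.decode('utf-8')
    return s.lower().translate(PUNCT_TABLE).strip().split()

print tokenize_unicode(u'1920\xd71080 wallpaper, highres!')
# [u'1920\xd71080', u'wallpaper', u'highres']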

gwern commented 8 years ago

I would guess not, because isn't that the point of parsing the captions into individual words and then creating a word->index mapping vocabulary? Actually, I've been wondering for a while: did you ever try putting a char-rnn rather than a word-level RNN on top of neuraltalk2, or is COCO not big enough to train that too?
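i.e., something like this (a simplified sketch of the word -> index vocabulary, not the actual prepro.py code):

from collections import Counter

# Build a word -> index vocabulary from the tokenized captions; the model
# itself only ever sees the integer indices, never the raw (unicode) text.
def build_vocab(tokenized_captions):
    counts = Counter(w for toks in tokenized_captions for w in toks)
    itow = {i + 1: w for i, (w, _) in enumerate(counts.most_common())}
    wtoi = {w: i for i, w in itow.items()}
    return wtoi, itow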

karpathy commented 8 years ago

Good point: neuraltalk2 works mostly at the level of indices, and the main script isn't really aware of the mappings. I also wrote the code specifically so that character-level extensions would be easy in the future. I haven't actually tried this for lack of time, but in principle it should require only a very small change in the prepro Python file: instead of splitting sentences into tokens by spaces, you'd split by each character. The only thing to worry about is that max_seq_length defaults to 16, and in this case you'd want it to be quite a bit higher. But in principle there's nothing preventing it, and the train script should be indifferent: all it sees are images and sequences of integers.
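For concreteness, a minimal sketch of that prepro change (hypothetical names, not code from the repo):

# Character-level variant of the caption preprocessing: one token per
# character (spaces included) instead of one token per whitespace-split word.
def prepro_captions_charlevel(imgs):
    for img in imgs:
        img['processed_tokens'] = []
        for s in img['captions']:
            txt = list(s.lower().strip())
            img['processed_tokens'].append(txt)

With COCO-length captions this makes each sequence several times longer, which is why max_seq_length would need to go well above 16.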