It later turned out that there was whitespace in the folder name, which caused the problem.
-Xinlei
On Sat, Sep 24, 2016 at 9:14 AM, tuyunbin notifications@github.com wrote:
Hello. Two days ago I downloaded your code. BLEU computes normally, but when computing METEOR I hit the following error:

```
computing METEOR score...
Traceback (most recent call last):
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/metrics.py", line 203, in <module>
    test_cocoeval()
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/metrics.py", line 199, in test_cocoeval
    valid_score, test_score = score_with_cocoeval(samples_valid, samples_test, engine)
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/metrics.py", line 92, in score_with_cocoeval
    valid_score = scorer.score(gts_valid, samples_valid, engine.valid_ids)
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/cocoeval.py", line 42, in score
    score, scores = scorer.compute_score(gts, res)
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/pycocoevalcap/meteor/meteor.py", line 37, in compute_score
    stat = self._stat(res[i][0], gts[i])
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/pycocoevalcap/meteor/meteor.py", line 55, in _stat
    self.meteor_p.stdin.write('{}\n'.format(score_line))
IOError: [Errno 32] Broken pipe
```

I followed the README and added that line at line 45, but the result is still the same, and now I am very anxious.
https://github.com/tylin/coco-caption/issues/17
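For anyone hitting this later: whitespace in the repository path breaks the METEOR launch because the command list is flattened into a single shell string, and the unquoted space splits the jar path into two tokens. Java then exits immediately, and every later `stdin.write()` raises EPIPE. A minimal sketch (the jar path below is hypothetical, echoing the `vedio caption` folder above):

```python
import shlex

# Hypothetical jar path containing a space, like the "vedio caption" folder.
jar_path = "/home/user/vedio caption/pycocoevalcap/meteor/meteor-1.5.jar"
cmd = ["java", "-jar", "-Xmx2G", jar_path, "-", "-", "-stdio"]

# When the list is flattened into one shell string, the unquoted space
# splits the path into two tokens, so java cannot find the jar.
shell_string = " ".join(cmd)
tokens = shlex.split(shell_string)
print(len(cmd), len(tokens))  # the flattened string gains an extra token
```

Renaming the folder to remove the space (or quoting the path) avoids the split.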
I don't have a space in my path, but the problem still happens...
```
{'reflen': 42989, 'guess': [43141, 39089, 35037, 30985], 'testlen': 43141, 'correct': [24003, 7902, 2445, 759]}
ratio: 1.00353578822
Traceback (most recent call last):
  File "train_new.py", line 59, in <module>
    main()
  File "train_new.py", line 37, in main
    solver.train()
  File "/home/thisray/Code/lab7/Lab7_run_HERE/core/solver.py", line 172, in train
    scores = evaluate(data_path='./data', split='val', get_scores=True)
  File "/home/thisray/Code/lab7/Lab7_run_HERE/core/bleu.py", line 55, in evaluate
    final_scores = score(ref, hypo)
  File "/home/thisray/Code/lab7/Lab7_run_HERE/core/bleu.py", line 20, in score
    score,scores = scorer.compute_score(ref,hypo)
  File "../coco_caption/pycocoevalcap/meteor/meteor.py", line 38, in compute_score
    stat = self._stat(res[i][0], gts[i])
  File "../coco_caption/pycocoevalcap/meteor/meteor.py", line 57, in _stat
    self.meteor_p.stdin.write('{}\n'.format(score_line))
IOError: [Errno 32] Broken pipe
```
And if I don't add `shell=True` in `subprocess.Popen()`, it shows a different error.
```python
self.meteor_p = subprocess.Popen(self.meteor_cmd,
                                 cwd=os.path.dirname(os.path.abspath(__file__)),
                                 stdin=subprocess.PIPE,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE,
                                 shell=True)
```
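A side note on that snippet: on POSIX, when `shell=True` is combined with a *list* argument, `Popen` executes only the first element as the shell command and silently discards the rest, which is one reason the two variants behave so differently. A small POSIX-only demonstration using `echo` as a stand-in:

```python
import subprocess

# With shell=True and a *list*, POSIX Popen runs only the first element
# as the shell command string; the remaining items become shell
# positional parameters ($0, $1, ...) and never reach the program.
out = subprocess.run(["echo first", "ignored"], shell=True,
                     capture_output=True, text=True).stdout.strip()
print(out)  # "first" -- the second list item is silently dropped

# Without shell=True, each list element is passed to the program
# directly, which is why dropping shell=True (or joining the list
# into one string) are both viable fixes for the METEOR launch.
out2 = subprocess.run(["echo", "first", "kept"],
                      capture_output=True, text=True).stdout.strip()
print(out2)  # "first kept"
```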
Can you provide a small sample of the data you used for evaluation?
Ender X. Chen
Language Technologies Institute, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA
On Fri, Apr 21, 2017 at 10:45 AM, thisray notifications@github.com wrote:
`ref` and `hypo` in `final_scores = score(ref, hypo)` are both dicts:

```python
ref: {0: [u'a shoe rack with some shoes and a dog sleeping on them .',
          u'a small dog is curled up on top of the shoes .',
          u'various slides and other footwear rest in a metal basket outdoors .',
          u'a dog sleeping on a show rack in the shoes .',
          u'this wire metal rack holds several pairs of shoes and sandals .'],
      1: [...], ... }

hypo: {0: [u'a large white plate with a white plate with a white plate .'],
       1: [u'a clock tower with a clock on a table .'],
       2: [u'a man is standing in a skateboard in a city .'], ... }
```
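For reference, here is a small validator sketch that mirrors the assertions `compute_score` makes about this dict shape (`check_eval_inputs` is a hypothetical helper, not part of pycocoevalcap):

```python
def check_eval_inputs(ref, hypo):
    """Sanity-check the dict shape Meteor.compute_score expects:
    both dicts share the same keys, every value is a non-empty list of
    strings, and each hypothesis list holds exactly one candidate."""
    assert set(ref.keys()) == set(hypo.keys()), "key sets must match"
    for k in ref:
        assert isinstance(ref[k], list) and ref[k], "refs: non-empty list"
        assert len(hypo[k]) == 1, "exactly one hypothesis per image id"
        assert all(isinstance(s, str) for s in ref[k] + hypo[k])
    return True

ref = {0: [u'a small dog is curled up on top of the shoes .',
           u'a dog sleeping on a shoe rack .']}
hypo = {0: [u'a large white plate with a white plate .']}
print(check_eval_inputs(ref, hypo))  # True
```

If the shapes are wrong, the scorer can die before the broken pipe ever appears, so checking the inputs first narrows down the cause.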
I uploaded them here, thanks!
Wait, are these COCO captions? What score function are you using? Are you using COCOEval?
Sorry, I didn't explain clearly. The `final_scores = score(ref, hypo)` call is from here.
The score function (`/show-attend-and-tell/core/bleu.py`):

```python
sys.path.append('../coco_caption')
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider
from pycocoevalcap.meteor.meteor import Meteor

def score(ref, hypo):
    scorers = [
        (Bleu(4), ["Bleu_1", "Bleu_2", "Bleu_3", "Bleu_4"]),
        (Meteor(), "METEOR"),
        (Rouge(), "ROUGE_L"),
        (Cider(), "CIDEr")
    ]
    final_scores = {}
    for scorer, method in scorers:
        score, scores = scorer.compute_score(ref, hypo)  # problem here
        if type(score) == list:
            for m, s in zip(method, score):
                final_scores[m] = s
        else:
            final_scores[method] = score
    return final_scores
```
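To illustrate how that loop merges results, here is a dry run with stub scorers. The stub classes and their numbers are made up; they only stand in for Bleu's list-valued result and METEOR's scalar result:

```python
# Stub scorers, NOT pycocoevalcap: they mimic the two return shapes.
class StubBleu:
    def compute_score(self, ref, hypo):
        return [0.7, 0.5, 0.3, 0.2], None  # one score per n-gram order

class StubMeteor:
    def compute_score(self, ref, hypo):
        return 0.25, None                  # single corpus-level score

scorers = [(StubBleu(), ["Bleu_1", "Bleu_2", "Bleu_3", "Bleu_4"]),
           (StubMeteor(), "METEOR")]
final_scores = {}
for scorer, method in scorers:
    score, _ = scorer.compute_score({}, {})
    if isinstance(score, list):
        final_scores.update(zip(method, score))  # spread list over names
    else:
        final_scores[method] = score             # single named score
print(final_scores)
```

The list branch is why Bleu is paired with four metric names while the others get a single string.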
And in `meteor.py` (`/coco_caption/pycocoevalcap/meteor/meteor.py`):

```python
def compute_score(self, gts, res):
    assert(gts.keys() == res.keys())
    imgIds = gts.keys()
    scores = []

    eval_line = 'EVAL'
    self.lock.acquire()
    for i in imgIds:
        assert(len(res[i]) == 1)
        stat = self._stat(res[i][0], gts[i])  # problem here
        eval_line += ' ||| {}'.format(stat)

    self.meteor_p.stdin.write('{}\n'.format(eval_line))
    for i in range(0, len(imgIds)):
        scores.append(float(self.meteor_p.stdout.readline().strip()))
    score = float(self.meteor_p.stdout.readline().strip())
    self.lock.release()

    return score, scores

def _stat(self, hypothesis_str, reference_list):
    # SCORE ||| reference 1 words ||| reference n words ||| hypothesis words
    hypothesis_str = hypothesis_str.replace('|||', '').replace('  ', ' ')
    score_line = ' ||| '.join(('SCORE', ' ||| '.join(reference_list), hypothesis_str))
    self.meteor_p.stdin.write('{}\n'.format(score_line))  # problem here
    return self.meteor_p.stdout.readline().strip()
```
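For context, `_stat` speaks METEOR's line-oriented stdio protocol. A standalone sketch of how the `SCORE` line is assembled (a copy of the logic above, runnable without the Java process; the sentences are made up):

```python
def build_score_line(hypothesis_str, reference_list):
    # Wire format: SCORE ||| ref1 ||| ... ||| refN ||| hypothesis
    # Stripping '|||' from the hypothesis first keeps the field count
    # intact, since '|||' is the protocol's field separator.
    hypothesis_str = hypothesis_str.replace('|||', '').replace('  ', ' ')
    return ' ||| '.join(('SCORE', ' ||| '.join(reference_list),
                         hypothesis_str))

line = build_score_line('a dog sleeps .',
                        ['a small dog is curled up .', 'a dog sleeping .'])
print(line)
# SCORE ||| a small dog is curled up . ||| a dog sleeping . ||| a dog sleeps .
```

The broken pipe means the Java process on the other end of `stdin` has already exited; the line format itself is usually fine.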
The `IOError: [Errno 32] Broken pipe` problem happens in `meteor.py`, so I asked my question here. If this is my misunderstanding, I'm very sorry to have wasted your time.
Thanks!
You might want to compare the code here and the code there. You may also want to replace the jar file to debug.
It might be a Java problem. Try this:
```python
def __init__(self):
    self.meteor_cmd = ['java', '-jar', '-Xmx2G', METEOR_JAR,
                       '-', '-', '-stdio', '-l', 'en', '-norm']
    # change this line: join the command list into a single string
    self.meteor_p = subprocess.Popen(' '.join(self.meteor_cmd),
                                     cwd=os.path.dirname(os.path.abspath(__file__)),
                                     stdin=subprocess.PIPE,
                                     stdout=subprocess.PIPE,
                                     stderr=subprocess.PIPE,
                                     shell=True)
    # Used to guarantee thread safety
    self.lock = threading.Lock()
```
Yes, I tried some Java-related commands (I forget whether they were about installation) and it works now. Thanks a lot!
@BruceJing2017 I tried doing what you said, but I still get a broken pipe error at this line: `self.meteor_p.stdin.write('{}\n'.format(score_line))`
I had exactly the same problem and finally found the reason. In my case it was simply because I call `meteor.compute_score` twice: after the first run, `meteor_p` has been killed, so the second run fails. To solve this I just deleted the line `self.meteor_p.kill()`, which works for me. Alternatively, you can delete the METEOR scorer and initialize a new one the second time.
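A defensive variant of the same idea is to check whether the worker process is still alive before writing, and respawn it if an earlier run killed it. This is a sketch, not the actual `Meteor` class: it uses `cat` as a stand-in for the METEOR jar (it echoes each line back), and `spawn_worker` is a hypothetical name for the `Popen` call in `__init__`:

```python
import subprocess

def spawn_worker():
    # Stand-in for the METEOR jar: cat echoes every input line back.
    return subprocess.Popen(["cat"], stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

class ReusableScorer:
    def __init__(self):
        self.proc = spawn_worker()

    def score_line(self, line):
        if self.proc.poll() is not None:  # worker already exited
            self.proc = spawn_worker()    # respawn instead of hitting EPIPE
        self.proc.stdin.write(line + "\n")
        self.proc.stdin.flush()
        return self.proc.stdout.readline().strip()

s = ReusableScorer()
print(s.score_line("hello"))  # "hello" echoed back by cat

# Simulate the first compute_score() run killing the worker:
s.proc.kill()
s.proc.wait()
print(s.score_line("again"))  # works: the dead worker was respawned
```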
I also have this problem; can anyone help me? Thanks very much.
```
  File "train.py", line 179, in train
    score = eval(epoch)
  File "train.py", line 245, in eval
    result = utils.eval_meteor(reference, candidate)
  File "/home/wangqianlong/model/WEAN-master/data/utils.py", line 99, in eval_meteor
    score, scores = Meteor().compute_score(ref, hypo)
  File "/home/wangqianlong/model/WEAN-master/data/script/meteor.py", line 38, in compute_score
    stat = self._stat(res[i][0], gts[i])
  File "/home/wangqianlong/model/WEAN-master/data/script/meteor.py", line 59, in _stat
    self.meteor_p.stdin.flush()
BrokenPipeError: [Errno 32] Broken pipe
```
I have also encountered this problem; my solution is as follows: remove all the temp files under `pycocoevalcap/tokenizer`. They look something like this:

```
pycocoevalcap/tokenizer/tmpB9Jv0J
pycocoevalcap/tokenizer/tmpoo7WGG
```
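A small sweep can do that cleanup automatically. `clean_tokenizer_tmp` is a hypothetical helper, demonstrated here against a throwaway directory rather than the real `pycocoevalcap/tokenizer`:

```python
import glob
import os
import tempfile

def clean_tokenizer_tmp(tokenizer_dir):
    # The PTB tokenizer writes tmpXXXXXX files next to its script;
    # stale ones left by a crashed run are safe to delete.
    removed = []
    for path in glob.glob(os.path.join(tokenizer_dir, "tmp*")):
        os.remove(path)
        removed.append(path)
    return removed

# Demo against a throwaway directory.
d = tempfile.mkdtemp()
for name in ("tmpB9Jv0J", "tmpoo7WGG"):
    open(os.path.join(d, name), "w").close()
removed = clean_tokenizer_tmp(d)
print(len(removed))  # 2
```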
I've met the same problem. I installed Java, removed the temp files, and changed the code as mentioned above, but I still can't run a simple demo.
The code mainly comes from the example in this repo:

```python
from pycocotools.coco import COCO
from pycocoevalcap.eval import COCOEvalCap

dataDir = '.'
dataType = 'val2014'
algName = 'fakecap'
annFile = '%s/annotations/captions_%s.json' % (dataDir, dataType)
subtypes = ['results', 'evalImgs', 'eval']
[resFile, evalImgsFile, evalFile] = \
    ['%s/results/captions%s%s_%s.json' % (dataDir, dataType, algName, subtype)
     for subtype in subtypes]

coco = COCO(annFile)
cocoRes = coco.loadRes(resFile)
cocoEval = COCOEvalCap(coco, cocoRes)
cocoEval.params['image_id'] = cocoRes.getImgIds()
cocoEval.evaluate()
for metric, score in cocoEval.eval.items():
    print('%s: %.3f' % (metric, score))
```
and it always fails with this error:

```
Traceback (most recent call last):
  File "cap_eval.py", line 20, in <module>
    cocoEval.evaluate()
  File "/home/Up-Down-Captioner/external/coco-caption/pycocoevalcap/eval.py", line 52, in evaluate
    score, scores = scorer.compute_score(gts, res)
  File "/home/Up-Down-Captioner/external/coco-caption/pycocoevalcap/meteor/meteor.py", line 39, in compute_score
    stat = self._stat(res[i][0], gts[i])
  File "/home/Up-Down-Captioner/external/coco-caption/pycocoevalcap/meteor/meteor.py", line 58, in _stat
    self.meteor_p.stdin.write('{}\n'.format(score_line))
IOError: [Errno 32] Broken pipe
```
I had the same problem before as well (I was using the Python 3 version from this repo). My fix was adding the data file to the Java command. Hope this helps.

```python
self.meteor_cmd = ['java', '-jar', '-Xmx2G', METEOR_JAR, '-', '-', '-stdio',
                   '-l', 'en', '-norm', '-a', 'data/paraphrase-en.gz']
```
I encountered the same bug, and deleting the parameter `shell=True` in `self.meteor_p = subprocess.Popen(...)` fixed it!
I use https://github.com/salaniz/pycocoevalcap, a Python 3 version of the coco-caption evaluation code.
I've met the same error; my solution:

1. First, install the Java environment:
   ```
   sudo apt-get install default-jre
   sudo apt-get install default-jdk
   ```
2. Then change `['java', '-jar', '-Xmx2G', METEOR_JAR, '-', '-', '-stdio', '-l', 'en', '-norm', '-a', 'data/paraphrase-en.gz']` into `['/usr/bin/java', '-jar', '-Xmx2G', METEOR_JAR, '-', '-', '-stdio', '-l', 'en', '-norm', '-a', 'data/paraphrase-en.gz']`, where `/usr/bin/java` is the Java binary installed in the first step.
3. Finally, delete the parameter `shell=True`.

This fixed the error!
@ltp1995 Thanks for sharing your solution! It helps me a lot!!!
Changing the second line of `__init__` to `' '.join(self.meteor_cmd)` (the `# change this line` edit above) might be essential to this problem.
Another way is to download paraphrase-en.gz from https://github.com/salaniz/pycocoevalcap/blob/ad63453cfab57a81a02b2949b17a91fab1c3df77/meteor/data/paraphrase-en.gz and change the command as @dahuang37 did, mainly adding the two parameters `'-a', 'data/paraphrase-en.gz'`.
I downloaded coco-caption two days ago. Unfortunately, I hit a problem when running metrics.py: BLEU computes normally, but METEOR fails with the following error:

```
computing METEOR score...
Traceback (most recent call last):
  File "metrics.py", line 203, in <module>
    test_cocoeval()
  File "metrics.py", line 199, in test_cocoeval
    valid_score, test_score = score_with_cocoeval(samples_valid, samples_test, engine)
  File "metrics.py", line 92, in score_with_cocoeval
    valid_score = scorer.score(gts_valid, samples_valid, engine.valid_ids)
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/cocoeval.py", line 42, in score
    score, scores = scorer.compute_score(gts, res)
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/pycocoevalcap/meteor/meteor.py", line 37, in compute_score
    stat = self._stat(res[i][0], gts[i])
  File "/home/tuyunbin/vedio caption/arctic-capgen-vid-master/pycocoevalcap/meteor/meteor.py", line 55, in _stat
    self.meteor_p.stdin.write('{}\n'.format(score_line))
IOError: [Errno 32] Broken pipe
```

Finally, I followed Li Yao's README and added `self.meteor_p.kill()` at line 45 of meteor.py, but the result is still the same, and now I am very anxious.