Closed lucaskyle closed 6 years ago
```python
output = net(img)
f = output.data
f1, f2 = f[0], f[2]
cosdistance = f1.dot(f2) / (f1.norm() * f2.norm() + 1e-5)
predicts.append('{}\t{}\t{}\t{}\n'.format(name1, name2, cosdistance, sameflag))
```
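The cosine similarity used above (with a small epsilon guarding against division by zero) can be sketched in plain NumPy; `cos_distance` is a hypothetical stand-in for the inline expression:

```python
import numpy as np

def cos_distance(f1, f2, eps=1e-5):
    # dot product over the product of norms, with eps to avoid division by zero
    return f1.dot(f2) / (np.linalg.norm(f1) * np.linalg.norm(f2) + eps)

a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])  # orthogonal to a
print(cos_distance(a, a))  # close to 1.0: same direction
print(cos_distance(a, b))  # 0.0: unrelated features
```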
```python
thresholds = np.arange(-1.0, 1.0, 0.005)
predicts = np.array(map(lambda line: line.strip('\n').split(), predicts))
```
So each entry of `predicts` should be `'name1'`, `'name2'`, `'cosdistance'`, `'sameflag'`. The `predicts` list looks like this:
```python
>>> list(predicts)
[['Bob_Petrino/Bob_Petrino_0001.jpg', 'Jorge_Castaneda/Jorge_Castaneda_0001.jpg', '0.04254274520116782', '0'],
 ['Boris_Becker/Boris_Becker_0004.jpg', 'Julianna_Margulies/Julianna_Margulies_0002.jpg', '0.048035090090769915', '0'],
 ['Brad_Russ/Brad_Russ_0001.jpg', 'Hana_Urushima/Hana_Urushima_0001.jpg', '-0.1515970166208275', '0'],
 ['Brad_Russ/Brad_Russ_0001.jpg', 'Romeo_Gigli/Romeo_Gigli_0001.jpg', '-0.13148588639861383', '0'],
 ['Brawley_King/Brawley_King_0001.jpg', 'Tom_Glavine/Tom_Glavine_0002.jpg', '-0.00020600937409563652', '0']]
```
It can't be turned into a proper `np.array`. Since `train` and `test` are lists, this causes an error in `predicts[train]` and `predicts[test]`.
I guess I already resolved this issue. In line 118:
```python
predicts.append([cosdistance, sameflag])
```
I think there is no need for `name1` and `name2`; we just want `cosdistance` and `sameflag`. Also change the `predicts` conversion accordingly:
```python
#predicts = np.array(map(lambda line:line.strip('\n').split(), predicts))
predicts = np.array(predicts)
```
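With rows of `[cosdistance, sameflag]`, `np.array(predicts)` becomes a regular 2-D float array, so fancy indexing with the train/test fold lists works. A minimal sketch with made-up values (not real LFW pairs):

```python
import numpy as np

# hypothetical cosdistance/sameflag pairs standing in for real LFW pairs
predicts = [[0.04, 0], [0.62, 1], [-0.15, 0], [0.55, 1]]
predicts = np.array(predicts)   # shape (4, 2), dtype float64
print(predicts.shape)           # (4, 2)

train = [0, 1]                  # fold indices are plain Python lists
print(predicts[train])          # fancy indexing now works
```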
Also change the `eval_acc` function:
```python
def eval_acc(threshold, diff):
    y_true = []
    y_predict = []
    for d in diff:
        #same = 1 if float(d[2]) > threshold else 0
        same = 1 if float(d[0]) > threshold else 0
        y_predict.append(same)
        y_true.append(int(d[1]))
    y_true = np.array(y_true)
    y_predict = np.array(y_predict)
    accuracy = 1.0*np.count_nonzero(y_true==y_predict)/len(y_true)
    return accuracy
```
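To sanity-check the modified `eval_acc`, here is a small self-contained run on made-up `[cosdistance, sameflag]` rows (the values are illustrative, not from LFW):

```python
import numpy as np

def eval_acc(threshold, diff):
    # diff rows are [cosdistance, sameflag]
    y_true = []
    y_predict = []
    for d in diff:
        same = 1 if float(d[0]) > threshold else 0
        y_predict.append(same)
        y_true.append(int(d[1]))
    y_true = np.array(y_true)
    y_predict = np.array(y_predict)
    return 1.0 * np.count_nonzero(y_true == y_predict) / len(y_true)

# two same pairs with high similarity, two different pairs with low similarity
diff = np.array([[0.62, 1], [0.55, 1], [0.04, 0], [-0.15, 0]])
print(eval_acc(0.3, diff))   # 1.0: threshold 0.3 separates the pairs perfectly
print(eval_acc(0.6, diff))   # 0.75: 0.55 now falls below the threshold
```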
Then the result shows:
```
...... [0.05441524965777927, 0], [-0.17700425096084055, 0], [-0.14528739662365814, 0],
[-0.10722236475071757, 0], [0.3416709929455772, 0], [0.23105623732690572, 0], ...,
[-0.02527230188913465, 0], [-0.07900555429160469, 0], [0.12491901472624652, 0]]
LFWACC=0.9915 std=0.0054 thd=0.3085
```
0.9915 good work!!!! :)
```python
predicts = np.array(predicts)
```
Why is this error occurring for me?
```
Traceback (most recent call last):
  File "lfw_eval.py", line 127, in <module>
```
Hey there, when I tried to run `lfw_eval.py`, I set my path to the LFW dataset, then I got an error like this:
```
Traceback (most recent call last):
  File "lfw_eval.py", line 128, in <module>
    best_thresh = find_best_threshold(thresholds, predicts[train])
IndexError: too many indices for array
```
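This `IndexError` is consistent with the `map` problem discussed earlier in the thread: in Python 3, `map()` returns an iterator, so `np.array` wraps the map object itself as a 0-dimensional object array, and indexing it with a fold list fails. A minimal reproduction on toy data (not the real LFW pairs):

```python
import numpy as np

predicts = ['a\tb\t0.04\t0\n', 'c\td\t0.55\t1\n']  # toy tab-separated lines
# np.array wraps the map object instead of building a 2-D array of strings
arr = np.array(map(lambda line: line.strip('\n').split(), predicts))
print(arr.ndim)   # 0: a zero-dimensional object array

try:
    arr[[0, 1]]   # fancy indexing a 0-d array
except IndexError as e:
    print(e)      # too many indices for array ...

# wrapping the map in list() restores the intended 2-D array
fixed = np.array(list(map(lambda line: line.strip('\n').split(), predicts)))
print(fixed.shape)  # (2, 4)
```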
Any idea about this issue? Maybe something is wrong with the channels, which causes the arrays not to match when they are fed in? I used the raw LFW dataset (downloaded from the website) with your landmark txt.