emrahbasaran / SPReID

Code for our CVPR 2018 paper - Human Semantic Parsing for Person Re-identification
MIT License

Can you share your evaluation code? I got a very low result #23

Closed — faustismarck closed this 5 years ago

faustismarck commented 5 years ago

I used this code: https://github.com/layumi/Person_reID_baseline_pytorch/blob/master/evaluate_gpu.py and changed the evaluation part in main.py to:

def get_id(img_path, camera_id, labels):
    # Each line of the evaluation list looks like "<image_path> <label>\n"
    a, _ = img_path.split('\n')
    image_path, label = a.split(' ')
    name = image_path.split('/')[-1]
    # The camera id is the digit after 'c' in names like 0001_c1s1_000151_00.jpg
    cam = name.split('_')[1][1]
    if label[0:2] == '-1':
        labels.append(-1)
    else:
        labels.append(int(label))
    camera_id.append(int(cam))
    return camera_id, labels

def Evaluation():
    # Create data generator
    batch_tuple = MultiprocessIterator(
        DataChef.ReID10D(args, args.project_folder + '/evaluation_list/' + args.eval_split + '.txt',
                         image_size=args.scales_tr[0]),
        args.minibatch, n_prefetch=2, n_processes=args.nb_processes, shared_mem=20000000, repeat=False, shuffle=False)
    # Keep the log in history
    history = {args.dataset: {'features': []}}
    for dataBatch in batch_tuple:
        dataBatch = list(zip(*dataBatch))
        # Prepare batch data: split the batch across the available GPUs/models
        IMG = np.array_split(np.array(dataBatch[0]), len(Model), axis=0)
        LBL = np.array_split(np.array(dataBatch[1]), len(Model), axis=0)
        # Forward pass (train=False, so the models only extract features)
        for device_id, img, lbl in zip(range(len(Model)), IMG, LBL):
            Model[device_id](img, lbl, args.dataset, train=False)
        # Aggregate reporters from all GPUs
        reporters = []
        for i in range(len(Model)):
            reporters.append(Model[i].reporter)
            Model[i].reporter = {}  # clear reporter
        # History
        for reporter in reporters:
            for k in reporter[args.dataset].keys():
                history[args.dataset][k].append(reporter[args.dataset][k])
    features = np.concatenate(history[args.dataset]['features'], axis=0)
    # Parse camera ids and labels from the evaluation list
    F = open(args.project_folder + '/evaluation_list/' + args.eval_split + '.txt').readlines()
    camera_id = []
    labels = []
    for f in F:
        camera_id, labels = get_id(f, camera_id, labels)
    # Store the features in a .mat file for the evaluation script
    if args.eval_split == 'market_gallery':
        result = {'gallery_f': features, 'gallery_label': labels, 'gallery_cam': camera_id}
        scipy.io.savemat('pytorch_result_gallery.mat', result)
    else:
        result = {'query_f': features, 'query_label': labels, 'query_cam': camera_id}
        scipy.io.savemat('pytorch_result_query.mat', result)
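
Since the linked evaluate_gpu.py loads a single pytorch_result.mat containing both query and gallery entries (this is an assumption about that script's input format), a small merging step like the sketch below may be needed before running it; the file names match the two savemat calls above.

# Minimal sketch (assumption): merge the two .mat files written above into the
# single pytorch_result.mat that layumi's evaluate_gpu.py expects.
import scipy.io

gallery = scipy.io.loadmat('pytorch_result_gallery.mat')
query = scipy.io.loadmat('pytorch_result_query.mat')

merged = {
    'gallery_f': gallery['gallery_f'],
    'gallery_label': gallery['gallery_label'],
    'gallery_cam': gallery['gallery_cam'],
    'query_f': query['query_f'],
    'query_label': query['query_label'],
    'query_cam': query['query_cam'],
}
scipy.io.savemat('pytorch_result.mat', merged)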

I only have one GPU (a GTX 1080), and I could only obtain five of the ten datasets used for train-10d. In the training part, I set max_iter to 65000, and the loss came down to 0.03. I got the results below:

Rank@1: 0.490796 Rank@5: 0.763955 Rank@10: 0.846793 mAP: 0.329347

How can I get higher accuracy? Please give me some advice.

emrahbasaran commented 5 years ago

Hi,

We used the evaluation scripts published by the authors of the datasets. You can find them by visiting the project pages of the datasets.

In all our experiments, the 10 datasets mentioned in the paper were used. Therefore, you should follow the same settings described in the paper to get higher accuracy.
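
For context on what those scripts compute, below is a minimal sketch of the standard single-query CMC/mAP protocol used for Market-1501-style datasets. It is an illustration only, not the authors' evaluation code; the function and variable names are made up for the example, and it assumes L2-normalized feature vectors.

# Minimal sketch of a Market-1501-style single-query CMC/mAP evaluation.
# NOT the authors' script; assumes L2-normalized feature rows.
import numpy as np

def evaluate_single_query(qf, ql, qc, gf, gl, gc):
    # Cosine similarity between the query and every gallery feature
    scores = gf @ qf
    order = np.argsort(-scores)
    # Junk: same id seen by the same camera, or distractor (-1) labels
    junk = (gl == -1) | ((gl == ql) & (gc == qc))
    good = (gl == ql) & (gc != qc)
    order = order[~junk[order]]
    matches = good[order]
    if not matches.any():
        return None, None  # this query has no valid ground truth
    # CMC: 1 from the first correct match onwards
    cmc = np.zeros(len(order))
    first_hit = np.where(matches)[0][0]
    cmc[first_hit:] = 1
    # Average precision over the ranked list
    hit_idx = np.where(matches)[0]
    precision = (np.arange(len(hit_idx)) + 1) / (hit_idx + 1)
    ap = precision.mean()
    return cmc, ap

Per-query CMC curves are then truncated to a common length and averaged over all valid queries, and mAP is the mean of the per-query average precisions.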

faustismarck commented 5 years ago


Thank you!