svishwa / crowdcount-mcnn

Single Image Crowd Counting via MCNN (Unofficial Implementation)
MIT License

train own dataset #33

Open niuniu111 opened 5 years ago

niuniu111 commented 5 years ago

Hello, I want to train on my own dataset, but I don't know how to make the ground truth (gt). Looking forward to your help, thanks.

Avalon7 commented 5 years ago

It seems that you need to use a labelling tool to label your own dataset first. Some labelling tools can generate a ground truth file for each image. Then you need to modify the code to fit your generated ground truth format. That's my own understanding, and I am trying to use my own dataset as well.
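
As a rough sketch of what "generate ground truth for each image" ends up meaning for MCNN: the network is trained against density maps, so once a labelling tool has given you one (x, y) point per head you can turn those points into a density map by dropping a Gaussian at each point. The snippet below is a minimal version of that idea with a fixed sigma and plain scipy, not the repo's own data preparation code (which, as far as I can tell, does the equivalent in the MATLAB scripts under data_preparation):

import numpy as np
from scipy.ndimage import gaussian_filter

def points_to_density_map(points, height, width, sigma=4.0):
    # points: iterable of (x, y) head locations from your labelling tool
    # returns a (height, width) float map whose sum stays close to the head count
    density = np.zeros((height, width), dtype=np.float32)
    for x, y in points:
        col = min(max(int(round(x)), 0), width - 1)
        row = min(max(int(round(y)), 0), height - 1)
        density[row, col] += 1.0
    # fixed-width Gaussian blur; sigma is a free choice here
    return gaussian_filter(density, sigma)

The sum of the returned map stays (approximately) equal to the number of annotated heads, which is what the counting loss relies on.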

tongpinmo commented 4 years ago

@Avalon7 I want to know how to reproduce the structure of GT_IMG_x.mat in the ground_truth folder; the format of the array in GT_IMG_x.mat looks strange. For GT_IMG_2.mat it prints as:

('image_info: ', array([[array([[(array([[222.5446098 , 472.86475499],
    [  6.63269208, 709.19046582],
    [ 64.82506957, 694.96566243],
    ...,
    [714.76278388, 418.8815984 ],
    [519.37417249, 234.74573427],
    [523.46608059, 238.83764236]]),
    array([[707]], dtype=uint16))]],
    dtype=[('location', 'O'), ('number', 'O')])]], dtype=object))

I have the point gt file (x, y coordinates) for each image, but how do I get the strange array shown above? Have you ever managed to build it? Thank you.
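
One way to produce that nesting from your own (x, y) points, without copying a prototype file, is to build the nested arrays with numpy and write them with scipy's savemat. The sketch below is only my own attempt to reproduce the layout shown above (the field names 'location' and 'number' come from the dump); I have not verified that the result is byte-identical to the MATLAB-written ground truth files, only that it round-trips through loadmat with the same fields.

import numpy as np
from scipy.io import savemat

def write_gt_mat(points, out_path):
    # points: (N, 2) array of (x, y) head locations for one image
    points = np.asarray(points, dtype=np.float64)
    count = np.array([[points.shape[0]]], dtype=np.uint16)
    # inner 1x1 struct array with the 'location' and 'number' fields
    inner = np.empty((1, 1), dtype=[('location', object), ('number', object)])
    inner['location'][0, 0] = points
    inner['number'][0, 0] = count
    # outer 1x1 cell, so loadmat shows the same nesting as the dump above
    image_info = np.empty((1, 1), dtype=object)
    image_info[0, 0] = inner
    savemat(out_path, {'image_info': image_info})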

luotianhang commented 3 years ago

I think it is a disaster to make your own dataset.

pasquale90 commented 3 years ago

@Avalon7 A simple hack is to load a prototype .mat file from the original dataset with from scipy.io import loadmat, savemat and prototype = loadmat(prototype_path), which gives you a dictionary. Then iterate over the prototype dict with a for loop and replace its contents with your own GT dots for each individual sample of your dataset, using the following sample code:

import numpy as np
from scipy.io import savemat

def convert_to_mat(prototype, dots, storeMpath):
    # prototype:  dict returned by loadmat() on an original GT_IMG_x.mat
    # dots:       list/array of (x, y) head coordinates for one image
    # storeMpath: path of the .mat file to write
    for k, v in prototype.items():
        # only the 'image_info' entry holds the annotations; skip loadmat's
        # '__header__', '__version__' and '__globals__' keys
        if k != 'image_info':
            continue
        # print(k, type(v))  # make some prints first to understand how to format your dots
        v[0][0][0][0][0] = np.array(dots, np.float64)  # replaces the point coordinates in the mat file
        v[0][0][0][0][1][0][0] = len(dots)             # replaces the stored number of annotations

    savemat(storeMpath, prototype)

In conclusion, call the convert_to_mat method for each individual sample of your dataset, passing that sample's list of dots.
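
For instance, a minimal usage sketch (the paths and the one-"x,y"-pair-per-line label files are placeholders of mine, not anything shipped with the repo):

import numpy as np
from scipy.io import loadmat

# reload the prototype for each image so every call starts from a clean copy
prototype = loadmat('path/to/original/ground_truth/GT_IMG_1.mat')
dots = np.loadtxt('my_dataset/labels/IMG_1.txt', delimiter=',')  # (N, 2) array of x, y points
convert_to_mat(prototype, dots, 'my_dataset/ground_truth/GT_IMG_1.mat')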