rusty1s / embedded_gcnn

Embedded Graph Convolutional Neural Networks (EGCNN) in TensorFlow
MIT License

SLIC #4

Open tonyandsunny opened 4 years ago

tonyandsunny commented 4 years ago

Hi! Amazing work! When I use an algorithm such as SLIC to obtain superpixels, the superpixel segmentation of each picture seems to be different, and the number of obtained superpixels is not equal to the value set by the parameter "num_segments". I cannot understand it!

rusty1s commented 4 years ago

I believe SLIC only guarantees an equal number of superpixels in the skimage implementation if you set enforce_connectivity to False.
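For reference, a quick way to check this (not code from this repository; the test image and parameter values below are only illustrative):

    import numpy as np
    from skimage import data
    from skimage.segmentation import slic

    img = data.astronaut()  # any RGB test image

    # Default: tiny segments are merged into neighbours, so the final
    # count can vary from image to image.
    seg_default = slic(img, n_segments=100, compactness=10)

    # Without connectivity enforcement the count is stable across images,
    # but still only approximately equal to n_segments.
    seg_no_merge = slic(img, n_segments=100, compactness=10,
                        enforce_connectivity=False)

    print(len(np.unique(seg_default)), len(np.unique(seg_no_merge)))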

tonyandsunny commented 4 years ago

Yes! I just tried it. Every image gets the same number of superpixels when I set it to False, but the number of superpixels obtained is still not equal to num_segments=100 (the value I set).

rusty1s commented 4 years ago

So you have an equal number of superpixels, but there are fewer than 100?

tonyandsunny commented 4 years ago

Yes! With num_segments=100, the common number of superpixels is 121, which is more than 100. Why is the number of obtained superpixels not 100?

rusty1s commented 4 years ago

I am not really sure, sorry. Maybe you can consult the scikit-image authors for help.

tonyandsunny commented 4 years ago

OK! Thank you very much!

tonyandsunny commented 4 years ago

Is this code a SplineConv model? Sorry, I am not familiar with this code or SplineConv.

rusty1s commented 4 years ago

Not really, it is my master's thesis. You can find the SplineConv implementation here.
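For completeness, a rough usage sketch of the SplineConv layer from pytorch_geometric; all sizes and the random graph below are made up for illustration and are not taken from this repository:

    import torch
    from torch_geometric.nn import SplineConv

    # SplineConv(in_channels, out_channels, dim, kernel_size);
    # the values here are placeholders.
    conv = SplineConv(1, 32, dim=2, kernel_size=5)

    x = torch.rand(75, 1)                        # 75 nodes with 1 feature each
    edge_index = torch.randint(0, 75, (2, 400))  # random connectivity
    pseudo = torch.rand(400, 2)                  # 2D pseudo-coordinates in [0, 1]

    out = conv(x, edge_index, pseudo)            # -> shape [75, 32]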

tonyandsunny commented 4 years ago

I tried to use the embedded_gcnn code to preprocess the CIFAR10 dataset and converted the output into the pytorch_geometric dataset format. Then I ran mnist_graclus.py (loading the CIFAR10 dataset) and the result is only about 0.43. Maybe I did something wrong. Could you give me some advice?

rusty1s commented 4 years ago

This is actually quite hard to say. It could be anything from wrong data handling to bad hyperparameters.
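Regarding the data handling: pytorch_geometric expects one Data object per image, holding node features, local edge indices, node positions and a graph-level label. A rough sketch of that format (all arrays below are placeholders standing in for the per-image superpixel outputs):

    import numpy as np
    import torch
    from torch_geometric.data import Data

    # Placeholder per-image outputs of the superpixel preprocessing;
    # only the shapes matter here.
    features = np.random.rand(75, 1).astype(np.float32)   # [num_nodes, num_features]
    edge_index = np.random.randint(0, 75, size=(2, 400))  # [2, num_edges], local node indices
    pos = np.random.rand(75, 2).astype(np.float32)        # [num_nodes, 2]
    label = 3                                              # class index of this image

    example = Data(x=torch.from_numpy(features),
                   edge_index=torch.from_numpy(edge_index).long(),
                   pos=torch.from_numpy(pos),
                   y=torch.tensor([label]))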

tonyandsunny commented 4 years ago

        while num_left > 0:
            min_batch = min(batch_size, num_left)
            images, labels = dataset.next_batch(min_batch, shuffle=False)
            num_left -= min_batch

            # Collect the labels of all images seen so far.
            if batch_num == 0:
                label_array = labels
            else:
                label_array = np.concatenate((label_array, labels), axis=0)

            for i in range(labels.shape[0]):
                features, node_slice, edge_index, edge_slice, pos = preprocess_algorithm(images[i])
                # node_slice: number of nodes in this image
                # edge_slice: number of edges in this image

                # Concatenate node features, positions and (local) edge indices
                # of all images into flat arrays.
                if batch_num == 0 and i == 0:
                    features_array = features
                    pos_array = pos
                    edge_index_array = edge_index
                else:
                    features_array = np.concatenate((features_array, features), axis=0)
                    pos_array = np.concatenate((pos_array, pos), axis=0)
                    edge_index_array = np.concatenate((edge_index_array, edge_index), axis=1)

                # Cumulative node/edge offsets (the lists are initialized outside
                # this loop, starting from 0).
                node_slice_list.append(node_slice + node_slice_list[-1])
                edge_slice_list.append(edge_slice + edge_slice_list[-1])
                j += 1

                # Sanity checks: edge indices are local to this image and the
                # reported edge count matches edge_index.
                max_index = torch.from_numpy(edge_index)[0, :].max()
                size = torch.from_numpy(pos).size(0) - 1
                assert max_index == size
                assert edge_slice == edge_index.shape[1]

            _print_status(data_dir,
                          100 * (1 - float(num_left) / dataset.num_examples))
            batch_num += 1

        _print_status(data_dir, 100)

        data = (features_array, node_slice_list, edge_index_array,
                edge_slice_list, pos_array)
        if isinstance(data, np.ndarray):
            data = (data, label_array)
        else:
            data = data + (label_array,)
        np.save(data_dir, data)
        print()

The above is the code I added when modifying dataset.py, in order to convert the data into the pytorch_geometric format; I feel there should be no big problem with it. One thing to note is that when I use the SLIC algorithm to obtain superpixels, the number of superpixel nodes is different for each image, so I defined node_slice to record the number of superpixels of each image.
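For illustration, this is roughly how one image's graph could be read back from the saved arrays, assuming node_slice_list and edge_slice_list start at 0 and the labels are one-hot; the file name is just a placeholder for the data_dir passed to np.save above:

    import numpy as np
    import torch
    from torch_geometric.data import Data

    data_dir = 'cifar10_train'  # placeholder path, matches the np.save call above

    # The saved tuple has the same order as `data` above:
    # features, node slices, edge indices, edge slices, positions, labels.
    (features_array, node_slice_list, edge_index_array,
     edge_slice_list, pos_array, label_array) = np.load(data_dir + '.npy',
                                                        allow_pickle=True)

    def get_example(i):
        n0, n1 = node_slice_list[i], node_slice_list[i + 1]  # node range of image i
        e0, e1 = edge_slice_list[i], edge_slice_list[i + 1]  # edge range of image i
        return Data(x=torch.from_numpy(features_array[n0:n1]).float(),
                    edge_index=torch.from_numpy(edge_index_array[:, e0:e1]).long(),
                    pos=torch.from_numpy(pos_array[n0:n1]).float(),
                    y=torch.tensor([int(np.argmax(label_array[i]))]))  # assumes one-hot labels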